
Building TensorFlow Lite Standalone Pip

Many users would like to deploy the TensorFlow Lite interpreter and use it from Python without requiring the rest of TensorFlow.

Steps

To build a binary wheel, run these commands:

sudo apt install swig libjpeg-dev zlib1g-dev python3-dev python3-numpy
pip install numpy pybind11
sh tensorflow/lite/tools/make/download_dependencies.sh
sh tensorflow/lite/tools/pip_package/build_pip_package.sh

This prints some build output and produces a .whl file. You can then install it:

pip install --upgrade <wheel>

You can also build a wheel inside a Docker container using the make tool. For example, the following commands cross-compile the tflite-runtime package for Python 2.7 and Python 3.7 (from Debian Buster) for the Raspberry Pi:

make BASE_IMAGE=debian:buster PYTHON=python TENSORFLOW_TARGET=rpi docker-build
make BASE_IMAGE=debian:buster PYTHON=python3 TENSORFLOW_TARGET=rpi docker-build

Another option is to cross-compile for Python 3.5 (from Debian Stretch) for an ARM64 board:

make BASE_IMAGE=debian:stretch PYTHON=python3 TENSORFLOW_TARGET=aarch64 docker-build

To build for Python 3.6 (from Ubuntu 18.04) on x86_64 (native to the Docker image), run:

make BASE_IMAGE=ubuntu:18.04 PYTHON=python3 TENSORFLOW_TARGET=native docker-build

In addition to the wheel, you can build a Debian package by adding BUILD_DEB=y to the make command (Python 3 only):

make BASE_IMAGE=debian:buster PYTHON=python3 TENSORFLOW_TARGET=rpi BUILD_DEB=y docker-build

Alternative build with Bazel (experimental)

There is an alternative way to build a binary wheel that uses Bazel instead of Make and requires no additional dependencies. This approach can leverage TensorFlow's ci_build.sh for ARM cross builds.

Normal build for your workstation

tensorflow/lite/tools/pip_package/build_pip_package_with_bazel.sh

Optimized build for your workstation

The output may not be compatible with other machines, but it gives the best performance on your workstation.

tensorflow/lite/tools/pip_package/build_pip_package_with_bazel.sh native

Cross build for armhf Python 3.5

tensorflow/tools/ci_build/ci_build.sh PI-PYTHON3 \
  tensorflow/lite/tools/pip_package/build_pip_package_with_bazel.sh armhf

Cross build for armhf Python 3.7

tensorflow/tools/ci_build/ci_build.sh PI-PYTHON37 \
  tensorflow/lite/tools/pip_package/build_pip_package_with_bazel.sh armhf

Cross build for aarch64 Python 3.5

tensorflow/tools/ci_build/ci_build.sh PI-PYTHON3 \
  tensorflow/lite/tools/pip_package/build_pip_package_with_bazel.sh aarch64

Cross build for aarch64 Python 3.8

tensorflow/tools/ci_build/ci_build.sh PI-PYTHON38 \
  tensorflow/lite/tools/pip_package/build_pip_package_with_bazel.sh aarch64

Native build for Windows

bash tensorflow/lite/tools/pip_package/build_pip_package_with_bazel.sh windows

Enable TF OP support (Flex delegate)

If you want to use TensorFlow ops from the Python API, you need to enable Flex support. You can build the TFLite interpreter with Flex ops support by passing "--define=tflite_pip_with_flex=true" to Bazel.

Here are some examples.

Normal build with Flex for your workstation

CUSTOM_BAZEL_FLAGS=--define=tflite_pip_with_flex=true \
  tensorflow/lite/tools/pip_package/build_pip_package_with_bazel.sh

Cross build with Flex for armhf Python 3.7

CI_DOCKER_EXTRA_PARAMS="-e CUSTOM_BAZEL_FLAGS=--define=tflite_pip_with_flex=true" \
  tensorflow/tools/ci_build/ci_build.sh PI-PYTHON37 \
  tensorflow/lite/tools/pip_package/build_pip_package_with_bazel.sh armhf

Usage

Note that, unlike the tensorflow package, this is installed under the tflite_runtime namespace. You can then use the TensorFlow Lite interpreter as follows:

from tflite_runtime.interpreter import Interpreter
interpreter = Interpreter(model_path="foo.tflite")
interpreter.allocate_tensors()
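A fuller inference sketch follows, using the standard tflite_runtime interpreter calls. The model path, input shape, and the single-input/single-output assumption are hypothetical placeholders; adapt them to your model:

```python
import numpy as np


def run_inference(model_path, input_array):
    # Imported lazily so this file can be read or imported even before
    # the tflite_runtime wheel built above has been installed.
    from tflite_runtime.interpreter import Interpreter

    interpreter = Interpreter(model_path=model_path)
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Assumes a single input tensor; cast the data to the expected dtype.
    interpreter.set_tensor(
        input_details[0]["index"],
        input_array.astype(input_details[0]["dtype"]))
    interpreter.invoke()

    # Assumes a single output tensor.
    return interpreter.get_tensor(output_details[0]["index"])


# Example (requires a real model file; "foo.tflite" is a placeholder):
#   result = run_inference("foo.tflite",
#                          np.zeros((1, 224, 224, 3), np.float32))
```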

This currently builds on Linux machines, including the Raspberry Pi. In the future, cross-compilation from a larger host to smaller SoCs like the Raspberry Pi will be supported.

Caveats

  • Unless you build with Flex support as described above, you cannot use TensorFlow Select ops, only TensorFlow Lite builtins.
  • Currently custom ops and delegates cannot be registered.