
Building TensorFlow Lite Standalone Pip

Many users would like to deploy the TensorFlow Lite interpreter and use it from Python without requiring the rest of TensorFlow.

Steps

To build a binary wheel, run these commands:

sudo apt install swig libjpeg-dev zlib1g-dev python3-dev python3-numpy
pip install numpy pybind11
sh tensorflow/lite/tools/make/download_dependencies.sh
sh tensorflow/lite/tools/pip_package/build_pip_package.sh

That will print some output and produce a .whl file. You can then install it:

pip install --upgrade <wheel>

You can also build a wheel inside a Docker container using the make tool. For example, the following commands cross-compile the tflite-runtime package for Python 2.7 and Python 3.7 (from Debian Buster) on the Raspberry Pi:

make BASE_IMAGE=debian:buster PYTHON=python TENSORFLOW_TARGET=rpi docker-build
make BASE_IMAGE=debian:buster PYTHON=python3 TENSORFLOW_TARGET=rpi docker-build

Another option is to cross-compile for Python 3.5 (from Debian Stretch) on an ARM64 board:

make BASE_IMAGE=debian:stretch PYTHON=python3 TENSORFLOW_TARGET=aarch64 docker-build

To build for Python 3.6 (from Ubuntu 18.04) on x86_64 (native to the Docker image), run:

make BASE_IMAGE=ubuntu:18.04 PYTHON=python3 TENSORFLOW_TARGET=native docker-build

In addition to the wheel, you can build a Debian package by adding BUILD_DEB=y to the make command (Python 3 only):

make BASE_IMAGE=debian:buster PYTHON=python3 TENSORFLOW_TARGET=rpi BUILD_DEB=y docker-build

Alternative build with Bazel (experimental)

There is another way to build a binary wheel that uses Bazel instead of the Makefile. With this approach you don't need to install any additional dependencies, and it can leverage TF's ci_build.sh for ARM cross builds.

Native build for your workstation

tensorflow/lite/tools/pip_package/build_pip_package_with_bazel.sh

Cross build for armhf Python 3.5

CI_DOCKER_EXTRA_PARAMS="-e CI_BUILD_PYTHON=python3 -e CROSSTOOL_PYTHON_INCLUDE_PATH=/usr/include/python3.5" \
  tensorflow/tools/ci_build/ci_build.sh PI-PYTHON3 \
  tensorflow/lite/tools/pip_package/build_pip_package_with_bazel.sh armhf

Cross build for armhf Python 3.7

CI_DOCKER_EXTRA_PARAMS="-e CI_BUILD_PYTHON=python3 -e CROSSTOOL_PYTHON_INCLUDE_PATH=/usr/include/python3.7" \
  tensorflow/tools/ci_build/ci_build.sh PI-PYTHON37 \
  tensorflow/lite/tools/pip_package/build_pip_package_with_bazel.sh armhf

Cross build for aarch64 Python 3.5

CI_DOCKER_EXTRA_PARAMS="-e CI_BUILD_PYTHON=python3 -e CROSSTOOL_PYTHON_INCLUDE_PATH=/usr/include/python3.5" \
  tensorflow/tools/ci_build/ci_build.sh PI-PYTHON3 \
  tensorflow/lite/tools/pip_package/build_pip_package_with_bazel.sh aarch64

Cross build for aarch64 Python 3.7

CI_DOCKER_EXTRA_PARAMS="-e CI_BUILD_PYTHON=python3 -e CROSSTOOL_PYTHON_INCLUDE_PATH=/usr/include/python3.7" \
  tensorflow/tools/ci_build/ci_build.sh PI-PYTHON37 \
  tensorflow/lite/tools/pip_package/build_pip_package_with_bazel.sh aarch64

Enable TF OP support (Flex delegate)

If you want to use TF ops with the Python API, you need to enable Flex support. You can build the TFLite interpreter with Flex ops support by passing "--define=tflite_pip_with_flex=true" to Bazel.

Here are some examples.

Native build with Flex for your workstation

CUSTOM_BAZEL_FLAGS=--define=tflite_pip_with_flex=true \
  tensorflow/lite/tools/pip_package/build_pip_package_with_bazel.sh

Cross build with Flex for armhf Python 3.5

CI_DOCKER_EXTRA_PARAMS="-e CUSTOM_BAZEL_FLAGS=--define=tflite_pip_with_flex=true \
  -e CI_BUILD_PYTHON=python3 -e CROSSTOOL_PYTHON_INCLUDE_PATH=/usr/include/python3.5" \
  tensorflow/tools/ci_build/ci_build.sh PI-PYTHON3 \
  tensorflow/lite/tools/pip_package/build_pip_package_with_bazel.sh armhf

Usage

Note that, unlike regular TensorFlow, this package installs into the tflite_runtime namespace. You can then use the TensorFlow Lite interpreter as follows:

from tflite_runtime.interpreter import Interpreter
interpreter = Interpreter(model_path="foo.tflite")

Building currently works on Linux machines, including the Raspberry Pi. In the future, cross-compilation from a larger host to smaller SoCs like the Raspberry Pi will be supported.

Caveats

  • You cannot use TensorFlow Select ops, only TensorFlow Lite builtins, unless you build with Flex support as described above.
  • Currently custom ops and delegates cannot be registered.