Minor TF Lite doc changes

PiperOrigin-RevId: 292541517
Change-Id: Id9620cdfa6b6ee1132ff609d37b4364c9f1bae06
Khanh LeViet 2020-01-31 07:30:34 -08:00 committed by TensorFlower Gardener
parent 6b0350800f
commit 9a565656e2
6 changed files with 86 additions and 46 deletions

View File

@@ -1,8 +1,8 @@
# Converter command line reference
This page describes how to use the [TensorFlow Lite converter](index.md) using
the command line tool. However, the [Python API](python_api.md) is recommended
for the majority of cases.
Note: This only contains documentation on the command line tool in TensorFlow 2.
Documentation on using the command line tool in TensorFlow 1 is available on
@@ -10,20 +10,26 @@ GitHub
([reference](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/r1/convert/cmdline_reference.md),
[example](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/r1/convert/cmdline_examples.md)).
[TOC]
## High-level overview
The TensorFlow Lite Converter has a command line tool named `tflite_convert`,
which supports basic models. Use the [Python API](python_api.md) for any
conversions involving optimizations, or any additional parameters (e.g.
signatures in [SavedModels](https://www.tensorflow.org/guide/saved_model) or
custom objects in
[Keras models](https://www.tensorflow.org/guide/keras/overview)).
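For comparison, a conversion with optimizations through the Python API looks
roughly like the following sketch; the SavedModel directory and output path are
hypothetical:

```python
import tensorflow as tf

# Hypothetical SavedModel directory and output path.
converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/mobilenet_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable default optimizations
tflite_model = converter.convert()

with open("/tmp/mobilenet.tflite", "wb") as f:
    f.write(tflite_model)
```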
## Usage
The following example shows a SavedModel being converted:
```bash
tflite_convert \
--saved_model_dir=/tmp/mobilenet_saved_model \
--output_file=/tmp/mobilenet.tflite
```
The inputs and outputs are specified using the following commonly used flags:
* `--output_file`. Type: string. Specifies the full path of the output file.
* `--saved_model_dir`. Type: string. Specifies the full path to the directory
@@ -31,30 +37,33 @@ The following flags specify the input and output files.
* `--keras_model_file`. Type: string. Specifies the full path of the HDF5 file
containing the `tf.keras` model generated in 1.X or 2.X.
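For reference, the Python API equivalent of `--keras_model_file` is a sketch
along these lines; the HDF5 and output paths are hypothetical:

```python
import tensorflow as tf

# Load a tf.keras model from a hypothetical HDF5 file, then convert it.
model = tf.keras.models.load_model("/tmp/model.h5")
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open("/tmp/model.tflite", "wb") as f:
    f.write(tflite_model)
```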
To use all of the available flags, use the following command:
```bash
tflite_convert --help
```
The following flag can be used for compatibility with the TensorFlow 1.X version
of the converter CLI:
* `--enable_v1_converter`. Type: bool. Enables the user to use the 1.X command
  line flags instead of the 2.X flags. The 1.X command line flags are specified
  [here](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/r1/convert/cmdline_reference.md).
## Installing the converter CLI
To obtain the latest version of the TensorFlow Lite converter CLI, we recommend
installing the nightly build using
[pip](https://www.tensorflow.org/install/pip):
```bash
pip install tf-nightly
```
Alternatively, you can
[clone the TensorFlow repository](https://www.tensorflow.org/install/source) and
use `bazel` to run the command:
```
bazel run //tensorflow/lite/python:tflite_convert -- \

View File

@@ -6,8 +6,9 @@ following example.
<a class="button button-primary" href="https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification/android">Android
image classification example</a>
Read
[TensorFlow Lite Android image classification](https://github.com/tensorflow/examples/blob/master/lite/examples/image_classification/android/EXPLORE_THE_CODE.md)
for an explanation of the source code.
This example app uses
[image classification](https://www.tensorflow.org/lite/models/image_classification/overview)
@@ -100,11 +101,10 @@ or you may wish to make local changes to TensorFlow Lite.
#### Install Bazel and Android Prerequisites
Bazel is the primary build system for TensorFlow. To build with it, you must
have it and the Android NDK and SDK installed on your system.
1. Install the latest version of the [Bazel build system](https://bazel.build/versions/master/docs/install.html).
2. The Android NDK is required to build the native (C/C++) TensorFlow Lite
code. The current recommended version is 17c, which may be found
[here](https://developer.android.com/ndk/downloads/older_releases.html#ndk-17c-downloads).
@@ -176,3 +176,35 @@ dependencies {
compile(name:'tensorflow-lite', ext:'aar')
}
```
##### Install AAR to local Maven repository
Execute the following command from your root checkout directory:
```sh
mvn install:install-file \
-Dfile=bazel-bin/tensorflow/lite/java/tensorflow-lite.aar \
-DgroupId=org.tensorflow \
-DartifactId=tensorflow-lite -Dversion=0.1.100 -Dpackaging=aar
```
In your app's `build.gradle`, ensure you have the `mavenLocal()` repository and
replace the standard TensorFlow Lite dependency with the one that has support
for select TensorFlow ops:
```
allprojects {
repositories {
jcenter()
mavenLocal()
}
}
dependencies {
implementation 'org.tensorflow:tensorflow-lite-with-select-tf-ops:0.1.100'
}
```
Note that the `0.1.100` version here is purely for the sake of
testing/development. With the local AAR installed, you can use the standard
[TensorFlow Lite Java inference APIs](../guide/inference.md) in your app code.

View File

@@ -28,10 +28,9 @@ improve:
TensorFlow Lite works with a huge range of devices, from tiny microcontrollers
to powerful mobile phones.
Key Point: The TensorFlow Lite binary is smaller than 300KB when all supported
operators are linked, and less than 200KB when using only the operators needed
for supporting the common image classification models InceptionV3 and MobileNet.
## Get started

View File

@@ -26,8 +26,7 @@ TensorFlow Lite offers native iOS libraries written in
[Swift](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/swift)
and
[Objective-C](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/objc).
Start writing your own iOS code using the [Swift image classification example](https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification/ios)
as a starting point.
The sections below demonstrate how to add TensorFlow Lite Swift or Objective-C
@@ -57,16 +56,16 @@ There are stable releases, and nightly releases available for both
version constraint as in the above examples, CocoaPods will pull the latest
stable release by default.
You can also specify a version constraint. For example, if you wish to depend on
version 2.0.0, you can write the dependency as:
```ruby
pod 'TensorFlowLiteSwift', '~> 2.0.0'
```
This will ensure the latest available 2.x.y version of the `TensorFlowLiteSwift`
pod is used in your app. Alternatively, if you want to depend on the nightly
builds, you can write:
```ruby
pod 'TensorFlowLiteSwift', '0.0.1-nightly'

View File

@@ -9,8 +9,8 @@ Since the set of TensorFlow Lite operations is smaller than TensorFlow's, not
every model is convertible. Even for supported operations, very specific usage
patterns are sometimes expected, for performance reasons. We expect to expand
the set of supported operations in future TensorFlow Lite releases. Additional
ops can be included by [using select TensorFlow ops](ops_select.md), at
the cost of binary size.
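As a rough sketch of what opting into select TensorFlow ops looks like with the
Python converter (the model path is hypothetical):

```python
import tensorflow as tf

# Hypothetical SavedModel directory.
converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/saved_model")
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # built-in TensorFlow Lite ops
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to select TensorFlow ops
]
tflite_model = converter.convert()
```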
The best way to understand how to build a TensorFlow model that can be used with
TensorFlow Lite is to carefully consider how operations are converted and
@@ -18,9 +18,9 @@ optimized, along with the limitations imposed by this process.
## Supported types
Most TensorFlow Lite operations target both floating-point (`float32`) and
quantized (`uint8`, `int8`) inference, but many ops do not yet support other
types like `tf.float16` and strings.
Apart from using different versions of the operations, the other difference
between floating-point and quantized models lies in the way they are converted.
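For example, a quantized conversion needs calibration data at conversion time,
roughly as in this sketch (the model path and input shape are hypothetical):

```python
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Hypothetical calibration samples matching the model's input shape.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
tflite_quantized_model = converter.convert()
```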
@@ -1141,8 +1141,8 @@ Outputs {
}
```
The following TensorFlow Lite operations are present, but not ready for custom
models:
* `CALL`
* `CONCAT_EMBEDDINGS`

View File

@@ -16,10 +16,11 @@ To quickly run TensorFlow Lite models with Python, you can install just the
TensorFlow Lite interpreter, instead of all TensorFlow packages.
This interpreter-only package is a fraction the size of the full TensorFlow
package and includes the bare minimum code required to run inferences with
TensorFlow Lite—it includes only the
[`tf.lite.Interpreter`](https://www.tensorflow.org/api_docs/python/tf/lite/Interpreter)
Python class. This small package is ideal when all you want to do is execute
`.tflite` models and avoid wasting disk space with the large TensorFlow library.
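As a minimal sketch of running inference with the `tf.lite.Interpreter` class
(the model path and random input are hypothetical, and a `float32` input model
is assumed; with the interpreter-only package the import path differs):

```python
import numpy as np
import tensorflow as tf

# Load a hypothetical .tflite model and allocate its tensors.
interpreter = tf.lite.Interpreter(model_path="/tmp/model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed random data shaped to the model's first input, then run inference.
input_data = np.random.rand(*input_details[0]["shape"]).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], input_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]["index"])
```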
Note: If you need access to other Python APIs, such as the [TensorFlow Lite
Converter](../convert/python_api.md), you must install the [full TensorFlow