Fix broken links and spelling errors
PiperOrigin-RevId: 236786449
parent 98c4548b4b
commit b71b8ecf64
@@ -64,19 +64,19 @@ tflite_convert \
 --saved_model_dir=/tmp/saved_model
 ```
 
-[SavedModel](https://www.tensorflow.org/guide/saved_model#using_savedmodel_with_estimators)
+[SavedModel](https://www.tensorflow.org/guide/saved_model.md#using_savedmodel_with_estimators)
 has fewer required flags than frozen graphs due to access to additional data
 contained within the SavedModel. The values for `--input_arrays` and
 `--output_arrays` are an aggregated, alphabetized list of the inputs and outputs
-in the [SignatureDefs](https://www.tensorflow.org/serving/signature_defs) within
+in the [SignatureDefs](../../serving/signature_defs.md) within
 the
-[MetaGraphDef](https://www.tensorflow.org/guide/saved_model#apis_to_build_and_load_a_savedmodel)
+[MetaGraphDef](https://www.tensorflow.org/saved_model.md#apis_to_build_and_load_a_savedmodel)
 specified by `--saved_model_tag_set`. As with the GraphDef, the value for
 `input_shapes` is automatically determined whenever possible.
 
 There is currently no support for MetaGraphDefs without a SignatureDef or for
 MetaGraphDefs that use the [`assets/`
-directory](https://www.tensorflow.org/guide/saved_model#structure_of_a_savedmodel_directory).
+directory](https://www.tensorflow.org/guide/saved_model.md#structure_of_a_savedmodel_directory).
 
 ### Convert a tf.Keras model <a name="keras"></a>
 
@@ -38,7 +38,7 @@ The following flags specify optional parameters when using SavedModels.
 Specifies a comma-separated set of tags identifying the MetaGraphDef within
 the SavedModel to analyze. All tags in the tag set must be specified.
 * `--saved_model_signature_key`. Type: string. Default:
-[DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants).
+`tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`.
 Specifies the key identifying the SignatureDef containing inputs and
 outputs.
 
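Note: the two flags above correspond to the `tag_set` and `signature_key` arguments of the Python converter. A minimal sketch against the TF 1.x `tf.lite` API (the SavedModel path is an illustrative placeholder):

```python
import tensorflow as tf  # TF 1.x

# Equivalent of --saved_model_tag_set and --saved_model_signature_key;
# "/tmp/saved_model" is a placeholder path.
converter = tf.lite.TFLiteConverter.from_saved_model(
    "/tmp/saved_model",
    tag_set=set([tf.saved_model.tag_constants.SERVING]),
    signature_key=tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY)
tflite_model = converter.convert()
open("/tmp/model.tflite", "wb").write(tflite_model)
```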
@@ -241,8 +241,8 @@ interpreter.allocate_tensors()
 In order to run the latest version of the TensorFlow Lite Converter Python API,
 either install the nightly build with
 [pip](https://www.tensorflow.org/install/pip) (recommended) or
-[Docker](https://www.tensorflow.org/install/docker), or
-[build the pip package from source](https://www.tensorflow.org/install/source).
+[Docker](https://www.tensorflow.org/install/docker.md), or
+[build the pip package from source](https://www.tensorflow.org/install/source.md).
 
 ### Converting models from TensorFlow 1.12 <a name="pre_tensorflow_1.12"></a>
 
@@ -3,7 +3,7 @@
 
 This document describes how to build TensorFlow Lite iOS library. If you just
 want to use it, the easiest way is using the TensorFlow Lite CocoaPod releases.
-See [TensorFlow Lite iOS Demo](demo_ios.md) for examples.
+See [TensorFlow Lite iOS Demo](ios.md) for examples.
 
 
 ## Building
@@ -11,17 +11,17 @@ detailed documentation for the topic or file a
 The TensorFlow Lite converter supports the following formats:
 
 * SavedModels:
-[TFLiteConverter.from_saved_model](convert/python_api.md#exporting_a_savedmodel_)
+[TFLiteConverter.from_saved_model](../convert/python_api.md#exporting_a_savedmodel_)
 * Frozen GraphDefs generated by
 [freeze_graph.py](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py):
-[TFLiteConverter.from_frozen_graph](convert/python_api#exporting_a_graphdef_from_file_)
+[TFLiteConverter.from_frozen_graph](../convert/python_api.md#exporting_a_graphdef_from_file_)
 * tf.keras HDF5 models:
-[TFLiteConverter.from_keras_model_file](convert/python_api#exporting_a_tfkeras_file_)
+[TFLiteConverter.from_keras_model_file](../convert/python_api.md#exporting_a_tfkeras_file_)
 * tf.Session:
-[TFLiteConverter.from_session](python_api#exporting_a_graphdef_from_tfsession_)
+[TFLiteConverter.from_session](../convert/python_api.md#exporting_a_graphdef_from_tfsession_)
 
 The recommended approach is to integrate the
-[Python converter](convert/python_api.md) into your model pipeline in order to
+[Python converter](../convert/python_api.md) into your model pipeline in order to
 detect compatibility issues early on.
 
 #### Why doesn't my model convert?
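The four entry points listed in the hunk above all share the same convert-and-write flow. A sketch of the frozen-GraphDef path against the TF 1.x API (the file and tensor names are illustrative placeholders):

```python
import tensorflow as tf  # TF 1.x

# Placeholder graph file and tensor names; a GraphDef frozen with
# freeze_graph.py is assumed.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="/tmp/frozen_graph.pb",
    input_arrays=["input"],
    output_arrays=["output"])
tflite_model = converter.convert()
open("/tmp/converted_model.tflite", "wb").write(tflite_model)
```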
@@ -69,7 +69,7 @@ bazel run //tensorflow/lite/tools:visualize model.tflite visualized_model.html
 #### Why are some operations not implemented in TensorFlow Lite?
 
 In order to keep TensorFlow Lite lightweight, only certain operations were used
-in the converter. The [Compatibility Guide](tf_ops_compatibility.md) provides a
+in the converter. The [Compatibility Guide](ops_compatibility.md) provides a
 list of operations currently supported by TensorFlow Lite.
 
 If you don’t see a specific operation (or an equivalent) listed, it's likely
@@ -78,34 +78,34 @@ GitHub [issue #21526](https://github.com/tensorflow/tensorflow/issues/21526).
 Leave a comment if your request hasn’t already been mentioned.
 
 In the meanwhile, you could try implementing a
-[custom operator](custom_operators.md) or using a different model that only
+[custom operator](ops_custom.md) or using a different model that only
 contains supported operators. If binary size is not a constraint, try using
-TensorFlow Lite with [select TensorFlow ops](using_select_tf_ops.md).
+TensorFlow Lite with [select TensorFlow ops](ops_select.md).
 
 #### How do I test that a TensorFlow Lite model behaves the same as the original TensorFlow model?
 
 The best way to test the behavior of a TensorFlow Lite model is to use our API
 with test data and compare the outputs to TensorFlow for the same inputs. Take a
-look at our [Python Interpreter example](convert/python_api.md) that generates
+look at our [Python Interpreter example](../convert/python_api.md) that generates
 random data to feed to the interpreter.
 
 ## Optimization
 
 #### How do I reduce the size of my converted TensorFlow Lite model?
 
-[Post-training quantization](performance/post_training_quantization.md) can be
+[Post-training quantization](../performance/post_training_quantization.md) can be
 used during conversion to TensorFlow Lite to reduce the size of the model.
 Post-training quantization quantizes weights to 8-bits of precision from
 floating-point and dequantizes them during runtime to perform floating point
 computations. However, note that this could have some accuracy implications.
 
 If retraining the model is an option, consider
-[Quantization-aware training](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/quantize/README.md).
+[Quantization-aware training](https://github.com/tensorflow/tensorflow/tree/r1.13/tensorflow/contrib/quantize).
 However, note that quantization-aware training is only available for a subset of
 convolutional neural network architectures.
 
 For a deeper understanding of different optimization methods, look at
-[Model optimization](performance/model_optimization.md).
+[Model optimization](../performance/model_optimization.md).
 
 #### How do I optimize TensorFlow Lite performance for my machine learning task?
 
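As a concrete illustration of the comparison workflow described in this hunk, here is a sketch that feeds one random input through a converted model via the TF 1.x Python interpreter. The model path is a placeholder, and the TensorFlow-side run is left as a comment:

```python
import numpy as np
import tensorflow as tf  # TF 1.x

interpreter = tf.lite.Interpreter(model_path="/tmp/converted_model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Random data shaped like the model's first input.
rand_input = np.array(
    np.random.random_sample(input_details[0]["shape"]),
    dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], rand_input)
interpreter.invoke()
tflite_output = interpreter.get_tensor(output_details[0]["index"])

# Run the original TensorFlow model on rand_input (e.g. via sess.run) and
# compare, allowing for small numeric drift:
# np.testing.assert_allclose(tf_output, tflite_output, rtol=1e-5, atol=1e-5)
```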
@@ -113,7 +113,7 @@ The high-level process to optimize TensorFlow Lite performance looks something
 like this:
 
 * *Make sure that you have the right model for the task.* For image
-classification, check out our [list of hosted models](models.md).
+classification, check out our [list of hosted models](hosted_models.md).
 * *Tweak the number of threads.* Many TensorFlow Lite operators support
 multi-threaded kernels. You can use `SetNumThreads()` in the
 [C++ API](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/interpreter.h#L345)
@@ -124,12 +124,12 @@ like this:
 Networks API, call
 [`UseNNAPI`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/interpreter.h#L343)
 on the interpreter. Or take a look at our
-[GPU delegate tutorial](performance/gpu.md).
+[GPU delegate tutorial](../performance/gpu.md).
 * *(Advanced) Profile Model.* The Tensorflow Lite
 [benchmarking tool](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/tools/benchmark)
 has a built-in profiler that can show per-operator statistics. If you know
 how you can optimize an operator’s performance for your specific platform,
-you can implement a [custom operator](custom_operators.md).
+you can implement a [custom operator](ops_custom.md).
 
 For a more in-depth discussion on how to optimize performance, take a look at
-[Best Practices](performance/best_practices.md).
+[Best Practices](../performance/best_practices.md).
@@ -35,7 +35,7 @@ by suggesting contextually relevant messages. The model is built specifically fo
 memory constrained devices, such as watches and phones, and has been successfully
 used in Smart Replies on Android Wear. Currently, this model is Android-specific.
 
-These pre-trained models are [available for download](models.md).
+These pre-trained models are [available for download](hosted_models.md).
 
 ### Re-train Inception-V3 or MobileNet for a custom data set
 
@@ -63,24 +63,24 @@ the framework. See
 to create file for the custom model.
 
 TensorFlow Lite currently supports a subset of TensorFlow operators. Refer to
-the [TensorFlow Lite & TensorFlow Compatibility Guide](tf_ops_compatibility.md)
+the [TensorFlow Lite & TensorFlow Compatibility Guide](ops_compatibility.md)
 for supported operators and their usage. This set of operators will continue to
 grow in future Tensorflow Lite releases.
 
 ## 2. Convert the model format
 
-The [TensorFlow Lite Converter](convert/index.md) accepts the following file
+The [TensorFlow Lite Converter](../convert.md) accepts the following file
 formats:
 
 * `SavedModel` — A `GraphDef` and checkpoint with a signature that labels
 input and output arguments to a model. See the documentation for converting
-SavedModels using [Python](convert/python_api.md#basic_savedmodel) or using
-the [command line](convert/cmdline_examples.md#savedmodel).
+SavedModels using [Python](../convert/python_api.md#basic_savedmodel) or using
+the [command line](../convert/cmdline_examples.md#savedmodel).
 * `tf.keras` - A HDF5 file containing a model with weights and input and
 output arguments generated by `tf.Keras`. See the documentation for
 converting HDF5 models using
-[Python](convert/python_api.md#basic_keras_file) or using the
-[command line](convert/cmdline_examples.md#keras).
+[Python](../convert/python_api.md#basic_keras_file) or using the
+[command line](../convert/cmdline_examples.md#keras).
 * `frozen tf.GraphDef` — A subclass of `tf.GraphDef` that does not contain
 variables. A `GraphDef` can be converted to a `frozen GraphDef` by taking a
 checkpoint and a `GraphDef`, and converting each variable into a constant
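For the `tf.keras` HDF5 format in the list above, conversion is essentially a one-liner in the TF 1.x Python API; a sketch with an illustrative file name:

```python
import tensorflow as tf  # TF 1.x

# "/tmp/keras_model.h5" is a placeholder for an HDF5 file saved with
# tf.keras.models.save_model(...).
converter = tf.lite.TFLiteConverter.from_keras_model_file("/tmp/keras_model.h5")
tflite_model = converter.convert()
open("/tmp/keras_model.tflite", "wb").write(tflite_model)
```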
@@ -154,9 +154,9 @@ the arguments for specifying the output nodes for inference in the
 
 ### Full converter reference
 
-The [TensorFlow Lite Converter](convert/index.md) can be
-[Python](convert/python_api.md) or from the
-[command line](convert/cmdline_examples.md). This allows you to integrate the
+The [TensorFlow Lite Converter](../convert.md) can be
+[Python](../convert/python_api.md) or from the
+[command line](../convert/cmdline_examples.md). This allows you to integrate the
 conversion step into the model design workflow, ensuring the model is easy to
 convert to a mobile inference graph.
 
@@ -195,15 +195,15 @@ The open source Android demo app uses the JNI interface and is available
 [on GitHub](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/java/demo/app).
 You can also download a
 [prebuilt APK](http://download.tensorflow.org/deps/tflite/TfLiteCameraDemo.apk).
-See the <a href="./demo_android.md">Android demo</a> guide for details.
+See the <a href="./android.md">Android demo</a> guide for details.
 
-The <a href="./android_build.md">Android mobile</a> guide has instructions for
+The <a href="./android.md">Android mobile</a> guide has instructions for
 installing TensorFlow on Android and setting up `bazel` and Android Studio.
 
 ### iOS
 
 To integrate a TensorFlow model in an iOS app, see the
-[TensorFlow Lite for iOS](ios.md) guide and <a href="./demo_ios.md">iOS demo</a>
+[TensorFlow Lite for iOS](ios.md) guide and <a href="./ios.md">iOS demo</a>
 guide.
 
 #### Core ML support
@@ -218,9 +218,9 @@ devices. To use the converter, refer to the
 ### ARM32 and ARM64 Linux
 
 Compile Tensorflow Lite for a Raspberry Pi by following the
-[RPi build instructions](rpi.md) Compile Tensorflow Lite for a generic aarch64
+[RPi build instructions](build_rpi.md) Compile Tensorflow Lite for a generic aarch64
 board such as Odroid C2, Pine64, NanoPi, and others by following the
-[ARM64 Linux build instructions](linux_aarch64.md) This compiles a static
+[ARM64 Linux build instructions](build_arm64.md) This compiles a static
 library file (`.a`) used to build your app. There are plans for Python bindings
 and a demo app.
 
@@ -253,7 +253,7 @@ tflite_quantized_model=converter.convert()
 open(“quantized_model.tflite”, “wb”).write(tflite_quantized_model)
 ```
 
-Read the full documentation [here](performance/post_training_quantization.md)
+Read the full documentation [here](../performance/post_training_quantization.md)
 and see a tutorial
 [here](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/tutorials/post_training_quant.ipynb).
 
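Note that the context lines above carry typographic (curly) quotes, which are not valid Python. A runnable version of the post-training quantization snippet, assuming a SavedModel at an illustrative path:

```python
import tensorflow as tf  # TF 1.x

# "/tmp/saved_model" is a placeholder; post_training_quantize is the TF 1.x
# flag that shrinks weights to 8 bits at conversion time.
converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/saved_model")
converter.post_training_quantize = True
tflite_quantized_model = converter.convert()
open("quantized_model.tflite", "wb").write(tflite_quantized_model)
```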
@@ -268,4 +268,4 @@ Another benefit with GPU inference is its power efficiency. GPUs carry out the
 computations in a very efficient and optimized manner, so that they consume less
 power and generate less heat than when the same task is run on CPUs.
 
-Read the tutorial [here](performance/gpu) and full documentation [here](performance/gpu_advanced).
+Read the tutorial [here](../performance/gpu.md) and full documentation [here](../performance/gpu_advanced.md).
@@ -3,7 +3,7 @@
 The following is an incomplete list of pre-trained models optimized to work with
 TensorFlow Lite.
 
-To get started choosing a model, visit <a href="./">Models</a>.
+To get started choosing a model, visit <a href="../models">Models</a>.
 
 Note: The best model for a given application depends on your requirements. For
 example, some applications might benefit from higher accuracy, while others
@@ -13,7 +13,7 @@ models to find the optimal balance between size, performance, and accuracy.
 ## Image classification
 
 For more information about image classification, see
-<a href="image_classification/overview.md">Image classification</a>.
+<a href="../image_classification/overview.md">Image classification</a>.
 
 ### Quantized models
 
@@ -50,7 +50,7 @@ Graph.
 
 Note: Performance numbers were benchmarked on Pixel-2 using single thread large
 core. Accuracy numbers were computed using the
-[TFLite accuracy tool](../tools/accuracy/ilsvrc.md).
+[TFLite accuracy tool](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/tools/accuracy/ilsvrc).
 
 ### Floating point models
 
@@ -108,7 +108,7 @@ BIG core.
 ## Object detection
 
 For more information about object detection, see
-<a href="object_detection/overview.md">Object detection</a>.
+<a href="../models/object_detection/overview.md">Object detection</a>.
 
 The object detection model we currently host is
 **coco_ssd_mobilenet_v1_1.0_quant_2018_06_29**.
@@ -119,7 +119,7 @@ model and labels</a>
 ## Pose estimation
 
 For more information about pose estimation, see
-<a href="pose_estimation/overview.md">Pose estimation</a>.
+<a href="../models/pose_estimation/overview.md">Pose estimation</a>.
 
 The pose estimation model we currently host is
 **multi_person_mobilenet_v1_075_float**.
@@ -130,7 +130,7 @@ model</a>
 ## Image segmentation
 
 For more information about image segmentation, see
-<a href="segmentation/overview.md">Segmentation</a>.
+<a href="../models/segmentation/overview.md">Segmentation</a>.
 
 The image segmentation model we currently host is **deeplabv3_257_mv_gpu**.
 
@@ -140,7 +140,7 @@ model</a>
 ## Smart reply
 
 For more information about smart reply, see
-<a href="smart_reply/overview.md">Smart reply</a>.
+<a href="../models/smart_reply/overview.md">Smart reply</a>.
 
 The smart reply model we currently host is **smartreply_1.0_2017_11_01**.
 
@@ -118,7 +118,7 @@ TensorFlow Lite provides:
 to all first-party and third-party apps.
 
 Also see the complete list of
-[TensorFlow Lite's supported models](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/models.md),
+[TensorFlow Lite's supported models](hosted_models.md),
 including the model sizes, performance numbers, and downloadable model files.
 
 - Quantized versions of the MobileNet model, which runs faster than the
@@ -7,7 +7,7 @@
 TensorFlow Lite inference is the process of executing a TensorFlow Lite
 model on-device and extracting meaningful results from it. Inference is the
 final step in using the model on-device in the
-[architecture](./index.md#tensorflow_lite_architecture).
+[architecture](index.md#tensorflow_lite_architecture).
 
 Inference for TensorFlow Lite models is run through an interpreter. This
 document outlines the various APIs for the interpreter along with the
@@ -51,14 +51,14 @@ On Android, TensorFlow Lite inference can be performed using either Java or C++
 APIs. The Java APIs provide convenience and can be used directly within your
 Android Activity classes. The C++ APIs on the other hand may offer more
 flexibility and speed, but may require writing JNI wrappers to move data between
-Java and C++ layers. You can find an example [here](./android.md).
+Java and C++ layers. You can find an example [here](android.md).
 
 #### iOS
 TensorFlow Lite provides Swift/Objective C++ APIs for inference on iOS. An
-example can be found [here](./ios.md).
+example can be found [here](ios.md).
 
 #### Linux
-On Linux platforms such as [Raspberry Pi](./build_rpi.md), TensorFlow Lite C++
+On Linux platforms such as [Raspberry Pi](build_rpi.md), TensorFlow Lite C++
 and Python APIs can be used to run inference.
 
 
@@ -72,7 +72,7 @@ should be no surprise that the APIs try to avoid unnecessary copies at the
 expense of convenience. Similarly, consistency with TensorFlow APIs was not an
 explicit goal and some variance is to be expected.
 
-There is also a [Python API for TensorFlow Lite](./../convert/python_api.md).
+There is also a [Python API for TensorFlow Lite](../convert/python_api.md).
 
 ### Loading a Model
 
@@ -205,7 +205,7 @@ where each entry in `inputs` corresponds to an input tensor and
 `map_of_indices_to_outputs` maps indices of output tensors to the corresponding
 output data. In both cases the tensor indices should correspond to the values
 given to the
-[TensorFlow Lite Optimized Converter](./../convert/cmdline_examples.md) when the
+[TensorFlow Lite Optimized Converter](../convert/cmdline_examples.md) when the
 model was created. Be aware that the order of tensors in `input` must match the
 order given to the `TensorFlow Lite Optimized Converter`.
 
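The index-based contract described in this hunk (tensors addressed by the indices assigned at conversion) has a direct Python analogue; a sketch with a placeholder model path:

```python
import numpy as np
import tensorflow as tf  # TF 1.x

interpreter = tf.lite.Interpreter(model_path="/tmp/model.tflite")
interpreter.allocate_tensors()

# Feed each input tensor by its converter-assigned index.
for detail in interpreter.get_input_details():
    interpreter.set_tensor(detail["index"],
                           np.zeros(detail["shape"], dtype=detail["dtype"]))
interpreter.invoke()

# Collect outputs keyed by index, mirroring map_of_indices_to_outputs.
outputs = {d["index"]: interpreter.get_tensor(d["index"])
           for d in interpreter.get_output_details()}
```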
@@ -79,16 +79,16 @@ Under `Project navigator -> tflite_camera_example -> Targets ->
 tflite_camera_example -> General` change the bundle identifier by pre-pending
 your name:
 
-
+
 
 Plug in your iOS device. Note the app must be executed with a real device with
 camera. Select the iOS device from the drop-down menu.
 
-
+
 
 Click the "Run" button to build and run the app
 
-
+
 
 Note that as mentioned earlier, you must already have a device set up and linked
 to your Apple Developer account in order to deploy the app on a device.
@@ -9,7 +9,7 @@ Since the set of TensorFlow Lite operations is smaller than TensorFlow's, not
 every model is convertible. Even for supported operations, very specific usage
 patterns are sometimes expected, for performance reasons. We expect to expand
 the set of supported operations in future TensorFlow Lite releases. Additional
-ops can be included by [using select TensorFlow ops](using_select_tf_ops.md), at
+ops can be included by [using select TensorFlow ops](ops_select.md), at
 the cost of binary size.
 
 The best way to understand how to build a TensorFlow model that can be used with
@@ -27,7 +27,7 @@ between floating-point and quantized models lies in the way they are converted.
 Quantized conversion requires dynamic range information for tensors. This
 requires "fake-quantization" during model training, getting range information
 via a calibration data set, or doing "on-the-fly" range estimation. See
-[quantization](performance/model_optimization.md).
+[quantization](../performance/model_optimization.md).
 
 ## Data Format and Broadcasting
 
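For context, in the TF 1.x converter the range information discussed above is supplied through attributes such as `quantized_input_stats` and `default_ranges_stats`. A sketch with illustrative tensor names and statistics; verify the exact attribute semantics against the converter reference:

```python
import tensorflow as tf  # TF 1.x

# Placeholder graph and tensor names; a graph trained with fake-quantization
# nodes is assumed.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    "/tmp/quantized_graph.pb", ["input"], ["output"])
converter.inference_type = tf.lite.constants.QUANTIZED_UINT8
# (mean, std_dev) mapping real-valued inputs into the uint8 domain.
converter.quantized_input_stats = {"input": (127.5, 127.5)}
# Fallback (min, max) range for tensors that carry no recorded ranges.
converter.default_ranges_stats = (0, 6)
tflite_model = converter.convert()
```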
@@ -15,7 +15,7 @@ please send feedback about models that work and issues you are facing to
 tflite@tensorflow.org.
 
 TensorFlow Lite will continue to have
-[TensorFlow Lite builtin ops](tf_ops_compatibility.md) optimized for mobile and
+[TensorFlow Lite builtin ops](ops_compatibility.md) optimized for mobile and
 embedded devices. However, TensorFlow Lite models can now use a subset of
 TensorFlow ops when TFLite builtin ops are not sufficient.
 
@@ -34,7 +34,7 @@ choice. It also discusses some [known limitations](#known-limitations), the
 
 To convert a TensorFlow model to a TensorFlow Lite model with TensorFlow ops,
 use the `target_ops` argument in the
-[TensorFlow Lite converter](https://www.tensorflow.org/lite/convert/). The
+[TensorFlow Lite converter](../convert/index.md). The
 following values are valid options for `target_ops`:
 
 * `TFLITE_BUILTINS` - Converts models using TensorFlow Lite builtin ops.
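A sketch of setting `target_ops` in the TF 1.x Python API (the SavedModel path is an illustrative placeholder):

```python
import tensorflow as tf  # TF 1.x

# Allow select TensorFlow ops alongside the TFLite builtins; this pulls the
# required TensorFlow kernels into the runtime at a binary-size cost.
converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/saved_model")
converter.target_ops = set([tf.lite.OpsSet.TFLITE_BUILTINS,
                            tf.lite.OpsSet.SELECT_TF_OPS])
tflite_model = converter.convert()
```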
@@ -64,7 +64,7 @@ open("converted_model.tflite", "wb").write(tflite_model)
 ```
 
 The following example shows how to use `target_ops` in the
-[`tflite_convert`](https://www.tensorflow.org/lite/convert/cmdline_examples)
+[`tflite_convert`](../convert/cmdline_examples.md)
 command line tool.
 
 ```
@@ -97,7 +97,7 @@ includes the necessary library of TensorFlow ops.
 ### Android AAR
 
 A new Android AAR target with select TensorFlow ops has been added for
-convenience. Assuming a <a href="./demo_android.md">working TensorFlow Lite
+convenience. Assuming a <a href="android.md">working TensorFlow Lite
 build environment</a>, build the Android AAR with select TensorFlow ops as
 follows:
 
@@ -15,14 +15,14 @@ If you understand image classification, you’re new to TensorFlow Lite, and
 you’re working with Android or iOS, we recommend following the corresponding
 tutorial that will walk you through our sample code.
 
-<a class="button button-primary" href="android">Android</a>
-<a class="button button-primary" href="ios">iOS</a>
+<a class="button button-primary" href="android.md">Android</a>
+<a class="button button-primary" href="ios.md">iOS</a>
 
 We also provide <a href="example_applications">example applications</a> you can
 use to get started.
 
 If you are using a platform other than Android or iOS, or you are already
-familiar with the <a href="../../apis">TensorFlow Lite APIs</a>, you can
+familiar with the <a href="https://www.tensorflow.org/api_docs/python/tf/lite">TensorFlow Lite APIs</a>, you can
 download our starter image classification model and the accompanying labels.
 
 <a class="button button-primary" href="https://storage.googleapis.com/download.tensorflow.org/models/tflite/mobilenet_v1_1.0_224_quant_and_labels.zip">Download
@@ -34,7 +34,7 @@ performance, accuracy, and model size. For guidance, see
 <a href="#choose_a_different_model">Choose a different model</a>.
 
 If you are using a platform other than Android or iOS, or you are already
-familiar with the <a href="../../apis.md">TensorFlow Lite APIs</a>, you can
+familiar with the <a href="https://www.tensorflow.org/api_docs/python/tf/lite">TensorFlow Lite APIs</a>, you can
 download our starter image classification model and the accompanying labels.
 
 <a class="button button-primary" href="https://storage.googleapis.com/download.tensorflow.org/models/tflite/mobilenet_v1_1.0_224_quant_and_labels.zip">Download
@@ -46,7 +46,7 @@ We have example applications for image classification for both Android and iOS.
 
 <a class="button button-primary" href="https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification/android">Android
 example</a>
-<a class="button button-primary" href="https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification/ios">iOS
+<a class="button button-primary" href="https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification/ios.md">iOS
 example</a>
 
 The following screenshot shows the Android image classification example:
@@ -204,8 +204,8 @@ If you want to train a model to recognize new classes, see
 For the following use cases, you should use a different type of model:
 
 <ul>
-<li>Predicting the type and position of one or more objects within an image (see <a href="object_detection">object detection</a>)</li>
-<li>Predicting the composition of an image, for example subject versus background (see <a href="segmentation">segmentation</a>)</li>
+<li>Predicting the type and position of one or more objects within an image (see <a href="../object_detection/overview.md">object detection</a>)</li>
+<li>Predicting the composition of an image, for example subject versus background (see <a href="../segmentation/overview.md">segmentation</a>)</li>
 </ul>
 
 Once you have the starter model running on your target device, you can
@@ -239,7 +239,7 @@ We measure accuracy in terms of how often the model correctly classifies an
 image. For example, a model with a stated accuracy of 60% can be expected to
 classify an image correctly an average of 60% of the time.
 
-Our <a href="../hosted.md">List of hosted models</a> provides Top-1 and Top-5
+Our <a href="../../guide/hosted_models.md">list of hosted models</a> provides Top-1 and Top-5
 accuracy statistics. Top-1 refers to how often the correct label appears as the
 label with the highest probability in the model’s output. Top-5 refers to how
 often the correct label appears in the top 5 highest probabilities in the
@@ -258,14 +258,14 @@ Our quantized Mobilenet models’ size ranges from 0.5 to 3.4 Mb.
 ### Architecture
 
 There are several different architectures of models available on
-<a href="../hosted.md">List of hosted models</a>, indicated by the model’s name.
+<a href="../../guide/hosted_models.md">List of hosted models</a>, indicated by the model’s name.
 For example, you can choose between Mobilenet, Inception, and others.
 
 The architecture of a model impacts its performance, accuracy, and size. All of
 our hosted models are trained on the same data, meaning you can use the provided
 statistics to compare them and choose which is optimal for your application.
 
-Note: The image classification models we provide accept varying sizes of input. For some models, this is indicated in the filename. For example, the Mobilenet_V1_1.0_224 model accepts an input of 224x224 pixels. <br /><br />All of the models require three color channels per pixel (red, green, and blue). Quantized models require 1 byte per channel, and float models require 4 bytes per channel.<br /><br />Our <a href="android.md">Android</a> and <a href="ios">iOS</a> code samples demonstrate how to process full-sized camera images into the required format for each model.
+Note: The image classification models we provide accept varying sizes of input. For some models, this is indicated in the filename. For example, the Mobilenet_V1_1.0_224 model accepts an input of 224x224 pixels. <br /><br />All of the models require three color channels per pixel (red, green, and blue). Quantized models require 1 byte per channel, and float models require 4 bytes per channel.<br /><br />Our <a href="android.md">Android</a> and <a href="ios.md">iOS</a> code samples demonstrate how to process full-sized camera images into the required format for each model.
 
 ## Customize model
 
@@ -17,7 +17,7 @@ example</a>
 example</a>
 
 If you are using a platform other than Android or iOS, or you are already
-familiar with the <a href="../apis.md">TensorFlow Lite APIs</a>, you can
+familiar with the <a href="https://www.tensorflow.org/api_docs/python/tf/lite">TensorFlow Lite APIs</a>, you can
 download our starter object detection model and the accompanying labels.
 
 <a class="button button-primary" href="http://storage.googleapis.com/download.tensorflow.org/models/tflite/coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip">Download
@@ -3,7 +3,7 @@
 Post-training quantization is a general technique to reduce model size while also
 providing up to 3x lower latency with little degradation in model accuracy. Post-training
 quantization quantizes weights from floating point to 8-bits of precision. This technique
-is enabled as an option in the [TensorFlow Lite converter](../convert):
+is enabled as an option in the [TensorFlow Lite converter](../convert/index.md):
 
 ```
 import tensorflow as tf
@@ -31,7 +31,7 @@ Hybrid ops are available for the most compute-intensive operators in a network:
 
 Since weights are quantized post training, there could be an accuracy loss, particularly for
 smaller networks. Pre-trained fully quantized models are provided for specific networks in
-the [TensorFlow Lite model repository](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/models.md#image-classification-quantized-models){:.external}. It is important to check the accuracy of the quantized model to verify that any degradation
+the [TensorFlow Lite model repository](../models/). It is important to check the accuracy of the quantized model to verify that any degradation
 in accuracy is within acceptable limits. There is a tool to evaluate [TensorFlow Lite model accuracy](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/tools/accuracy/README.md){:.external}.
 
 If the accuracy drop is too high, consider using [quantization aware training](https://github.com/tensorflow/tensorflow/tree/r1.13/tensorflow/contrib/quantize){:.external}.