diff --git a/tensorflow/lite/g3doc/convert/cmdline_examples.md b/tensorflow/lite/g3doc/convert/cmdline_examples.md index 92bbc609f38..067f09a5576 100644 --- a/tensorflow/lite/g3doc/convert/cmdline_examples.md +++ b/tensorflow/lite/g3doc/convert/cmdline_examples.md @@ -64,19 +64,19 @@ tflite_convert \ --saved_model_dir=/tmp/saved_model ``` -[SavedModel](https://www.tensorflow.org/guide/saved_model#using_savedmodel_with_estimators) +[SavedModel](https://www.tensorflow.org/guide/saved_model.md#using_savedmodel_with_estimators) has fewer required flags than frozen graphs due to access to additional data contained within the SavedModel. The values for `--input_arrays` and `--output_arrays` are an aggregated, alphabetized list of the inputs and outputs -in the [SignatureDefs](https://www.tensorflow.org/serving/signature_defs) within +in the [SignatureDefs](../../serving/signature_defs.md) within the -[MetaGraphDef](https://www.tensorflow.org/guide/saved_model#apis_to_build_and_load_a_savedmodel) +[MetaGraphDef](https://www.tensorflow.org/guide/saved_model.md#apis_to_build_and_load_a_savedmodel) specified by `--saved_model_tag_set`. As with the GraphDef, the value for `input_shapes` is automatically determined whenever possible. There is currently no support for MetaGraphDefs without a SignatureDef or for MetaGraphDefs that use the [`assets/` -directory](https://www.tensorflow.org/guide/saved_model#structure_of_a_savedmodel_directory). +directory](https://www.tensorflow.org/guide/saved_model.md#structure_of_a_savedmodel_directory). ### Convert a tf.Keras model diff --git a/tensorflow/lite/g3doc/convert/cmdline_reference.md b/tensorflow/lite/g3doc/convert/cmdline_reference.md index 3c178de3962..609ab3fdede 100644 --- a/tensorflow/lite/g3doc/convert/cmdline_reference.md +++ b/tensorflow/lite/g3doc/convert/cmdline_reference.md @@ -38,7 +38,7 @@ The following flags specify optional parameters when using SavedModels. Specifies a comma-separated set of tags identifying the MetaGraphDef within the SavedModel to analyze. All tags in the tag set must be specified. * `--saved_model_signature_key`. Type: string. Default: - [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants). + `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`. Specifies the key identifying the SignatureDef containing inputs and outputs. diff --git a/tensorflow/lite/g3doc/convert/python_api.md b/tensorflow/lite/g3doc/convert/python_api.md index 4d2c7361c9f..06c7389053a 100644 --- a/tensorflow/lite/g3doc/convert/python_api.md +++ b/tensorflow/lite/g3doc/convert/python_api.md @@ -241,8 +241,8 @@ interpreter.allocate_tensors() In order to run the latest version of the TensorFlow Lite Converter Python API, either install the nightly build with [pip](https://www.tensorflow.org/install/pip) (recommended) or -[Docker](https://www.tensorflow.org/install/docker), or -[build the pip package from source](https://www.tensorflow.org/install/source). +[Docker](https://www.tensorflow.org/install/docker.md), or +[build the pip package from source](https://www.tensorflow.org/install/source.md). ### Converting models from TensorFlow 1.12 diff --git a/tensorflow/lite/g3doc/guide/build_ios.md b/tensorflow/lite/g3doc/guide/build_ios.md index c195b6abf4f..40f2ac2fdfd 100644 --- a/tensorflow/lite/g3doc/guide/build_ios.md +++ b/tensorflow/lite/g3doc/guide/build_ios.md @@ -3,7 +3,7 @@ This document describes how to build TensorFlow Lite iOS library.
If you just want to use it, the easiest way is using the TensorFlow Lite CocoaPod releases. -See [TensorFlow Lite iOS Demo](demo_ios.md) for examples. +See [TensorFlow Lite iOS Demo](ios.md) for examples. ## Building diff --git a/tensorflow/lite/g3doc/guide/faq.md b/tensorflow/lite/g3doc/guide/faq.md index 6cd1b7be9f9..a0e4d09ef1e 100644 --- a/tensorflow/lite/g3doc/guide/faq.md +++ b/tensorflow/lite/g3doc/guide/faq.md @@ -11,17 +11,17 @@ detailed documentation for the topic or file a The TensorFlow Lite converter supports the following formats: * SavedModels: - [TFLiteConverter.from_saved_model](convert/python_api.md#exporting_a_savedmodel_) + [TFLiteConverter.from_saved_model](../convert/python_api.md#exporting_a_savedmodel_) * Frozen GraphDefs generated by [freeze_graph.py](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py): - [TFLiteConverter.from_frozen_graph](convert/python_api#exporting_a_graphdef_from_file_) + [TFLiteConverter.from_frozen_graph](../convert/python_api.md#exporting_a_graphdef_from_file_) * tf.keras HDF5 models: - [TFLiteConverter.from_keras_model_file](convert/python_api#exporting_a_tfkeras_file_) + [TFLiteConverter.from_keras_model_file](../convert/python_api.md#exporting_a_tfkeras_file_) * tf.Session: - [TFLiteConverter.from_session](python_api#exporting_a_graphdef_from_tfsession_) + [TFLiteConverter.from_session](../convert/python_api.md#exporting_a_graphdef_from_tfsession_) The recommended approach is to integrate the -[Python converter](convert/python_api.md) into your model pipeline in order to +[Python converter](../convert/python_api.md) into your model pipeline in order to detect compatibility issues early on. #### Why doesn't my model convert? @@ -69,7 +69,7 @@ bazel run //tensorflow/lite/tools:visualize model.tflite visualized_model.html #### Why are some operations not implemented in TensorFlow Lite? In order to keep TensorFlow Lite lightweight, only certain operations were used -in the converter. The [Compatibility Guide](tf_ops_compatibility.md) provides a +in the converter. The [Compatibility Guide](ops_compatibility.md) provides a list of operations currently supported by TensorFlow Lite. If you don’t see a specific operation (or an equivalent) listed, it's likely @@ -78,34 +78,34 @@ GitHub [issue #21526](https://github.com/tensorflow/tensorflow/issues/21526). Leave a comment if your request hasn’t already been mentioned. In the meanwhile, you could try implementing a -[custom operator](custom_operators.md) or using a different model that only +[custom operator](ops_custom.md) or using a different model that only contains supported operators. If binary size is not a constraint, try using -TensorFlow Lite with [select TensorFlow ops](using_select_tf_ops.md). +TensorFlow Lite with [select TensorFlow ops](ops_select.md). #### How do I test that a TensorFlow Lite model behaves the same as the original TensorFlow model? The best way to test the behavior of a TensorFlow Lite model is to use our API with test data and compare the outputs to TensorFlow for the same inputs. Take a -look at our [Python Interpreter example](convert/python_api.md) that generates +look at our [Python Interpreter example](../convert/python_api.md) that generates random data to feed to the interpreter. ## Optimization #### How do I reduce the size of my converted TensorFlow Lite model? 
-[Post-training quantization](performance/post_training_quantization.md) can be +[Post-training quantization](../performance/post_training_quantization.md) can be used during conversion to TensorFlow Lite to reduce the size of the model. Post-training quantization quantizes weights to 8-bits of precision from floating-point and dequantizes them during runtime to perform floating point computations. However, note that this could have some accuracy implications. If retraining the model is an option, consider -[Quantization-aware training](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/quantize/README.md). +[Quantization-aware training](https://github.com/tensorflow/tensorflow/tree/r1.13/tensorflow/contrib/quantize). However, note that quantization-aware training is only available for a subset of convolutional neural network architectures. For a deeper understanding of different optimization methods, look at -[Model optimization](performance/model_optimization.md). +[Model optimization](../performance/model_optimization.md). #### How do I optimize TensorFlow Lite performance for my machine learning task? @@ -113,7 +113,7 @@ The high-level process to optimize TensorFlow Lite performance looks something like this: * *Make sure that you have the right model for the task.* For image - classification, check out our [list of hosted models](models.md). + classification, check out our [list of hosted models](hosted_models.md). * *Tweak the number of threads.* Many TensorFlow Lite operators support multi-threaded kernels. You can use `SetNumThreads()` in the [C++ API](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/interpreter.h#L345) @@ -124,12 +124,12 @@ like this: Networks API, call [`UseNNAPI`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/interpreter.h#L343) on the interpreter. Or take a look at our - [GPU delegate tutorial](performance/gpu.md). + [GPU delegate tutorial](../performance/gpu.md). * *(Advanced) Profile Model.* The Tensorflow Lite [benchmarking tool](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/tools/benchmark) has a built-in profiler that can show per-operator statistics. If you know how you can optimize an operator’s performance for your specific platform, - you can implement a [custom operator](custom_operators.md). + you can implement a [custom operator](ops_custom.md). For a more in-depth discussion on how to optimize performance, take a look at -[Best Practices](performance/best_practices.md). +[Best Practices](../performance/best_practices.md). diff --git a/tensorflow/lite/g3doc/guide/get_started.md b/tensorflow/lite/g3doc/guide/get_started.md index ec5640c0caf..eb1c4de1abd 100644 --- a/tensorflow/lite/g3doc/guide/get_started.md +++ b/tensorflow/lite/g3doc/guide/get_started.md @@ -35,7 +35,7 @@ by suggesting contextually relevant messages. The model is built specifically fo memory constrained devices, such as watches and phones, and has been successfully used in Smart Replies on Android Wear. Currently, this model is Android-specific. -These pre-trained models are [available for download](models.md). +These pre-trained models are [available for download](hosted_models.md). ### Re-train Inception-V3 or MobileNet for a custom data set @@ -63,24 +63,24 @@ the framework. See to create file for the custom model. TensorFlow Lite currently supports a subset of TensorFlow operators. 
Refer to -the [TensorFlow Lite & TensorFlow Compatibility Guide](tf_ops_compatibility.md) +the [TensorFlow Lite & TensorFlow Compatibility Guide](ops_compatibility.md) for supported operators and their usage. This set of operators will continue to grow in future Tensorflow Lite releases. ## 2. Convert the model format -The [TensorFlow Lite Converter](convert/index.md) accepts the following file +The [TensorFlow Lite Converter](../convert.md) accepts the following file formats: * `SavedModel` — A `GraphDef` and checkpoint with a signature that labels input and output arguments to a model. See the documentation for converting - SavedModels using [Python](convert/python_api.md#basic_savedmodel) or using - the [command line](convert/cmdline_examples.md#savedmodel). + SavedModels using [Python](../convert/python_api.md#basic_savedmodel) or using + the [command line](../convert/cmdline_examples.md#savedmodel). * `tf.keras` - A HDF5 file containing a model with weights and input and output arguments generated by `tf.Keras`. See the documentation for converting HDF5 models using - [Python](convert/python_api.md#basic_keras_file) or using the - [command line](convert/cmdline_examples.md#keras). + [Python](../convert/python_api.md#basic_keras_file) or using the + [command line](../convert/cmdline_examples.md#keras). * `frozen tf.GraphDef` — A subclass of `tf.GraphDef` that does not contain variables. A `GraphDef` can be converted to a `frozen GraphDef` by taking a checkpoint and a `GraphDef`, and converting each variable into a constant @@ -154,9 +154,9 @@ the arguments for specifying the output nodes for inference in the ### Full converter reference -The [TensorFlow Lite Converter](convert/index.md) can be -[Python](convert/python_api.md) or from the -[command line](convert/cmdline_examples.md). This allows you to integrate the +The [TensorFlow Lite Converter](../convert.md) can be used from +[Python](../convert/python_api.md) or from the +[command line](../convert/cmdline_examples.md). This allows you to integrate the conversion step into the model design workflow, ensuring the model is easy to convert to a mobile inference graph. @@ -195,15 +195,15 @@ The open source Android demo app uses the JNI interface and is available [on GitHub](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/java/demo/app). You can also download a [prebuilt APK](http://download.tensorflow.org/deps/tflite/TfLiteCameraDemo.apk). -See the Android demo guide for details. +See the Android demo guide for details. -The Android mobile guide has instructions for +The Android mobile guide has instructions for installing TensorFlow on Android and setting up `bazel` and Android Studio. ### iOS To integrate a TensorFlow model in an iOS app, see the -[TensorFlow Lite for iOS](ios.md) guide and iOS demo +[TensorFlow Lite for iOS](ios.md) guide and iOS demo guide. #### Core ML support @@ -218,9 +218,9 @@ devices. To use the converter, refer to the ### ARM32 and ARM64 Linux Compile Tensorflow Lite for a Raspberry Pi by following the -[RPi build instructions](rpi.md) Compile Tensorflow Lite for a generic aarch64 +[RPi build instructions](build_rpi.md). Compile TensorFlow Lite for a generic aarch64 board such as Odroid C2, Pine64, NanoPi, and others by following the -[ARM64 Linux build instructions](linux_aarch64.md) This compiles a static +[ARM64 Linux build instructions](build_arm64.md). This compiles a static library file (`.a`) used to build your app. There are plans for Python bindings and a demo app.
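A minimal sketch of the Python conversion path these get_started.md hunks refer to — assuming TensorFlow 1.12 or later, where `tf.lite.TFLiteConverter` is available, and using placeholder paths:

```
import tensorflow as tf

# Placeholder paths used purely for illustration.
saved_model_dir = "/tmp/saved_model"
tflite_file = "/tmp/converted_model.tflite"

# Convert a SavedModel to a TensorFlow Lite flatbuffer.
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()

with open(tflite_file, "wb") as f:
    f.write(tflite_model)

# A tf.keras HDF5 file can be converted the same way via
# tf.lite.TFLiteConverter.from_keras_model_file(keras_file).
```

The equivalent command-line invocation (`tflite_convert --saved_model_dir=...`) is the one shown in the cmdline_examples.md hunk at the top of this diff.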
@@ -253,7 +253,7 @@ tflite_quantized_model=converter.convert() open(“quantized_model.tflite”, “wb”).write(tflite_quantized_model) ``` -Read the full documentation [here](performance/post_training_quantization.md) +Read the full documentation [here](../performance/post_training_quantization.md) and see a tutorial [here](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/tutorials/post_training_quant.ipynb). @@ -268,4 +268,4 @@ Another benefit with GPU inference is its power efficiency. GPUs carry out the computations in a very efficient and optimized manner, so that they consume less power and generate less heat than when the same task is run on CPUs. -Read the tutorial [here](performance/gpu) and full documentation [here](performance/gpu_advanced). +Read the tutorial [here](../performance/gpu.md) and full documentation [here](../performance/gpu_advanced.md). diff --git a/tensorflow/lite/g3doc/guide/hosted_models.md b/tensorflow/lite/g3doc/guide/hosted_models.md index bc4b90824f0..69f196782ea 100644 --- a/tensorflow/lite/g3doc/guide/hosted_models.md +++ b/tensorflow/lite/g3doc/guide/hosted_models.md @@ -3,7 +3,7 @@ The following is an incomplete list of pre-trained models optimized to work with TensorFlow Lite. -To get started choosing a model, visit Models. +To get started choosing a model, visit Models. Note: The best model for a given application depends on your requirements. For example, some applications might benefit from higher accuracy, while others @@ -13,7 +13,7 @@ models to find the optimal balance between size, performance, and accuracy. ## Image classification For more information about image classification, see -Image classification. +Image classification. ### Quantized models @@ -50,7 +50,7 @@ Graph. Note: Performance numbers were benchmarked on Pixel-2 using single thread large core. Accuracy numbers were computed using the -[TFLite accuracy tool](../tools/accuracy/ilsvrc.md). +[TFLite accuracy tool](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/tools/accuracy/ilsvrc). ### Floating point models @@ -108,7 +108,7 @@ BIG core. ## Object detection For more information about object detection, see -Object detection. +Object detection. The object detection model we currently host is **coco_ssd_mobilenet_v1_1.0_quant_2018_06_29**. @@ -119,7 +119,7 @@ model and labels ## Pose estimation For more information about pose estimation, see -Pose estimation. +Pose estimation. The pose estimation model we currently host is **multi_person_mobilenet_v1_075_float**. @@ -130,7 +130,7 @@ model ## Image segmentation For more information about image segmentation, see -Segmentation. +Segmentation. The image segmentation model we currently host is **deeplabv3_257_mv_gpu**. @@ -140,7 +140,7 @@ model ## Smart reply For more information about smart reply, see -Smart reply. +Smart reply. The smart reply model we currently host is **smartreply_1.0_2017_11_01**. diff --git a/tensorflow/lite/g3doc/guide/index.md b/tensorflow/lite/g3doc/guide/index.md index abfc5780f7b..288f7a07576 100644 --- a/tensorflow/lite/g3doc/guide/index.md +++ b/tensorflow/lite/g3doc/guide/index.md @@ -118,7 +118,7 @@ TensorFlow Lite provides: to all first-party and third-party apps. Also see the complete list of - [TensorFlow Lite's supported models](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/models.md), + [TensorFlow Lite's supported models](hosted_models.md), including the model sizes, performance numbers, and downloadable model files. 
- Quantized versions of the MobileNet model, which runs faster than the diff --git a/tensorflow/lite/g3doc/guide/inference.md b/tensorflow/lite/g3doc/guide/inference.md index e1ead9d0d51..b0107ece0b1 100644 --- a/tensorflow/lite/g3doc/guide/inference.md +++ b/tensorflow/lite/g3doc/guide/inference.md @@ -7,7 +7,7 @@ TensorFlow Lite inference is the process of executing a TensorFlow Lite model on-device and extracting meaningful results from it. Inference is the final step in using the model on-device in the -[architecture](./index.md#tensorflow_lite_architecture). +[architecture](index.md#tensorflow_lite_architecture). Inference for TensorFlow Lite models is run through an interpreter. This document outlines the various APIs for the interpreter along with the @@ -51,14 +51,14 @@ On Android, TensorFlow Lite inference can be performed using either Java or C++ APIs. The Java APIs provide convenience and can be used directly within your Android Activity classes. The C++ APIs on the other hand may offer more flexibility and speed, but may require writing JNI wrappers to move data between -Java and C++ layers. You can find an example [here](./android.md). +Java and C++ layers. You can find an example [here](android.md). #### iOS TensorFlow Lite provides Swift/Objective C++ APIs for inference on iOS. An -example can be found [here](./ios.md). +example can be found [here](ios.md). #### Linux -On Linux platforms such as [Raspberry Pi](./build_rpi.md), TensorFlow Lite C++ +On Linux platforms such as [Raspberry Pi](build_rpi.md), TensorFlow Lite C++ and Python APIs can be used to run inference. @@ -72,7 +72,7 @@ should be no surprise that the APIs try to avoid unnecessary copies at the expense of convenience. Similarly, consistency with TensorFlow APIs was not an explicit goal and some variance is to be expected. -There is also a [Python API for TensorFlow Lite](./../convert/python_api.md). +There is also a [Python API for TensorFlow Lite](../convert/python_api.md). ### Loading a Model @@ -205,7 +205,7 @@ where each entry in `inputs` corresponds to an input tensor and `map_of_indices_to_outputs` maps indices of output tensors to the corresponding output data. In both cases the tensor indices should correspond to the values given to the -[TensorFlow Lite Optimized Converter](./../convert/cmdline_examples.md) when the +[TensorFlow Lite Optimized Converter](../convert/cmdline_examples.md) when the model was created. Be aware that the order of tensors in `input` must match the order given to the `TensorFlow Lite Optimized Converter`. diff --git a/tensorflow/lite/g3doc/guide/ios.md b/tensorflow/lite/g3doc/guide/ios.md index f3d93b8d21e..3565ce71df3 100644 --- a/tensorflow/lite/g3doc/guide/ios.md +++ b/tensorflow/lite/g3doc/guide/ios.md @@ -79,16 +79,16 @@ Under `Project navigator -> tflite_camera_example -> Targets -> tflite_camera_example -> General` change the bundle identifier by pre-pending your name: -![pre-pend your name to the bundle identifier](images/ios/bundle_identifier.png) +![pre-pend your name to the bundle identifier](../images/ios/bundle_identifier.png) Plug in your iOS device. Note the app must be executed with a real device with camera. Select the iOS device from the drop-down menu. 
-![Device selection](images/ios/device_selection.png) +![Device selection](../images/ios/device_selection.png) Click the "Run" button to build and run the app -![Build and execute](images/ios/build_and_execute.png) +![Build and execute](../images/ios/build_and_execute.png) Note that as mentioned earlier, you must already have a device set up and linked to your Apple Developer account in order to deploy the app on a device. diff --git a/tensorflow/lite/g3doc/guide/ops_compatibility.md b/tensorflow/lite/g3doc/guide/ops_compatibility.md index 81165f57cb5..a75566b9bf0 100644 --- a/tensorflow/lite/g3doc/guide/ops_compatibility.md +++ b/tensorflow/lite/g3doc/guide/ops_compatibility.md @@ -9,7 +9,7 @@ Since the set of TensorFlow Lite operations is smaller than TensorFlow's, not every model is convertible. Even for supported operations, very specific usage patterns are sometimes expected, for performance reasons. We expect to expand the set of supported operations in future TensorFlow Lite releases. Additional -ops can be included by [using select TensorFlow ops](using_select_tf_ops.md), at +ops can be included by [using select TensorFlow ops](ops_select.md), at the cost of binary size. The best way to understand how to build a TensorFlow model that can be used with @@ -27,7 +27,7 @@ between floating-point and quantized models lies in the way they are converted. Quantized conversion requires dynamic range information for tensors. This requires "fake-quantization" during model training, getting range information via a calibration data set, or doing "on-the-fly" range estimation. See -[quantization](performance/model_optimization.md). +[quantization](../performance/model_optimization.md). ## Data Format and Broadcasting diff --git a/tensorflow/lite/g3doc/guide/ops_select.md b/tensorflow/lite/g3doc/guide/ops_select.md index e08256ec903..21649ea62ba 100644 --- a/tensorflow/lite/g3doc/guide/ops_select.md +++ b/tensorflow/lite/g3doc/guide/ops_select.md @@ -15,7 +15,7 @@ please send feedback about models that work and issues you are facing to tflite@tensorflow.org. TensorFlow Lite will continue to have -[TensorFlow Lite builtin ops](tf_ops_compatibility.md) optimized for mobile and +[TensorFlow Lite builtin ops](ops_compatibility.md) optimized for mobile and embedded devices. However, TensorFlow Lite models can now use a subset of TensorFlow ops when TFLite builtin ops are not sufficient. @@ -34,7 +34,7 @@ choice. It also discusses some [known limitations](#known-limitations), the To convert a TensorFlow model to a TensorFlow Lite model with TensorFlow ops, use the `target_ops` argument in the -[TensorFlow Lite converter](https://www.tensorflow.org/lite/convert/). The +[TensorFlow Lite converter](../convert/index.md). The following values are valid options for `target_ops`: * `TFLITE_BUILTINS` - Converts models using TensorFlow Lite builtin ops. @@ -64,7 +64,7 @@ open("converted_model.tflite", "wb").write(tflite_model) ``` The following example shows how to use `target_ops` in the -[`tflite_convert`](https://www.tensorflow.org/lite/convert/cmdline_examples) +[`tflite_convert`](../convert/cmdline_examples.md) command line tool. ``` @@ -97,7 +97,7 @@ includes the necessary library of TensorFlow ops. ### Android AAR A new Android AAR target with select TensorFlow ops has been added for -convenience. Assuming a working TensorFlow Lite +convenience. 
Assuming a working TensorFlow Lite build environment, build the Android AAR with select TensorFlow ops as follows: diff --git a/tensorflow/lite/g3doc/models/image_classification/overview.md b/tensorflow/lite/g3doc/models/image_classification/overview.md index 1f5a7dcb270..9ddbaf43ef0 100644 --- a/tensorflow/lite/g3doc/models/image_classification/overview.md +++ b/tensorflow/lite/g3doc/models/image_classification/overview.md @@ -15,14 +15,14 @@ If you understand image classification, you’re new to TensorFlow Lite, and you’re working with Android or iOS, we recommend following the corresponding tutorial that will walk you through our sample code. -Android -iOS +Android +iOS We also provide example applications you can use to get started. If you are using a platform other than Android or iOS, or you are already -familiar with the TensorFlow Lite APIs, you can +familiar with the TensorFlow Lite APIs, you can download our starter image classification model and the accompanying labels. Download @@ -34,7 +34,7 @@ performance, accuracy, and model size. For guidance, see Choose a different model. If you are using a platform other than Android or iOS, or you are already -familiar with the TensorFlow Lite APIs, you can +familiar with the TensorFlow Lite APIs, you can download our starter image classification model and the accompanying labels. Download @@ -46,7 +46,7 @@ We have example applications for image classification for both Android and iOS. Android example -iOS +iOS example The following screenshot shows the Android image classification example: @@ -204,8 +204,8 @@ If you want to train a model to recognize new classes, see For the following use cases, you should use a different type of model: Once you have the starter model running on your target device, you can @@ -239,7 +239,7 @@ We measure accuracy in terms of how often the model correctly classifies an image. For example, a model with a stated accuracy of 60% can be expected to classify an image correctly an average of 60% of the time. -Our List of hosted models provides Top-1 and Top-5 +Our list of hosted models provides Top-1 and Top-5 accuracy statistics. Top-1 refers to how often the correct label appears as the label with the highest probability in the model’s output. Top-5 refers to how often the correct label appears in the top 5 highest probabilities in the @@ -258,14 +258,14 @@ Our quantized Mobilenet models’ size ranges from 0.5 to 3.4 Mb. ### Architecture There are several different architectures of models available on -List of hosted models, indicated by the model’s name. +List of hosted models, indicated by the model’s name. For example, you can choose between Mobilenet, Inception, and others. The architecture of a model impacts its performance, accuracy, and size. All of our hosted models are trained on the same data, meaning you can use the provided statistics to compare them and choose which is optimal for your application. -Note: The image classification models we provide accept varying sizes of input. For some models, this is indicated in the filename. For example, the Mobilenet_V1_1.0_224 model accepts an input of 224x224 pixels.

All of the models require three color channels per pixel (red, green, and blue). Quantized models require 1 byte per channel, and float models require 4 bytes per channel.

Our Android and iOS code samples demonstrate how to process full-sized camera images into the required format for each model. +Note: The image classification models we provide accept varying sizes of input. For some models, this is indicated in the filename. For example, the Mobilenet_V1_1.0_224 model accepts an input of 224x224 pixels.

All of the models require three color channels per pixel (red, green, and blue). Quantized models require 1 byte per channel, and float models require 4 bytes per channel.

Our Android and iOS code samples demonstrate how to process full-sized camera images into the required format for each model. ## Customize model diff --git a/tensorflow/lite/g3doc/models/object_detection/overview.md b/tensorflow/lite/g3doc/models/object_detection/overview.md index a0295d02984..ffa6381ef3d 100644 --- a/tensorflow/lite/g3doc/models/object_detection/overview.md +++ b/tensorflow/lite/g3doc/models/object_detection/overview.md @@ -17,7 +17,7 @@ example example If you are using a platform other than Android or iOS, or you are already -familiar with the TensorFlow Lite APIs, you can +familiar with the TensorFlow Lite APIs, you can download our starter object detection model and the accompanying labels. Download diff --git a/tensorflow/lite/g3doc/performance/post_training_quantization.md b/tensorflow/lite/g3doc/performance/post_training_quantization.md index 59206f42f4f..0aa7e5163a9 100644 --- a/tensorflow/lite/g3doc/performance/post_training_quantization.md +++ b/tensorflow/lite/g3doc/performance/post_training_quantization.md @@ -3,7 +3,7 @@ Post-training quantization is a general technique to reduce model size while also providing up to 3x lower latency with little degradation in model accuracy. Post-training quantization quantizes weights from floating point to 8-bits of precision. This technique -is enabled as an option in the [TensorFlow Lite converter](../convert): +is enabled as an option in the [TensorFlow Lite converter](../convert/index.md): ``` import tensorflow as tf @@ -31,7 +31,7 @@ Hybrid ops are available for the most compute-intensive operators in a network: Since weights are quantized post training, there could be an accuracy loss, particularly for smaller networks. Pre-trained fully quantized models are provided for specific networks in -the [TensorFlow Lite model repository](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/models.md#image-classification-quantized-models){:.external}. It is important to check the accuracy of the quantized model to verify that any degradation +the [TensorFlow Lite model repository](../models/). It is important to check the accuracy of the quantized model to verify that any degradation in accuracy is within acceptable limits. There is a tool to evaluate [TensorFlow Lite model accuracy](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/tools/accuracy/README.md){:.external}. If the accuracy drop is too high, consider using [quantization aware training](https://github.com/tensorflow/tensorflow/tree/r1.13/tensorflow/contrib/quantize){:.external}.
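The FAQ hunks earlier in this diff recommend comparing TensorFlow Lite output against the original model on the same inputs, which is also the most direct way to gauge the accuracy impact of the post-training quantization described here. A rough sketch of that check — assuming a recent TF 1.x build (1.13 or the nightly recommended in python_api.md) where `tf.lite.TFLiteConverter` and `tf.lite.Interpreter` are available, a placeholder SavedModel path, and random data in place of a real evaluation set:

```
import numpy as np
import tensorflow as tf

saved_model_dir = "/tmp/saved_model"  # placeholder path

# Convert the model twice: once as float, once with post-training quantization.
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
float_model = converter.convert()
converter.post_training_quantize = True
quantized_model = converter.convert()

def run_once(model_content, input_data):
    # Run a single inference with the TensorFlow Lite Python interpreter.
    interpreter = tf.lite.Interpreter(model_content=model_content)
    interpreter.allocate_tensors()
    input_index = interpreter.get_input_details()[0]["index"]
    output_index = interpreter.get_output_details()[0]["index"]
    interpreter.set_tensor(input_index, input_data)
    interpreter.invoke()
    return interpreter.get_tensor(output_index)

# Feed identical random data to both models and inspect the largest difference.
probe = tf.lite.Interpreter(model_content=float_model)
probe.allocate_tensors()
input_shape = probe.get_input_details()[0]["shape"]
data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
print(np.max(np.abs(run_once(float_model, data) - run_once(quantized_model, data))))
```

If the drift (or accuracy on a real evaluation set) is unacceptable, quantization-aware training, linked at the end of the hunk above, is the suggested fallback.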