PiperOrigin-RevId: 236802220
This commit is contained in:
Billy Lamberta 2019-03-05 00:55:10 -08:00 committed by TensorFlower Gardener
parent f2b72d031c
commit 17a758f7bb
4 changed files with 7 additions and 7 deletions


@@ -64,19 +64,19 @@ tflite_convert \
--saved_model_dir=/tmp/saved_model
```
-[SavedModel](https://www.tensorflow.org/guide/saved_model.md#using_savedmodel_with_estimators)
+[SavedModel](https://www.tensorflow.org/guide/saved_model#using_savedmodel_with_estimators)
has fewer required flags than frozen graphs due to access to additional data
contained within the SavedModel. The values for `--input_arrays` and
`--output_arrays` are an aggregated, alphabetized list of the inputs and outputs
in the [SignatureDefs](../../serving/signature_defs.md) within
the
-[MetaGraphDef](https://www.tensorflow.org/saved_model.md#apis_to_build_and_load_a_savedmodel)
+[MetaGraphDef](https://www.tensorflow.org/saved_model#apis_to_build_and_load_a_savedmodel)
specified by `--saved_model_tag_set`. As with the GraphDef, the value for
`input_shapes` is automatically determined whenever possible.
There is currently no support for MetaGraphDefs without a SignatureDef or for
MetaGraphDefs that use the [`assets/`
-directory](https://www.tensorflow.org/guide/saved_model.md#structure_of_a_savedmodel_directory).
+directory](https://www.tensorflow.org/guide/saved_model#structure_of_a_savedmodel_directory).
### Convert a tf.Keras model <a name="keras"></a>


@@ -241,8 +241,8 @@ interpreter.allocate_tensors()
In order to run the latest version of the TensorFlow Lite Converter Python API,
either install the nightly build with
[pip](https://www.tensorflow.org/install/pip) (recommended) or
-[Docker](https://www.tensorflow.org/install/docker.md), or
-[build the pip package from source](https://www.tensorflow.org/install/source.md).
+[Docker](https://www.tensorflow.org/install/docker), or
+[build the pip package from source](https://www.tensorflow.org/install/source).
### Converting models from TensorFlow 1.12 <a name="pre_tensorflow_1.12"></a>


@@ -34,7 +34,7 @@ choice. It also discusses some [known limitations](#known-limitations), the
To convert a TensorFlow model to a TensorFlow Lite model with TensorFlow ops,
use the `target_ops` argument in the
-[TensorFlow Lite converter](../convert/index.md). The
+[TensorFlow Lite converter](../convert/). The
following values are valid options for `target_ops`:
* `TFLITE_BUILTINS` - Converts models using TensorFlow Lite builtin ops.


@@ -3,7 +3,7 @@
Post-training quantization is a general technique to reduce model size while also
providing up to 3x lower latency with little degradation in model accuracy. Post-training
quantization quantizes weights from floating point to 8-bits of precision. This technique
-is enabled as an option in the [TensorFlow Lite converter](../convert/index.md):
+is enabled as an option in the [TensorFlow Lite converter](../convert/):
```
import tensorflow as tf