Edit Hexagon documentation to reflect new supported models
PiperOrigin-RevId: 312144610
Change-Id: I9c8b0d9ad6ea4b745b4bb985ca143cca660a5b14
commit da67fcddef
parent 869920697b
@@ -22,15 +22,15 @@ are supported, including:

 **Supported models:**

-The Hexagon delegate currently supports quantized models generated using
-[quantization-aware training](https://github.com/tensorflow/tensorflow/tree/r1.13/tensorflow/contrib/quantize),
-e.g.,
-[these quantized models](https://www.tensorflow.org/lite/guide/hosted_models#quantized_models)
-hosted on the TensorFlow Lite repo. It does not (yet) support models with
-[8-bit symmetric quantization spec](https://www.tensorflow.org/lite/performance/quantization_spec).
-Sample models include
-[MobileNet V1](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_1.0_224_quant.tgz),
-[SSD Mobilenet](https://storage.googleapis.com/download.tensorflow.org/models/tflite/coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip).
+The Hexagon delegate supports all models that conform to our
+[8-bit symmetric quantization spec](https://www.tensorflow.org/lite/performance/quantization_spec),
+including those generated using
+[post-training integer quantization](https://www.tensorflow.org/lite/performance/post_training_integer_quant).
+UInt8 models trained with the legacy
+[quantization-aware training](https://github.com/tensorflow/tensorflow/tree/r1.13/tensorflow/contrib/quantize)
+path are also supported, for e.g.,
+[these quantized versions](https://www.tensorflow.org/lite/guide/hosted_models#quantized_models)
+on our Hosted Models page.

 ## Hexagon Delegate Java API

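For readers following the paragraph added above, the post-training integer quantization flow it links to is driven by the TFLite converter. The sketch below is a minimal, illustrative example rather than text from this commit; it assumes a TF 2.x SavedModel at the hypothetical path `my_model/` with a single 1x224x224x3 float input.

```python
import numpy as np
import tensorflow as tf

# Hypothetical SavedModel path and input shape, used only for illustration.
converter = tf.lite.TFLiteConverter.from_saved_model("my_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_dataset():
    # A few calibration samples shaped like real inference inputs,
    # used by the converter to estimate activation ranges.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter.representative_dataset = representative_dataset
# Restrict conversion to int8 builtin kernels so the resulting model
# conforms to the 8-bit symmetric quantization spec referenced above.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8   # optional: integer I/O tensors
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```

Restricting `target_spec.supported_ops` to `TFLITE_BUILTINS_INT8` makes the conversion fail loudly if any op cannot be expressed in 8-bit, which is usually preferable to silently falling back to float kernels that the DSP cannot accelerate.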
@@ -254,10 +254,6 @@ ro.board.platform`).

 ## FAQ

-* Will the delegate support models created using
-  [post-training quantization](https://www.tensorflow.org/lite/performance/post_training_quantization)?
-  * This is tentatively planned for a future release, though there is no
-    concrete timeline.
 * Which ops are supported by the delegate?
   * See the current list of [supported ops and constraints](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/delegates/hexagon/README.md)
 * How can I tell that the model is using the DSP when I enable the delegate?
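As a quick sanity check related to the supported-models change above, the TFLite Python interpreter can list each tensor's dtype and quantization parameters, which verifies that a converted model follows the 8-bit symmetric spec; it does not, by itself, confirm that the Hexagon DSP executed the graph. The file name below is a hypothetical placeholder.

```python
import tensorflow as tf

# Hypothetical model file; reuse the output of the conversion sketch above.
interpreter = tf.lite.Interpreter(model_path="model_int8.tflite")
interpreter.allocate_tensors()

# For a model that follows the 8-bit symmetric spec, weight tensors are int8
# with zero points of 0 (per-axis scales are allowed for weights).
for t in interpreter.get_tensor_details():
    q = t["quantization_parameters"]
    print(t["name"], t["dtype"].__name__, q["scales"][:1], q["zero_points"][:1])
```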