Model optimization

Edge devices often have limited memory or computational power. Various optimizations can be applied to models so that they can be run within these constraints. In addition, some optimizations allow the use of specialized hardware for accelerated inference.

TensorFlow Lite and the TensorFlow Model Optimization Toolkit provide tools to minimize the complexity of optimizing inference.

It's recommended that you consider model optimization during your application development process. This document outlines some best practices for optimizing TensorFlow models for deployment to edge hardware.

Why models should be optimized

There are several main ways model optimization can help with application development.

Size reduction

Some forms of optimization can be used to reduce the size of a model. Smaller models have the following benefits:

  • Smaller storage size: Smaller models occupy less storage space on your users' devices. For example, an Android app using a smaller model will take up less storage space on a user's mobile device.
  • Smaller download size: Smaller models require less time and bandwidth to download to users' devices.
  • Less memory usage: Smaller models use less RAM when they are run, which frees up memory for other parts of your application to use, and can translate to better performance and stability.

Quantization can reduce the size of a model in all of these cases, potentially at the expense of some accuracy. Pruning can reduce the size of a model for download by making it more easily compressible.

Latency reduction

Latency is the amount of time it takes to run a single inference with a given model. Some forms of optimization can reduce the amount of computation required to run inference using a model, resulting in lower latency. Latency can also have an impact on power consumption.

Currently, quantization can be used to reduce latency by simplifying the calculations that occur during inference, potentially at the expense of some accuracy.

Accelerator compatibility

Some hardware accelerators, such as the Edge TPU, can run inference extremely fast with models that have been correctly optimized.

Generally, these types of devices require models to be quantized in a specific way. See each hardware accelerator's documentation to learn more about its requirements.

Trade-offs

Optimizations can potentially result in changes in model accuracy, which must be considered during the application development process.

The accuracy changes depend on the individual model being optimized, and are difficult to predict ahead of time. Generally, models that are optimized for size or latency will lose a small amount of accuracy. Depending on your application, this may or may not impact your users' experience. In rare cases, certain models may gain some accuracy as a result of the optimization process.

Types of optimization

TensorFlow Lite currently supports optimization via quantization and pruning.

These are part of the TensorFlow Model Optimization Toolkit, which provides resources for model optimization techniques that are compatible with TensorFlow Lite.

Quantization

Quantization works by reducing the precision of the numbers used to represent a model's parameters, which by default are 32-bit floating point numbers. This results in a smaller model size and faster computation.

The following types of quantization are available in TensorFlow Lite:

| Technique | Data requirements | Size reduction | Accuracy | Supported hardware |
| --- | --- | --- | --- | --- |
| Post-training float16 quantization | No data | Up to 50% | Insignificant accuracy loss | CPU, GPU |
| Post-training dynamic range quantization | No data | Up to 75% | Accuracy loss | CPU, GPU (Android) |
| Post-training integer quantization | Unlabelled representative sample | Up to 75% | Smaller accuracy loss | CPU, GPU (Android), EdgeTPU, Hexagon DSP |
| Quantization-aware training | Labelled training data | Up to 75% | Smallest accuracy loss | CPU, GPU (Android), EdgeTPU, Hexagon DSP |
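
For illustration, a minimal sketch of post-training integer quantization with the `tf.lite.TFLiteConverter` API might look like the following; the SavedModel path and the `calibration_samples` iterable are placeholders for your own model and representative data:

```python
import tensorflow as tf

def representative_dataset():
  # Yield a few hundred samples that are representative of the data the
  # model will see at inference time (placeholder iterable).
  for sample in calibration_samples:
    yield [tf.cast(sample, tf.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Optionally restrict the model to integer-only ops, e.g. for the Edge TPU.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]

tflite_quant_model = converter.convert()
```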

Below are the latency and accuracy results for post-training quantization and quantization-aware training on a few models. All latency numbers are measured on Pixel 2 devices using a single big core CPU. As the toolkit improves, so will the numbers here:

| Model | Top-1 Accuracy (Original) | Top-1 Accuracy (Post Training Quantized) | Top-1 Accuracy (Quantization Aware Training) | Latency (Original) (ms) | Latency (Post Training Quantized) (ms) | Latency (Quantization Aware Training) (ms) | Size (Original) (MB) | Size (Optimized) (MB) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Mobilenet-v1-1-224 | 0.709 | 0.657 | 0.70 | 124 | 112 | 64 | 16.9 | 4.3 |
| Mobilenet-v2-1-224 | 0.719 | 0.637 | 0.709 | 89 | 98 | 54 | 14 | 3.6 |
| Inception_v3 | 0.78 | 0.772 | 0.775 | 1130 | 845 | 543 | 95.7 | 23.9 |
| Resnet_v2_101 | 0.770 | 0.768 | N/A | 3973 | 2868 | N/A | 178.3 | 44.9 |

Table 1: Benefits of model quantization for select CNN models

Pruning

Pruning works by removing parameters within a model that have only a minor impact on its predictions. Pruned models are the same size on disk, and have the same runtime latency, but can be compressed more effectively. This makes pruning a useful technique for reducing model download size.

In the future, TensorFlow Lite will provide latency reduction for pruned models.
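
A minimal sketch of magnitude-based pruning with the TensorFlow Model Optimization Toolkit might look like the following, assuming an already-trained Keras `model` and placeholder training data:

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Wrap the trained Keras model so that low-magnitude weights are
# progressively zeroed out while it is fine-tuned.
pruning_schedule = tfmot.sparsity.keras.PolynomialDecay(
    initial_sparsity=0.0, final_sparsity=0.5,
    begin_step=0, end_step=1000)
pruned_model = tfmot.sparsity.keras.prune_low_magnitude(
    model, pruning_schedule=pruning_schedule)

pruned_model.compile(optimizer="adam",
                     loss="sparse_categorical_crossentropy",
                     metrics=["accuracy"])
# UpdatePruningStep is required so the pruning schedule advances during training.
pruned_model.fit(train_data, train_labels, epochs=2,
                 callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])

# Strip the pruning wrappers before export; the zeroed weights remain
# and make the model highly compressible.
final_model = tfmot.sparsity.keras.strip_pruning(pruned_model)
```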

Development workflow

As a starting point, check whether the models in hosted models can work for your application. If not, we recommend starting with the post-training quantization tool, since it is broadly applicable and does not require training data.
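
As a rough sketch, the simplest post-training (dynamic range) quantization flow needs only a trained model; the SavedModel directory and output file below are placeholders:

```python
import tensorflow as tf

# Dynamic range quantization: no representative data is required.
converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```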

For cases where the accuracy and latency targets are not met, or hardware accelerator support is important, quantization-aware training is the better option. See additional optimization techniques under the TensorFlow Model Optimization Toolkit.
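
As a sketch of that path, quantization-aware training with the TensorFlow Model Optimization Toolkit wraps an existing Keras `model` (a placeholder here, as are the training arrays), fine-tunes it, and then converts it:

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Wrap the trained Keras model so quantization effects are simulated
# during fine-tuning.
q_aware_model = tfmot.quantization.keras.quantize_model(model)

q_aware_model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
q_aware_model.fit(train_data, train_labels, epochs=1)

# Convert the fine-tuned model to a quantized TensorFlow Lite model.
converter = tf.lite.TFLiteConverter.from_keras_model(q_aware_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
```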

If you want to further reduce your model size, you can try pruning prior to quantizing your models.
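
Assuming the pruned Keras model from the sketch above, combining the two techniques might look like this:

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Remove the pruning wrappers, then quantize the sparse model.
final_model = tfmot.sparsity.keras.strip_pruning(pruned_model)

converter = tf.lite.TFLiteConverter.from_keras_model(final_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
pruned_and_quantized_model = converter.convert()
```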