
TFLite Delegate Utilities for Tooling

TFLite Delegate Registrar

A TFLite delegate registrar is provided here. The registrar keeps a list of TFLite delegate providers, each of which defines a list of parameters that can be initialized from command-line arguments and creates a TFLite delegate instance based on those parameters. This delegate registrar is used in TFLite evaluation tools and the benchmark model tool.

A particular TFLite delegate provider can be used by linking the corresponding library, e.g. by adding it to the deps of a BUILD rule. Note that each delegate provider library is configured with alwayslink=1 in its BUILD rule, so it will be linked into any binary that directly or indirectly depends on it.
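As a sketch of how this plays out in practice, building a tool whose deps include delegate provider libraries pulls all of them in via alwayslink=1; no extra registration code is needed. The build command below is illustrative (the config name and output paths depend on your workspace setup):

```shell
# Sketch: build the TFLite benchmark tool for Android. Every delegate
# provider listed in its deps (alwayslink=1) is linked in automatically,
# so its command-line flags become available in the resulting binary.
bazel build -c opt --config=android_arm64 \
  //tensorflow/lite/tools/benchmark:benchmark_model
```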

The following lists all implemented TFLite delegate providers and the parameters each supports for creating a particular TFLite delegate.

Common parameters

  • num_threads: int (default=1)
    The number of threads to use for running the inference on CPU.
  • max_delegated_partitions: int (default=0, i.e. no limit)
    The maximum number of partitions that will be delegated.
    Currently supported by the GPU, Hexagon, CoreML and NNAPI delegates.
  • min_nodes_per_partition: int (default=delegate's own choice)
    The minimum number of TFLite graph nodes a partition must contain in order to be delegated. A value of 0 or a negative value means using each delegate's own default choice.
    This option is currently supported by the Hexagon and CoreML delegates.
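For instance, assuming a benchmark binary built and pushed to the device as above (the binary and model paths here are placeholders), the common parameters can be passed as command-line flags:

```shell
# Hypothetical invocation: run inference on 4 CPU threads and cap the
# number of delegated partitions at 2 (paths are placeholders).
adb shell /data/local/tmp/benchmark_model \
  --graph=/data/local/tmp/model.tflite \
  --num_threads=4 \
  --max_delegated_partitions=2
```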

GPU delegate provider

The GPU delegate is supported only on Android and iOS devices.

Common options

  • use_gpu: bool (default=false)
    Whether to use the GPU accelerator delegate.
  • gpu_precision_loss_allowed: bool (default=true)
    Whether to allow the GPU delegate to carry out computation with some precision loss (i.e. processing in FP16). Allowing precision loss generally improves performance.
  • gpu_experimental_enable_quant: bool (default=true)
    Whether to allow the GPU delegate to run an 8-bit quantized model.

Android options

  • gpu_backend: string (default="")
    Force the GPU delegate to use a particular backend for execution, and fail if unsuccessful. Should be one of: cl, gl. By default, the GPU delegate will try OpenCL first and then OpenGL if the former fails.

iOS options

  • gpu_wait_type: string (default="")
    Which GPU wait_type option to use. Should be one of the following: passive, active, do_not_wait, aggressive. When left blank, passive mode is used by default.
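Putting the GPU options together, a hedged example of forcing the OpenCL backend on an Android device (paths are placeholders):

```shell
# Hypothetical: enable the GPU delegate, force the OpenCL backend, and
# keep FP16 precision loss allowed (the default). The run fails if the
# OpenCL backend cannot be used.
adb shell /data/local/tmp/benchmark_model \
  --graph=/data/local/tmp/model.tflite \
  --use_gpu=true \
  --gpu_backend=cl \
  --gpu_precision_loss_allowed=true
```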

NNAPI delegate provider

  • use_nnapi: bool (default=false)
    Whether to use Android NNAPI. This API is available on recent Android devices. On Android Q+, this will also print the names of the NNAPI accelerators that can be selected via the nnapi_accelerator_name flag.
  • nnapi_accelerator_name: string (default="")
    The name of the NNAPI accelerator to use (requires Android Q+). If left blank, NNAPI will automatically select which of the available accelerators to use.
  • nnapi_execution_preference: string (default="")
    Which NNAPI execution preference to use when executing using NNAPI. Should be one of the following: fast_single_answer, sustained_speed, low_power, undefined.
  • nnapi_execution_priority: string (default="")
    The relative priority for executions of the model in NNAPI. Should be one of the following: default, low, medium, high. This option requires Android 11+.
  • disable_nnapi_cpu: bool (default=true)
    Excludes the NNAPI CPU reference implementation from the possible devices to be used by NNAPI to execute the model. This option is ignored if nnapi_accelerator_name is specified.
  • nnapi_allow_fp16: bool (default=false)
    Whether to allow FP32 computation to be run in FP16.
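As an example of combining these flags, the sketch below targets a named NNAPI accelerator (the accelerator name and paths are hypothetical; query your device for real names):

```shell
# Hypothetical: run on a named NNAPI accelerator with a sustained-speed
# preference. Note that disable_nnapi_cpu is ignored when an accelerator
# name is specified.
adb shell /data/local/tmp/benchmark_model \
  --graph=/data/local/tmp/model.tflite \
  --use_nnapi=true \
  --nnapi_accelerator_name=example-dsp \
  --nnapi_execution_preference=sustained_speed
```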

Hexagon delegate provider

  • use_hexagon: bool (default=false)
    Whether to use the Hexagon delegate. Not all devices support the Hexagon delegate; refer to the TensorFlow Lite documentation for more information about which devices/chipsets are supported and how to obtain the required libraries. To use the Hexagon delegate, also build the hexagon_nn:libhexagon_interface.so target and copy the library to the device. All libraries should be copied to /data/local/tmp on the device.
  • hexagon_profiling: bool (default=false)
    Whether to profile ops running on Hexagon.
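The library setup described above might look like the following sketch (the local path to the built libhexagon_interface.so is an assumption; consult the TensorFlow Lite documentation for the full set of required libraries):

```shell
# Hypothetical: copy the built Hexagon interface library to the device,
# then run with the Hexagon delegate enabled (paths are placeholders).
adb push path/to/built/libhexagon_interface.so /data/local/tmp/
adb shell /data/local/tmp/benchmark_model \
  --graph=/data/local/tmp/model.tflite \
  --use_hexagon=true
```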

XNNPACK delegate provider

  • use_xnnpack: bool (default=false)
    Whether to use the XNNPACK delegate.

CoreML delegate provider

  • use_coreml: bool (default=false)
    Whether to use the Core ML delegate. This option is only available on iOS.
  • coreml_version: int (default=0)
    Target Core ML version for model conversion. The default value of 0 means the newest version available on the device will be used.

External delegate provider

  • external_delegate_path: string (default="")
    Path to the external delegate library to use.
  • external_delegate_options: string (default="")
    A list of options to be passed to the external delegate library. Options should be in the format option1:value1;option2:value2;optionN:valueN
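The option string uses ';' to separate options and ':' to separate a key from its value. A small sketch of composing such a string and splitting it back into pairs, the way a consuming library might (the option names here are made-up examples, not options any real external delegate is known to accept):

```shell
# Compose an options string in the key:value;key:value format
# (the option names below are hypothetical).
OPTS="allowed_nodes:8;verbose:true"

# Split it back into key/value pairs.
IFS=';' read -ra PAIRS <<< "$OPTS"
for pair in "${PAIRS[@]}"; do
  key="${pair%%:*}"    # text before the first ':'
  value="${pair#*:}"   # text after the first ':'
  echo "$key=$value"
done
```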