# Android Neural Networks API

The Android Neural Networks API (NNAPI) is an Android C API designed for running computationally intensive operations for machine learning on mobile devices. TensorFlow Lite is designed to use NNAPI to perform hardware-accelerated inference operations on supported devices. Based on an app's requirements and the hardware capabilities of a device, NNAPI can distribute the computation workload across available on-device processors, including dedicated neural network hardware, graphics processing units (GPUs), and digital signal processors (DSPs). For devices that lack a specialized vendor driver, the NNAPI runtime relies on optimized code to execute requests on the CPU. For more information about NNAPI, please refer to the [NNAPI documentation](https://developer.android.com/ndk/guides/neuralnetworks).
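
Below is a minimal sketch, not a canonical setup, of routing TensorFlow Lite inference through NNAPI with `StatefulNnApiDelegate` (declared in `tensorflow/lite/delegates/nnapi/nnapi_delegate.h`). The model path `model.tflite` and the accelerator name `example-dsp` are placeholders. Setting `accelerator_name` is optional; when a target accelerator is specified, its feature level, rather than the device's SDK version, determines which operations are delegated.

```cpp
// Sketch: delegating a TFLite model to NNAPI from C++.
// "model.tflite" and "example-dsp" are placeholder values.
#include <memory>

#include "tensorflow/lite/delegates/nnapi/nnapi_delegate.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  // Load the model and build an interpreter with the builtin op resolver.
  auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);

  // Optionally pin execution to a specific accelerator. When one is named,
  // its feature level (not the SDK version) decides which ops delegate.
  tflite::StatefulNnApiDelegate::Options options;
  options.accelerator_name = "example-dsp";  // placeholder driver name

  tflite::StatefulNnApiDelegate delegate(options);
  if (interpreter->ModifyGraphWithDelegate(&delegate) != kTfLiteOk) {
    return 1;  // NNAPI unavailable or unsupported; handle CPU fallback here.
  }

  interpreter->AllocateTensors();
  // ... fill input tensors, then run:
  interpreter->Invoke();
  return 0;
}
```

The accelerator names available on a given device can be enumerated with the helpers in `nnapi_util.h` in this directory.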