Android Neural Networks API

The Android Neural Networks API (NNAPI) is an Android C API designed for running computationally intensive machine learning operations on mobile devices. TensorFlow Lite is designed to use the NNAPI to perform hardware-accelerated inference operations on supported devices. Based on the app's requirements and the hardware capabilities of a device, the NNAPI can distribute the computation workload across available on-device processors, including dedicated neural network hardware, graphics processing units (GPUs), and digital signal processors (DSPs). For devices that lack a specialized vendor driver, the NNAPI runtime relies on optimized code to execute requests on the CPU. For more information, please refer to the NNAPI documentation.
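
In application code, this is typically wired up through TF Lite's NNAPI delegate (`tflite::StatefulNnApiDelegate`, from `tensorflow/lite/delegates/nnapi`). The sketch below shows the general shape; the model path and the commented-out accelerator name are placeholders, not values from this repository.

```cpp
// Minimal sketch: run a TF Lite model through the NNAPI delegate.
// "model.tflite" is a placeholder path.
#include <memory>

#include "tensorflow/lite/delegates/nnapi/nnapi_delegate.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  // Load the flatbuffer model from disk.
  auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
  if (!model) return 1;

  // Build an interpreter with the builtin op resolver.
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  if (tflite::InterpreterBuilder(*model, resolver)(&interpreter) != kTfLiteOk)
    return 1;

  // Hand supported parts of the graph to NNAPI; the runtime decides which
  // on-device processor (NPU, GPU, DSP, or CPU fallback) executes them.
  tflite::StatefulNnApiDelegate::Options options;
  // options.accelerator_name = "example-npu";  // hypothetical device name
  tflite::StatefulNnApiDelegate nnapi_delegate(options);
  if (interpreter->ModifyGraphWithDelegate(&nnapi_delegate) != kTfLiteOk)
    return 1;

  if (interpreter->AllocateTensors() != kTfLiteOk) return 1;
  // ... fill input tensors here, then run inference:
  return interpreter->Invoke() == kTfLiteOk ? 0 : 1;
}
```

If `accelerator_name` is left unset, the NNAPI runtime picks among available devices itself; naming a specific accelerator pins execution to that device when it is present.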