
# Android Neural Network API

The Android Neural Networks API (NNAPI) is an Android C API designed for running computationally intensive machine learning operations on mobile devices. TensorFlow Lite is designed to use NNAPI to perform hardware-accelerated inference on supported devices. Based on an app's requirements and the hardware capabilities of a device, NNAPI can distribute the computation workload across available on-device processors, including dedicated neural network hardware, graphics processing units (GPUs), and digital signal processors (DSPs). On devices that lack a specialized vendor driver, the NNAPI runtime falls back to optimized code that executes requests on the CPU. For more information about NNAPI, please refer to the [NNAPI documentation](https://developer.android.com/ndk/guides/neuralnetworks).
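As one illustration of how the bindings in this directory are consumed, the sketch below (assuming the TensorFlow Lite C++ API and a model file path supplied by the caller) delegates a model's graph to NNAPI through `StatefulNnApiDelegate`; operations the delegate cannot handle fall back to the built-in CPU kernels. The commented `accelerator_name` value is a hypothetical example, not a real device name.

```cpp
#include <memory>

#include "tensorflow/lite/delegates/nnapi/nnapi_delegate.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

// Sketch: run a TFLite model with inference delegated to NNAPI.
bool RunWithNnapi(const char* model_path) {
  auto model = tflite::FlatBufferModel::BuildFromFile(model_path);
  if (!model) return false;

  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);
  if (!interpreter) return false;

  // Options may pin execution to a specific accelerator; available
  // device names can be enumerated via the utilities in this directory.
  tflite::StatefulNnApiDelegate::Options options;
  // options.accelerator_name = "example-vendor-npu";  // hypothetical

  tflite::StatefulNnApiDelegate delegate(options);
  if (interpreter->ModifyGraphWithDelegate(&delegate) != kTfLiteOk) {
    return false;
  }
  if (interpreter->AllocateTensors() != kTfLiteOk) return false;

  // Input tensors would be filled here before invoking.
  return interpreter->Invoke() == kTfLiteOk;
}
```

This snippet requires linking against the TensorFlow Lite library and running on a platform where `libneuralnetworks.so` is available (or where the disabled stub implementation is built in).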