Android Neural Networks API

The Android Neural Networks API (NNAPI) is an Android C API designed for running computationally intensive machine learning operations on mobile devices. TensorFlow Lite is designed to use the NNAPI to perform hardware-accelerated inference on supported devices. Based on the app's requirements and the hardware capabilities of a device, the NNAPI can distribute the computation workload across available on-device processors, including dedicated neural network hardware, graphics processing units (GPUs), and digital signal processors (DSPs). For devices that lack a specialized vendor driver, the NNAPI runtime relies on optimized code to execute requests on the CPU. For more information about the NNAPI, please refer to the NNAPI documentation.
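
As a rough illustration of how an application opts into NNAPI acceleration, the sketch below builds a TensorFlow Lite interpreter and attaches the NNAPI delegate; the model path and the fallback handling are illustrative assumptions, and operations the delegate does not support simply remain on TensorFlow Lite's CPU kernels.

```c++
#include <memory>

#include "tensorflow/lite/delegates/nnapi/nnapi_delegate.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  // Load a TFLite model (the file name is illustrative).
  auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
  if (!model) return 1;

  // Build an interpreter with the built-in op resolver.
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);
  if (!interpreter) return 1;

  // Hand supported operations to NNAPI; unsupported operations stay on
  // TensorFlow Lite's CPU kernels.
  tflite::StatefulNnApiDelegate nnapi_delegate;
  if (interpreter->ModifyGraphWithDelegate(&nnapi_delegate) != kTfLiteOk) {
    // NNAPI is unavailable or rejected the graph; execution proceeds on CPU.
  }

  interpreter->AllocateTensors();
  // Fill input tensors here, then run inference.
  interpreter->Invoke();
  return 0;
}
```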