Updated XNNPACK delegate readme

Added some info on the XNNPACK delegate to avoid confusion caused by the XNNPACK engine being single-threaded by default.
Further details are available in the description of the following issue: https://github.com/tensorflow/tensorflow/issues/42277
Georgiy Manuilov 2020-11-21 15:04:59 +03:00 committed by GitHub
parent 2d263ad1ca
commit ae33193529

@@ -63,6 +63,27 @@ bazel build -c opt --fat_apk_cpu=x86,x86_64,arm64-v8a,armeabi-v7a \
//tensorflow/lite/java:tensorflow-lite
```
Note that in this case an `Interpreter::SetNumThreads` invocation does not
affect the number of threads used by the XNNPACK engine. To specify the number
of threads available to the XNNPACK engine, you must pass the value explicitly
when constructing the interpreter. The snippet below illustrates this, assuming
you are using `InterpreterBuilder` to construct the interpreter:
```c++
// Load the model
tflite::Model* model;
...

// Construct the interpreter, passing the thread count explicitly
tflite::ops::builtin::BuiltinOpResolver resolver;
std::unique_ptr<tflite::Interpreter> interpreter;

TfLiteStatus res =
    tflite::InterpreterBuilder(model, resolver)(&interpreter, num_threads);
```
**The XNNPACK engine used by the TensorFlow Lite interpreter uses a single
thread for inference by default.**
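When the XNNPACK delegate is created and applied manually rather than enabled
by default, the thread count can instead be set through the delegate options.
A minimal sketch, assuming `interpreter` and `num_threads` are already defined
and `tensorflow/lite/delegates/xnnpack/xnnpack_delegate.h` is included:
```c++
// Create delegate options with the desired number of threads.
TfLiteXNNPackDelegateOptions xnnpack_options =
    TfLiteXNNPackDelegateOptionsDefault();
xnnpack_options.num_threads = num_threads;

// Create the delegate and apply it to the interpreter's graph.
TfLiteDelegate* xnnpack_delegate =
    TfLiteXNNPackDelegateCreate(&xnnpack_options);
if (interpreter->ModifyGraphWithDelegate(xnnpack_delegate) != kTfLiteOk) {
  // Delegation failed; the interpreter falls back to the default kernels.
}

// After the interpreter that uses the delegate has been destroyed:
TfLiteXNNPackDelegateDelete(xnnpack_delegate);
```
Note that the delegate must outlive the interpreter it is attached to.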
### Enable XNNPACK via additional dependency
Another way to enable XNNPACK is to build and link the