1. Create an external delegate adaptor to illustrate the use of an external delegate as an alternative approach to testing, benchmarking and evaluation.

2. Fix a memory bug in parsing and using external delegate options in the external delegate provider.

PiperOrigin-RevId: 323920482
Change-Id: Id258ccd48c924dc4b438293d2dd6776285958d81
This commit is contained in:
Chao Mei 2020-07-29 19:40:02 -07:00 committed by TensorFlower Gardener
parent 040d035412
commit 3ee5868313
4 changed files with 200 additions and 13 deletions

tensorflow/lite/delegates/utils/dummy_delegate/BUILD

@@ -22,6 +22,21 @@ cc_library(
],
)
cc_binary(
name = "dummy_external_delegate.so",
srcs = [
"external_delegate_adaptor.cc",
],
linkshared = 1,
linkstatic = 1,
deps = [
":dummy_delegate",
"//tensorflow/lite/c:common",
"//tensorflow/lite/tools:command_line_flags",
"//tensorflow/lite/tools:logging",
],
)
#### The following are for using the dummy test delegate in TFLite tooling ####
cc_library(
name = "dummy_delegate_provider",

tensorflow/lite/delegates/utils/dummy_delegate/README.md

@@ -20,18 +20,32 @@ the ideas above. For more sophisticated examples, refer to [Flex delegate](https
## Testing & Tooling
We recommend leveraging the
[delegate registrar](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/tools/delegates)
to plug in the newly created TFLite delegate to reuse existing TFLite kernel
tests and utility tools including the model benchmark tool and the task
evaluation tools. In short, create a delegate provider like the
[`dummy_delegate_provider`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/delegates/utils/dummy_delegate/dummy_delegate_provider.cc)
There are currently **two options** to plug in a newly created TFLite delegate
to reuse existing TFLite kernel tests and tooling:
- Utilize the **[delegate registrar](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/tools/delegates)**
mechanism.
- Utilize the
**[external delegate](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/delegates/external)**
mechanism.
The former approach requires few changes, as detailed below. The latter one
requires even fewer changes and works with pre-built TensorFlow Lite tooling
binaries. However, it is less explicit and might be more complicated to set
up in automated integration tests. Therefore, for better clarity, the
delegate-registrar approach is slightly preferred here.
We now describe each option in more detail in the following sections.
### Option 1: Utilize Delegate Registrar
In this approach, create a delegate provider like the
[`dummy_delegate_provider.cc`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/delegates/utils/dummy_delegate/dummy_delegate_provider.cc)
here, and then add it as an extra dependency when building the binary. Refer
[here](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/tools/delegates)
for more delegate provider examples. The following details the above in the
context of this dummy delegate.
for more delegate provider examples. Now we look at using this provider for
testing and evaluation.
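To make this concrete, the sketch below condenses the shape of the referenced `dummy_delegate_provider.cc`. The exact `DelegateProvider` interface lives in `tensorflow/lite/tools/delegates/delegate_provider.h`, and method signatures may differ slightly across TensorFlow Lite versions, so treat this as a sketch rather than the checked-in file.
```
// Condensed sketch of a delegate provider; see dummy_delegate_provider.cc for
// the full, authoritative version.
class DummyDelegateProvider : public DelegateProvider {
 public:
  DummyDelegateProvider() {
    // Registers the tool parameter that the --use_dummy_delegate flag maps to.
    default_params_.AddParam("use_dummy_delegate",
                             ToolParam::Create<bool>(false));
  }

  std::vector<Flag> CreateFlags(ToolParams* params) const final {
    return {CreateFlag<bool>("use_dummy_delegate", params,
                             "use the dummy delegate.")};
  }

  void LogParams(const ToolParams& params) const final {
    TFLITE_LOG(INFO) << "Use dummy delegate: "
                     << params.Get<bool>("use_dummy_delegate");
  }

  TfLiteDelegatePtr CreateTfLiteDelegate(const ToolParams& params) const final {
    if (params.Get<bool>("use_dummy_delegate")) {
      auto options = TfLiteDummyDelegateOptionsDefault();
      return TfLiteDelegatePtr(TfLiteDummyDelegateCreate(&options),
                               TfLiteDummyDelegateDelete);
    }
    // Returning a null delegate tells the tools the delegate is not enabled.
    return TfLiteDelegatePtr(nullptr, [](TfLiteDelegate*) {});
  }

  std::string GetName() const final { return "DummyDelegate"; }
};
// Registration makes the provider (and its flag) visible to any binary that
// links this library in.
REGISTER_DELEGATE_PROVIDER(DummyDelegateProvider);
```
Linking a library that contains this provider into a test or tool binary is all that is needed for the `--use_dummy_delegate` flag used below to become available.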
###Kernel Tests
#### Kernel Tests
Tests referred here are defined in [tensorflow/lite/kernels](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/kernels).
They are based on the
[test_util library](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/kernels/test_util.h)
@@ -64,12 +78,12 @@ bazel build -c opt tensorflow/lite/kernels:add_test
bazel-bin/tensorflow/lite/kernels/add_test --use_dummy_delegate=true
```
### Benchmark and Task Evaluation Tools
#### Benchmark and Task Evaluation Tools
In TFLite, we have developed
[model benchmark tool](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/tools/benchmark)
and
[task evaluation tools](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/tools/evaluation/tasks)
[evaluation tools](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/tools/evaluation/tasks)
that have already integrated various existing TFLite delegates. To reuse these
tools for the new delegate, similar to the kernel testing above, we simply add
the created delegate provider as an additional dependency when building the
@@ -107,4 +121,44 @@ bazel-bin/tensorflow/lite/delegates/utils/dummy_delegate/benchmark_model_plus_du
```
### Option 2: Utilize TensorFlow Lite External Delegate
In this **alternative approach to reusing existing TensorFlow Lite kernel testing
and tooling**, we first create an external delegate adaptor like the [`external_delegate_adaptor.cc`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/delegates/utils/dummy_delegate/external_delegate_adaptor.cc) here, and create the corresponding BUILD target
to build a dynamic library.
Afterwards, one could build binaries or use pre-built ones that are linked with
the
[`external_delegate_provider`](https://github.com/tensorflow/tensorflow/blob/8c6f2d55762f3fc94f98fdd8b3c5d59ee1276dba/tensorflow/lite/tools/delegates/BUILD#L145-L159)
library which supports command-line flags as described
[here](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/tools/delegates#external-delegate-provider).
Note that this delegate provider has already been linked to existing testing and
tooling binaries.
For example, the following illustrates how to benchmark the dummy delegate here
via this external-delegate approach. We could use similar commands for testing
and evaluation tools.
```
bazel build -c opt tensorflow/lite/delegates/utils/dummy_delegate:dummy_external_delegate.so
# Copy the .so file to a directory of your choice from which the external
# delegate will be loaded.
cp bazel-bin/tensorflow/lite/delegates/utils/dummy_delegate/dummy_external_delegate.so /tmp
bazel build -c opt tensorflow/lite/tools/benchmark:benchmark_model
# Setting a non-empty --external_delegate_path value triggers applying the
# external delegate at runtime.
bazel-bin/tensorflow/lite/tools/benchmark/benchmark_model \
--graph=/tmp/mobilenet-v2.tflite \
--external_delegate_path=/tmp/dummy_external_delegate.so \
--external_delegate_options='error_during_init:true;error_during_prepare:true'
```
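Beyond the command-line tools, the same adaptor library can also be loaded programmatically through the external-delegate C API declared in `tensorflow/lite/delegates/external/external_delegate.h`. The snippet below is only a minimal sketch of that flow: the model path is the placeholder used in the benchmark command above, and error handling is omitted.
```
#include <memory>

#include "tensorflow/lite/delegates/external/external_delegate.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  // Point the external delegate at the adaptor library built above.
  TfLiteExternalDelegateOptions options =
      TfLiteExternalDelegateOptionsDefault("/tmp/dummy_external_delegate.so");
  // Options are plain key/value strings, exactly as with
  // --external_delegate_options.
  options.insert(&options, "error_during_prepare", "true");
  TfLiteDelegate* delegate = TfLiteExternalDelegateCreate(&options);

  auto model =
      tflite::FlatBufferModel::BuildFromFile("/tmp/mobilenet-v2.tflite");
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);
  // Hand the graph to the delegate; unsupported parts stay on the default
  // CPU kernels.
  interpreter->ModifyGraphWithDelegate(delegate);

  // ... allocate tensors, fill inputs, and call interpreter->Invoke() ...

  interpreter.reset();  // Destroy the interpreter before its delegate.
  TfLiteExternalDelegateDelete(delegate);
  return 0;
}
```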
It is worth noting that the *external delegate* is the corresponding C++
implementation of the *delegate* in the TensorFlow Lite Python binding, as shown
[here](https://github.com/tensorflow/tensorflow/blob/7145fc0e49be01ef6943f4df386ce38567e37797/tensorflow/lite/python/interpreter.py#L42).
Therefore, the dynamic external delegate adaptor library created here can be
used directly with the TensorFlow Lite Python APIs.
A more detailed guide on TFLite delegates is coming soon.

tensorflow/lite/delegates/utils/dummy_delegate/external_delegate_adaptor.cc

@@ -0,0 +1,106 @@
/* Copyright 2020 The TensorFlow Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
#include <memory>
#include <string>
#include <vector>
#include "tensorflow/lite/c/common.h"
#include "tensorflow/lite/delegates/utils/dummy_delegate/dummy_delegate.h"
#include "tensorflow/lite/tools/command_line_flags.h"
#include "tensorflow/lite/tools/logging.h"
namespace tflite {
namespace tools {
TfLiteDelegate* CreateDummyDelegateFromOptions(char** options_keys,
char** options_values,
size_t num_options) {
DummyDelegateOptions options = TfLiteDummyDelegateOptionsDefault();
// Parse the key-value options into DummyDelegateOptions by mimicking them as
// command-line flags.
// Note the array form of std::unique_ptr so that delete[] (matching the new[]
// below) is used to release the argv buffer.
std::unique_ptr<const char*[]> argv(new const char*[num_options + 1]);
constexpr char kDummyDelegateParsing[] = "dummy_delegate_parsing";
argv.get()[0] = kDummyDelegateParsing;
std::vector<std::string> option_args;
option_args.reserve(num_options);
for (size_t i = 0; i < num_options; ++i) {
// Turn each (key, value) pair into a "--key=value" argument.
option_args.emplace_back("--");
option_args.back().append(options_keys[i]);
option_args.back().push_back('=');
option_args.back().append(options_values[i]);
argv.get()[i + 1] = option_args.back().c_str();
}
constexpr char kAllowedBuiltinOp[] = "allowed_builtin_code";
constexpr char kReportErrorDuringInit[] = "error_during_init";
constexpr char kReportErrorDuringPrepare[] = "error_during_prepare";
constexpr char kReportErrorDuringInvoke[] = "error_during_invoke";
std::vector<tflite::Flag> flag_list = {
tflite::Flag::CreateFlag(kAllowedBuiltinOp, &options.allowed_builtin_code,
"Allowed builtin code."),
tflite::Flag::CreateFlag(kReportErrorDuringInit,
&options.error_during_init,
"Report error during init."),
tflite::Flag::CreateFlag(kReportErrorDuringPrepare,
&options.error_during_prepare,
"Report error during prepare."),
tflite::Flag::CreateFlag(kReportErrorDuringInvoke,
&options.error_during_invoke,
"Report error during invoke."),
};
int argc = num_options + 1;
if (!tflite::Flags::Parse(&argc, argv.get(), flag_list)) {
return nullptr;
}
TFLITE_LOG(INFO) << "Dummy delegate: allowed_builtin_code set to "
<< options.allowed_builtin_code << ".";
TFLITE_LOG(INFO) << "Dummy delegate: error_during_init set to "
<< options.error_during_init << ".";
TFLITE_LOG(INFO) << "Dummy delegate: error_during_prepare set to "
<< options.error_during_prepare << ".";
TFLITE_LOG(INFO) << "Dummy delegate: error_during_invoke set to "
<< options.error_during_invoke << ".";
return TfLiteDummyDelegateCreate(&options);
}
} // namespace tools
} // namespace tflite
#ifdef __cplusplus
extern "C" {
#endif // __cplusplus
// Defines two symbols that need to be exported to use the TFLite external
// delegate. See tensorflow/lite/delegates/external for details.
TFL_CAPI_EXPORT TfLiteDelegate* tflite_plugin_create_delegate(
char** options_keys, char** options_values, size_t num_options,
void (*report_error)(const char*)) {
return tflite::tools::CreateDummyDelegateFromOptions(
options_keys, options_values, num_options);
}
TFL_CAPI_EXPORT void tflite_plugin_destroy_delegate(TfLiteDelegate* delegate) {
TfLiteDummyDelegateDelete(delegate);
}
#ifdef __cplusplus
}
#endif // __cplusplus
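For reference, the two exported symbols above are what the TFLite external-delegate loader resolves from the shared library at runtime. The standalone sketch below (not part of this change) looks them up by hand with dlopen/dlsym, which is roughly what `tensorflow/lite/delegates/external` does internally; the option key and value are just examples.
```
#include <dlfcn.h>

#include <cstddef>
#include <cstdio>

#include "tensorflow/lite/c/common.h"

using CreateFn = TfLiteDelegate* (*)(char**, char**, std::size_t,
                                     void (*)(const char*));
using DestroyFn = void (*)(TfLiteDelegate*);

int main() {
  void* lib = dlopen("/tmp/dummy_external_delegate.so", RTLD_NOW | RTLD_LOCAL);
  if (!lib) {
    std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
    return 1;
  }
  auto create = reinterpret_cast<CreateFn>(
      dlsym(lib, "tflite_plugin_create_delegate"));
  auto destroy = reinterpret_cast<DestroyFn>(
      dlsym(lib, "tflite_plugin_destroy_delegate"));

  // Key/value options are forwarded verbatim to CreateDummyDelegateFromOptions.
  char key[] = "error_during_init";
  char value[] = "true";
  char* keys[] = {key};
  char* values[] = {value};
  TfLiteDelegate* delegate = create(keys, values, 1, /*report_error=*/nullptr);

  // ... apply the delegate to an interpreter via ModifyGraphWithDelegate ...

  destroy(delegate);
  dlclose(lib);
  return 0;
}
```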

tensorflow/lite/tools/delegates/external_delegate_provider.cc

@@ -88,11 +88,23 @@ TfLiteDelegatePtr ExternalDelegateProvider::CreateTfLiteDelegate(
const std::vector<std::string> options =
SplitString(params.Get<std::string>("external_delegate_options"), ';');
std::vector<std::string> keys, values;
// Reserve memory upfront so that the insertions below never reallocate the
// vectors above, which would invalidate the c_str() pointers handed to
// delegate_options.
keys.reserve(options.size());
values.reserve(options.size());
for (const auto& option : options) {
auto key_value = SplitString(option, ':');
if (key_value.size() == 2) {
delegate_options.insert(&delegate_options, key_value[0].c_str(),
key_value[1].c_str());
// The inserted (key, value) pair has to outlive the
// TfLiteExternalDelegateCreate call, so the strings are stored in the
// 'keys' and 'values' vectors.
// Because delegate_options only keeps raw pointers into those strings, the
// vectors were reserved above to guarantee the pointers are not invalidated
// by reallocation.
keys.emplace_back(key_value[0]);
values.emplace_back(key_value[1]);
delegate_options.insert(&delegate_options, keys.back().c_str(),
values.back().c_str());
}
}
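To make the reasoning in the comments above concrete, here is a standalone sketch (not part of this change) of the pattern the fix relies on: pointers returned by `c_str()` on strings stored in a vector stay valid only as long as the vector does not reallocate.
```
#include <string>
#include <vector>

// Copies option strings into 'storage' and collects stable C-string pointers
// into 'raw'. The upfront reserve() guarantees no reallocation happens while
// pushing, so every pointer in 'raw' remains valid afterwards -- the same
// guarantee the provider above relies on before TfLiteExternalDelegateCreate
// consumes the pointers.
void CollectOptionPointers(const std::vector<std::string>& options,
                           std::vector<std::string>* storage,
                           std::vector<const char*>* raw) {
  storage->reserve(options.size());  // Without this, push_back may reallocate
                                     // and invalidate earlier c_str() results.
  for (const auto& option : options) {
    storage->push_back(option);
    raw->push_back(storage->back().c_str());
  }
}
```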