Fix the source specialization for the examples.

With https://github.com/tensorflow/tensorflow/pull/46473, we removed support
for TAGS on the makefile command line. An unintended consequence was that we
were no longer specializing the sources in the examples. This change
specializes the sources using the TARGET, which appears to be the only command
line option that is needed.

Manually confirmed that the generated arduino projects have the correct sources
(e.g. micro_speech/arduino/audio_provider.cc is used in the output directory).

Test sequence:

```
make -f tensorflow/lite/micro/tools/make/Makefile TARGET=arduino OPTIMIZED_KERNEL_DIR=cmsis_nn generate_arduino_zip
cd tensorflow/lite/micro/tools/make/gen/arduino_x86_64_default
unzip prj/tensorflow_lite.zip
```

And then confirmed that the code in

```
tensorflow/lite/micro/tools/make/gen/arduino_x86_64_default/tensorflow_lite/examples/micro_speech/arduino_audio_provider.cc
```

matches the code in:

```
tensorflow/lite/micro/examples/micro_speech/arduino/audio_provider.cc
```
parent de527a9478
commit 0c6169ff8c
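For readers skimming the diff, the mechanism is simple: for every source path dir/file.cc, prefer dir/$(TARGET)/file.cc whenever that file exists on disk. The snippet below is a minimal standalone sketch of that idea; `pick_specialized`, the hard-coded source list, and the file name `sketch.mk` are illustrative only and not the repo's helpers (those are the substitute_specialized_implementation(s) functions touched in the Makefile hunk below).

```
# Minimal sketch of TARGET-based source specialization (illustrative only).
# For each dir/file.cc, prefer dir/$(TARGET)/file.cc when it exists on disk,
# otherwise fall back to the reference file.
TARGET ?= arduino

# Hypothetical helper name; the repo uses substitute_specialized_implementation(s).
pick_specialized = \
  $(if $(wildcard $(dir $(1))$(2)/$(notdir $(1))),$(dir $(1))$(2)/$(notdir $(1)),$(1))

EXAMPLE_SRCS := tensorflow/lite/micro/examples/micro_speech/audio_provider.cc
SPECIALIZED_SRCS := $(foreach src,$(EXAMPLE_SRCS),$(call pick_specialized,$(src),$(TARGET)))

$(info $(SPECIALIZED_SRCS))

all: ;
```

Run from a TensorFlow checkout with `make -f sketch.mk TARGET=arduino`, this should print the arduino-specific audio_provider.cc path; for a TARGET with no specialized file it prints the reference path unchanged.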
@@ -29,13 +29,13 @@ visual wakewords model.
 
 To run the keyword benchmark on x86, run
 
 ```
-make -f tensorflow/lite/micro/tools/make/Makefile TAGS=posix test_keyword_benchmark
+make -f tensorflow/lite/micro/tools/make/Makefile test_keyword_benchmark
 ```
 
 To run the person detection benchmark on x86, run
 
 ```
-make -f tensorflow/lite/micro/tools/make/Makefile TAGS=posix test_person_detection_benchmark
+make -f tensorflow/lite/micro/tools/make/Makefile test_person_detection_benchmark
 ```
 
 ## Run on Xtensa XPG Simulator
@@ -44,7 +44,7 @@ To run the keyword benchmark on the Xtensa XPG simulator, you will need a valid
 Xtensa toolchain and license. With these set up, run:
 
 ```
-make -f tensorflow/lite/micro/tools/make/Makefile TARGET=xtensa-xpg XTENSA_CORE=<xtensa core> TAGS=xtensa_hifimini test_keyword_benchmark -j18
+make -f tensorflow/lite/micro/tools/make/Makefile TARGET=xtensa OPTIMIZED_KERNEL_DIR=xtensa TARGET_ARCH=<target architecture> XTENSA_CORE=<xtensa core> test_keyword_benchmark -j18
 ```
 
 ## Run on Sparkfun Edge
@@ -1,9 +1,12 @@
+<!-- mdformat off(b/169948621#comment2) -->
+
 # Info
 
-To use CMSIS-NN optimized kernels instead of reference kernel add TAGS=cmsis_nn
-to the make line. Some micro architectures have optimizations (M4 or higher),
-others don't. The kernels that doesn't have optimization for a certain micro
-architecture fallback to use TFLu reference kernels.
+To use CMSIS-NN optimized kernels instead of reference kernel add
+OPTIMIZED_KERNEL_DIR=cmsis_nn to the make line. Some micro architectures have
+optimizations (M4 or higher), others don't. The kernels that doesn't have
+optimization for a certain micro architecture fallback to use TFLu reference
+kernels.
 
 The optimizations are almost exclusively made for int8 (symmetric) model. For
 more details, please read
@@ -14,7 +17,7 @@ more details, please read
 A simple way to compile a binary with CMSIS-NN optimizations.
 
 ```
-make -f tensorflow/lite/micro/tools/make/Makefile TAGS=cmsis_nn \
+make -f tensorflow/lite/micro/tools/make/Makefile OPTIMIZED_KERNEL_DIR=cmsis_nn \
   TARGET=sparkfun_edge person_detection_int8_bin
 ```
 
@@ -24,7 +27,7 @@ Using mbed you'll be able to compile for the many different targets supported by
 mbed. Here's an example on how to do that. Start by generating an mbed project.
 
 ```
-make -f tensorflow/lite/micro/tools/make/Makefile TAGS=cmsis_nn \
+make -f tensorflow/lite/micro/tools/make/Makefile OPTIMIZED_KERNEL_DIR=cmsis_nn \
   generate_person_detection_mbed_project
 ```
 
@@ -39,18 +39,10 @@ substitute_specialized_implementation = \
 substitute_specialized_implementations = \
   $(foreach source,$(1),$(call substitute_specialized_implementation,$(source),$(2)))
 
-# Here we're first looking for specialized implementations in ref_dir/$(TAG1)
-# and then ref_dir/$(TAG2), etc, before falling back to ref_dir's
-# implementation.
-# The argument to this function should be a list of space-separated file paths,
-# with any wildcards already expanded.
-define specialize_on_tags
-$(if $(2),$(call substitute_specialized_implementations,$(call specialize_on_tags,$(1),$(wordlist 2,$(words $(2)),$(2))),$(firstword $(2))),$(1))
-endef
-
-# The entry point that most targets should use to find implementation-specific
-# versions of their source files. The only argument is a list of file paths.
-specialize = $(call specialize_on_tags,$(1),$(strip $(call reverse,$(ALL_TAGS))))
+# Tests and project generation targets use this entrypoint for to get the
+# specialized sources. It should be avoided for any new functionality.
+# The only argument is a list of file paths.
+specialize = $(call substitute_specialized_implementations,$(1),$(TARGET))
 
 # TODO(b/143904317): It would be better to have the dependency be
 # THIRD_PARTY_TARGETS instead of third_party_downloads. However, that does not
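With the simplified entry point in place, a usage sketch looks like the following. The variable name MY_EXAMPLE_SRCS is hypothetical (not taken from the TFLM makefiles) and the call assumes the helper definitions above have already been included; the shape matches how the new comment says test and project-generation rules should get their specialized sources.

```
# Hypothetical source list; not a variable from the TFLM makefiles.
MY_EXAMPLE_SRCS := \
  tensorflow/lite/micro/examples/micro_speech/audio_provider.cc \
  tensorflow/lite/micro/examples/micro_speech/command_responder.cc

# Assumes the specialize/substitute_specialized_implementations helpers above
# are already included. With TARGET=arduino, any source that has an arduino/
# sibling (e.g. micro_speech/arduino/audio_provider.cc) is swapped in; the
# rest are left untouched.
MY_EXAMPLE_SRCS := $(call specialize,$(MY_EXAMPLE_SRCS))
```

Because the substitution now keys off $(TARGET) alone, the same call covers project generation (e.g. the generate_arduino_zip test sequence above) without any TAGS on the command line.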