Merge pull request #37852 from kiszk:spelling_tweaks_mdtd

PiperOrigin-RevId: 302860845
Change-Id: I62b871d05aff1016ae4feab3fc7c6f60c5eca144
TensorFlower Gardener 2020-03-25 04:21:42 -07:00
commit c6cab14e56
12 changed files with 49 additions and 30 deletions


@@ -3412,7 +3412,7 @@ def TFL_BidirectionalSequenceLSTMOp :
 let summary = "Bidirectional sequence lstm operator";
 let description = [{
-Bidirectional lstm is essentiallay two lstms, one running forward & the
+Bidirectional lstm is essentially two lstms, one running forward & the
 other running backward. And the output is the concatenation of the two
 lstms.
 }];


@@ -51,7 +51,7 @@ class HLOClient_Op<string mnemonic, list<OpTrait> traits> :
 // broadcasting (via the broadcast_dimensions attribute) and implicit degenerate
 // shape broadcasting.
 //
-// These have 1:1 correspondance with same-named ops in the xla_hlo dialect;
+// These have 1:1 correspondence with same-named ops in the xla_hlo dialect;
 // however, those operations do not support broadcasting.
 //
 // See:


@@ -382,7 +382,7 @@ class createIotaOp<string dim>: NativeCodeCall<
 def createConvertOp: NativeCodeCall<
 "CreateConvertOp(&($_builder), $0.getOwner()->getLoc(), $1, $2)">;
-// Performs a substitution of MatrixBandPartOp for XLA HLO ops. Psuedocode is
+// Performs a substitution of MatrixBandPartOp for XLA HLO ops. Pseudocode is
 // shown below, given a tensor `input` with k dimensions [I, J, K, ..., M, N]
 // and two integers, `num_lower` and `num_upper`:
 //
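The MatrixBandPartOp pseudocode referenced in the comment above keeps only the entries within `num_lower` sub-diagonals and `num_upper` super-diagonals of the last two dimensions, with a negative bound meaning "keep that whole triangle". A minimal pure-Python sketch of those semantics for a 2-D input; the helper name is illustrative and not part of the patched file:

```python
def matrix_band_part(x, num_lower, num_upper):
    # Keep x[i][j] only when it lies within num_lower sub-diagonals and
    # num_upper super-diagonals; a negative bound keeps that whole triangle.
    m, n = len(x), len(x[0])
    return [
        [
            x[i][j]
            if (num_lower < 0 or i - j <= num_lower)
            and (num_upper < 0 or j - i <= num_upper)
            else 0
            for j in range(n)
        ]
        for i in range(m)
    ]
```

For example, `num_lower = num_upper = 0` retains only the main diagonal, while `num_lower = -1, num_upper = 0` retains the full lower triangle.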
@@ -454,14 +454,14 @@ def : Pat<(TF_ConstOp:$res ElementsAttr:$value), (HLO_ConstOp $value),
 // TODO(hinsu): Make these patterns to TF to TF lowering. Relu6 lowering will
 // require HLO canonicalization of min and max on a tensor to ClampOp.
-// TODO(hinsu): Lower unsinged and quantized types after supporting
+// TODO(hinsu): Lower unsigned and quantized types after supporting
 // them in GetScalarOfType.
 def : Pat<(TF_ReluOp AnyRankedTensor:$input),
 (HLO_MaxOp (HLO_ConstOp:$zero (GetScalarOfType<0> $input)), $input,
 (BinBroadcastDimensions $zero, $input)),
 [(TF_SintOrFpTensor $input)]>;
-// TODO(hinsu): Lower unsinged and quantized types after supporting
+// TODO(hinsu): Lower unsigned and quantized types after supporting
 // them in GetScalarOfType.
 def : Pat<(TF_Relu6Op AnyRankedTensor:$input),
 (HLO_ClampOp (HLO_ConstOp (GetScalarOfType<0> $input)), $input,
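The two patterns above lower `TF_ReluOp` to a max against a zero scalar and `TF_Relu6Op` to a clamp. In scalar terms (a sketch of the math only, not of the HLO ops themselves):

```python
def relu(x):
    # TF_ReluOp lowers to max(0, x), per the HLO_MaxOp pattern above.
    return max(0.0, x)

def relu6(x):
    # TF_Relu6Op lowers to clamp(0, x, 6), per the HLO_ClampOp pattern above.
    return min(max(0.0, x), 6.0)
```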


@@ -14,7 +14,6 @@ the model to the XNNPACK delegate. The users must destroy the delegate with
 `TfLiteXNNPackDelegateDelete` **after** releasing the TensorFlow Lite
 interpreter. The snippet below illustrates the typical usage:
 ```c++
 // Build the interpreter
 std::unique_ptr<tflite::Interpreter> interpreter;
@@ -40,7 +39,7 @@ interpreter->Invoke()
 ...
-// IMPORTANT: release the interpreter before destroing the delegate
+// IMPORTANT: release the interpreter before destroying the delegate
 interpreter.reset();
 TfLiteXNNPackDelegateDelete(xnnpack_delegate);
 ```


@@ -133,7 +133,7 @@ But also the following advantages:
 The philosophy underlying this profiler is that software performance depends on
 software engineers profiling often, and a key factor limiting that in practice
 is the difficulty or cumbersome aspects of profiling with more serious profilers
-such as Linux's "perf", espectially in embedded/mobile development: multiple
+such as Linux's "perf", especially in embedded/mobile development: multiple
 command lines are involved to copy symbol files to devices, retrieve profile
 data from the device, etc. In that context, it is useful to make profiling as
 easy as benchmarking, even on embedded targets, even if the price to pay for


@@ -171,7 +171,7 @@ TensorFlow Lite metadata provides a standard for model descriptions. The
 metadata is an important source of knowledge about what the model does and its
 input / output information. This makes it easier for other developers to
 understand the best practices and for code generators to create platform
-specific wrapper code. For more infomation, please refer to the
+specific wrapper code. For more information, please refer to the
 [TensorFlow Lite Metadata](metadata.md) section.
 ## Installing TensorFlow <a name="versioning"></a>
@@ -192,7 +192,7 @@ either install the nightly build with
 [Docker](https://www.tensorflow.org/install/docker), or
 [build the pip package from source](https://www.tensorflow.org/install/source).
-### Custom ops in the experimenal new converter
+### Custom ops in the experimental new converter
 There is a behavior change in how models containing
 [custom ops](https://www.tensorflow.org/lite/guide/ops_custom) (those for which


@@ -52,7 +52,7 @@ operator is executed. Check out our
 Model optimization aims to create smaller models that are generally faster and
 more energy efficient, so that they can be deployed on mobile devices. There are
-multiple optimization techniques suppored by TensorFlow Lite, such as
+multiple optimization techniques supported by TensorFlow Lite, such as
 quantization.
 Check out our [model optimization docs](model_optimization.md) for details.


@@ -397,16 +397,23 @@ The following instructions will help you build and deploy the sample to the
 [NXP FRDM K66F](https://www.nxp.com/design/development-boards/freedom-development-boards/mcu-boards/freedom-development-platform-for-kinetis-k66-k65-and-k26-mcus:FRDM-K66F)
 using [ARM Mbed](https://github.com/ARMmbed/mbed-cli).
-1. Download [the TensorFlow source code](https://github.com/tensorflow/tensorflow).
-2. Follow instructions from [mbed website](https://os.mbed.com/docs/mbed-os/v5.13/tools/installation-and-setup.html) to setup and install mbed CLI.
+1. Download
+   [the TensorFlow source code](https://github.com/tensorflow/tensorflow).
+2. Follow instructions from
+   [mbed website](https://os.mbed.com/docs/mbed-os/v5.13/tools/installation-and-setup.html)
+   to setup and install mbed CLI.
 3. Compile TensorFlow with the following command to generate mbed project:
 ```
 make -f tensorflow/lite/micro/tools/make/Makefile TARGET=mbed TAGS="nxp_k66f" generate_micro_speech_mbed_project
 ```
-4. Go to the location of the generated project. The generated project is usually
-   in `tensorflow/lite/micro/tools/make/gen/mbed_cortex-m4/prj/micro_speech/mbed`
+4. Go to the location of the generated project. The generated project is
+   usually in
+   `tensorflow/lite/micro/tools/make/gen/mbed_cortex-m4/prj/micro_speech/mbed`
 5. Create a mbed project using the generated files: `mbed new .`
 6. Change the project setting to use C++ 11 rather than C++ 14 using:
 ```
@@ -415,13 +422,15 @@ using [ARM Mbed](https://github.com/ARMmbed/mbed-cli).
 for line in fileinput.input(filename, inplace=True):
 print line.replace("\"-std=gnu++14\"","\"-std=c++11\", \"-fpermissive\"")'
 ```
 7. To compile project, use the following command:
 ```
 mbed compile --target K66F --toolchain GCC_ARM --profile release
 ```
-8. For some mbed compliers, you may get compile error in mbed_rtc_time.cpp.
-   Go to `mbed-os/platform/mbed_rtc_time.h` and comment line 32 and line 37:
+8. For some mbed compilers, you may get compile error in mbed_rtc_time.cpp. Go
+   to `mbed-os/platform/mbed_rtc_time.h` and comment line 32 and line 37:
 ```
 //#if !defined(__GNUC__) || defined(__CC_ARM) || defined(__clang__)
@@ -431,25 +440,35 @@ using [ARM Mbed](https://github.com/ARMmbed/mbed-cli).
 };
 //#endif
 ```
-9. Look at helpful resources from NXP website such as [NXP FRDM-K66F User guide](https://www.nxp.com/docs/en/user-guide/FRDMK66FUG.pdf) and [NXP FRDM-K66F Getting Started](https://www.nxp.com/document/guide/get-started-with-the-frdm-k66f:NGS-FRDM-K66F)
+9. Look at helpful resources from NXP website such as
+   [NXP FRDM-K66F User guide](https://www.nxp.com/docs/en/user-guide/FRDMK66FUG.pdf)
+   and
+   [NXP FRDM-K66F Getting Started](https://www.nxp.com/document/guide/get-started-with-the-frdm-k66f:NGS-FRDM-K66F)
 to understand information about the board.
 10. Connect the USB cable to the micro USB port. When the Ethernet port is
 facing towards you, the micro USB port is left of the Ethernet port.
 11. To compile and flash in a single step, add the `--flash` option:
 ```
 mbed compile --target K66F --toolchain GCC_ARM --profile release --flash
 ```
 12. Disconnect USB cable from the device to power down the device and connect
 back the power cable to start running the model.
-13. Connect to serial port with baud rate of 9600 and correct serial device
-    to view the output from the MCU. In linux, you can run the following screen
+13. Connect to serial port with baud rate of 9600 and correct serial device to
+    view the output from the MCU. In linux, you can run the following screen
 command if the serial device is `/dev/ttyACM0`:
 ```
 sudo screen /dev/ttyACM0 9600
 ```
 14. Saying "Yes" will print "Yes" and "No" will print "No" on the serial port.
 15. A loopback path from microphone to headset jack is enabled. Headset jack is
 in black color. If there is no output on the serial port, you can connect
 headphone to headphone port to check if audio loopback path is working.
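Step 6 of the instructions above patches the generated mbed build profile with a Python 2 one-liner. An equivalent Python 3 sketch of that in-place edit, assuming the same flag strings; the function name and `filename` argument are illustrative:

```python
import fileinput


def use_cpp11(filename):
    # Rewrite the mbed build profile in place, replacing the C++14 flag
    # with C++11 plus -fpermissive. With inplace=True, fileinput redirects
    # stdout into the file, so print() writes the edited lines back.
    for line in fileinput.input(filename, inplace=True):
        print(line.replace('"-std=gnu++14"',
                           '"-std=c++11", "-fpermissive"'), end="")
```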


@@ -202,7 +202,7 @@ The next steps assume that the
 * The `IDF_PATH` environment variable is set
 * `idf.py` and Xtensa-esp32 tools (e.g. `xtensa-esp32-elf-gcc`) are in `$PATH`
-* `esp32-camera` should be downloaded in `comopnents/` dir of example as
+* `esp32-camera` should be downloaded in `components/` dir of example as
 explained in `Building the example`(below)
 ### Generate the examples


@@ -16,7 +16,7 @@ The next steps assume that the
 [IDF environment variables are set](https://docs.espressif.com/projects/esp-idf/en/latest/get-started/index.html#step-4-set-up-the-environment-variables) :
 * The `IDF_PATH` environment variable is set. * `idf.py` and Xtensa-esp32 tools
 (e.g., `xtensa-esp32-elf-gcc`) are in `$PATH`. * `esp32-camera` should be
-downloaded in `comopnents/` dir of example as explained in `Build the
+downloaded in `components/` dir of example as explained in `Build the
 example`(below)
 ## Build the example


@@ -36,8 +36,9 @@ bazel build -c opt \
 ```
 adb install -r -d -g bazel-bin/tensorflow/lite/tools/benchmark/android/benchmark_model.apk
 ```
 Note: Make sure to install with "-g" option to grant the permission for reading
-extenal storage.
+external storage.
 (3) Push the compute graph that you need to test.
@@ -113,12 +114,12 @@ the system dismisses the notification and displays a third notification "Trace
 saved", confirming that your trace has been saved and that you're ready to share
 the system trace.
-(9) [Share](https://developer.android.com/topic/performance/tracing/on-device#share-trace)
+(9)
+[Share](https://developer.android.com/topic/performance/tracing/on-device#share-trace)
 a trace file,
 [convert](https://developer.android.com/topic/performance/tracing/on-device#converting_between_trace_formats)
 between tracing formats and
 [create](https://developer.android.com/topic/performance/tracing/on-device#create-html-report)
-an HTML report.
-Note that, the catured tracing file format is either in Perfetto format or in
-Systrace format depending on the Android version of your device. Select the
-appropriate method to handle the generated file.
+an HTML report. Note that, the captured tracing file format is either in
+Perfetto format or in Systrace format depending on the Android version of your
+device. Select the appropriate method to handle the generated file.


@@ -83,7 +83,7 @@ this UI, to see the logs for a failed build:
 * Submit special pull request (PR) comment to trigger CI: **bot:mlx:test**
 * Test session is run automatically.
-* Test results and artefacts (log files) are reported via PR comments
+* Test results and artifacts (log files) are reported via PR comments
 ##### CI Steps