minor spelling tweaks

Kazuaki Ishizaki 2020-03-24 12:34:02 +09:00
parent b009488e56
commit 49418bd074
12 changed files with 16 additions and 16 deletions

View File

@@ -3397,7 +3397,7 @@ def TFL_BidirectionalSequenceLSTMOp :
let summary = "Bidirectional sequence lstm operator";
let description = [{
-Bidirectional lstm is essentiallay two lstms, one running forward & the
+Bidirectional lstm is essentially two lstms, one running forward & the
other running backward. And the output is the concatenation of the two
lstms.
}];

View File

@@ -51,7 +51,7 @@ class HLOClient_Op<string mnemonic, list<OpTrait> traits> :
// broadcasting (via the broadcast_dimensions attribute) and implicit degenerate
// shape broadcasting.
//
-// These have 1:1 correspondance with same-named ops in the xla_hlo dialect;
+// These have 1:1 correspondence with same-named ops in the xla_hlo dialect;
// however, those operations do not support broadcasting.
//
// See:

View File

@@ -382,7 +382,7 @@ class createIotaOp<string dim>: NativeCodeCall<
def createConvertOp: NativeCodeCall<
"CreateConvertOp(&($_builder), $0.getOwner()->getLoc(), $1, $2)">;
-// Performs a substitution of MatrixBandPartOp for XLA HLO ops. Psuedocode is
+// Performs a substitution of MatrixBandPartOp for XLA HLO ops. Pseudocode is
// shown below, given a tensor `input` with k dimensions [I, J, K, ..., M, N]
// and two integers, `num_lower` and `num_upper`:
//
@@ -454,14 +454,14 @@ def : Pat<(TF_ConstOp:$res ElementsAttr:$value), (HLO_ConstOp $value),
// TODO(hinsu): Make these patterns to TF to TF lowering. Relu6 lowering will
// require HLO canonicalization of min and max on a tensor to ClampOp.
-// TODO(hinsu): Lower unsinged and quantized types after supporting
+// TODO(hinsu): Lower unsigned and quantized types after supporting
// them in GetScalarOfType.
def : Pat<(TF_ReluOp AnyRankedTensor:$input),
(HLO_MaxOp (HLO_ConstOp:$zero (GetScalarOfType<0> $input)), $input,
(BinBroadcastDimensions $zero, $input)),
[(TF_SintOrFpTensor $input)]>;
-// TODO(hinsu): Lower unsinged and quantized types after supporting
+// TODO(hinsu): Lower unsigned and quantized types after supporting
// them in GetScalarOfType.
def : Pat<(TF_Relu6Op AnyRankedTensor:$input),
(HLO_ClampOp (HLO_ConstOp (GetScalarOfType<0> $input)), $input,

View File

@@ -40,7 +40,7 @@ interpreter->Invoke()
...
-// IMPORTANT: release the interpreter before destroing the delegate
+// IMPORTANT: release the interpreter before destroying the delegate
interpreter.reset();
TfLiteXNNPackDelegateDelete(xnnpack_delegate);
```
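The line fixed above sits in the XNNPACK delegate README's usage snippet. For context on why the ordering in that comment matters, here is a minimal sketch of the full delegate lifecycle, assuming the standard `TfLiteXNNPackDelegateCreate`/`TfLiteXNNPackDelegateDelete` APIs; the model-loading boilerplate and the `RunWithXnnpack` wrapper are illustrative and not part of the patched file.

```c++
#include <memory>

#include "tensorflow/lite/delegates/xnnpack/xnnpack_delegate.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

// Illustrative wrapper, not part of the patched README.
void RunWithXnnpack(const char* model_path) {
  // Load the model and build an interpreter.
  auto model = tflite::FlatBufferModel::BuildFromFile(model_path);
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);

  // Create the XNNPACK delegate with default options and attach it
  // before tensors are allocated.
  TfLiteXNNPackDelegateOptions options = TfLiteXNNPackDelegateOptionsDefault();
  TfLiteDelegate* xnnpack_delegate = TfLiteXNNPackDelegateCreate(&options);
  interpreter->ModifyGraphWithDelegate(xnnpack_delegate);

  interpreter->AllocateTensors();
  // ... fill input tensors, then run inference ...
  interpreter->Invoke();
  // ... read output tensors ...

  // IMPORTANT: release the interpreter before destroying the delegate,
  // since the interpreter holds a non-owning pointer to it.
  interpreter.reset();
  TfLiteXNNPackDelegateDelete(xnnpack_delegate);
}
```

Destroying the delegate while the interpreter still references it would leave a dangling pointer, which is why the interpreter is reset first.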

View File

@@ -133,7 +133,7 @@ But also the following advantages:
The philosophy underlying this profiler is that software performance depends on
software engineers profiling often, and a key factor limiting that in practice
is the difficulty or cumbersome aspects of profiling with more serious profilers
-such as Linux's "perf", espectially in embedded/mobile development: multiple
+such as Linux's "perf", especially in embedded/mobile development: multiple
command lines are involved to copy symbol files to devices, retrieve profile
data from the device, etc. In that context, it is useful to make profiling as
easy as benchmarking, even on embedded targets, even if the price to pay for

View File

@@ -171,7 +171,7 @@ TensorFlow Lite metadata provides a standard for model descriptions. The
metadata is an important source of knowledge about what the model does and its
input / output information. This makes it easier for other developers to
understand the best practices and for code generators to create platform
-specific wrapper code. For more infomation, please refer to the
+specific wrapper code. For more information, please refer to the
[TensorFlow Lite Metadata](metadata.md) section.
## Installing TensorFlow <a name="versioning"></a>
@@ -192,7 +192,7 @@ either install the nightly build with
[Docker](https://www.tensorflow.org/install/docker), or
[build the pip package from source](https://www.tensorflow.org/install/source).
-### Custom ops in the experimenal new converter
+### Custom ops in the experimental new converter
There is a behavior change in how models containing
[custom ops](https://www.tensorflow.org/lite/guide/ops_custom) (those for which

View File

@@ -52,7 +52,7 @@ operator is executed. Check out our
Model optimization aims to create smaller models that are generally faster and
more energy efficient, so that they can be deployed on mobile devices. There are
-multiple optimization techniques suppored by TensorFlow Lite, such as
+multiple optimization techniques supported by TensorFlow Lite, such as
quantization.
Check out our [model optimization docs](model_optimization.md) for details.

View File

@@ -420,7 +420,7 @@ using [ARM Mbed](https://github.com/ARMmbed/mbed-cli).
```
mbed compile --target K66F --toolchain GCC_ARM --profile release
```
-8. For some mbed compliers, you may get compile error in mbed_rtc_time.cpp.
+8. For some mbed compilers, you may get compile error in mbed_rtc_time.cpp.
Go to `mbed-os/platform/mbed_rtc_time.h` and comment line 32 and line 37:
```

View File

@@ -202,7 +202,7 @@ The next steps assume that the
* The `IDF_PATH` environment variable is set
* `idf.py` and Xtensa-esp32 tools (e.g. `xtensa-esp32-elf-gcc`) are in `$PATH`
-* `esp32-camera` should be downloaded in `comopnents/` dir of example as
+* `esp32-camera` should be downloaded in `components/` dir of example as
explained in `Building the example`(below)
### Generate the examples

View File

@@ -16,7 +16,7 @@ The next steps assume that the
[IDF environment variables are set](https://docs.espressif.com/projects/esp-idf/en/latest/get-started/index.html#step-4-set-up-the-environment-variables) :
* The `IDF_PATH` environment variable is set. * `idf.py` and Xtensa-esp32 tools
(e.g., `xtensa-esp32-elf-gcc`) are in `$PATH`. * `esp32-camera` should be
-downloaded in `comopnents/` dir of example as explained in `Build the
+downloaded in `components/` dir of example as explained in `Build the
example`(below)
## Build the example

View File

@@ -37,7 +37,7 @@ bazel build -c opt \
adb install -r -d -g bazel-bin/tensorflow/lite/tools/benchmark/android/benchmark_model.apk
```
Note: Make sure to install with "-g" option to grant the permission for reading
-extenal storage.
+external storage.
(3) Push the compute graph that you need to test.
@@ -119,6 +119,6 @@ a trace file,
between tracing formats and
[create](https://developer.android.com/topic/performance/tracing/on-device#create-html-report)
an HTML report.
-Note that, the catured tracing file format is either in Perfetto format or in
+Note that, the captured tracing file format is either in Perfetto format or in
Systrace format depending on the Android version of your device. Select the
appropriate method to handle the generated file.

View File

@@ -83,7 +83,7 @@ this UI, to see the logs for a failed build:
* Submit special pull request (PR) comment to trigger CI: **bot:mlx:test**
* Test session is run automatically.
-* Test results and artefacts (log files) are reported via PR comments
+* Test results and artifacts (log files) are reported via PR comments
##### CI Steps