Merge pull request #44276 from kiszk:spelling_tweaks_lite_doc

PiperOrigin-RevId: 339223647
Change-Id: I55d089e6090d4a920e5b4f675c79c9f600e53c20
TensorFlower Gardener 2020-10-27 04:44:26 -07:00
commit f122343d36
20 changed files with 106 additions and 109 deletions

@@ -37,7 +37,7 @@ There are three parts to the model metadata in the
[schema](https://github.com/tensorflow/tflite-support/blob/master/tensorflow_lite_support/metadata/metadata_schema.fbs):
1. **Model information** - Overall description of the model as well as items
such as licence terms. See
such as license terms. See
[ModelMetadata](https://github.com/tensorflow/tflite-support/blob/4cd0551658b6e26030e0ba7fc4d3127152e0d4ae/tensorflow_lite_support/metadata/metadata_schema.fbs#L640).
2. **Input information** - Description of the inputs and pre-processing
required such as normalization. See
@@ -82,8 +82,8 @@ is compatible with existing TFLite framework and Interpreter. See
[Pack mtadata and associated files into the model](#pack-metadata-and-associated-files-into-the-model)
for more details.
The associated file information can be recored in the metadata. Depending on the
file type and where the file is attached to (i.e. `ModelMetadata`,
The associated file information can be recorded in the metadata. Depending on
the file type and where the file is attached to (i.e. `ModelMetadata`,
`SubGraphMetadata`, and `TensorMetadata`),
[the TensorFlow Lite Android code generator](../inference_with_metadata/codegen.md)
may apply corresponding pre/post processing automatically to the object. See
@@ -328,7 +328,7 @@ populator.populate()
You can pack as many associated files as you want into the model through
`load_associated_files`. However, it is required to pack at least those files
documented in the metadata. In this example, packing the lable file is
documented in the metadata. In this example, packing the label file is
mandatory.
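As a rough Python sketch of that packing step (the file names here are placeholders, and the model is assumed to already carry metadata that references the label file):

```python
from tflite_support import metadata as _metadata

# "model_with_metadata.tflite" and "labels.txt" are placeholder paths.
populator = _metadata.MetadataPopulator.with_model_file("model_with_metadata.tflite")
# Every file documented in the metadata must be packed; extra files are allowed.
populator.load_associated_files(["labels.txt"])
populator.populate()
```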
## Visualize the metadata
@@ -375,12 +375,12 @@ does not imply the true incompatibility. When bumping up the MAJOR number, it
does not necessarily mean the backwards compatibility is broken. Therefore, we
use the
[Flatbuffers file identification](https://google.github.io/flatbuffers/md__schemas.html),
[file_identifiler](https://github.com/tensorflow/tflite-support/blob/4cd0551658b6e26030e0ba7fc4d3127152e0d4ae/tensorflow_lite_support/metadata/metadata_schema.fbs#L61),
[file_identifier](https://github.com/tensorflow/tflite-support/blob/4cd0551658b6e26030e0ba7fc4d3127152e0d4ae/tensorflow_lite_support/metadata/metadata_schema.fbs#L61),
to denote the true compatibility of the metadata schema. The file identifier is
exactly 4 characters long. It is fixed to a certain metadata schema and not
subject to change by users. If the backward compatibility of the metadata schema
has to be broken for some reason, the file_identifier will bump up, for example,
from “M001” to “M002”. File_identifiler is expected to be changed much less
from “M001” to “M002”. File_identifier is expected to be changed much less
frequently than the metadata_version.
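As a small illustration (not part of the official tooling), the identifier of a serialized metadata FlatBuffer can be read directly, since FlatBuffers place the 4-character file identifier at bytes 4–8 of the buffer:

```python
def metadata_file_identifier(metadata_buf: bytes) -> str:
    # FlatBuffers store the 4-character file identifier right after the
    # 4-byte root offset, i.e. at bytes 4-8 of the serialized buffer.
    return metadata_buf[4:8].decode("ascii")

# For the current schema this is expected to return "M001".
```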
### The minimum necessary metadata parser version

@@ -137,7 +137,7 @@ with your own model and test data.
The `BetNLClassifier` API expects a TFLite model with mandatory
[TFLite Model Metadata](../../convert/metadata.md).
The Metadata should meet the following requiresments:
The Metadata should meet the following requirements:
* input_process_units for Wordpiece/Sentencepiece Tokenizer

@@ -153,7 +153,7 @@ with your own model and test data.
The `BertQuestionAnswerer` API expects a TFLite model with mandatory
[TFLite Model Metadata](../../convert/metadata.md).
The Metadata should meet the following requiresments:
The Metadata should meet the following requirements:
* `input_process_units` for Wordpiece/Sentencepiece Tokenizer

@@ -370,7 +370,7 @@ native API to be built first.
Here is an example using ObjC
[`TFLBertQuestionAnswerer`](https://github.com/tensorflow/tflite-support/blob/master/tensorflow_lite_support/ios/task/text/qa/Sources/TFLBertQuestionAnswerer.h)
for [MobileBert](https://tfhub.dev/tensorflow/lite-model/mobilebert/1/default/1)
in Swfit.
in Swift.
```swift
static let mobileBertModelPath = "path/to/model.tflite";
@@ -427,7 +427,7 @@ following the steps below:
std::unique_ptr<QuestionAnswererCPP> _bertQuestionAnswerwer;
}
// Initilalize the native API object
// Initialize the native API object
+ (instancetype)mobilebertQuestionAnswererWithModelPath:(NSString *)modelPath
vocabPath:(NSString *)vocabPath {
absl::StatusOr<std::unique_ptr<QuestionAnswererCPP>> cQuestionAnswerer =

@@ -39,7 +39,7 @@ help in understanding performance bottlenecks and which operators dominate the
computation time.
You can also use
[TensrFlow Lite tracing](measurement.md#trace_tensorflow_lite_internals_in_android)
[TensorFlow Lite tracing](measurement.md#trace_tensorflow_lite_internals_in_android)
to profile the model in your Android application, using standard Android system
tracing, and to visualize the operator invocations by time with GUI based
profiling tools.

@@ -186,7 +186,7 @@ You can get nightly pre-built binaries for this tool as listed below:
* [android_aarch64](https://storage.googleapis.com/tensorflow-nightly-public/prod/tensorflow/release/lite/tools/nightly/latest/android_aarch64_benchmark_model_performance_options)
* [android_arm](https://storage.googleapis.com/tensorflow-nightly-public/prod/tensorflow/release/lite/tools/nightly/latest/android_arm_benchmark_model_performance_options)
### iOS benchamark app
### iOS benchmark app
To run benchmarks on iOS device, you need to build the app from
[source](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/tools/benchmark/ios).
@@ -421,7 +421,7 @@ internal events.
Some examples of events are:
* Operator invocation
* Graph modification by deleagate
* Graph modification by delegate
* Tensor allocation
Among different options for capturing traces, this guide covers the Android

@@ -45,7 +45,7 @@ Supported mapping type: string → int64, int64 → string
<tr>
<td rowspan="2" >tf.lookup.index_table_from_tensor
</td>
<td rowspan="2" colspan="5" >Supported natively when num_oov_bukcets=0 and dtype=dtypes.string.
<td rowspan="2" colspan="5" >Supported natively when num_oov_buckets=0 and dtype=dtypes.string.
<p>
For the oov concept, you will need a <a href="https://www.tensorflow.org/lite/guide/ops_select" title="Select TensorFlow operators to use in TensorFlow Lite">Flex delegate</a>.
</td>
@@ -78,8 +78,6 @@ tf.contrib.lookup.MutableDenseHashTable
</tr>
</table>
## Python Sample code
Here, you can find the Python sample code:

@@ -109,7 +109,7 @@ fixing a bug needs a bigger architectural change.
### Reference Kernel Implementations
Pull requests that port reference kernels from TF Lite Mobile to TF Lite Micro
are welcome once we have enouch context from the contributor on why the
are welcome once we have enough context from the contributor on why the
additional kernel is needed.
1. Please create a

@@ -117,7 +117,7 @@ detailed allocation logging:
#include "recording_micro_interpreter.h"
// Simply change the class name from 'MicroInterpreter' to 'RecordingMicroInterpreter':
tflite::RecoridngMicroInterpreter interpreter(
tflite::RecordingMicroInterpreter interpreter(
tflite::GetModel(my_model_data), ops_resolver,
tensor_arena, tensor_arena_size, error_reporter);

@@ -168,10 +168,8 @@ make -f tensorflow/lite/micro/tools/make/Makefile TARGET=esp generate_hello_worl
### Building the example
Go the the example project directory
```
cd tensorflow/lite/micro/tools/make/gen/esp_xtensa-esp32/prj/hello_world/esp-idf
```
Go to the example project directory `cd
tensorflow/lite/micro/tools/make/gen/esp_xtensa-esp32/prj/hello_world/esp-idf`
Then build with `idf.py`
```
@@ -201,7 +199,7 @@ idf.py --port /dev/ttyUSB0 flash monitor
The following instructions will help you build and deploy this example to
[HIMAX WE1 EVB](https://github.com/HimaxWiseEyePlus/bsp_tflu/tree/master/HIMAX_WE1_EVB_board_brief)
board. To undstand more about using this board, please check
board. To understand more about using this board, please check
[HIMAX WE1 EVB user guide](https://github.com/HimaxWiseEyePlus/bsp_tflu/tree/master/HIMAX_WE1_EVB_user_guide).
### Initial Setup

@@ -145,7 +145,7 @@ SLOPE:
The following instructions will help you build and deploy this example to
[HIMAX WE1 EVB](https://github.com/HimaxWiseEyePlus/bsp_tflu/tree/master/HIMAX_WE1_EVB_board_brief)
board. To undstand more about using this board, please check
board. To understand more about using this board, please check
[HIMAX WE1 EVB user guide](https://github.com/HimaxWiseEyePlus/bsp_tflu/tree/master/HIMAX_WE1_EVB_user_guide).
### Initial Setup
@@ -246,7 +246,7 @@ Following the Steps to run magic wand example at HIMAX WE1 EVB platform.
After these steps, press reset button on the HIMAX WE1 EVB, you will see
application output in the serial terminal. Perform following gestures
`'Wing'`,`'Ring'`,`'Slope'` and you can see the otuput in serial terminal.
`'Wing'`,`'Ring'`,`'Slope'` and you can see the output in serial terminal.
```
WING:

@@ -69,7 +69,7 @@ generate_micro_speech_mock_make_project
```
Note that `TAGS=reduce_codesize` applies example specific changes of code to
reduce total size of application. It can be ommited.
reduce total size of application. It can be omitted.
### Build and Run Example
@@ -220,7 +220,7 @@ generate_micro_speech_esp_project`
### Building the example
Go the the example project directory `cd
Go to the example project directory `cd
tensorflow/lite/micro/tools/make/gen/esp_xtensa-esp32/prj/micro_speech/esp-idf`
Then build with `idf.py` `idf.py build`
@@ -688,9 +688,9 @@ The following instructions will help you build and deploy the sample to the
5. Build the project:
/tensorflow/lite/micro/tools/make/gen/ceva_bx1/prj/micro_speech/make$ make
6. This should build the project and create a file called micro_speech.elf.
7. The supplied configuarion reads input from a files and expects a file called
input.wav (easily changed in audio_provider.cc) to be placed in the same
directory of the .elf file
7. The supplied configuration reads input from a files and expects a file
called input.wav (easily changed in audio_provider.cc) to be placed in the
same directory of the .elf file
8. We used Google's speech command dataset: V0.0.2:
http://download.tensorflow.org/data/speech_commands_v0.02.tar.gz V0.0.1:
http://download.tensorflow.org/data/speech_commands_v0.01.tar.gz
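As one hypothetical way to stage the `input.wav` mentioned in step 7 (paths and the chosen utterance are assumptions, not part of the official flow), a short Python helper could copy a sample out of the downloaded archive next to the `.elf`:

```python
import pathlib
import shutil
import tarfile

# Placeholder paths: the downloaded speech commands archive and the directory
# that contains micro_speech.elf.
archive = "speech_commands_v0.02.tar.gz"
elf_dir = pathlib.Path("path/to/micro_speech/elf_dir")

with tarfile.open(archive) as tar:
    # Pick any 16 kHz mono utterance, e.g. the first "yes" sample found.
    member = next(m for m in tar.getmembers()
                  if "yes/" in m.name and m.name.endswith(".wav"))
    with tar.extractfile(member) as src, open(elf_dir / "input.wav", "wb") as dst:
        shutil.copyfileobj(src, dst)
```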

@@ -308,7 +308,7 @@ generate_person_detection_esp_project`
### Building the example
Go the the example project directory `cd
Go to the example project directory `cd
tensorflow/lite/micro/tools/make/gen/esp_xtensa-esp32/prj/person_detection/esp-idf`
As the `person_detection` example requires an external component `esp32-camera`

@@ -140,41 +140,41 @@ This will take a couple of days on a single-GPU v100 instance to complete all
one-million steps, but you should be able to get a fairly accurate model after
a few hours if you want to experiment early.
- The checkpoints and summaries will the saved in the folder given in the
`--train_dir` argument, so that's where you'll have to look for the results.
- The `--dataset_dir` parameter should match the one where you saved the
TFRecords from the Visual Wake Words build script.
- The architecture we'll be using is defined by the `--model_name` argument.
The 'mobilenet_v1' prefix tells the script to use the first version of
MobileNet. We did experiment with later versions, but these used more RAM for
their intermediate activation buffers, so for now we kept with the original.
The '025' is the depth multiplier to use, which mostly affects the number of
weight parameters, this low setting ensures the model fits within 250KB of
Flash.
- `--preprocessing_name` controls how input images are modified before they're
fed into the model. The 'mobilenet_v1' version shrinks the width and height of
the images to the size given in `--train_image_size` (in our case 96 pixels
since we want to reduce the compute requirements). It also scales the pixel
values from 0 to 255 integers into -1.0 to +1.0 floating point numbers (though
we'll be quantizing those after training).
- The
[HM01B0](https://himax.com.tw/products/cmos-image-sensor/image-sensors/hm01b0/)
camera we're using on the SparkFun Edge board is monochrome, so to get the best
results we have to train our model on black and white images too, so we pass in
the `--input_grayscale` flag to enable that preprocessing.
- The `--learning_rate`, `--label_smoothing`, `--learning_rate_decay_factor`,
`--num_epochs_per_decay`, `--moving_average_decay` and `--batch_size` are all
parameters that control how weights are updated during the the training
process. Training deep networks is still a bit of a dark art, so these exact
values we found through experimentation for this particular model. You can try
tweaking them to speed up training or gain a small boost in accuracy, but we
can't give much guidance for how to make those changes, and it's easy to get
combinations where the training accuracy never converges.
- The `--max_number_of_steps` defines how long the training should continue.
There's no good way to figure out this threshold in advance, you have to
experiment to tell when the accuracy of the model is no longer improving to
tell when to cut it off. In our case we default to a million steps, since with
this particular model we know that's a good point to stop.
- The checkpoints and summaries will the saved in the folder given in the
`--train_dir` argument, so that's where you'll have to look for the results.
- The `--dataset_dir` parameter should match the one where you saved the
TFRecords from the Visual Wake Words build script.
- The architecture we'll be using is defined by the `--model_name` argument.
The 'mobilenet_v1' prefix tells the script to use the first version of
MobileNet. We did experiment with later versions, but these used more RAM
for their intermediate activation buffers, so for now we kept with the
original. The '025' is the depth multiplier to use, which mostly affects the
number of weight parameters, this low setting ensures the model fits within
250KB of Flash.
- `--preprocessing_name` controls how input images are modified before they're
fed into the model. The 'mobilenet_v1' version shrinks the width and height
of the images to the size given in `--train_image_size` (in our case 96
pixels since we want to reduce the compute requirements). It also scales the
pixel values from 0 to 255 integers into -1.0 to +1.0 floating point numbers
(though we'll be quantizing those after training).
- The
[HM01B0](https://himax.com.tw/products/cmos-image-sensor/image-sensors/hm01b0/)
camera we're using on the SparkFun Edge board is monochrome, so to get the
best results we have to train our model on black and white images too, so we
pass in the `--input_grayscale` flag to enable that preprocessing.
- The `--learning_rate`, `--label_smoothing`, `--learning_rate_decay_factor`,
`--num_epochs_per_decay`, `--moving_average_decay` and `--batch_size` are
all parameters that control how weights are updated during the training
process. Training deep networks is still a bit of a dark art, so these exact
values we found through experimentation for this particular model. You can
try tweaking them to speed up training or gain a small boost in accuracy,
but we can't give much guidance for how to make those changes, and it's easy
to get combinations where the training accuracy never converges.
- The `--max_number_of_steps` defines how long the training should continue.
There's no good way to figure out this threshold in advance, you have to
experiment to tell when the accuracy of the model is no longer improving to
tell when to cut it off. In our case we default to a million steps, since
with this particular model we know that's a good point to stop.
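Pulled together, the flags above form a single training invocation. The sketch below is only illustrative — the script path and the directory values are assumptions, and the remaining tuning flags are left for you to fill in from the values given in this guide:

```python
import subprocess

# Flags whose values are stated or implied in the walkthrough above.
flags = {
    "--train_dir": "path/to/train_dir",        # where checkpoints/summaries land
    "--dataset_dir": "path/to/vww_tfrecords",  # Visual Wake Words TFRecords
    "--model_name": "mobilenet_v1_025",        # MobileNet v1, 0.25 depth multiplier
    "--preprocessing_name": "mobilenet_v1",
    "--train_image_size": "96",
    "--input_grayscale": "True",
    "--max_number_of_steps": "1000000",
    # Also set --learning_rate, --label_smoothing, --learning_rate_decay_factor,
    # --num_epochs_per_decay, --moving_average_decay and --batch_size as documented.
}

cmd = ["python", "train_image_classifier.py"]  # assumed slim training script
cmd += [f"{name}={value}" for name, value in flags.items()]
subprocess.run(cmd, check=True)
```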
Once you start the script, you should see output that looks something like this:

@@ -56,7 +56,7 @@ generate_person_detection_int8_make_project
```
Note that `TAGS=reduce_codesize` applies example specific changes of code to
reduce total size of application. It can be ommited.
reduce total size of application. It can be omitted.
### Build and Run Example
@@ -275,7 +275,7 @@ greyscale, and 18.6 seconds to run inference.
The following instructions will help you build and deploy this example to
[HIMAX WE1 EVB](https://github.com/HimaxWiseEyePlus/bsp_tflu/tree/master/HIMAX_WE1_EVB_board_brief)
board. To undstand more about using this board, please check
board. To understand more about using this board, please check
[HIMAX WE1 EVB user guide](https://github.com/HimaxWiseEyePlus/bsp_tflu/tree/master/HIMAX_WE1_EVB_user_guide).
### Initial Setup

@@ -140,41 +140,41 @@ This will take a couple of days on a single-GPU v100 instance to complete all
one-million steps, but you should be able to get a fairly accurate model after
a few hours if you want to experiment early.
- The checkpoints and summaries will the saved in the folder given in the
`--train_dir` argument, so that's where you'll have to look for the results.
- The `--dataset_dir` parameter should match the one where you saved the
TFRecords from the Visual Wake Words build script.
- The architecture we'll be using is defined by the `--model_name` argument.
The 'mobilenet_v1' prefix tells the script to use the first version of
MobileNet. We did experiment with later versions, but these used more RAM for
their intermediate activation buffers, so for now we kept with the original.
The '025' is the depth multiplier to use, which mostly affects the number of
weight parameters, this low setting ensures the model fits within 250KB of
Flash.
- `--preprocessing_name` controls how input images are modified before they're
fed into the model. The 'mobilenet_v1' version shrinks the width and height of
the images to the size given in `--train_image_size` (in our case 96 pixels
since we want to reduce the compute requirements). It also scales the pixel
values from 0 to 255 integers into -1.0 to +1.0 floating point numbers (though
we'll be quantizing those after training).
- The
[HM01B0](https://himax.com.tw/products/cmos-image-sensor/image-sensors/hm01b0/)
camera we're using on the SparkFun Edge board is monochrome, so to get the best
results we have to train our model on black and white images too, so we pass in
the `--input_grayscale` flag to enable that preprocessing.
- The `--learning_rate`, `--label_smoothing`, `--learning_rate_decay_factor`,
`--num_epochs_per_decay`, `--moving_average_decay` and `--batch_size` are all
parameters that control how weights are updated during the the training
process. Training deep networks is still a bit of a dark art, so these exact
values we found through experimentation for this particular model. You can try
tweaking them to speed up training or gain a small boost in accuracy, but we
can't give much guidance for how to make those changes, and it's easy to get
combinations where the training accuracy never converges.
- The `--max_number_of_steps` defines how long the training should continue.
There's no good way to figure out this threshold in advance, you have to
experiment to tell when the accuracy of the model is no longer improving to
tell when to cut it off. In our case we default to a million steps, since with
this particular model we know that's a good point to stop.
- The checkpoints and summaries will the saved in the folder given in the
`--train_dir` argument, so that's where you'll have to look for the results.
- The `--dataset_dir` parameter should match the one where you saved the
TFRecords from the Visual Wake Words build script.
- The architecture we'll be using is defined by the `--model_name` argument.
The 'mobilenet_v1' prefix tells the script to use the first version of
MobileNet. We did experiment with later versions, but these used more RAM
for their intermediate activation buffers, so for now we kept with the
original. The '025' is the depth multiplier to use, which mostly affects the
number of weight parameters, this low setting ensures the model fits within
250KB of Flash.
- `--preprocessing_name` controls how input images are modified before they're
fed into the model. The 'mobilenet_v1' version shrinks the width and height
of the images to the size given in `--train_image_size` (in our case 96
pixels since we want to reduce the compute requirements). It also scales the
pixel values from 0 to 255 integers into -1.0 to +1.0 floating point numbers
(though we'll be quantizing those after training).
- The
[HM01B0](https://himax.com.tw/products/cmos-image-sensor/image-sensors/hm01b0/)
camera we're using on the SparkFun Edge board is monochrome, so to get the
best results we have to train our model on black and white images too, so we
pass in the `--input_grayscale` flag to enable that preprocessing.
- The `--learning_rate`, `--label_smoothing`, `--learning_rate_decay_factor`,
`--num_epochs_per_decay`, `--moving_average_decay` and `--batch_size` are
all parameters that control how weights are updated during the training
process. Training deep networks is still a bit of a dark art, so these exact
values we found through experimentation for this particular model. You can
try tweaking them to speed up training or gain a small boost in accuracy,
but we can't give much guidance for how to make those changes, and it's easy
to get combinations where the training accuracy never converges.
- The `--max_number_of_steps` defines how long the training should continue.
There's no good way to figure out this threshold in advance, you have to
experiment to tell when the accuracy of the model is no longer improving to
tell when to cut it off. In our case we default to a million steps, since
with this particular model we know that's a good point to stop.
Once you start the script, you should see output that looks something like this:

@@ -116,7 +116,7 @@ root
```
# Each stack* object contains the following information
stack*
|-- counts: 5 # Number of occurence with the exact same call stack
|-- counts: 5 # Number of occurrences with the exact same call stack
|-- [list of functions in the call stack]
```
@@ -130,4 +130,4 @@ The regular expression used in this script is configured with a standard
* `base`: Base regular expression to clean up the log, this is set to clean up
the ANSI color codes in GDB
* `custom`: A series of other regular expressions (the script will run them in
order) to extract the information from the the log
order) to extract the information from the log
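To make the two roles concrete, here is a small sketch of what such a configuration could look like — the exact patterns are illustrative assumptions, not the script's shipped defaults:

```python
import re

# "base": strip ANSI color escape sequences that GDB adds to its output.
BASE = re.compile(r"\x1b\[[0-9;]*m")

# "custom": applied in order to pull information out of the cleaned log,
# e.g. the function name from a GDB backtrace frame line.
CUSTOM = [
    re.compile(r"#\d+\s+(?:0x[0-9a-fA-F]+ in )?(?P<func>[\w:~<>]+)"),
]

def extract(line: str):
    cleaned = BASE.sub("", line)
    for pattern in CUSTOM:
        match = pattern.search(cleaned)
        if match:
            return match.group("func")
    return None
```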

@@ -70,7 +70,7 @@ section for instructions on toolchain installation.
If you wish to use the MetaWare Debugger to debug your code, you need to also
install the Digilent Adept 2 software, which includes the necessary drivers for
connecting to the targets. This is available from oficial
connecting to the targets. This is available from official
[Digilent site](https://reference.digilentinc.com/reference/software/adept/start?redirect=1#software_downloads).
You should install the “System” component, and Runtime. Utilities and SDK are
NOT required.

@@ -14,14 +14,15 @@ Host Tools (i.e analysis tools etc.)
#### Step 1. Install CMake tool
It requires CMake 3.16 or higher. On Ubunutu, you can simply run the following
It requires CMake 3.16 or higher. On Ubuntu, you can simply run the following
command.
```sh
sudo apt-get install cmake
```
Or you can follow [the offcial cmake installation guide](https://cmake.org/install/)
Or you can follow
[the official cmake installation guide](https://cmake.org/install/)
#### Step 2. Clone TensorFlow repository

@@ -11,7 +11,7 @@ latency & output-value deviation) in two settings:
To do so, the tool generates random gaussian data and passes it through two
TFLite Interpreters - one running single-threaded CPU kernels and the other
parametrized by the user's arguments.
parameterized by the user's arguments.
It measures the latency of both, as well as the absolute difference between the
output tensors from each Interpreter, on a per-element basis.
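A rough Python sketch of that comparison loop, assuming a single-input, single-output float model at a placeholder path (the real tool also handles delegates and arbitrary I/O signatures):

```python
import time

import numpy as np
import tensorflow as tf

def run_once(interpreter, data):
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.set_tensor(inp["index"], data)
    start = time.perf_counter()
    interpreter.invoke()
    latency = time.perf_counter() - start
    return interpreter.get_tensor(out["index"]), latency

# Reference: single-threaded CPU. Candidate: stands in for the user's settings.
reference = tf.lite.Interpreter(model_path="model.tflite", num_threads=1)
candidate = tf.lite.Interpreter(model_path="model.tflite", num_threads=4)

shape = reference.get_input_details()[0]["shape"]
data = np.random.normal(size=shape).astype(np.float32)  # random gaussian input

ref_out, ref_s = run_once(reference, data)
cand_out, cand_s = run_once(candidate, data)
print("latency (s):", ref_s, cand_s)
print("max per-element |diff|:", np.abs(ref_out - cand_out).max())
```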