minor spelling tweaks

Kazuaki Ishizaki 2019-10-10 15:38:58 +09:00
parent c1358ebff0
commit f54b1374af
11 changed files with 13 additions and 13 deletions
tensorflow
compiler/mlir
core
go/op
lite
experimental
examples/lstm/g3doc
micro
examples/micro_speech/apollo3
tools/make/targets/ecm3531
g3doc
models/image_classification
performance


@@ -1217,7 +1217,7 @@ Softmax operator
### Description:
-Computes element-wise softmax activiations with the following formula
+Computes element-wise softmax activations with the following formula
exp(input) / tf.reduce_sum(exp(input * beta), dim)
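
For readers skimming this hunk, here is a minimal NumPy sketch of a beta-scaled softmax in the spirit of the formula above; it is an illustration only, not the TFLite kernel's implementation, and the max-subtraction is added purely for numerical stability:

```python
import numpy as np

def softmax(x, beta=1.0, axis=-1):
    # Beta-scaled softmax in the spirit of:
    #   exp(input) / tf.reduce_sum(exp(input * beta), dim)
    # Subtracting the max keeps exp() from overflowing.
    z = beta * np.asarray(x, dtype=np.float64)
    e = np.exp(z - np.max(z, axis=axis, keepdims=True))
    return e / np.sum(e, axis=axis, keepdims=True)

print(softmax([1.0, 2.0, 3.0]))  # ~[0.090, 0.245, 0.665]
```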


@@ -3033,7 +3033,7 @@ This op determines the maximum scale_factor that would map the initial
quantized range.
It determines the scale from one of input_min and input_max, then updates the
-other one to maximize the respresentable range.
+other one to maximize the representable range.
e.g.
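
The docstring's own example is elided in this hunk; as a stand-in, here is a hypothetical walk-through of the adjustment described above, assuming a signed 8-bit quantized range of [-128, 127]. The concrete values are illustrative only, and the real op may differ in details such as narrow_range handling or rounding:

```python
input_min, input_max = -10.0, 5.0             # assumed initial float range
qmin, qmax = -128, 127                        # assumed signed 8-bit quantized range

# The scale_factor is determined by whichever bound needs the larger scale.
scale_from_min = abs(input_min) / abs(qmin)   # 10 / 128  = 0.078125
scale_from_max = abs(input_max) / abs(qmax)   #  5 / 127 ~= 0.0394
scale = max(scale_from_min, scale_from_max)   # here input_min determines the scale

# The other bound is then updated so the whole quantized range is representable.
input_max = scale * qmax                      # 0.078125 * 127 = 9.921875
print(scale, input_min, input_max)            # 0.078125 -10.0 9.921875
```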


@@ -103,7 +103,7 @@ This op determines the maximum scale_factor that would map the initial
quantized range.
It determines the scale from one of input_min and input_max, then updates the
-other one to maximize the respresentable range.
+other one to maximize the representable range.
e.g.


@@ -100,7 +100,7 @@ accelerator_micros and cpu_micros. Note: cpu and accelerator can run in parallel
`-order_by`: Order the results by [name|depth|bytes|peak_bytes|residual_bytes|output_bytes|micros|accelerator_micros|cpu_micros|params|float_ops|occurrence]
-`-account_type_regexes`: Account and display the nodes whose types match one of the type regexes specified. tfprof allow user to define extra operation types for graph nodes through tensorflow.tfprof.OpLogProto proto. regexes are comma-sperated.
+`-account_type_regexes`: Account and display the nodes whose types match one of the type regexes specified. tfprof allow user to define extra operation types for graph nodes through tensorflow.tfprof.OpLogProto proto. regexes are comma-separated.
`-start_name_regexes`: Show node starting from the node that matches the regexes, recursively. regexes are comma-separated.


@@ -63,7 +63,7 @@ For an operation to have float operation statistics:
run_count.
```python
-# To profile float opertions in commandline, you need to pass --graph_path
+# To profile float operations in commandline, you need to pass --graph_path
# and --op_log_path.
tfprof> scope -min_float_ops 1 -select float_ops -account_displayed_op_only
node name | # float_ops


@@ -643,7 +643,7 @@ func QuantizeAndDequantizeV2NarrowRange(value bool) QuantizeAndDequantizeV2Attr
// quantized range.
//
// It determines the scale from one of input_min and input_max, then updates the
-// other one to maximize the respresentable range.
+// other one to maximize the representable range.
//
// e.g.
//


@@ -316,7 +316,7 @@ def run_main(_):
'--use_post_training_quantize',
action='store_true',
default=True,
-help='Whether or not to use post_training_quatize.')
+help='Whether or not to use post_training_quantize.')
parsed_flags, _ = parser.parse_known_args()
train_and_export(parsed_flags)


@@ -42,7 +42,7 @@
* cmsis_power.txt: the magnitude squared of the DFT
* cmsis_power_avg.txt: the 6-bin average of the magnitude squared of
the DFT
-* Run both verisons of the 1KHz pre-processor test and then compare.
+* Run both versions of the 1KHz pre-processor test and then compare.
* These files can be plotted with "python compare\_1k.py"
* Also prints out the number of cycles the code took to execute (using the
DWT->CYCCNT register)
@@ -60,7 +60,7 @@
* micro_power.txt: the magnitude squared of the DFT
* micro_power_avg.txt: the 6-bin average of the magnitude squared of
the DFT
-* Run both verisons of the 1KHz pre-processor test and then compare.
+* Run both versions of the 1KHz pre-processor test and then compare.
* These files can be plotted with "python compare\_1k.py"
* Also prints out the number of cycles the code took to execute (using the
DWT->CYCCNT register)
@@ -79,7 +79,7 @@
is the same: a 1 kHz sinusoid.
* **get\_yesno\_data.cmd**: A GDB command file that runs preprocessor_test
(where TARGET=apollo3evb) and dumps the calculated data for the "yes" and
"no" input wavfeorms to text files
"no" input waveforms to text files
* **\_main.c**: Point of entry for the micro_speech test
* **preprocessor_1k.cc**: A version of preprocessor.cc where a 1 kHz sinusoid
is provided as input to the preprocessor


@@ -4,6 +4,6 @@ https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimenta
CONTACT INFORMATION:
Contact info@etacompute.com for more information on obtaining the Eta Compute
-SDK and evalution board.
+SDK and evaluation board.
www.etacompute.com


@@ -186,7 +186,7 @@ protected void runInference() {
The output of the inference is stored in a byte array `labelProbArray`, which is
allocated in the subclass's constructor. It consists of a single outer element,
-containing one innner element for each label in the classification model.
+containing one inner element for each label in the classification model.
To run inference, we call `run()` on the interpreter instance, passing the input
and output buffers as arguments.
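
For comparison, a minimal sketch of the same flow with the TensorFlow Lite Python interpreter (the model path and zero-filled input below are placeholders); the Java API described in this doc instead calls `Interpreter.run()` with explicit input and output buffers:

```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")  # placeholder path
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

image = np.zeros(inp["shape"], dtype=inp["dtype"])  # stand-in for a real preprocessed image
interpreter.set_tensor(inp["index"], image)
interpreter.invoke()
label_probs = interpreter.get_tensor(out["index"])  # one inner element per label
```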


@@ -81,7 +81,7 @@ class MyDelegate {
};
// Create the TfLiteRegistration for the Kernel node which will replace
-// the subrgaph in the main TfLite graph.
+// the subgraph in the main TfLite graph.
TfLiteRegistration GetMyDelegateNodeRegistration() {
// This is the registration for the Delegate Node that gets added to
// the TFLite graph instead of the subGraph it replaces.