Fix graph_transform documentation:

- remove \ from within strings
- remove :0 from inputs and outputs, so fold_constants works
- make sure fold_(old_)batch_norms runs before quantize_weights
  and round_weights.
Change: 153728959
A. Unique TensorFlower 2017-04-20 09:52:05 -08:00 committed by TensorFlower Gardener
parent e8082d5780
commit 92bf4b3927

@@ -136,15 +136,14 @@ bazel build tensorflow/tools/graph_transforms:transform_graph
bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
--in_graph=tensorflow_inception_graph.pb \
--out_graph=optimized_inception_graph.pb \
---inputs='Mul:0' \
---outputs='softmax:0' \
---transforms='\
-strip_unused_nodes(type=float, shape="1,299,299,3") \
-remove_nodes(op=Identity, op=CheckNumerics) \
-fold_constants(ignore_errors=true) \
-fold_batch_norms \
-fold_old_batch_norms\
-'
+--inputs='Mul' \
+--outputs='softmax' \
+--transforms='
+strip_unused_nodes(type=float, shape="1,299,299,3")
+remove_nodes(op=Identity, op=CheckNumerics)
+fold_constants(ignore_errors=true)
+fold_batch_norms
+fold_old_batch_norms'
```
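To sanity-check a run of the new command, a short Python sketch (assuming a TensorFlow install that provides `tf.compat.v1` and the output file name used above) can confirm the named nodes exist and the stripped ops are gone:

```python
# Sketch: inspect the transformed graph. Assumes TensorFlow's GraphDef proto
# API and the output file name from the command above.
from collections import Counter

import tensorflow as tf

graph_def = tf.compat.v1.GraphDef()
with open('optimized_inception_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

names = {node.name for node in graph_def.node}
print('Mul' in names, 'softmax' in names)   # both should be True

# remove_nodes(op=Identity, op=CheckNumerics) should have stripped these:
ops = Counter(node.op for node in graph_def.node)
print(ops['Identity'], ops['CheckNumerics'])  # expect zero, or close to it
```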
The batch norm folding is included twice because there are two different flavors
@@ -177,14 +176,13 @@ bazel build tensorflow/tools/graph_transforms:transform_graph
bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
--in_graph=tensorflow_inception_graph.pb \
--out_graph=optimized_inception_graph.pb \
---inputs='Mul:0' \
---outputs='softmax:0' \
---transforms='\
-strip_unused_nodes(type=float, shape="1,299,299,3") \
-fold_constants(ignore_errors=true) \
-fold_batch_norms \
-fold_old_batch_norms\
-'
+--inputs='Mul' \
+--outputs='softmax' \
+--transforms='
+strip_unused_nodes(type=float, shape="1,299,299,3")
+fold_constants(ignore_errors=true)
+fold_batch_norms
+fold_old_batch_norms'
```
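The `:0` removal is the substance of this change: `--inputs` and `--outputs` take *node* names, while `Mul:0` is a *tensor* name (node name plus output index), so `fold_constants` could not find the node. A minimal sketch of the distinction, using a hypothetical toy graph:

```python
# Sketch: GraphDef nodes are keyed by node name ('Mul'); 'Mul:0' names the
# first *output tensor* of that node. fold_constants looks nodes up by name,
# so 'Mul:0' finds nothing.
import tensorflow as tf

g = tf.Graph()
with g.as_default():
    a = tf.constant(2.0, name='a')      # hypothetical toy graph
    b = tf.constant(3.0, name='b')
    product = tf.multiply(a, b, name='Mul')

print(product.name)     # 'Mul:0' -- tensor name: node name plus output index
print(product.op.name)  # 'Mul'   -- node name: what --inputs/--outputs expect
```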
### Shrinking File Size
@@ -212,11 +210,14 @@ bazel build tensorflow/tools/graph_transforms:transform_graph
bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
--in_graph=tensorflow_inception_graph.pb \
--out_graph=optimized_inception_graph.pb \
---inputs='Mul:0' \
---outputs='softmax:0' \
---transforms='\
-round_weights(num_steps=256) \
-'
+--inputs='Mul' \
+--outputs='softmax' \
+--transforms='
+strip_unused_nodes(type=float, shape="1,299,299,3")
+fold_constants(ignore_errors=true)
+fold_batch_norms
+fold_old_batch_norms
+round_weights(num_steps=256)'
```
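For context on what the appended `round_weights(num_steps=256)` step does, here is a conceptual NumPy sketch based on the transform's description later in this document (linearly allocating values between each buffer's min and max); it is an illustration, not the tool's actual code:

```python
# Conceptual sketch of round_weights(num_steps=256): snap each float buffer to
# 256 evenly spaced values between its min and max. The file stays the same
# size, but the repeated values make it compress far better with zip/gzip.
import numpy as np

def round_weights(values, num_steps=256):
    lo, hi = values.min(), values.max()
    if hi == lo:                      # constant buffer: nothing to round
        return values
    step = (hi - lo) / (num_steps - 1)
    return lo + np.round((values - lo) / step) * step

w = np.random.randn(10000).astype(np.float32)
print(len(np.unique(round_weights(w))))  # at most 256 distinct values
```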
You should see that the `optimized_inception_graph.pb` output file is the same
@@ -236,11 +237,14 @@ bazel build tensorflow/tools/graph_transforms:transform_graph
bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
--in_graph=tensorflow_inception_graph.pb \
--out_graph=optimized_inception_graph.pb \
---inputs='Mul:0' \
---outputs='softmax:0' \
---transforms='\
-quantize_weights \
-'
+--inputs='Mul' \
+--outputs='softmax' \
+--transforms='
+strip_unused_nodes(type=float, shape="1,299,299,3")
+fold_constants(ignore_errors=true)
+fold_batch_norms
+fold_old_batch_norms
+quantize_weights'
```
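Likewise, a conceptual sketch of why `quantize_weights` cuts the file to roughly a quarter: each large float buffer can be stored as one-byte codes plus a float min/max pair. This illustrates the general technique, not the transform's exact implementation:

```python
# Conceptual sketch of 8-bit weight quantization: store uint8 codes plus the
# float min/max per buffer, and reconstruct floats at load time. Four bytes
# per weight become one, hence the ~quarter-sized graph.
import numpy as np

def quantize(values):
    lo, hi = float(values.min()), float(values.max())
    scale = (hi - lo) / 255.0 or 1.0          # avoid /0 for constant buffers
    return np.round((values - lo) / scale).astype(np.uint8), lo, hi

def dequantize(codes, lo, hi):
    scale = (hi - lo) / 255.0 or 1.0
    return lo + codes.astype(np.float32) * scale

w = np.random.randn(4096).astype(np.float32)
codes, lo, hi = quantize(w)
print(abs(dequantize(codes, lo, hi) - w).max())  # small reconstruction error
```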
You should see that the size of the output graph is about a quarter of the
@@ -263,9 +267,8 @@ bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
--out_graph=optimized_inception_graph.pb \
--inputs='Mul:0' \
--outputs='softmax:0' \
---transforms='\
-obfuscate_names \
-'
+--transforms='
+obfuscate_names'
```
### Eight-bit Calculations
@@ -280,17 +283,19 @@ bazel build tensorflow/tools/graph_transforms:transform_graph
bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
--in_graph=tensorflow_inception_graph.pb \
--out_graph=optimized_inception_graph.pb \
---inputs='Mul:0' \
---outputs='softmax:0' \
+--inputs='Mul' \
+--outputs='softmax' \
--transforms='
-add_default_attributes
-strip_unused_nodes(type=float, shape="1,299,299,3")
-remove_nodes(op=Identity, op=CheckNumerics)
-fold_old_batch_norms
-quantize_weights
-quantize_nodes
-strip_unused_nodes
-sort_by_execution_order'
+add_default_attributes
+strip_unused_nodes(type=float, shape="1,299,299,3")
+remove_nodes(op=Identity, op=CheckNumerics)
+fold_constants(ignore_errors=true)
+fold_batch_norms
+fold_old_batch_norms
+quantize_weights
+quantize_nodes
+strip_unused_nodes
+sort_by_execution_order'
```
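After this pipeline runs, the graph should contain eight-bit kernels. A quick inspection sketch (same assumptions as the earlier snippets):

```python
# Sketch: after quantize_nodes, the transformed graph should contain eight-bit
# kernels such as QuantizedConv2D and QuantizedMatMul. Assumes the output file
# name from the command above.
from collections import Counter

import tensorflow as tf

graph_def = tf.compat.v1.GraphDef()
with open('optimized_inception_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

ops = Counter(node.op for node in graph_def.node)
print(sorted(op for op in ops if op.startswith('Quantized')))
```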
This process converts all the operations in the graph that have eight-bit
@@ -446,7 +451,7 @@ bazel-bin/tensorflow/examples/label_image/label_image \
--input_layer=Mul \
--output_layer=softmax \
--graph=/tmp/logged_quantized_inception.pb \
---labels=learning/brain/models/image/inception_v3/imagenet_comp_graph_label_strings.txt \
+--labels=${HOME}/Downloads/imagenet_comp_graph_label_strings.txt \
--logtostderr \
2>/tmp/min_max_log_small.txt
```
@@ -580,7 +585,10 @@ Converts any large (more than 15 element) float Const op into an eight-bit
equivalent, followed by a float conversion op so that the result is usable by
subsequent nodes. This is mostly useful for [shrinking file
sizes](#shrinking-file-size), but also helps with the more advanced
-[quantize_nodes](#quantize_nodes) transform.
+[quantize_nodes](#quantize_nodes) transform. Even though there are no
+prerequisites, it is advisable to run [fold_batch_norms](#fold_batch_norms) or
+[fold_old_batch_norms](#fold_old_batch_norms), because rounding variances down
+to zero may cause significant loss of precision.
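A toy numeric example (not from the source) of the precision warning being added here: batch-norm variance buffers span orders of magnitude, so uniform rounding between a buffer's min and max collapses its smallest entries:

```python
# Toy illustration of why batch norms should be folded before quantizing or
# rounding: uniform steps between min and max cannot resolve tiny variances.
import numpy as np

variances = np.array([1e-6, 1e-5, 2.0, 3.0], dtype=np.float32)
lo, hi = variances.min(), variances.max()
step = (hi - lo) / 255.0
rounded = lo + np.round((variances - lo) / step) * step
print(rounded)  # 1e-6 and 1e-5 both land on ~1e-6: a 10x error after division
```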
### remove_attribute
@@ -665,7 +673,11 @@ Rounds all float values in large Const ops (more than 15 elements) to the given
number of steps. The unique values are chosen per buffer by linearly allocating
between the largest and smallest values present. This is useful when you'll be
deploying on mobile, and you want a model that will compress effectively. See
-[shrinking file size](#shrinking-file-size) for more details.
+[shrinking file size](#shrinking-file-size) for more details. Even though there
+are no prerequisites, it is advisable to run
+[fold_batch_norms](#fold_batch_norms) or
+[fold_old_batch_norms](#fold_old_batch_norms), because rounding variances down
+to zero may cause significant loss of precision.
### sparsify_gather