Fix graph_transform documentation:
- remove \ from within strings
- remove :0 from inputs and outputs, so fold_constants works
- make sure fold_(old_)batch_norms runs before quantize_weights and round_weights

Change: 153728959
This commit is contained in:
parent e8082d5780
commit 92bf4b3927
@@ -136,15 +136,14 @@ bazel build tensorflow/tools/graph_transforms:transform_graph
 bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
 --in_graph=tensorflow_inception_graph.pb \
 --out_graph=optimized_inception_graph.pb \
---inputs='Mul:0' \
---outputs='softmax:0' \
---transforms='\
-strip_unused_nodes(type=float, shape="1,299,299,3") \
-remove_nodes(op=Identity, op=CheckNumerics) \
-fold_constants(ignore_errors=true) \
-fold_batch_norms \
-fold_old_batch_norms\
-'
+--inputs='Mul' \
+--outputs='softmax' \
+--transforms='
+strip_unused_nodes(type=float, shape="1,299,299,3")
+remove_nodes(op=Identity, op=CheckNumerics)
+fold_constants(ignore_errors=true)
+fold_batch_norms
+fold_old_batch_norms'
 ```
 
 The batch norm folding is included twice because there are two different flavors
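The `:0` suffixes removed in the hunk above are output-slot indices: `Mul:0` names the first output tensor of the node `Mul`, while the tool's `--inputs`/`--outputs` flags and transforms like `fold_constants` match on bare GraphDef node names. A minimal Python sketch of the distinction (the helper name here is illustrative, not part of the tool):

```python
def node_name_from_tensor(tensor_name: str) -> str:
    """Strip the output-slot suffix from a tensor name, e.g. 'Mul:0' -> 'Mul'.

    GraphDef nodes are keyed by bare names; 'name:N' refers to output N of
    that node, which is why ':0' in --inputs/--outputs confuses transforms
    that look nodes up by name.
    """
    return tensor_name.split(":")[0]

print(node_name_from_tensor("Mul:0"))    # Mul
print(node_name_from_tensor("softmax"))  # softmax
```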
@@ -177,14 +176,13 @@ bazel build tensorflow/tools/graph_transforms:transform_graph
 bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
 --in_graph=tensorflow_inception_graph.pb \
 --out_graph=optimized_inception_graph.pb \
---inputs='Mul:0' \
---outputs='softmax:0' \
---transforms='\
-strip_unused_nodes(type=float, shape="1,299,299,3") \
-fold_constants(ignore_errors=true) \
-fold_batch_norms \
-fold_old_batch_norms\
-'
+--inputs='Mul' \
+--outputs='softmax' \
+--transforms='
+strip_unused_nodes(type=float, shape="1,299,299,3")
+fold_constants(ignore_errors=true)
+fold_batch_norms
+fold_old_batch_norms'
 ```
 
 ### Shrinking File Size
@@ -212,11 +210,14 @@ bazel build tensorflow/tools/graph_transforms:transform_graph
 bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
 --in_graph=tensorflow_inception_graph.pb \
 --out_graph=optimized_inception_graph.pb \
---inputs='Mul:0' \
---outputs='softmax:0' \
---transforms='\
-round_weights(num_steps=256) \
-'
+--inputs='Mul' \
+--outputs='softmax' \
+--transforms='
+strip_unused_nodes(type=float, shape="1,299,299,3")
+fold_constants(ignore_errors=true)
+fold_batch_norms
+fold_old_batch_norms
+round_weights(num_steps=256)'
 ```
 
 You should see that the `optimized_inception_graph.pb` output file is the same
@@ -236,11 +237,14 @@ bazel build tensorflow/tools/graph_transforms:transform_graph
 bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
 --in_graph=tensorflow_inception_graph.pb \
 --out_graph=optimized_inception_graph.pb \
---inputs='Mul:0' \
---outputs='softmax:0' \
---transforms='\
-quantize_weights \
-'
+--inputs='Mul' \
+--outputs='softmax' \
+--transforms='
+strip_unused_nodes(type=float, shape="1,299,299,3")
+fold_constants(ignore_errors=true)
+fold_batch_norms
+fold_old_batch_norms
+quantize_weights'
 ```
 
 You should see that the size of the output graph is about a quarter of the
@@ -263,9 +267,8 @@ bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
 --out_graph=optimized_inception_graph.pb \
 --inputs='Mul:0' \
 --outputs='softmax:0' \
---transforms='\
-obfuscate_names \
-'
+--transforms='
+obfuscate_names'
 ```
 
 ### Eight-bit Calculations
@@ -280,17 +283,19 @@ bazel build tensorflow/tools/graph_transforms:transform_graph
 bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
 --in_graph=tensorflow_inception_graph.pb \
 --out_graph=optimized_inception_graph.pb \
---inputs='Mul:0' \
---outputs='softmax:0' \
+--inputs='Mul' \
+--outputs='softmax' \
 --transforms='
 add_default_attributes
 strip_unused_nodes(type=float, shape="1,299,299,3")
 remove_nodes(op=Identity, op=CheckNumerics)
+fold_constants(ignore_errors=true)
+fold_batch_norms
 fold_old_batch_norms
 quantize_weights
 quantize_nodes
 strip_unused_nodes
 sort_by_execution_order'
 ```
 
 This process converts all the operations in the graph that have eight-bit
@@ -446,7 +451,7 @@ bazel-bin/tensorflow/examples/label_image/label_image \
 --input_layer=Mul \
 --output_layer=softmax \
 --graph=/tmp/logged_quantized_inception.pb \
---labels=learning/brain/models/image/inception_v3/imagenet_comp_graph_label_strings.txt \
+--labels=${HOME}/Downloads/imagenet_comp_graph_label_strings.txt \
 --logtostderr \
 2>/tmp/min_max_log_small.txt
 ```
@@ -580,7 +585,10 @@ Converts any large (more than 15 element) float Const op into an eight-bit
 equivalent, followed by a float conversion op so that the result is usable by
 subsequent nodes. This is mostly useful for [shrinking file
 sizes](#shrinking-file-size), but also helps with the more advanced
-[quantize_nodes](#quantize_nodes) transform.
+[quantize_nodes](#quantize_nodes) transform. Even though there are no
+prerequisites, it is advisable to run [fold_batch_norms](#fold_batch_norms) or
+[fold_old_batch_norms](#fold_old_batch_norms), because rounding variances down
+to zero may cause significant loss of precision.
 
 ### remove_attribute
 
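The quantize_weights description above can be sketched numerically. A rough, pure-Python illustration of the idea (eight-bit codes plus a float min/max range, dequantized back to float for subsequent nodes), not the transform's actual implementation:

```python
def quantize_dequantize(values):
    """Map floats to 256 evenly spaced codes between min and max, then back.

    Mirrors the idea behind quantize_weights: store uint8 codes plus the
    float range, and reconstruct approximate floats for downstream ops.
    """
    lo, hi = min(values), max(values)
    if hi == lo:
        return list(values)
    scale = (hi - lo) / 255.0
    codes = [round((v - lo) / scale) for v in values]  # 0..255, i.e. uint8
    return [lo + c * scale for c in codes]

weights = [-1.0, -0.5, 0.0, 0.25, 1.0]
restored = quantize_dequantize(weights)
# Each restored value lands within half a quantization step of the original.
```

This also shows why each buffer needs roughly a quarter of the original size: one byte per value instead of four, plus two floats for the range.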
@@ -665,7 +673,11 @@ Rounds all float values in large Const ops (more than 15 elements) to the given
 number of steps. The unique values are chosen per buffer by linearly allocating
 between the largest and smallest values present. This is useful when you'll be
 deploying on mobile, and you want a model that will compress effectively. See
-[shrinking file size](#shrinking-file-size) for more details.
+[shrinking file size](#shrinking-file-size) for more details. Even though there
+are no prerequisites, it is advisable to run
+[fold_batch_norms](#fold_batch_norms) or
+[fold_old_batch_norms](#fold_old_batch_norms), because rounding variances down
+to zero may cause significant loss of precision.
 
 ### sparsify_gather
 
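The loss-of-precision caveat added above follows from how the rounding works. A rough Python sketch of the idea behind round_weights (not the transform's actual code): values stay float, but snap to `num_steps` evenly spaced levels, so tiny batch-norm variances near zero can snap to exactly zero.

```python
def round_weights(values, num_steps=256):
    """Snap each float to the nearest of num_steps evenly spaced levels.

    The buffer stays float (no size change on disk), but holds at most
    num_steps distinct values, so it compresses far better.
    """
    lo, hi = min(values), max(values)
    if hi == lo:
        return list(values)
    step = (hi - lo) / (num_steps - 1)
    return [lo + round((v - lo) / step) * step for v in values]

print(round_weights([0.0, 0.1, 0.52, 1.0], num_steps=5))
# levels are 0.0, 0.25, 0.5, 0.75, 1.0 -> [0.0, 0.0, 0.5, 1.0]
```

With 256 steps over a buffer spanning [0, 1] the step size is about 0.004, so a variance of 1e-4 rounds to exactly 0.0; folding the batch norms into the weights first avoids that.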