Update generated Python Op docs.

Change: 134503299
This commit is contained in:
A. Unique TensorFlower 2016-09-27 23:21:51 -08:00 committed by TensorFlower Gardener
parent 8a0dd6e825
commit c1e4f0f6a1
5 changed files with 322 additions and 0 deletions

View File

@@ -2444,3 +2444,167 @@ tf.sequence_mask([1, 3, 2], 5) =
* <b>`ValueError`</b>: if the arguments have invalid rank.
- - -
### `tf.dequantize(input, min_range, max_range, mode=None, name=None)` {#dequantize}
Dequantize the 'input' tensor into a float Tensor.
[min_range, max_range] are scalar floats that specify the range for
the 'input' data. The 'mode' attribute controls exactly which calculations are
used to convert the quantized values to their float equivalents.
In 'MIN_COMBINED' mode, each value of the tensor will undergo the following:
```
if T == qint8, in[i] += (range(T) + 1) / 2.0
out[i] = min_range + (in[i] * (max_range - min_range) / range(T))
```
Here `range(T) = numeric_limits<T>::max() - numeric_limits<T>::min()`.
*MIN_COMBINED Mode Example*
If the input comes from a QuantizedRelu6, the output type is
quint8 (range of 0-255) but the possible range of QuantizedRelu6 is
0-6. The min_range and max_range values are therefore 0.0 and 6.0.
Dequantize on quint8 will take each value, cast to float, and multiply
by 6 / 255.
Note that if the quantized type is qint8, the operation will additionally add
128 to each value prior to casting.
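As a quick sanity check, the MIN_COMBINED arithmetic can be reproduced in plain
Python (a sketch of the formula above for quint8, not the actual kernel):
```
# Plain-Python sketch of the MIN_COMBINED dequantize formula for quint8.
def dequantize_min_combined(q_values, min_range, max_range):
    range_t = 255.0  # numeric_limits<quint8>::max() - numeric_limits<quint8>::min()
    return [min_range + v * (max_range - min_range) / range_t for v in q_values]

print(dequantize_min_combined([0, 128, 255], 0.0, 6.0))
# [0.0, ~3.01, 6.0] -- each quint8 step is worth 6 / 255.
```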
If the mode is 'MIN_FIRST', then this approach is used:
```
number_of_steps = 1 << (# of bits in T)
range_adjust = number_of_steps / (number_of_steps - 1)
range = (max_range - min_range) * range_adjust
range_scale = range / number_of_steps
result = min_range + ((input - numeric_limits<T>::min()) * range_scale)
```
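The same steps in plain Python (a sketch only; `bits` and `t_min` are stand-ins
for `T`'s traits, e.g. 0 for quint8 or -128 for qint8):
```
# Sketch of the MIN_FIRST dequantize formula; t_min stands in for
# numeric_limits<T>::min() (0 for quint8, -128 for qint8).
def dequantize_min_first(q, min_range, max_range, bits=8, t_min=0):
    number_of_steps = 1 << bits
    range_adjust = number_of_steps / float(number_of_steps - 1)
    full_range = (max_range - min_range) * range_adjust
    range_scale = full_range / number_of_steps
    return min_range + (q - t_min) * range_scale

print(dequantize_min_first(255, 0.0, 6.0))  # 6.0 -- the top code maps to max_range
```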
##### Args:
* <b>`input`</b>: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint16`, `quint16`, `qint32`.
* <b>`min_range`</b>: A `Tensor` of type `float32`.
The minimum scalar value possibly produced for the input.
* <b>`max_range`</b>: A `Tensor` of type `float32`.
The maximum scalar value possibly produced for the input.
* <b>`mode`</b>: An optional `string` from: `"MIN_COMBINED", "MIN_FIRST"`. Defaults to `"MIN_COMBINED"`.
* <b>`name`</b>: A name for the operation (optional).
##### Returns:
A `Tensor` of type `float32`.
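A minimal end-to-end sketch in graph-mode TensorFlow of this vintage, using the
companion `tf.quantize_v2` (documented below) to produce the quantized input:
```
import tensorflow as tf

x = tf.constant([0.0, 1.5, 3.0, 6.0])
# Quantize to quint8 over [0.0, 6.0], then map the codes back to float.
q, out_min, out_max = tf.quantize_v2(x, 0.0, 6.0, tf.quint8, mode="MIN_COMBINED")
y = tf.dequantize(q, out_min, out_max, mode="MIN_COMBINED")

with tf.Session() as sess:
    print(sess.run(y))  # close to [0.0, 1.5, 3.0, 6.0], up to one quantization step
```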
- - -
### `tf.quantize_v2(input, min_range, max_range, T, mode=None, name=None)` {#quantize_v2}
Quantize the 'input' tensor of type float to the 'output' tensor of type 'T'.
[min_range, max_range] are scalar floats that specify the range for
the 'input' data. The 'mode' attribute controls exactly which calculations are
used to convert the float values to their quantized equivalents.
In 'MIN_COMBINED' mode, each value of the tensor will undergo the following:
```
out[i] = (in[i] - min_range) * range(T) / (max_range - min_range)
if T == qint8, out[i] -= (range(T) + 1) / 2.0
```
Here `range(T) = numeric_limits<T>::max() - numeric_limits<T>::min()`.
*MIN_COMBINED Mode Example*
Assume the input is type float and has a possible range of [0.0, 6.0] and the
output type is quint8 ([0, 255]). The min_range and max_range values should be
specified as 0.0 and 6.0. Quantizing from float to quint8 will multiply each
value of the input by 255/6 and cast to quint8.
If the output type were qint8 ([-128, 127]), the operation would additionally
subtract 128 from each value prior to casting, so that the range of values aligns
with the range of qint8.
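The example above, reproduced in plain Python (a sketch of the MIN_COMBINED
formula only, with the cast modeled by `round`):
```
# Sketch of MIN_COMBINED quantization for an 8-bit type.
def quantize_min_combined(values, min_range, max_range, signed=False):
    range_t = 255.0  # max(T) - min(T) for an 8-bit type
    out = []
    for v in values:
        q = (v - min_range) * range_t / (max_range - min_range)
        if signed:  # qint8: recenter [0, 255] onto [-128, 127]
            q -= (range_t + 1) / 2.0
        out.append(int(round(q)))
    return out

print(quantize_min_combined([0.0, 1.5, 6.0], 0.0, 6.0))        # [0, 64, 255]
print(quantize_min_combined([0.0, 1.5, 6.0], 0.0, 6.0, True))  # [-128, -64, 127]
```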
If the mode is 'MIN_FIRST', then this approach is used:
```
number_of_steps = 1 << (# of bits in T)
range_adjust = number_of_steps / (number_of_steps - 1)
range = (max_range - min_range) * range_adjust
range_scale = number_of_steps / range
quantized = round(input * range_scale) - round(min_range * range_scale) +
            numeric_limits<T>::min()
quantized = max(quantized, numeric_limits<T>::min())
quantized = min(quantized, numeric_limits<T>::max())
```
The biggest difference between this and MIN_COMBINED is that the minimum range
is rounded first, before it is subtracted from the rounded value. With
MIN_COMBINED, a small rounding bias is introduced, and repeated iterations of
quantizing and dequantizing accumulate a larger and larger error.
One thing to watch out for is that the operator may choose to adjust the
requested minimum and maximum values slightly during the quantization process,
so you should always use the output ports as the range for further calculations.
For example, if the requested minimum and maximum values are close to equal,
they will be separated by a small epsilon value to prevent ill-formed quantized
buffers from being created. Otherwise, you can end up with buffers where all the
quantized values map to the same float value, which causes problems for
operations that have to perform further calculations on them.
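A plain-Python sketch of the MIN_FIRST steps (formulas only; `bits` and `t_min`
stand in for `T`'s traits, and the clamp models the min/max lines above):
```
# Sketch of MIN_FIRST quantization; note min_range is rounded separately
# before it is subtracted, as discussed above.
def quantize_min_first(x, min_range, max_range, bits=8, t_min=0):
    number_of_steps = 1 << bits
    range_adjust = number_of_steps / float(number_of_steps - 1)
    full_range = (max_range - min_range) * range_adjust
    range_scale = number_of_steps / full_range
    q = round(x * range_scale) - round(min_range * range_scale) + t_min
    return max(t_min, min(t_min + number_of_steps - 1, int(q)))

print(quantize_min_first(6.0, 0.0, 6.0))  # 255
```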
##### Args:
* <b>`input`</b>: A `Tensor` of type `float32`.
* <b>`min_range`</b>: A `Tensor` of type `float32`.
The minimum scalar value possibly produced for the input.
* <b>`max_range`</b>: A `Tensor` of type `float32`.
The maximum scalar value possibly produced for the input.
* <b>`T`</b>: A `tf.DType` from: `tf.qint8, tf.quint8, tf.qint16, tf.quint16, tf.qint32`.
* <b>`mode`</b>: An optional `string` from: `"MIN_COMBINED", "MIN_FIRST"`. Defaults to `"MIN_COMBINED"`.
* <b>`name`</b>: A name for the operation (optional).
##### Returns:
A tuple of `Tensor` objects (output, output_min, output_max).
* <b>`output`</b>: A `Tensor` of type `T`. The quantized data produced from the float input.
* <b>`output_min`</b>: A `Tensor` of type `float32`. The actual minimum scalar value used for the output.
* <b>`output_max`</b>: A `Tensor` of type `float32`. The actual maximum scalar value used for the output.
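A minimal usage sketch in graph-mode TensorFlow; per the caveat above, the
returned `output_min`/`output_max` (not the requested range) should feed any
downstream calculation:
```
import tensorflow as tf

x = tf.constant([0.0, 3.0, 6.0])
output, output_min, output_max = tf.quantize_v2(
    x, 0.0, 6.0, tf.quint8, mode="MIN_COMBINED")

with tf.Session() as sess:
    q, lo, hi = sess.run([output, output_min, output_max])
    print(q, lo, hi)  # quantized codes plus the actual range used for them
```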
- - -
### `tf.quantized_concat(concat_dim, values, input_mins, input_maxes, name=None)` {#quantized_concat}
Concatenates quantized tensors along one dimension.
##### Args:
* <b>`concat_dim`</b>: A `Tensor` of type `int32`.
0-D. The dimension along which to concatenate. Must be in the
range [0, rank(values)).
* <b>`values`</b>: A list of at least 2 `Tensor` objects of the same type.
The `N` Tensors to concatenate. Their ranks and types must match,
and their sizes must match in all dimensions except `concat_dim`.
* <b>`input_mins`</b>: A list, with the same length as `values`, of `Tensor` objects of type `float32`.
The minimum scalar values for each of the input tensors.
* <b>`input_maxes`</b>: A list, with the same length as `values`, of `Tensor` objects of type `float32`.
The maximum scalar values for each of the input tensors.
* <b>`name`</b>: A name for the operation (optional).
##### Returns:
A tuple of `Tensor` objects (output, output_min, output_max).
* <b>`output`</b>: A `Tensor`. Has the same type as `values`. A `Tensor` with the concatenation of values stacked along the
`concat_dim` dimension. This tensor's shape matches that of `values` except
in `concat_dim` where it has the sum of the sizes.
* <b>`output_min`</b>: A `Tensor` of type `float32`. The float value that the minimum quantized output value represents.
* <b>`output_max`</b>: A `Tensor` of type `float32`. The float value that the maximum quantized output value represents.
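A usage sketch in graph-mode TensorFlow, with the inputs produced by
`tf.quantize_v2` from above; the tensor values here are illustrative only:
```
import tensorflow as tf

a = tf.constant([[0.0, 1.0], [2.0, 3.0]])
b = tf.constant([[4.0, 5.0], [6.0, 7.0]])
qa, a_min, a_max = tf.quantize_v2(a, 0.0, 8.0, tf.quint8, mode="MIN_COMBINED")
qb, b_min, b_max = tf.quantize_v2(b, 0.0, 8.0, tf.quint8, mode="MIN_COMBINED")

# Concatenate along dimension 0, carrying the per-input ranges along.
out, out_min, out_max = tf.quantized_concat(
    0, [qa, qb], [a_min, b_min], [a_max, b_max])

with tf.Session() as sess:
    print(sess.run([out, out_min, out_max]))  # a 4x2 quint8 tensor and its range
```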

View File

@@ -0,0 +1,74 @@
### `tf.quantize_v2(input, min_range, max_range, T, mode=None, name=None)` {#quantize_v2}
Quantize the 'input' tensor of type float to the 'output' tensor of type 'T'.
[min_range, max_range] are scalar floats that specify the range for
the 'input' data. The 'mode' attribute controls exactly which calculations are
used to convert the float values to their quantized equivalents.
In 'MIN_COMBINED' mode, each value of the tensor will undergo the following:
```
out[i] = (in[i] - min_range) * range(T) / (max_range - min_range)
if T == qint8, out[i] -= (range(T) + 1) / 2.0
```
Here `range(T) = numeric_limits<T>::max() - numeric_limits<T>::min()`.
*MIN_COMBINED Mode Example*
Assume the input is type float and has a possible range of [0.0, 6.0] and the
output type is quint8 ([0, 255]). The min_range and max_range values should be
specified as 0.0 and 6.0. Quantizing from float to quint8 will multiply each
value of the input by 255/6 and cast to quint8.
If the output type were qint8 ([-128, 127]), the operation would additionally
subtract 128 from each value prior to casting, so that the range of values aligns
with the range of qint8.
If the mode is 'MIN_FIRST', then this approach is used:
```
number_of_steps = 1 << (# of bits in T)
range_adjust = number_of_steps / (number_of_steps - 1)
range = (max_range - min_range) * range_adjust
range_scale = number_of_steps / range
quantized = round(input * range_scale) - round(min_range * range_scale) +
            numeric_limits<T>::min()
quantized = max(quantized, numeric_limits<T>::min())
quantized = min(quantized, numeric_limits<T>::max())
```
The biggest difference between this and MIN_COMBINED is that the minimum range
is rounded first, before it is subtracted from the rounded value. With
MIN_COMBINED, a small rounding bias is introduced, and repeated iterations of
quantizing and dequantizing accumulate a larger and larger error.
One thing to watch out for is that the operator may choose to adjust the
requested minimum and maximum values slightly during the quantization process,
so you should always use the output ports as the range for further calculations.
For example, if the requested minimum and maximum values are close to equal,
they will be separated by a small epsilon value to prevent ill-formed quantized
buffers from being created. Otherwise, you can end up with buffers where all the
quantized values map to the same float value, which causes problems for
operations that have to perform further calculations on them.
##### Args:
* <b>`input`</b>: A `Tensor` of type `float32`.
* <b>`min_range`</b>: A `Tensor` of type `float32`.
The minimum scalar value possibly produced for the input.
* <b>`max_range`</b>: A `Tensor` of type `float32`.
The maximum scalar value possibly produced for the input.
* <b>`T`</b>: A `tf.DType` from: `tf.qint8, tf.quint8, tf.qint16, tf.quint16, tf.qint32`.
* <b>`mode`</b>: An optional `string` from: `"MIN_COMBINED", "MIN_FIRST"`. Defaults to `"MIN_COMBINED"`.
* <b>`name`</b>: A name for the operation (optional).
##### Returns:
A tuple of `Tensor` objects (output, output_min, output_max).
* <b>`output`</b>: A `Tensor` of type `T`. The quantized data produced from the float input.
* <b>`output_min`</b>: A `Tensor` of type `float32`. The actual minimum scalar value used for the output.
* <b>`output_max`</b>: A `Tensor` of type `float32`. The actual maximum scalar value used for the output.
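To make the rounding difference concrete, here is a plain-Python sketch that
quantizes one value under both modes (formula sketches for quint8, not the
kernels):
```
# Quantize one float to a quint8 code under each mode (formula sketches).
def min_combined(x, lo, hi):
    return int(round((x - lo) * 255.0 / (hi - lo)))

def min_first(x, lo, hi):
    scale = 255.0 / (hi - lo)  # number_of_steps / range, after range_adjust
    return int(round(x * scale) - round(lo * scale))

# min_range is rounded separately in MIN_FIRST, so the codes can differ by one:
print(min_combined(0.5, 0.25, 6.25))  # 11
print(min_first(0.5, 0.25, 6.25))     # 10
```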

View File

@@ -0,0 +1,52 @@
### `tf.dequantize(input, min_range, max_range, mode=None, name=None)` {#dequantize}
Dequantize the 'input' tensor into a float Tensor.
[min_range, max_range] are scalar floats that specify the range for
the 'input' data. The 'mode' attribute controls exactly which calculations are
used to convert the quantized values to their float equivalents.
In 'MIN_COMBINED' mode, each value of the tensor will undergo the following:
```
if T == qint8, in[i] += (range(T) + 1) / 2.0
out[i] = min_range + (in[i] * (max_range - min_range) / range(T))
```
Here `range(T) = numeric_limits<T>::max() - numeric_limits<T>::min()`.
*MIN_COMBINED Mode Example*
If the input comes from a QuantizedRelu6, the output type is
quint8 (range of 0-255) but the possible range of QuantizedRelu6 is
0-6. The min_range and max_range values are therefore 0.0 and 6.0.
Dequantize on quint8 will take each value, cast to float, and multiply
by 6 / 255.
Note that if the quantized type is qint8, the operation will additionally add
128 to each value prior to casting.
If the mode is 'MIN_FIRST', then this approach is used:
```
number_of_steps = 1 << (# of bits in T)
range_adjust = number_of_steps / (number_of_steps - 1)
range = (max_range - min_range) * range_adjust
range_scale = range / number_of_steps
result = min_range + ((input - numeric_limits<T>::min()) * range_scale)
```
##### Args:
* <b>`input`</b>: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint16`, `quint16`, `qint32`.
* <b>`min_range`</b>: A `Tensor` of type `float32`.
The minimum scalar value possibly produced for the input.
* <b>`max_range`</b>: A `Tensor` of type `float32`.
The maximum scalar value possibly produced for the input.
* <b>`mode`</b>: An optional `string` from: `"MIN_COMBINED", "MIN_FIRST"`. Defaults to `"MIN_COMBINED"`.
* <b>`name`</b>: A name for the operation (optional).
##### Returns:
A `Tensor` of type `float32`.
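For the qint8 case noted above, a plain-Python sketch of the extra +128 shift
(the formula only, not the kernel):
```
# MIN_COMBINED dequantize for qint8: shift codes into [0, 255] before scaling.
def dequantize_qint8(q_values, min_range, max_range):
    range_t = 255.0  # numeric_limits<qint8>::max() - numeric_limits<qint8>::min()
    out = []
    for v in q_values:
        v += (range_t + 1) / 2.0  # the +128 adjustment for signed input
        out.append(min_range + v * (max_range - min_range) / range_t)
    return out

print(dequantize_qint8([-128, 0, 127], 0.0, 6.0))  # [0.0, ~3.01, 6.0]
```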

View File

@@ -0,0 +1,29 @@
### `tf.quantized_concat(concat_dim, values, input_mins, input_maxes, name=None)` {#quantized_concat}
Concatenates quantized tensors along one dimension.
##### Args:
* <b>`concat_dim`</b>: A `Tensor` of type `int32`.
0-D. The dimension along which to concatenate. Must be in the
range [0, rank(values)).
* <b>`values`</b>: A list of at least 2 `Tensor` objects of the same type.
The `N` Tensors to concatenate. Their ranks and types must match,
and their sizes must match in all dimensions except `concat_dim`.
* <b>`input_mins`</b>: A list, with the same length as `values`, of `Tensor` objects of type `float32`.
The minimum scalar values for each of the input tensors.
* <b>`input_maxes`</b>: A list, with the same length as `values`, of `Tensor` objects of type `float32`.
The maximum scalar values for each of the input tensors.
* <b>`name`</b>: A name for the operation (optional).
##### Returns:
A tuple of `Tensor` objects (output, output_min, output_max).
* <b>`output`</b>: A `Tensor`. Has the same type as `values`. A `Tensor` with the concatenation of values stacked along the
`concat_dim` dimension. This tensor's shape matches that of `values` except
in `concat_dim` where it has the sum of the sizes.
* <b>`output_min`</b>: A `Tensor` of type `float32`. The float value that the minimum quantized output value represents.
* <b>`output_max`</b>: A `Tensor` of type `float32`. The float value that the maximum quantized output value represents.
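A sketch in graph-mode TensorFlow with inputs quantized over two different,
illustrative ranges; note the op returns a single `output_min`/`output_max`
pair describing the whole concatenated result:
```
import tensorflow as tf

a = tf.constant([1.0, 2.0])
b = tf.constant([5.0, 6.0])
qa, a_min, a_max = tf.quantize_v2(a, 0.0, 3.0, tf.quint8, mode="MIN_COMBINED")
qb, b_min, b_max = tf.quantize_v2(b, 4.0, 7.0, tf.quint8, mode="MIN_COMBINED")

out, out_min, out_max = tf.quantized_concat(
    0, [qa, qb], [a_min, b_min], [a_max, b_max])

with tf.Session() as sess:
    # The single [out_min, out_max] pair must cover both inputs' ranges.
    print(sess.run([out_min, out_max]))
```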

View File

@@ -128,6 +128,7 @@
* [`cast`](../../api_docs/python/array_ops.md#cast)
* [`concat`](../../api_docs/python/array_ops.md#concat)
* [`depth_to_space`](../../api_docs/python/array_ops.md#depth_to_space)
* [`dequantize`](../../api_docs/python/array_ops.md#dequantize)
* [`dynamic_partition`](../../api_docs/python/array_ops.md#dynamic_partition)
* [`dynamic_stitch`](../../api_docs/python/array_ops.md#dynamic_stitch)
* [`expand_dims`](../../api_docs/python/array_ops.md#expand_dims)
@@ -138,6 +139,8 @@
* [`one_hot`](../../api_docs/python/array_ops.md#one_hot)
* [`pack`](../../api_docs/python/array_ops.md#pack)
* [`pad`](../../api_docs/python/array_ops.md#pad)
* [`quantize_v2`](../../api_docs/python/array_ops.md#quantize_v2)
* [`quantized_concat`](../../api_docs/python/array_ops.md#quantized_concat)
* [`rank`](../../api_docs/python/array_ops.md#rank)
* [`required_space_to_batch_paddings`](../../api_docs/python/array_ops.md#required_space_to_batch_paddings)
* [`reshape`](../../api_docs/python/array_ops.md#reshape)