Add contrib.framework and contrib.losses to gendocs.

Improve docs for losses and metrics.
Change: 123889291
A. Unique TensorFlower 2016-06-02 10:32:43 -08:00 committed by TensorFlower Gardener
parent 0a0e72b13e
commit 38a3a365b3
46 changed files with 1954 additions and 9 deletions


@@ -2,12 +2,19 @@
## losses
Loss operations for use in training models, typically with a signature like
the following:

`sum_of_squares(predictions, targets, weight, scope) : Tensor`

All loss functions take a pair of tensors, `predictions` and ground truth
`targets`. It is assumed that the shape of both these tensors is of the form
`[batch_size, d1, ... dN]` where `batch_size` is the number of samples in the
batch and `d1` ... `dN` are the remaining dimensions.

The `weight` parameter can be used to adjust the relative weight of samples
within the batch. The result of each loss is a scalar average of all sample
losses with non-zero weights.

Any parameter named `logit` should be the raw model outputs, not a normalized
probability distribution (i.e., `[0.0, 1.0]`). `target` for losses taking


@@ -2,11 +2,26 @@
## Evaluation metrics
Metrics are used in evaluation to assess the quality of a model. Most are
"streaming" ops that create variables to accumulate a running total, and
return both an update tensor to update these variables and a value tensor to
read the accumulated value. Example:

    value, update_op = metrics.streaming_mean_squared_error(
        predictions, targets, weight)

Most metric functions take a pair of tensors, `predictions` and ground truth
`targets` (`streaming_mean` is an exception: it takes a single value tensor,
usually a loss). It is assumed that the shape of both these tensors is of the
form `[batch_size, d1, ... dN]` where `batch_size` is the number of samples in
the batch and `d1` ... `dN` are the remaining dimensions.

The `weight` parameter can be used to adjust the relative weight of samples
within the batch. The result of each metric is a scalar average of all sample
values with non-zero weights.

The result is 2 tensors that should be used like the following for each eval
run:

```python
predictions = ...
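targets = ...
weight = ...

# A sketch of the remaining eval-run pattern; `sess`, `num_batches`, and the
# local-variable initialization are assumptions, not part of the original
# snippet.
value, update_op = metrics.streaming_mean_squared_error(
    predictions, targets, weight)

sess.run(tf.initialize_local_variables())  # streaming metrics accumulate state
for _ in range(num_batches):
  sess.run(update_op)  # update the accumulators once per batch
mse = sess.run(value)  # read the aggregated metric
```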


@@ -0,0 +1,736 @@
<!-- This file is machine generated: DO NOT EDIT! -->
# Framework (contrib)
[TOC]
Framework utilities.
- - -
### `tf.contrib.framework.assert_same_float_dtype(tensors=None, dtype=None)` {#assert_same_float_dtype}
Validate and return float type based on `tensors` and `dtype`.
For ops such as matrix multiplication, inputs and weights must be of the
same float type. This function validates that all `tensors` are the same type,
validates that type is `dtype` (if supplied), and returns the type. Type must
be `dtypes.float32` or `dtypes.float64`. If neither `tensors` nor
`dtype` is supplied, default to `dtypes.float32`.
##### Args:
* <b>`tensors`</b>: Tensors of input values. Can include `None` elements, which will be
ignored.
* <b>`dtype`</b>: Expected type.
##### Returns:
Validated type.
##### Raises:
* <b>`ValueError`</b>: if neither `tensors` nor `dtype` is supplied, or result is not
float.
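For illustration, a minimal usage sketch (not part of the generated reference;
the tensor values are arbitrary):

```python
import tensorflow as tf

a = tf.constant([1.0, 2.0], dtype=tf.float32)
b = tf.constant([3.0, 4.0], dtype=tf.float32)

# Both tensors are float32, so float32 is validated and returned.
dtype = tf.contrib.framework.assert_same_float_dtype([a, b])

# Mixing float32 with float64, or passing only integer tensors,
# would raise ValueError instead.
```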
- - -
### `tf.contrib.framework.assert_scalar_int(tensor)` {#assert_scalar_int}
Assert `tensor` is 0-D, of type `tf.int32` or `tf.int64`.
##### Args:
* <b>`tensor`</b>: Tensor to test.
##### Returns:
`tensor`, for chaining.
##### Raises:
* <b>`ValueError`</b>: if `tensor` is not 0-D or not of type `tf.int32` or `tf.int64`.
- - -
### `tf.contrib.framework.convert_to_tensor_or_sparse_tensor(value, dtype=None, name=None, as_ref=False)` {#convert_to_tensor_or_sparse_tensor}
Converts value to a `SparseTensor` or `Tensor`.
##### Args:
* <b>`value`</b>: A `SparseTensor`, `SparseTensorValue`, or an object whose type has a
registered `Tensor` conversion function.
* <b>`dtype`</b>: Optional element type for the returned tensor. If missing, the
type is inferred from the type of `value`.
* <b>`name`</b>: Optional name to use if a new `Tensor` is created.
* <b>`as_ref`</b>: True if we want the result as a ref tensor. Only used if a new
`Tensor` is created.
##### Returns:
A `SparseTensor` or `Tensor` based on `value`.
##### Raises:
* <b>`RuntimeError`</b>: If result type is incompatible with `dtype`.
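A minimal sketch of both conversion paths (the input values are illustrative):

```python
import numpy as np
import tensorflow as tf

# A dense input is converted to (or passed through as) a Tensor.
dense = tf.contrib.framework.convert_to_tensor_or_sparse_tensor(
    np.array([1, 2, 3]))

# A SparseTensorValue is converted to a SparseTensor.
sparse = tf.contrib.framework.convert_to_tensor_or_sparse_tensor(
    tf.SparseTensorValue(indices=[[0, 0]], values=[1], shape=[2, 2]))
```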
- - -
### `tf.contrib.framework.get_graph_from_inputs(op_input_list, graph=None)` {#get_graph_from_inputs}
Returns the appropriate graph to use for the given inputs.
1. If `graph` is provided, we validate that all inputs in `op_input_list` are
from the same graph.
2. Otherwise, we attempt to select a graph from the first Operation- or
Tensor-valued input in `op_input_list`, and validate that all other
such inputs are in the same graph.
3. If the graph was not specified and it could not be inferred from
`op_input_list`, we attempt to use the default graph.
##### Args:
* <b>`op_input_list`</b>: A list of inputs to an operation, which may include `Tensor`,
`Operation`, and other objects that may be converted to a graph element.
* <b>`graph`</b>: (Optional) The explicit graph to use.
##### Raises:
* <b>`TypeError`</b>: If `op_input_list` is not a list or tuple, or if graph is not a
Graph.
* <b>`ValueError`</b>: If a graph is explicitly passed and not all inputs are from it,
or if the inputs are from multiple graphs, or we could not find a graph
and there was no default graph.
##### Returns:
The appropriate graph to use for the given inputs.
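A minimal sketch of case 2, inferring the graph from the inputs:

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
  a = tf.constant(1.0)
  b = tf.constant(2.0)

# No explicit graph is passed, so the graph is inferred from `a` and `b`,
# and all tensor-valued inputs are checked for consistency.
inferred = tf.contrib.framework.get_graph_from_inputs([a, b])
assert inferred is g
```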
- - -
### `tf.is_numeric_tensor(tensor)` {#is_numeric_tensor}
- - -
### `tf.is_non_decreasing(x, name=None)` {#is_non_decreasing}
Returns `True` if `x` is non-decreasing.
Elements of `x` are compared in row-major order. The tensor `[x[0],...]`
is non-decreasing if for every adjacent pair we have `x[i] <= x[i+1]`.
If `x` has fewer than two elements, it is trivially non-decreasing.
See also: `is_strictly_increasing`
##### Args:
* <b>`x`</b>: Numeric `Tensor`.
* <b>`name`</b>: A name for this operation (optional). Defaults to "is_non_decreasing".
##### Returns:
Boolean `Tensor`, equal to `True` iff `x` is non-decreasing.
##### Raises:
* <b>`TypeError`</b>: if `x` is not a numeric tensor.
- - -
### `tf.is_strictly_increasing(x, name=None)` {#is_strictly_increasing}
Returns `True` if `x` is strictly increasing.
Elements of `x` are compared in row-major order. The tensor `[x[0],...]`
is strictly increasing if for every adjacent pair we have `x[i] < x[i+1]`.
If `x` has fewer than two elements, it is trivially strictly increasing.
See also: `is_non_decreasing`
##### Args:
* <b>`x`</b>: Numeric `Tensor`.
* <b>`name`</b>: A name for this operation (optional).
Defaults to "is_strictly_increasing"
##### Returns:
Boolean `Tensor`, equal to `True` iff `x` is strictly increasing.
##### Raises:
* <b>`TypeError`</b>: if `x` is not a numeric tensor.
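A short sketch contrasting the two predicates (values are illustrative):

```python
import tensorflow as tf

x = tf.constant([1, 2, 2, 3])

with tf.Session() as sess:
  print(sess.run(tf.is_non_decreasing(x)))       # True: every x[i] <= x[i+1]
  print(sess.run(tf.is_strictly_increasing(x)))  # False: 2 is repeated
```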
- - -
### `tf.contrib.framework.reduce_sum_n(tensors, name=None)` {#reduce_sum_n}
Reduce tensors to a scalar sum.
This reduces each tensor in `tensors` to a scalar via `tf.reduce_sum`, then
adds them via `tf.add_n`.
##### Args:
* <b>`tensors`</b>: List of tensors, all of the same numeric type.
* <b>`name`</b>: Tensor name, and scope for all other ops.
##### Returns:
The scalar sum of all elements in `tensors`.
##### Raises:
* <b>`ValueError`</b>: if `tensors` is missing or empty.
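A minimal sketch; the result equals `tf.add_n([tf.reduce_sum(t) for t in tensors])`:

```python
import tensorflow as tf

a = tf.constant([[1.0, 2.0]])
b = tf.constant(3.0)

# reduce_sum(a) = 3.0 and reduce_sum(b) = 3.0, so the total is 6.0.
total = tf.contrib.framework.reduce_sum_n([a, b])
```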
- - -
### `tf.contrib.framework.safe_embedding_lookup_sparse(embedding_weights, sparse_ids, sparse_weights=None, combiner='mean', default_id=None, name=None, partition_strategy='div')` {#safe_embedding_lookup_sparse}
Lookup embedding results, accounting for invalid IDs and empty features.
The partitioned embeddings in `embedding_weights` must all be the same shape
except for the first dimension. The first dimension is allowed to vary as the
vocabulary size is not necessarily a multiple of `P`.
Invalid IDs (< 0) are pruned from input IDs and weights, as well as any IDs
with non-positive weight. For an entry with no features, the embedding vector
for `default_id` is returned, or the 0-vector if `default_id` is not supplied.
##### Args:
* <b>`embedding_weights`</b>: A list of `P` float tensors or values representing
partitioned embedding tensors.
* <b>`sparse_ids`</b>: `SparseTensor` of shape `[batch_size, ?]` containing the ids.
* <b>`sparse_weights`</b>: `SparseTensor` of same shape as `sparse_ids`, containing
float weights corresponding to `sparse_ids`, or `None` if all weights
are assumed to be 1.0.
* <b>`combiner`</b>: A string specifying how to combine embedding results for each
entry. Currently "mean", "sqrtn" and "sum" are supported, with "mean"
the default.
* <b>`default_id`</b>: The id to use for an entry with no features.
* <b>`name`</b>: A name for this operation (optional).
* <b>`partition_strategy`</b>: A string specifying the partitioning strategy.
Currently `"div"` and `"mod"` are supported. Default is `"div"`.
##### Returns:
Dense tensor of shape `[batch_size, embed_dim]`.
##### Raises:
* <b>`ValueError`</b>: if `embedding_weights` is empty.
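A minimal sketch with a single weight partition (`P = 1`); the id dtype and
values are assumptions for illustration:

```python
import tensorflow as tf

# One partition of a 3-row, 2-dimensional embedding table.
embedding_weights = [tf.constant([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])]

# Sample 0 looks up id 2; sample 1 has no features and, with no default_id
# supplied, receives the 0-vector.
sparse_ids = tf.SparseTensor(
    indices=[[0, 0]], values=tf.constant([2], dtype=tf.int64), shape=[2, 1])

embedded = tf.contrib.framework.safe_embedding_lookup_sparse(
    embedding_weights, sparse_ids)  # shape [2, 2]
```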
- - -
### `tf.contrib.framework.with_shape(expected_shape, tensor)` {#with_shape}
Asserts tensor has expected shape.
If tensor shape and expected_shape are fully defined, assert they match.
Otherwise, add assert op that will validate the shape when tensor is
evaluated, and set shape on tensor.
##### Args:
* <b>`expected_shape`</b>: Expected shape to assert, as a 1D array of ints, or tensor
of same.
* <b>`tensor`</b>: Tensor whose shape we're validating.
##### Returns:
tensor, perhaps with a dependent assert operation.
##### Raises:
* <b>`ValueError`</b>: if tensor has an invalid shape.
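A minimal sketch; `x` has no static shape, so an assert op is attached and the
static shape is set:

```python
import tensorflow as tf

x = tf.placeholder(tf.float32)  # shape unknown at graph-construction time

# `checked` carries static shape [2, 3]; feeding a differently shaped value
# triggers the runtime assert when `checked` is evaluated.
checked = tf.contrib.framework.with_shape([2, 3], x)
print(checked.get_shape())  # (2, 3)
```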
- - -
### `tf.contrib.framework.with_same_shape(expected_tensor, tensor)` {#with_same_shape}
Assert tensors are the same shape, from the same graph.
##### Args:
* <b>`expected_tensor`</b>: Tensor with expected shape.
* <b>`tensor`</b>: Tensor of actual values.
##### Returns:
`tensor`, possibly with a dependent assert operation added.
## Arg_Scope
- - -
### `tf.contrib.framework.arg_scope(list_ops_or_scope, **kwargs)` {#arg_scope}
Stores the default arguments for the given set of list_ops.
For usage, please see examples at top of the file.
##### Args:
* <b>`list_ops_or_scope`</b>: List or tuple of operations to set argument scope for or
a dictionary containing the current scope. When list_ops_or_scope is a dict,
kwargs must be empty. When list_ops_or_scope is a list or tuple, then every
op in it needs to be decorated with @add_arg_scope to work.
* <b>`**kwargs`</b>: keyword=value that will define the defaults for each op in
list_ops. All the ops need to accept the given set of arguments.
##### Yields:
the current_scope, which is a dictionary of {op: {arg: value}}
##### Raises:
* <b>`TypeError`</b>: if list_ops is not a list or a tuple.
* <b>`ValueError`</b>: if any op in list_ops has not been decorated with @add_arg_scope.
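A minimal sketch, using a hypothetical `conv_block` function decorated for
arg_scope (the function body is a stand-in):

```python
import tensorflow as tf
from tensorflow.contrib import framework

@framework.add_arg_scope
def conv_block(inputs, padding='SAME'):
  # A stand-in op; a real block would build convolutions here.
  return tf.identity(inputs, name='block_' + padding)

# Within the scope, padding defaults to 'VALID' for every conv_block call;
# an explicit argument at the call site still wins.
with framework.arg_scope([conv_block], padding='VALID'):
  net = conv_block(tf.constant([[1.0]]))
```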
- - -
### `tf.contrib.framework.add_arg_scope(func)` {#add_arg_scope}
Decorates a function with args so it can be used within an arg_scope.
##### Args:
* <b>`func`</b>: function to decorate.
##### Returns:
The decorated function `func_with_args()`.
- - -
### `tf.contrib.framework.has_arg_scope(func)` {#has_arg_scope}
Checks whether a func has been decorated with @add_arg_scope or not.
##### Args:
* <b>`func`</b>: function to check.
##### Returns:
a boolean.
- - -
### `tf.contrib.framework.arg_scoped_arguments(func)` {#arg_scoped_arguments}
Returns the list of kwargs that arg_scope can set for a func.
##### Args:
* <b>`func`</b>: function which has been decorated with @add_arg_scope.
##### Returns:
a list of kwargs names.
## Variables
- - -
### `tf.contrib.framework.add_model_variable(var)` {#add_model_variable}
Adds a variable to the MODEL_VARIABLES collection.
##### Args:
* <b>`var`</b>: a variable.
- - -
### `tf.contrib.framework.assert_global_step(global_step_tensor)` {#assert_global_step}
Asserts `global_step_tensor` is a scalar int `Variable` or `Tensor`.
##### Args:
* <b>`global_step_tensor`</b>: `Tensor` to test.
- - -
### `tf.contrib.framework.assert_or_get_global_step(graph=None, global_step_tensor=None)` {#assert_or_get_global_step}
Verifies that a global step tensor is valid or gets one if None is given.
If `global_step_tensor` is not None, check that it is a valid global step
tensor (using `assert_global_step`). Otherwise find a global step tensor using
`get_global_step` and return it.
##### Args:
* <b>`graph`</b>: The graph to find the global step tensor for.
* <b>`global_step_tensor`</b>: The tensor to check for suitability as a global step.
If None is given (the default), find a global step tensor.
##### Returns:
A tensor suitable as a global step, or `None` if none was provided and none
was found.
- - -
### `tf.contrib.framework.create_global_step(graph=None)` {#create_global_step}
Create global step tensor in graph.
##### Args:
* <b>`graph`</b>: The graph in which to create the global step. If missing, use default
graph.
##### Returns:
Global step tensor.
##### Raises:
* <b>`ValueError`</b>: if global step key is already defined.
- - -
### `tf.contrib.framework.get_global_step(graph=None)` {#get_global_step}
Get the global step tensor.
The global step tensor must be an integer variable. We first try to find it
in the collection `GLOBAL_STEP`, or by name `global_step:0`.
##### Args:
* <b>`graph`</b>: The graph to find the global step in. If missing, use default graph.
##### Returns:
The global step variable, or `None` if none was found.
##### Raises:
* <b>`TypeError`</b>: If the global step tensor has a non-integer type, or if it is not
a `Variable`.
- - -
### `tf.contrib.framework.get_or_create_global_step(graph=None)` {#get_or_create_global_step}
Returns, and creates if necessary, the global step variable.
##### Args:
* <b>`graph`</b>: The graph in which to create the global step. If missing, use default
graph.
##### Returns:
the tensor representing the global step variable.
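A minimal sketch of the typical pattern, get-or-create once and look it up
elsewhere (the identity check is an assumption about collection lookup):

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
  # Creates the global step on the first call...
  step = tf.contrib.framework.get_or_create_global_step()

# ...and subsequent lookups find the same variable.
assert tf.contrib.framework.get_global_step(g) is step
```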
- - -
### `tf.contrib.framework.get_local_variables(scope=None, suffix=None)` {#get_local_variables}
Gets the list of local variables, filtered by scope and/or suffix.
##### Args:
* <b>`scope`</b>: an optional scope for filtering the variables to return.
* <b>`suffix`</b>: an optional suffix for filtering the variables to return.
##### Returns:
a list of variables in collection with scope and suffix.
- - -
### `tf.contrib.framework.get_model_variables(scope=None, suffix=None)` {#get_model_variables}
Gets the list of model variables, filtered by scope and/or suffix.
##### Args:
* <b>`scope`</b>: an optional scope for filtering the variables to return.
* <b>`suffix`</b>: an optional suffix for filtering the variables to return.
##### Returns:
a list of variables in collection with scope and suffix.
- - -
### `tf.contrib.framework.get_unique_variable(var_op_name)` {#get_unique_variable}
Gets the variable uniquely identified by that var_op_name.
##### Args:
* <b>`var_op_name`</b>: the full name of the variable op, including the scope.
##### Returns:
a tensorflow variable.
##### Raises:
* <b>`ValueError`</b>: if no variable uniquely identified by the name exists.
- - -
### `tf.contrib.framework.get_variables_by_name(given_name, scope=None)` {#get_variables_by_name}
Gets the list of variables that were given that name.
##### Args:
* <b>`given_name`</b>: name given to the variable without any scope.
* <b>`scope`</b>: an optional scope for filtering the variables to return.
##### Returns:
a copied list of variables with the given name and scope.
- - -
### `tf.contrib.framework.get_variables_by_suffix(suffix, scope=None)` {#get_variables_by_suffix}
Gets the list of variables that end with the given suffix.
##### Args:
* <b>`suffix`</b>: suffix for filtering the variables to return.
* <b>`scope`</b>: an optional scope for filtering the variables to return.
##### Returns:
a copied list of variables with the given suffix and scope.
- - -
### `tf.contrib.framework.get_variables_to_restore(include=None, exclude=None)` {#get_variables_to_restore}
Gets the list of the variables to restore.
##### Args:
* <b>`include`</b>: an optional list/tuple of scope strings for filtering which
variables from the VARIABLES collection to include. If `None`, all variables
are included.
* <b>`exclude`</b>: an optional list/tuple of scope strings for filtering which
variables from the VARIABLES collection to exclude. If `None`, no variables
are excluded.
##### Returns:
a list of variables to restore.
##### Raises:
* <b>`TypeError`</b>: include or exclude is provided but is not a list or a tuple.
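A minimal sketch of scope-based filtering, e.g. when fine-tuning from a
checkpoint (the variable names are illustrative):

```python
import tensorflow as tf
from tensorflow.contrib import framework

with tf.variable_scope('encoder'):
  framework.variable('weights', shape=[3, 3])
with tf.variable_scope('decoder'):
  framework.variable('weights', shape=[3, 3])

# Restore only the encoder weights from a checkpoint.
to_restore = framework.get_variables_to_restore(include=['encoder'])
saver = tf.train.Saver(to_restore)
```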
- - -
### `tf.contrib.framework.get_variables(scope=None, suffix=None, collection='variables')` {#get_variables}
Gets the list of variables, filtered by scope and/or suffix.
##### Args:
* <b>`scope`</b>: an optional scope for filtering the variables to return.
* <b>`suffix`</b>: an optional suffix for filtering the variables to return.
* <b>`collection`</b>: the collection to search in. Defaults to GraphKeys.VARIABLES.
##### Returns:
a list of variables in collection with scope and suffix.
- - -
### `tf.contrib.framework.local_variable(initial_value, validate_shape=True, name=None)` {#local_variable}
Create variable and add it to `GraphKeys.LOCAL_VARIABLES` collection.
##### Args:
* <b>`initial_value`</b>: See variables.Variable.__init__.
* <b>`validate_shape`</b>: See variables.Variable.__init__.
* <b>`name`</b>: See variables.Variable.__init__.
##### Returns:
New variable.
- - -
### `tf.contrib.framework.model_variable(*args, **kwargs)` {#model_variable}
Gets an existing model variable with these parameters or creates a new one.
##### Args:
* <b>`name`</b>: the name of the new or existing variable.
* <b>`shape`</b>: shape of the new or existing variable.
* <b>`dtype`</b>: type of the new or existing variable (defaults to `DT_FLOAT`).
* <b>`initializer`</b>: initializer for the variable if one is created.
* <b>`regularizer`</b>: a (Tensor -> Tensor or None) function; the result of
applying it on a newly created variable will be added to the collection
GraphKeys.REGULARIZATION_LOSSES and can be used for regularization.
* <b>`trainable`</b>: If `True` also add the variable to the graph collection
`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
* <b>`collections`</b>: A list of collection names to which the Variable will be added.
Note that the variable is always also added to the tf.GraphKeys.VARIABLES
and MODEL_VARIABLES collections.
* <b>`caching_device`</b>: Optional device string or function describing where the
Variable should be cached for reading. Defaults to the Variable's
device.
* <b>`device`</b>: Optional device to place the variable. It can be a string or a
function that is called to get the device for the variable.
##### Returns:
The created or existing variable.
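A minimal sketch (the initializer choice and device are arbitrary):

```python
import tensorflow as tf
from tensorflow.contrib import framework

weights = framework.model_variable(
    'weights', shape=[10, 10],
    initializer=tf.truncated_normal_initializer(stddev=0.01),
    device='/cpu:0')

# Model variables are also tracked in the MODEL_VARIABLES collection,
# so they are visible to get_model_variables().
assert weights in framework.get_model_variables()
```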
- - -
### `tf.contrib.framework.variable(*args, **kwargs)` {#variable}
Gets an existing variable with these parameters or creates a new one.
##### Args:
* <b>`name`</b>: the name of the new or existing variable.
* <b>`shape`</b>: shape of the new or existing variable.
* <b>`dtype`</b>: type of the new or existing variable (defaults to `DT_FLOAT`).
* <b>`initializer`</b>: initializer for the variable if one is created.
* <b>`regularizer`</b>: a (Tensor -> Tensor or None) function; the result of
applying it on a newly created variable will be added to the collection
GraphKeys.REGULARIZATION_LOSSES and can be used for regularization.
* <b>`trainable`</b>: If `True` also add the variable to the graph collection
`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
* <b>`collections`</b>: A list of collection names to which the Variable will be added.
If `None`, it defaults to tf.GraphKeys.VARIABLES.
* <b>`caching_device`</b>: Optional device string or function describing where the
Variable should be cached for reading. Defaults to the Variable's
device.
* <b>`device`</b>: Optional device to place the variable. It can be a string or a
function that is called to get the device for the variable.
##### Returns:
The created or existing variable.
- - -
### `class tf.contrib.framework.VariableDeviceChooser` {#VariableDeviceChooser}
Device chooser for variables.
When using parameter servers, variables are assigned to them in a round-robin fashion.
When not using a parameter server it allows GPU or CPU placement.
- - -
#### `tf.contrib.framework.VariableDeviceChooser.__init__(num_tasks=0, job_name='ps', device_type='CPU', device_index=0)` {#VariableDeviceChooser.__init__}
Initialize VariableDeviceChooser.
##### Usage:

To use with 2 parameter servers:

    VariableDeviceChooser(2)

To use without parameter servers:

    VariableDeviceChooser()
    VariableDeviceChooser(device_type='GPU')  # For GPU placement
##### Args:
* <b>`num_tasks`</b>: number of tasks.
* <b>`job_name`</b>: String, a name for the parameter server job.
* <b>`device_type`</b>: Optional device type string (e.g. "CPU" or "GPU")
* <b>`device_index`</b>: int. Optional device index. If left
unspecified, device represents 'any' device_index.
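A minimal sketch of round-robin placement over two parameter-server tasks; it
relies on the `device` argument of `variable` accepting a callable, as
described above:

```python
import tensorflow as tf
from tensorflow.contrib import framework

chooser = framework.VariableDeviceChooser(num_tasks=2)

# Variables are placed on /job:ps/task:0, /job:ps/task:1, /job:ps/task:0, ...
with framework.arg_scope([framework.variable], device=chooser):
  a = framework.variable('a', shape=[1])
  b = framework.variable('b', shape=[1])
```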


@@ -0,0 +1,303 @@
<!-- This file is machine generated: DO NOT EDIT! -->
# Losses (contrib)
[TOC]
Ops for building neural network losses.
## Other Functions and Classes
- - -
### `tf.contrib.losses.absolute_difference(predictions, targets, weight=1.0, scope=None)` {#absolute_difference}
Adds an Absolute Difference loss to the training procedure.
`weight` acts as a coefficient for the loss. If a scalar is provided, then the
loss is simply scaled by the given value. If `weight` is a tensor of size
[batch_size], then the total loss for each sample of the batch is rescaled
by the corresponding element in the `weight` vector. If the shape of
`weight` matches the shape of `predictions`, then the loss of each
measurable element of `predictions` is scaled by the corresponding value of
`weight`.
##### Args:
* <b>`predictions`</b>: The predicted outputs.
* <b>`targets`</b>: The ground truth output tensor, same dimensions as 'predictions'.
* <b>`weight`</b>: Coefficients for the loss. This must be a scalar, a tensor of shape
[batch_size] or a tensor whose shape matches `predictions`.
* <b>`scope`</b>: The scope for the operations performed in computing the loss.
##### Returns:
A scalar `Tensor` representing the loss value.
##### Raises:
* <b>`ValueError`</b>: If the shape of `predictions` doesn't match that of `targets` or
if the shape of `weight` is invalid.
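A small worked sketch of the per-sample weighting described above:

```python
import tensorflow as tf

predictions = tf.constant([4.0, 8.0])
targets = tf.constant([1.0, 2.0])

# Sample losses are |4-1| = 3 and |8-2| = 6. With weights [1.0, 0.0] the
# second sample is dropped, so the loss averages to 3.0 over the single
# sample with non-zero weight.
loss = tf.contrib.losses.absolute_difference(
    predictions, targets, weight=tf.constant([1.0, 0.0]))
```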
- - -
### `tf.contrib.losses.add_loss(loss)` {#add_loss}
Adds an externally defined loss to the collection of losses.
##### Args:
* <b>`loss`</b>: A loss `Tensor`.
- - -
### `tf.contrib.losses.cosine_distance(predictions, targets, dim, weight=1.0, scope=None)` {#cosine_distance}
Adds a cosine-distance loss to the training procedure.
Note that the function assumes that the predictions and targets are already
unit-normalized.
##### Args:
* <b>`predictions`</b>: An arbitrary matrix.
* <b>`targets`</b>: A `Tensor` whose shape matches 'predictions'.
* <b>`dim`</b>: The dimension along which the cosine distance is computed.
* <b>`weight`</b>: Coefficients for the loss. This must be a scalar, a tensor of shape
[batch_size] or a tensor whose shape matches `predictions`.
* <b>`scope`</b>: The scope for the operations performed in computing the loss.
##### Returns:
A scalar `Tensor` representing the loss value.
##### Raises:
* <b>`ValueError`</b>: If predictions.shape doesn't match targets.shape, or if the
shape of `weight` is invalid.
- - -
### `tf.contrib.losses.get_losses(scope=None)` {#get_losses}
Gets the list of loss variables.
##### Args:
* <b>`scope`</b>: an optional scope for filtering the losses to return.
##### Returns:
a list of loss variables.
- - -
### `tf.contrib.losses.get_regularization_losses(scope=None)` {#get_regularization_losses}
Gets the regularization losses.
##### Args:
* <b>`scope`</b>: an optional scope for filtering the losses to return.
##### Returns:
A list of loss variables.
- - -
### `tf.contrib.losses.get_total_loss(add_regularization_losses=True, name='total_loss')` {#get_total_loss}
Returns a tensor whose value represents the total loss.
Notice that the function adds the given losses to the regularization losses.
##### Args:
* <b>`add_regularization_losses`</b>: A boolean indicating whether or not to use the
regularization losses in the sum.
* <b>`name`</b>: The name of the returned tensor.
##### Returns:
A `Tensor` whose value represents the total loss.
##### Raises:
* <b>`ValueError`</b>: if `losses` is not iterable.
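A minimal sketch; each loss op registers itself, so the total can be collected
in one tensor:

```python
import tensorflow as tf

predictions = tf.constant([2.0])
targets = tf.constant([0.0])

# Both calls add their results to the losses collection.
tf.contrib.losses.sum_of_squares(predictions, targets)
tf.contrib.losses.absolute_difference(predictions, targets)

# Sum of all registered losses (plus regularization losses by default).
total = tf.contrib.losses.get_total_loss()
```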
- - -
### `tf.contrib.losses.log_loss(predictions, targets, weight=1.0, epsilon=1e-07, scope=None)` {#log_loss}
Adds a Log Loss term to the training procedure.
`weight` acts as a coefficient for the loss. If a scalar is provided, then the
loss is simply scaled by the given value. If `weight` is a tensor of size
[batch_size], then the total loss for each sample of the batch is rescaled
by the corresponding element in the `weight` vector. If the shape of
`weight` matches the shape of `predictions`, then the loss of each
measurable element of `predictions` is scaled by the corresponding value of
`weight`.
##### Args:
* <b>`predictions`</b>: The predicted outputs.
* <b>`targets`</b>: The ground truth output tensor, same dimensions as 'predictions'.
* <b>`weight`</b>: Coefficients for the loss. This must be a scalar, a tensor of shape
[batch_size] or a tensor whose shape matches `predictions`.
* <b>`epsilon`</b>: A small increment to add to avoid taking a log of zero.
* <b>`scope`</b>: The scope for the operations performed in computing the loss.
##### Returns:
A scalar `Tensor` representing the loss value.
##### Raises:
* <b>`ValueError`</b>: If the shape of `predictions` doesn't match that of `targets` or
if the shape of `weight` is invalid.
- - -
### `tf.contrib.losses.sigmoid_cross_entropy(logits, multi_class_labels, weight=1.0, label_smoothing=0, scope=None)` {#sigmoid_cross_entropy}
Creates a cross-entropy loss using tf.nn.sigmoid_cross_entropy_with_logits.
##### Args:
* <b>`logits`</b>: [batch_size, num_classes] logits outputs of the network.
* <b>`multi_class_labels`</b>: [batch_size, num_classes] target labels in (0, 1).
* <b>`weight`</b>: Coefficients for the loss. The tensor must be a scalar, a tensor of
shape [batch_size] or shape [batch_size, num_classes].
* <b>`label_smoothing`</b>: If greater than 0 then smooth the labels.
* <b>`scope`</b>: The scope for the operations performed in computing the loss.
##### Returns:
A scalar `Tensor` representing the loss value.
- - -
### `tf.contrib.losses.softmax_cross_entropy(logits, onehot_labels, weight=1.0, label_smoothing=0, scope=None)` {#softmax_cross_entropy}
Creates a cross-entropy loss using tf.nn.softmax_cross_entropy_with_logits.
It can scale the loss by a weight factor, and smooth the labels.
##### Args:
* <b>`logits`</b>: [batch_size, num_classes] logits outputs of the network.
* <b>`onehot_labels`</b>: [batch_size, num_classes] target one_hot_encoded labels.
* <b>`weight`</b>: Coefficients for the loss. The tensor must be a scalar or a tensor
of shape [batch_size].
* <b>`label_smoothing`</b>: If greater than 0 then smooth the labels.
* <b>`scope`</b>: The scope for the operations performed in computing the loss.
##### Returns:
A scalar `Tensor` representing the loss value.
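A minimal sketch of the softmax variant with label smoothing (the values are
illustrative):

```python
import tensorflow as tf

logits = tf.constant([[2.0, 0.5, -1.0]])         # raw scores, not probabilities
onehot_labels = tf.constant([[1.0, 0.0, 0.0]])

# label_smoothing=0.1 softens the one-hot targets, which can keep the
# model from becoming overconfident.
loss = tf.contrib.losses.softmax_cross_entropy(
    logits, onehot_labels, label_smoothing=0.1)
```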
- - -
### `tf.contrib.losses.sum_of_pairwise_squares(predictions, targets, weight=1.0, scope=None)` {#sum_of_pairwise_squares}
Adds a pairwise-errors-squared loss to the training procedure.
Unlike the sum_of_squares loss, which is a measure of the differences between
corresponding elements of `predictions` and `targets`, sum_of_pairwise_squares
is a measure of the differences between pairs of corresponding elements of
`predictions` and `targets`.
For example, if `targets`=[a, b, c] and `predictions`=[x, y, z], the following
three pairs of differences are summed to compute the loss:
loss = [ ((a-b) - (x-y)).^2 + ((a-c) - (x-z)).^2 + ((b-c) - (y-z)).^2 ] / 3
Note that since the inputs are of size [batch_size, d0, ... dN], the
corresponding pairs are computed within each batch sample but not across
samples within a batch. For example, if `predictions` represents a batch of
16 grayscale images of dimension [batch_size, 100, 200], then the set of pairs
is drawn from each image, but not across images.
`weight` acts as a coefficient for the loss. If a scalar is provided, then the
loss is simply scaled by the given value. If `weight` is a tensor of size
[batch_size], then the total loss for each sample of the batch is rescaled
by the corresponding element in the `weight` vector.
##### Args:
* <b>`predictions`</b>: The predicted outputs, a tensor of size [batch_size, d0, .. dN]
where N+1 is the total number of dimensions in `predictions`.
* <b>`targets`</b>: The ground truth output tensor, whose shape must match the shape of
the `predictions` tensor.
* <b>`weight`</b>: Coefficients for the loss. This must be a scalar, a tensor of shape
[batch_size] or a tensor whose shape matches `predictions`.
* <b>`scope`</b>: The scope for the operations performed in computing the loss.
##### Returns:
A scalar `Tensor` representing the loss value.
##### Raises:
* <b>`ValueError`</b>: If the shape of `predictions` doesn't match that of `targets` or
if the shape of `weight` is invalid.
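A worked sketch of the formula above, with `targets` = [1, 2, 4] and
`predictions` = [1, 3, 5] for a single sample:

```python
import tensorflow as tf

targets = tf.constant([[1.0, 2.0, 4.0]])
predictions = tf.constant([[1.0, 3.0, 5.0]])

# Pairwise terms: ((1-2)-(1-3))^2 = 1, ((1-4)-(1-5))^2 = 1,
# ((2-4)-(3-5))^2 = 0; averaged over 3 pairs, the loss is 2/3.
loss = tf.contrib.losses.sum_of_pairwise_squares(predictions, targets)
```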
- - -
### `tf.contrib.losses.sum_of_squares(predictions, targets, weight=1.0, scope=None)` {#sum_of_squares}
Adds a Sum-of-Squares loss to the training procedure.
`weight` acts as a coefficient for the loss. If a scalar is provided, then the
loss is simply scaled by the given value. If `weight` is a tensor of size
[batch_size], then the total loss for each sample of the batch is rescaled
by the corresponding element in the `weight` vector. If the shape of
`weight` matches the shape of `predictions`, then the loss of each
measurable element of `predictions` is scaled by the corresponding value of
`weight`.
##### Args:
* <b>`predictions`</b>: The predicted outputs.
* <b>`targets`</b>: The ground truth output tensor, same dimensions as 'predictions'.
* <b>`weight`</b>: Coefficients for the loss. This must be a scalar, a tensor of shape
[batch_size] or a tensor whose shape matches `predictions`.
* <b>`scope`</b>: The scope for the operations performed in computing the loss.
##### Returns:
A scalar `Tensor` representing the loss value.
##### Raises:
* <b>`ValueError`</b>: If the shape of `predictions` doesn't match that of `targets` or
if the shape of `weight` is invalid.

@@ -542,6 +542,40 @@
* [`decode_audio`](../../api_docs/python/contrib.ffmpeg.md#decode_audio)
* [`encode_audio`](../../api_docs/python/contrib.ffmpeg.md#encode_audio)
* **[Framework (contrib)](../../api_docs/python/contrib.framework.md)**:
* [`add_arg_scope`](../../api_docs/python/contrib.framework.md#add_arg_scope)
* [`add_model_variable`](../../api_docs/python/contrib.framework.md#add_model_variable)
* [`arg_scope`](../../api_docs/python/contrib.framework.md#arg_scope)
* [`arg_scoped_arguments`](../../api_docs/python/contrib.framework.md#arg_scoped_arguments)
* [`assert_global_step`](../../api_docs/python/contrib.framework.md#assert_global_step)
* [`assert_or_get_global_step`](../../api_docs/python/contrib.framework.md#assert_or_get_global_step)
* [`assert_same_float_dtype`](../../api_docs/python/contrib.framework.md#assert_same_float_dtype)
* [`assert_scalar_int`](../../api_docs/python/contrib.framework.md#assert_scalar_int)
* [`convert_to_tensor_or_sparse_tensor`](../../api_docs/python/contrib.framework.md#convert_to_tensor_or_sparse_tensor)
* [`create_global_step`](../../api_docs/python/contrib.framework.md#create_global_step)
* [`get_global_step`](../../api_docs/python/contrib.framework.md#get_global_step)
* [`get_graph_from_inputs`](../../api_docs/python/contrib.framework.md#get_graph_from_inputs)
* [`get_local_variables`](../../api_docs/python/contrib.framework.md#get_local_variables)
* [`get_model_variables`](../../api_docs/python/contrib.framework.md#get_model_variables)
* [`get_or_create_global_step`](../../api_docs/python/contrib.framework.md#get_or_create_global_step)
* [`get_unique_variable`](../../api_docs/python/contrib.framework.md#get_unique_variable)
* [`get_variables`](../../api_docs/python/contrib.framework.md#get_variables)
* [`get_variables_by_name`](../../api_docs/python/contrib.framework.md#get_variables_by_name)
* [`get_variables_by_suffix`](../../api_docs/python/contrib.framework.md#get_variables_by_suffix)
* [`get_variables_to_restore`](../../api_docs/python/contrib.framework.md#get_variables_to_restore)
* [`has_arg_scope`](../../api_docs/python/contrib.framework.md#has_arg_scope)
* [`is_non_decreasing`](../../api_docs/python/contrib.framework.md#is_non_decreasing)
* [`is_numeric_tensor`](../../api_docs/python/contrib.framework.md#is_numeric_tensor)
* [`is_strictly_increasing`](../../api_docs/python/contrib.framework.md#is_strictly_increasing)
* [`local_variable`](../../api_docs/python/contrib.framework.md#local_variable)
* [`model_variable`](../../api_docs/python/contrib.framework.md#model_variable)
* [`reduce_sum_n`](../../api_docs/python/contrib.framework.md#reduce_sum_n)
* [`safe_embedding_lookup_sparse`](../../api_docs/python/contrib.framework.md#safe_embedding_lookup_sparse)
* [`variable`](../../api_docs/python/contrib.framework.md#variable)
* [`VariableDeviceChooser`](../../api_docs/python/contrib.framework.md#VariableDeviceChooser)
* [`with_same_shape`](../../api_docs/python/contrib.framework.md#with_same_shape)
* [`with_shape`](../../api_docs/python/contrib.framework.md#with_shape)
* **[Layers (contrib)](../../api_docs/python/contrib.layers.md)**:
* [`apply_regularization`](../../api_docs/python/contrib.layers.md#apply_regularization)
* [`convolution2d`](../../api_docs/python/contrib.layers.md#convolution2d)
@@ -592,6 +626,19 @@
* [`TensorFlowRNNRegressor`](../../api_docs/python/contrib.learn.md#TensorFlowRNNRegressor)
* [`train`](../../api_docs/python/contrib.learn.md#train)
* **[Losses (contrib)](../../api_docs/python/contrib.losses.md)**:
* [`absolute_difference`](../../api_docs/python/contrib.losses.md#absolute_difference)
* [`add_loss`](../../api_docs/python/contrib.losses.md#add_loss)
* [`cosine_distance`](../../api_docs/python/contrib.losses.md#cosine_distance)
* [`get_losses`](../../api_docs/python/contrib.losses.md#get_losses)
* [`get_regularization_losses`](../../api_docs/python/contrib.losses.md#get_regularization_losses)
* [`get_total_loss`](../../api_docs/python/contrib.losses.md#get_total_loss)
* [`log_loss`](../../api_docs/python/contrib.losses.md#log_loss)
* [`sigmoid_cross_entropy`](../../api_docs/python/contrib.losses.md#sigmoid_cross_entropy)
* [`softmax_cross_entropy`](../../api_docs/python/contrib.losses.md#softmax_cross_entropy)
* [`sum_of_pairwise_squares`](../../api_docs/python/contrib.losses.md#sum_of_pairwise_squares)
* [`sum_of_squares`](../../api_docs/python/contrib.losses.md#sum_of_squares)
* **[Metrics (contrib)](../../api_docs/python/contrib.metrics.md)**:
* [`accuracy`](../../api_docs/python/contrib.metrics.md#accuracy)
* [`auc_using_histogram`](../../api_docs/python/contrib.metrics.md#auc_using_histogram)


@@ -54,8 +54,10 @@ def get_module_to_name():
tf.contrib.copy_graph: "tf.contrib.copy_graph",
tf.contrib.distributions: "tf.contrib.distributions",
tf.contrib.ffmpeg: "tf.contrib.ffmpeg",
tf.contrib.framework: "tf.contrib.framework",
tf.contrib.layers: "tf.contrib.layers",
tf.contrib.learn: "tf.contrib.learn",
tf.contrib.losses: "tf.contrib.losses",
tf.contrib.metrics: "tf.contrib.metrics",
tf.contrib.util: "tf.contrib.util",
}
@@ -140,8 +142,10 @@ def all_libraries(module_to_name, members, documented):
library("contrib.distributions", "Statistical distributions (contrib)",
tf.contrib.distributions),
library("contrib.ffmpeg", "FFmpeg (contrib)", ffmpeg),
library("contrib.framework", "Framework (contrib)", tf.contrib.framework),
library("contrib.layers", "Layers (contrib)", tf.contrib.layers),
library("contrib.learn", "Learn (contrib)", tf.contrib.learn),
library("contrib.losses", "Losses (contrib)", tf.contrib.losses),
library("contrib.metrics", "Metrics (contrib)", tf.contrib.metrics),
library("contrib.util", "Utilities (contrib)", tf.contrib.util),
library("contrib.copy_graph", "Copying Graph Elements (contrib)",