Adding documentation about contrib.losses (#11600)

* Update contrib.lossses.md

Adding note that this module is deprecated

* Update __init__.py

Add spaces so that this shows up well on the website

* Update __init__.py

Alignment does not comply with PEP8, minor cleanup of comments

* Update README.md

Notify readers that this module is deprecated

* Update losses_op.py

-Clarify deprecated information
-Correct minor error
-Minor grammar fixes

* Grammar fix
Alan Yee authored 2017-07-19 15:08:44 -07:00, committed by Jonathan Hseu
parent 5a44a711bd
commit ad46b6f9de
4 changed files with 35 additions and 19 deletions


@@ -4,7 +4,7 @@
 # you may not use this file except in compliance with the License.
 # You may obtain a copy of the License at
 #
-#     http://www.apache.org/licenses/LICENSE-2.0
+#     https://www.apache.org/licenses/LICENSE-2.0
 #
 # Unless required by applicable law or agreed to in writing, software
 # distributed under the License is distributed on an "AS IS" BASIS,
@@ -14,15 +14,20 @@
 # ==============================================================================
 """Non-core alias for the deprecated tf.X_summary ops.
 
-For TensorFlow 1.0, we have re-organized the TensorFlow summary ops into a
+For TensorFlow 1.0, we have reorganized the TensorFlow summary ops into a
 submodule, and made some semantic tweaks. The first thing to note is that we
 moved the APIs around as follows:
 
 tf.scalar_summary -> tf.summary.scalar
+
 tf.histogram_summary -> tf.summary.histogram
+
 tf.audio_summary -> tf.summary.audio
+
 tf.image_summary -> tf.summary.image
+
 tf.merge_summary -> tf.summary.merge
+
 tf.merge_all_summaries -> tf.summary.merge_all
 
 We think this is a cleaner API and will improve long-term discoverability and
@@ -35,14 +40,14 @@ Previously, the tag was allowed to be any unique string, and had no relation
 to the summary op generating it, and no relation to the TensorFlow name system.
 This made it very difficult to write re-usable code that would add summary
 ops to the graph. If you had a function that would add summary ops, you would
-need to manually pass in a name scope to that function to create de-duplicated
+need to manually pass in a name scope to that function to create deduplicated
 tags, otherwise your program would fail with a runtime error due to tag
 collision.
 
 The new summary APIs under tf.summary throw away the "tag" as an independent
-concept; instead, the first argument is the node name. This means that summary
-tags now automatically inherit the surrounding TF name scope, and automatically
-are deduplicated if there is a conflict. However, now the only allowed
+concept; instead, the first argument is the node name. So summary tags now
+automatically inherit the surrounding TF name scope, and automatically
+are deduplicated if there is a conflict. Now however, the only allowed
 characters are alphanumerics, underscores, and forward slashes. To make
 migration easier, the new APIs automatically convert illegal characters to
 underscores.
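The character rule described above can be sketched in pure Python. The helper below is hypothetical (not TensorFlow's implementation); it keeps alphanumerics, underscores, and forward slashes, and replaces everything else with underscores:

```python
import re

# Hypothetical helper, not TensorFlow's actual code: replace every character
# that is not alphanumeric, an underscore, or a forward slash.
def sanitize_summary_name(name):
    return re.sub(r"[^A-Za-z0-9_/]", "_", name)
```

For example, `sanitize_summary_name("loss (train)")` yields `"loss__train_"`, while a name that already satisfies the rule, such as `"tower_0/xent"`, passes through unchanged.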
@@ -75,7 +80,7 @@ to the new summary ops:
 tf.summary.scalar requires a single scalar name and scalar value. In most
 cases, you can create tf.summary.scalars in a loop to get the same behavior
 
-As before, TensorBoard will group charts by the top-level name scope. This may
+As before, TensorBoard groups charts by the top-level name scope. This may
 be inconvenient, since in the new summary ops the summary will inherit that
 name scope without user control. We plan to add more grouping mechanisms to
 TensorBoard, so it will be possible to specify the TensorBoard group for
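The loop pattern suggested above can be sketched with the summary call stubbed out, so the example stays self-contained. `scalar_summaries_for_vector` is a made-up helper that merely records the (name, value) pairs that one per-element scalar-summary call each would receive:

```python
# Sketch only: record the (name, value) pairs that a per-element scalar
# summary call inside a loop would be given. No TensorFlow involved.
def scalar_summaries_for_vector(base_name, values):
    record = []
    for i, v in enumerate(values):
        # In real code, each pair would feed one scalar-summary op.
        record.append(("%s_%d" % (base_name, i), v))
    return record
```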


@@ -1,7 +1,13 @@
 # TensorFlow contrib losses.
 
+## Deprecated
+
+This module is deprecated. Instructions for updating: Use tf.losses instead.
+
 ## losses
 
+Note: By default all the losses are collected into the GraphKeys.LOSSES collection.
+
 Loss operations for use in training models, typically with signature like the
 following:


@@ -301,7 +301,7 @@ def absolute_difference(predictions, labels=None, weights=1.0, scope=None):
 
 @deprecated("2016-12-30",
             "Use tf.losses.sigmoid_cross_entropy instead. Note that the order "
-            "of the predictions and labels arguments was changed.")
+            "of the predictions and labels arguments has been changed.")
 def sigmoid_cross_entropy(
     logits, multi_class_labels, weights=1.0, label_smoothing=0, scope=None):
   """Creates a cross-entropy loss using tf.nn.sigmoid_cross_entropy_with_logits.
@@ -436,7 +436,7 @@ def sparse_softmax_cross_entropy(logits, labels, weights=1.0, scope=None):
 
 @deprecated("2016-12-30",
             "Use tf.losses.log_loss instead. Note that the order of the "
-            "predictions and labels arguments was changed.")
+            "predictions and labels arguments has been changed.")
 def log_loss(predictions, labels=None, weights=1.0, epsilon=1e-7, scope=None):
   """Adds a Log Loss term to the training procedure.
 
@@ -477,7 +477,8 @@ def log_loss(predictions, labels=None, weights=1.0, epsilon=1e-7, scope=None):
 
 @deprecated("2016-12-30",
             "Use tf.losses.hinge_loss instead. Note that the order of the "
-            "predictions and labels arguments were changed.")
+            "logits and labels arguments has been changed, and to stay "
+            "unweighted, reduction=Reduction.NONE")
 def hinge_loss(logits, labels=None, scope=None):
   """Method that returns the loss tensor for hinge loss.
 
@@ -488,8 +489,8 @@ def hinge_loss(logits, labels=None, scope=None):
     scope: The scope for the operations performed in computing the loss.
 
   Returns:
-    A `Tensor` of same shape as `logits` and `labels` representing the loss
-    values across the batch.
+    An unweighted `Tensor` of same shape as `logits` and `labels` representing
+    the loss values across the batch.
 
   Raises:
     ValueError: If the shapes of `logits` and `labels` don't match.
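The unweighted per-example values described in the Returns section correspond, in sketch form, to the usual hinge formula with {0, 1} labels mapped to {-1, +1}. This pure-Python version is illustrative only, not TensorFlow's code:

```python
# Illustrative per-element hinge loss: a label in {0, 1} is mapped to
# {-1, +1}, then loss = max(0, 1 - mapped_label * logit). One value per
# example, with no weighting or reduction applied.
def hinge_loss_elem(logit, label):
    mapped = 2.0 * label - 1.0
    return max(0.0, 1.0 - mapped * logit)
```

Correctly classified examples with margin at least 1 (e.g. logit 2.0 for label 1) incur zero loss; examples inside the margin or misclassified incur a loss growing linearly with the violation.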
@@ -541,7 +542,7 @@ def mean_squared_error(predictions, labels=None, weights=1.0, scope=None):
 
 @deprecated("2016-12-30",
             "Use tf.losses.mean_pairwise_squared_error instead. Note that the "
-            "order of the predictions and labels arguments was changed.")
+            "order of the predictions and labels arguments has been changed.")
 def mean_pairwise_squared_error(
     predictions, labels=None, weights=1.0, scope=None):
   """Adds a pairwise-errors-squared loss to the training procedure.


@@ -1,8 +1,12 @@
 # Losses (contrib)
 
+## Deprecated
+
+This module is deprecated. Instructions for updating: Use @{tf.losses} instead.
+
 ## Loss operations for use in neural networks.
 
-Note: By default all the losses are collected into the `GraphKeys.LOSSES`
+Note: By default, all the losses are collected into the `GraphKeys.LOSSES`
 collection.
 
 All of the loss functions take a pair of predictions and ground truth labels,