Adding documentation about contrib.losses (#11600)
* Update contrib.losses.md: add a note that this module is deprecated
* Update __init__.py: add spaces so that this shows up well on the website
* Update __init__.py: fix alignment that does not comply with PEP8; minor cleanup of comments
* Update README.md: notify readers that this module is deprecated
* Update losses_op.py: clarify deprecation information, correct a minor error, minor grammar fixes
* Grammar fix
parent 5a44a711bd
commit ad46b6f9de
@@ -4,7 +4,7 @@
 # you may not use this file except in compliance with the License.
 # You may obtain a copy of the License at
 #
-# http://www.apache.org/licenses/LICENSE-2.0
+# https://www.apache.org/licenses/LICENSE-2.0
 #
 # Unless required by applicable law or agreed to in writing, software
 # distributed under the License is distributed on an "AS IS" BASIS,
@@ -14,15 +14,20 @@
 # ==============================================================================
 """Non-core alias for the deprecated tf.X_summary ops.
 
-For TensorFlow 1.0, we have re-organized the TensorFlow summary ops into a
+For TensorFlow 1.0, we have reorganized the TensorFlow summary ops into a
 submodule, and made some semantic tweaks. The first thing to note is that we
 moved the APIs around as follows:
 
-tf.scalar_summary -> tf.summary.scalar
-tf.histogram_summary -> tf.summary.histogram
-tf.audio_summary -> tf.summary.audio
-tf.image_summary -> tf.summary.image
-tf.merge_summary -> tf.summary.merge
+tf.scalar_summary -> tf.summary.scalar
+
+tf.histogram_summary -> tf.summary.histogram
+
+tf.audio_summary -> tf.summary.audio
+
+tf.image_summary -> tf.summary.image
+
+tf.merge_summary -> tf.summary.merge
+
 tf.merge_all_summaries -> tf.summary.merge_all
 
 We think this is a cleaner API and will improve long-term discoverability and
@@ -35,14 +40,14 @@ Previously, the tag was allowed to be any unique string, and had no relation
 to the summary op generating it, and no relation to the TensorFlow name system.
 This made it very difficult to write re-usable code that would add summary
 ops to the graph. If you had a function that would add summary ops, you would
-need to manually pass in a name scope to that function to create de-duplicated
+need to manually pass in a name scope to that function to create deduplicated
 tags, otherwise your program would fail with a runtime error due to tag
 collision.
 
 The new summary APIs under tf.summary throw away the "tag" as an independent
-concept; instead, the first argument is the node name. This means that summary
-tags now automatically inherit the surrounding TF name scope, and automatically
-are deduplicated if there is a conflict. However, now the only allowed
+concept; instead, the first argument is the node name. So summary tags now
+automatically inherit the surrounding TF name scope, and automatically
+are deduplicated if there is a conflict. Now however, the only allowed
 characters are alphanumerics, underscores, and forward slashes. To make
 migration easier, the new APIs automatically convert illegal characters to
 underscores.
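The renaming and the name-scope behavior described in this docstring can be sketched as follows; the helper function and tensors below are hypothetical illustrations, not part of the commit:

```python
import tensorflow as tf

def add_loss_summary(loss):
  # With tf.summary, the first argument is the node name; the final tag
  # inherits the enclosing name scope, so a reusable helper like this no
  # longer needs a manually passed-in scope to avoid tag collisions.
  tf.summary.scalar("loss", loss)        # was: tf.scalar_summary("loss", loss)

with tf.name_scope("tower_0"):
  add_loss_summary(tf.constant(0.5))     # tag: "tower_0/loss"
with tf.name_scope("tower_1"):
  add_loss_summary(tf.constant(0.7))     # tag: "tower_1/loss"

merged = tf.summary.merge_all()          # was: tf.merge_all_summaries()
```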
@@ -75,7 +80,7 @@ to the new summary ops:
 tf.summary.scalar requires a single scalar name and scalar value. In most
 cases, you can create tf.summary.scalars in a loop to get the same behavior
 
-As before, TensorBoard will group charts by the top-level name scope. This may
+As before, TensorBoard groups charts by the top-level name scope. This may
 be inconvenient, since in the new summary ops the summary will inherit that
 name scope without user control. We plan to add more grouping mechanisms to
 TensorBoard, so it will be possible to specify the TensorBoard group for
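A minimal sketch of the loop pattern mentioned in this hunk, assuming a hypothetical vector of per-tower losses:

```python
import tensorflow as tf

tower_losses = tf.constant([0.5, 0.7, 0.9])  # hypothetical values

# tf.summary.scalar takes a single name and a single scalar value, so a
# multi-value summary becomes one op per element:
for i in range(3):
  tf.summary.scalar("loss/tower_%d" % i, tower_losses[i])
```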
@@ -1,7 +1,13 @@
 # TensorFlow contrib losses.
 
+## Deprecated
+
+This module is deprecated. Instructions for updating: Use tf.losses instead.
+
 ## losses
 
+Note: By default all the losses are collected into the GraphKeys.LOSSES collection.
+
 Loss operations for use in training models, typically with signature like the
 following:
 
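A hedged sketch of the migration this README recommends, using hypothetical tensors; both the contrib ops and tf.losses add their result to the GraphKeys.LOSSES collection by default:

```python
import tensorflow as tf

labels = tf.constant([[1.0], [0.0]])
predictions = tf.constant([[0.8], [0.2]])

# Deprecated: loss = tf.contrib.losses.mean_squared_error(predictions, labels)
loss = tf.losses.mean_squared_error(labels, predictions)

# Because the loss lands in GraphKeys.LOSSES, it is picked up by:
total_loss = tf.losses.get_total_loss(add_regularization_losses=False)
```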
@@ -301,7 +301,7 @@ def absolute_difference(predictions, labels=None, weights=1.0, scope=None):
 
 @deprecated("2016-12-30",
             "Use tf.losses.sigmoid_cross_entropy instead. Note that the order "
-            "of the predictions and labels arguments was changed.")
+            "of the predictions and labels arguments has been changed.")
 def sigmoid_cross_entropy(
     logits, multi_class_labels, weights=1.0, label_smoothing=0, scope=None):
   """Creates a cross-entropy loss using tf.nn.sigmoid_cross_entropy_with_logits.
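The argument-order change flagged in this deprecation message, sketched with hypothetical tensors; the contrib op takes logits first, the core op takes labels first:

```python
import tensorflow as tf

multi_class_labels = tf.constant([[1.0, 0.0], [0.0, 1.0]])
logits = tf.constant([[2.0, -1.0], [-1.5, 0.5]])

# Deprecated: tf.contrib.losses.sigmoid_cross_entropy(logits, multi_class_labels)
loss = tf.losses.sigmoid_cross_entropy(multi_class_labels, logits)
```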
@@ -436,7 +436,7 @@ def sparse_softmax_cross_entropy(logits, labels, weights=1.0, scope=None):
 
 @deprecated("2016-12-30",
             "Use tf.losses.log_loss instead. Note that the order of the "
-            "predictions and labels arguments was changed.")
+            "predictions and labels arguments has been changed.")
 def log_loss(predictions, labels=None, weights=1.0, epsilon=1e-7, scope=None):
   """Adds a Log Loss term to the training procedure.
 
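The same pattern applies to log_loss, sketched with hypothetical tensors; epsilon keeps the predictions away from exact 0 and 1 before the log is taken:

```python
import tensorflow as tf

labels = tf.constant([[1.0], [0.0]])
predictions = tf.constant([[0.9], [0.2]])

# Deprecated: tf.contrib.losses.log_loss(predictions, labels)
loss = tf.losses.log_loss(labels, predictions, epsilon=1e-7)
```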
@@ -477,7 +477,8 @@ def log_loss(predictions, labels=None, weights=1.0, epsilon=1e-7, scope=None):
 
 @deprecated("2016-12-30",
             "Use tf.losses.hinge_loss instead. Note that the order of the "
-            "predictions and labels arguments were changed.")
+            "logits and labels arguments has been changed, and to stay "
+            "unweighted, reduction=Reduction.NONE")
 def hinge_loss(logits, labels=None, scope=None):
   """Method that returns the loss tensor for hinge loss.
 
@@ -488,8 +489,8 @@ def hinge_loss(logits, labels=None, scope=None):
     scope: The scope for the operations performed in computing the loss.
 
   Returns:
-    A `Tensor` of same shape as `logits` and `labels` representing the loss
-    values across the batch.
+    An unweighted `Tensor` of same shape as `logits` and `labels` representing the
+    loss values across the batch.
 
   Raises:
     ValueError: If the shapes of `logits` and `labels` don't match.
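A sketch of the replacement call spelled out by the deprecation message above; reduction=Reduction.NONE keeps the unweighted, per-element result that the contrib op returned (tensors are hypothetical):

```python
import tensorflow as tf

labels = tf.constant([[1.0], [0.0]])   # hinge labels in {0, 1}
logits = tf.constant([[0.6], [0.3]])

# Deprecated: tf.contrib.losses.hinge_loss(logits, labels)
loss = tf.losses.hinge_loss(labels, logits,
                            reduction=tf.losses.Reduction.NONE)
```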
@@ -541,7 +542,7 @@ def mean_squared_error(predictions, labels=None, weights=1.0, scope=None):
 
 @deprecated("2016-12-30",
             "Use tf.losses.mean_pairwise_squared_error instead. Note that the "
-            "order of the predictions and labels arguments was changed.")
+            "order of the predictions and labels arguments has been changed.")
 def mean_pairwise_squared_error(
     predictions, labels=None, weights=1.0, scope=None):
   """Adds a pairwise-errors-squared loss to the training procedure.
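Unlike mean_squared_error, the pairwise loss penalizes differences between pairs of elements within each example rather than per-element errors; a hedged sketch of the migrated call with hypothetical tensors:

```python
import tensorflow as tf

labels = tf.constant([[1.0, 2.0, 3.0]])
predictions = tf.constant([[2.0, 3.0, 4.0]])  # same shape, offset by 1.0

# Deprecated: tf.contrib.losses.mean_pairwise_squared_error(predictions, labels)
loss = tf.losses.mean_pairwise_squared_error(labels, predictions)
# All pairwise differences match (the constant offset cancels), so this loss
# is 0, whereas tf.losses.mean_squared_error(labels, predictions) would be 1.0.
```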
@@ -1,8 +1,12 @@
 # Losses (contrib)
 
+## Deprecated
+
+This module is deprecated. Instructions for updating: Use @{tf.losses} instead.
+
 ## Loss operations for use in neural networks.
 
-Note: By default all the losses are collected into the `GraphKeys.LOSSES`
+Note: By default, all the losses are collected into the `GraphKeys.LOSSES`
 collection.
 
 All of the loss functions take a pair of predictions and ground truth labels,
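A minimal sketch of the calling convention this page documents, reusing the absolute_difference signature shown earlier in the diff; the tensors and weight value are hypothetical:

```python
import tensorflow as tf

labels = tf.constant([[1.0], [2.0]])
predictions = tf.constant([[1.5], [2.5]])

# Deprecated contrib convention: predictions first, then labels, plus an
# optional weights argument; the result is added to GraphKeys.LOSSES.
loss = tf.contrib.losses.absolute_difference(predictions, labels, weights=2.0)
losses_in_collection = tf.get_collection(tf.GraphKeys.LOSSES)
```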