Fix various links in Keras docstrings.

PiperOrigin-RevId: 306769967
Change-Id: I21491ccbe9679f34bbd4b28aa27c46c6ba01ac4e
Francois Chollet 2020-04-15 19:49:27 -07:00 committed by TensorFlower Gardener
parent 40a351a745
commit 310465b783
14 changed files with 103 additions and 105 deletions

View File

@@ -86,8 +86,8 @@ def set_floatx(value):
   likely cause numeric stability issues. Instead, mixed precision, which is
   using a mix of float16 and float32, can be used by calling
   `tf.keras.mixed_precision.experimental.set_policy('mixed_float16')`. See the
-  [mixed precision
-  guide](https://www.tensorflow.org/guide/keras/mixed_precision) for details.
+  [mixed precision guide](
+  https://www.tensorflow.org/guide/keras/mixed_precision) for details.

   Arguments:
     value: String; `'float16'`, `'float32'`, or `'float64'`.
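For context, a minimal sketch of the API this hunk documents — `set_floatx` plus the mixed-precision policy the docstring recommends over global float16 (illustrative only, not part of the commit):

```python
import tensorflow as tf

# Set the global Keras float type.
tf.keras.backend.set_floatx('float64')
assert tf.keras.backend.floatx() == 'float64'

# The docstring advises against set_floatx('float16'); use a mixed
# float16/float32 policy instead:
tf.keras.backend.set_floatx('float32')  # restore the default first
tf.keras.mixed_precision.experimental.set_policy('mixed_float16')
```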

View File

@@ -83,15 +83,16 @@ class TensorBoard(callbacks.TensorBoard):
     embeddings_layer_names: a list of names of layers to keep eye on. If None
       or empty list all the embedding layer will be watched.
     embeddings_metadata: a dictionary which maps layer name to a file name in
-      which metadata for this embedding layer is saved. See the
-      [details](https://www.tensorflow.org/how_tos/embedding_viz/#metadata_optional)
+      which metadata for this embedding layer is saved.
+      [Here are details](
+      https://www.tensorflow.org/how_tos/embedding_viz/#metadata_optional)
       about metadata files format. In case if the same metadata file is
       used for all embedding layers, string can be passed.
     embeddings_data: data to be embedded at layers specified in
       `embeddings_layer_names`. Numpy array (if the model has a single input)
-      or list of Numpy arrays (if the model has multiple inputs). Learn [more
-      about
-      embeddings](https://www.tensorflow.org/programmers_guide/embedding)
+      or list of Numpy arrays (if the model has multiple inputs). Learn more
+      about embeddings [in this guide](
+      https://www.tensorflow.org/programmers_guide/embedding).
     update_freq: `'batch'` or `'epoch'` or integer. When using `'batch'`,
       writes the losses and metrics to TensorBoard after each batch. The same
       applies for `'epoch'`. If using an integer, let's say `1000`, the
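These arguments belong to the legacy (TF1-style) TensorBoard callback. A sketch of wiring them together, assuming a compiled `model` and training arrays `x_train`, `y_train` (all hypothetical, as are the paths):

```python
import tensorflow as tf

# Legacy TensorBoard callback; './logs' and 'metadata.tsv' are
# hypothetical paths.
tb = tf.compat.v1.keras.callbacks.TensorBoard(
    log_dir='./logs',
    embeddings_freq=1,                   # visualize embeddings every epoch
    embeddings_layer_names=None,         # None: watch all embedding layers
    embeddings_metadata='metadata.tsv',  # one metadata file for all layers
    update_freq='epoch')

model.fit(x_train, y_train, epochs=5, callbacks=[tb])
```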

View File

@@ -80,7 +80,7 @@ class InputLayer(base_layer.Layer):
     ragged: Boolean, whether the placeholder created is meant to be ragged.
       In this case, values of 'None' in the 'shape' argument represent
       ragged dimensions. For more information about RaggedTensors, see
-      https://www.tensorflow.org/guide/ragged_tensors.
+      [this guide](https://www.tensorflow.org/guide/ragged_tensors).
       Default to False.
     name: Optional name of the layer (string).
   """
@@ -231,7 +231,7 @@ def Input(  # pylint: disable=invalid-name
     ragged. Only one of 'ragged' and 'sparse' can be True. In this case,
       values of 'None' in the 'shape' argument represent ragged dimensions.
       For more information about RaggedTensors, see
-      https://www.tensorflow.org/guide/ragged_tensors.
+      [this guide](https://www.tensorflow.org/guide/ragged_tensors).
     **kwargs: deprecated arguments support. Supports `batch_shape` and
       `batch_input_shape`.
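A minimal sketch of the `ragged=True` input path, assuming a TF version with ragged Keras input support (2.1+):

```python
import tensorflow as tf

# Placeholder for variable-length sequences: each row may have a
# different number of timesteps.
inputs = tf.keras.Input(shape=(None,), dtype=tf.float32, ragged=True)
# Row-wise mean over the ragged time dimension.
outputs = tf.keras.layers.Lambda(lambda t: tf.reduce_mean(t, axis=1))(inputs)
model = tf.keras.Model(inputs, outputs)

ragged = tf.ragged.constant([[3., 1., 4.], [1., 5.], [9.]])
print(model(ragged))  # one scalar mean per row
```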

View File

@@ -163,9 +163,6 @@ class Model(network.Network, version_utils.ModelVersionSelector):
   Once the model is created, you can config the model with losses and metrics
   with `model.compile()`, train the model with `model.fit()`, or use the model
   to do prediction with `model.predict()`.
-
-  Checkout [guide](https://www.tensorflow.org/guide/keras/overview) for
-  additional details.
   """
   _TF_MODULE_IGNORED_PROPERTIES = frozenset(
       itertools.chain(('_train_counter', '_test_counter', '_predict_counter',
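The workflow named in this docstring — configure with `compile()`, train with `fit()`, infer with `predict()` — as a runnable sketch with random data:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse', metrics=['mae'])
model.fit(np.random.random((32, 4)), np.random.random((32, 1)), epochs=2)
preds = model.predict(np.random.random((8, 4)))
```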

View File

@@ -46,8 +46,8 @@ def model_to_estimator(
   model to an Estimator for use with downstream systems.

   For usage example, please see:
-  [Creating estimators from Keras
-  Models](https://www.tensorflow.org/guide/estimators#creating_estimators_from_keras_models).
+  [Creating estimators from Keras Models](
+  https://www.tensorflow.org/guide/estimators#creating_estimators_from_keras_models).

   Sample Weights:
   Estimators returned by `model_to_estimator` are configured so that they can
@@ -144,8 +144,8 @@ def model_to_estimator_v2(keras_model=None,
   model to an Estimator for use with downstream systems.

   For usage example, please see:
-  [Creating estimators from Keras
-  Models](https://www.tensorflow.org/guide/estimators#creating_estimators_from_keras_models).
+  [Creating estimators from Keras Models](
+  https://www.tensorflow.org/guide/estimators#creating_estimators_from_keras_models).

   Sample Weights:
   Estimators returned by `model_to_estimator` are configured so that they can
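A minimal sketch of the conversion itself (the model must already be compiled):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')

# Wrap the compiled Keras model as an Estimator for downstream systems.
estimator = tf.keras.estimator.model_to_estimator(keras_model=model)
```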

View File

@@ -458,8 +458,8 @@ class ConvLSTM2DCell(DropoutRNNCellMixin, Layer):
     unit_forget_bias: Boolean.
       If True, add 1 to the bias of the forget gate at initialization.
       Use in combination with `bias_initializer="zeros"`.
-      This is recommended in [Jozefowicz et al.]
-      (http://www.jmlr.org/proceedings/papers/v37/jozefowicz15.pdf)
+      This is recommended in [Jozefowicz et al., 2015](
+      http://www.jmlr.org/proceedings/papers/v37/jozefowicz15.pdf)
     kernel_regularizer: Regularizer function applied to
       the `kernel` weights matrix.
     recurrent_regularizer: Regularizer function applied to
@@ -739,8 +739,8 @@ class ConvLSTM2D(ConvRNN2D):
     unit_forget_bias: Boolean.
       If True, add 1 to the bias of the forget gate at initialization.
       Use in combination with `bias_initializer="zeros"`.
-      This is recommended in [Jozefowicz et al.]
-      (http://www.jmlr.org/proceedings/papers/v37/jozefowicz15.pdf)
+      This is recommended in [Jozefowicz et al., 2015](
+      http://www.jmlr.org/proceedings/papers/v37/jozefowicz15.pdf)
     kernel_regularizer: Regularizer function applied to
       the `kernel` weights matrix.
     recurrent_regularizer: Regularizer function applied to
@@ -807,10 +807,9 @@ class ConvLSTM2D(ConvRNN2D):
     ValueError: in case of invalid constructor arguments.

   References:
-    - [Convolutional LSTM Network: A Machine Learning Approach for
-      Precipitation Nowcasting](http://arxiv.org/abs/1506.04214v1)
-      The current implementation does not include the feedback loop on the
-      cells output.
+    - [Shi et al., 2015](http://arxiv.org/abs/1506.04214v1)
+      (the current implementation does not include the feedback loop on the
+      cells output).
   """

   def __init__(self,
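For orientation, a sketch of the layer these hunks document; `unit_forget_bias=True` is the Jozefowicz et al. (2015) recommendation cited above:

```python
import tensorflow as tf

# ConvLSTM2D consumes 5D input: (samples, time, rows, cols, channels).
layer = tf.keras.layers.ConvLSTM2D(
    filters=16,
    kernel_size=(3, 3),
    padding='same',
    unit_forget_bias=True,
    return_sequences=False)

frames = tf.random.normal((4, 10, 32, 32, 1))  # 4 clips of 10 frames
print(layer(frames).shape)  # (4, 32, 32, 16)
```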

View File

@@ -92,8 +92,8 @@ class Masking(Layer):
   # The time step 3 and 5 will be skipped from LSTM calculation.
   ```

-  See [the masking and padding
-  guide](https://www.tensorflow.org/guide/keras/masking_and_padding)
+  See [the masking and padding guide](
+  https://www.tensorflow.org/guide/keras/masking_and_padding)
   for more details.
   """
@@ -734,15 +734,15 @@ class Lambda(Layer):
   The `Lambda` layer exists so that arbitrary TensorFlow functions
   can be used when constructing `Sequential` and Functional API
   models. `Lambda` layers are best suited for simple operations or
   quick experimentation. For more advanced usecases, follow
   [this guide](https://www.tensorflow.org/guide/keras/custom_layers_and_models)
   for subclassing `tf.keras.layers.Layer`.

   The main reason to subclass `tf.keras.layers.Layer` instead of using a
   `Lambda` layer is saving and inspecting a Model. `Lambda` layers
   are saved by serializing the Python bytecode, whereas subclassed
   Layers can be saved via overriding their `get_config` method. Overriding
   `get_config` improves the portability of Models. Models that rely on
   subclassed Layers are also often easier to visualize and reason about.

   Examples:
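A sketch of the contrast the docstring draws, with a hypothetical `Scale` layer standing in for the subclassing route:

```python
import tensorflow as tf

# Quick experimentation: a Lambda layer (saved via Python bytecode).
scale_lambda = tf.keras.layers.Lambda(lambda x: x * 2.0)

# The portable alternative: subclass Layer and override get_config.
class Scale(tf.keras.layers.Layer):

  def __init__(self, factor=2.0, **kwargs):
    super(Scale, self).__init__(**kwargs)
    self.factor = factor

  def call(self, inputs):
    return inputs * self.factor

  def get_config(self):
    config = super(Scale, self).get_config()
    config.update({'factor': self.factor})
    return config
```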

View File

@@ -88,8 +88,8 @@ class BatchNormalizationBase(Layer):
     gamma_regularizer: Optional regularizer for the gamma weight.
     beta_constraint: Optional constraint for the beta weight.
     gamma_constraint: Optional constraint for the gamma weight.
-    renorm: Whether to use Batch Renormalization
-      (https://arxiv.org/abs/1702.03275). This adds extra variables during
+    renorm: Whether to use [Batch Renormalization](
+      https://arxiv.org/abs/1702.03275). This adds extra variables during
       training. The inference is the same for either value of this parameter.
     renorm_clipping: A dictionary that may map keys 'rmax', 'rmin', 'dmax' to
       scalar `Tensors` used to clip the renorm correction. The correction
@@ -164,9 +164,9 @@ class BatchNormalizationBase(Layer):
   \\({y_i} = {\gamma * \hat{x_i} + \beta}\\)

-  References:
-  - [Batch Normalization: Accelerating Deep Network Training by Reducing
-    Internal Covariate Shift](https://arxiv.org/abs/1502.03167)
+  Reference:
+  - [Ioffe and Szegedy, 2015](https://arxiv.org/abs/1502.03167).
   """
   # By default, the base class uses V2 behavior. The BatchNormalization V1
@@ -998,8 +998,8 @@ class LayerNormalization(Layer):
   Output shape:
     Same shape as input.

-  References:
-  - [Layer Normalization](https://arxiv.org/abs/1607.06450)
+  Reference:
+  - [Lei Ba et al., 2016](https://arxiv.org/abs/1607.06450).
   """

   def __init__(self,
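A sketch of the two layers touched here, including the `renorm` and `renorm_clipping` arguments re-documented above:

```python
import tensorflow as tf

# Batch Renormalization: extra correction variables during training,
# identical behavior at inference.
bn = tf.keras.layers.BatchNormalization(
    renorm=True,
    renorm_clipping={'rmax': 3.0, 'rmin': 1.0 / 3.0, 'dmax': 5.0})

# Layer Normalization: normalizes across features, so it does not
# depend on batch statistics.
ln = tf.keras.layers.LayerNormalization(axis=-1)

x = tf.random.normal((8, 16))
print(bn(x, training=True).shape, ln(x).shape)  # (8, 16) (8, 16)
```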

View File

@@ -76,8 +76,8 @@ class SyncBatchNormalization(normalization.BatchNormalizationBase):
     gamma_regularizer: Optional regularizer for the gamma weight.
     beta_constraint: Optional constraint for the beta weight.
     gamma_constraint: Optional constraint for the gamma weight.
-    renorm: Whether to use Batch Renormalization
-      (https://arxiv.org/abs/1702.03275). This adds extra variables during
+    renorm: Whether to use [Batch Renormalization](
+      https://arxiv.org/abs/1702.03275). This adds extra variables during
       training. The inference is the same for either value of this parameter.
     renorm_clipping: A dictionary that may map keys 'rmax', 'rmin', 'dmax' to
       scalar `Tensors` used to clip the renorm correction. The correction
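A sketch of where this layer fits, assuming the `tf.keras.layers.experimental.SyncBatchNormalization` export available in TF releases of this era; it aggregates batch statistics across replicas of a distribution strategy:

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
  model = tf.keras.Sequential([
      tf.keras.layers.Dense(16, input_shape=(8,)),
      tf.keras.layers.experimental.SyncBatchNormalization(),
      tf.keras.layers.Dense(1),
  ])
  model.compile(optimizer='adam', loss='mse')
```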

View File

@@ -2233,8 +2233,8 @@ class LSTMCell(DropoutRNNCellMixin, Layer):
     unit_forget_bias: Boolean.
       If True, add 1 to the bias of the forget gate at initialization.
       Setting it to true will also force `bias_initializer="zeros"`.
-      This is recommended in [Jozefowicz et
-      al.](http://www.jmlr.org/proceedings/papers/v37/jozefowicz15.pdf)
+      This is recommended in [Jozefowicz et al., 2015](
+      http://www.jmlr.org/proceedings/papers/v37/jozefowicz15.pdf)
     kernel_regularizer: Regularizer function applied to
       the `kernel` weights matrix.
     recurrent_regularizer: Regularizer function applied to
@@ -2503,7 +2503,8 @@ class PeepholeLSTMCell(LSTMCell):
   well as the previous hidden state (which is what LSTMCell is limited to).
   This allows PeepholeLSTMCell to better learn precise timings over LSTMCell.

-  From [Gers et al.](http://www.jmlr.org/papers/volume3/gers02a/gers02a.pdf):
+  From [Gers et al., 2002](
+  http://www.jmlr.org/papers/volume3/gers02a/gers02a.pdf):

   "We find that LSTM augmented by 'peephole connections' from its internal
   cells to its multiplicative gates can learn the fine distinction between
@@ -2512,9 +2513,7 @@ class PeepholeLSTMCell(LSTMCell):
   The peephole implementation is based on:
-  [Long short-term memory recurrent neural network architectures for
-   large scale acoustic modeling.
-  ](https://research.google.com/pubs/archive/43905.pdf)
+  [Sak et al., 2014](https://research.google.com/pubs/archive/43905.pdf)

   Example:
@@ -2601,8 +2600,8 @@ class LSTM(RNN):
     unit_forget_bias: Boolean.
       If True, add 1 to the bias of the forget gate at initialization.
       Setting it to true will also force `bias_initializer="zeros"`.
-      This is recommended in [Jozefowicz et
-      al.](http://www.jmlr.org/proceedings/papers/v37/jozefowicz15.pdf).
+      This is recommended in [Jozefowicz et al., 2015](
+      http://www.jmlr.org/proceedings/papers/v37/jozefowicz15.pdf).
     kernel_regularizer: Regularizer function applied to
       the `kernel` weights matrix.
     recurrent_regularizer: Regularizer function applied to
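A sketch tying the two citations above to code: `unit_forget_bias` on the standard LSTM (Jozefowicz et al., 2015), and the experimental peephole cell (Gers et al., 2002) wrapped in a generic RNN layer:

```python
import tensorflow as tf

# unit_forget_bias=True (the default) is the Jozefowicz et al. setting.
lstm = tf.keras.layers.LSTM(32, unit_forget_bias=True)

# Peephole connections via the experimental cell.
peephole_rnn = tf.keras.layers.RNN(tf.keras.experimental.PeepholeLSTMCell(32))

x = tf.random.normal((4, 10, 8))
print(lstm(x).shape, peephole_rnn(x).shape)  # (4, 32) (4, 32)
```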

View File

@@ -63,8 +63,8 @@ class Loss(object):
   types, and reduce losses explicitly in your training loop. Using 'AUTO' or
   'SUM_OVER_BATCH_SIZE' will raise an error.
-  Please see this custom training [tutorial]
-  (https://www.tensorflow.org/tutorials/distribute/custom_training) for more
+  Please see this custom training [tutorial](
+  https://www.tensorflow.org/tutorials/distribute/custom_training) for more
   details on this.

   You can implement 'SUM_OVER_BATCH_SIZE' using global batch size like:
@@ -88,8 +88,8 @@ class Loss(object):
       this defaults to `SUM_OVER_BATCH_SIZE`. When used with
       `tf.distribute.Strategy`, outside of built-in training loops such as
       `tf.keras` `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE`
-      will raise an error. Please see this custom training [tutorial]
-      (https://www.tensorflow.org/tutorials/distribute/custom_training)
+      will raise an error. Please see this custom training [tutorial](
+      https://www.tensorflow.org/tutorials/distribute/custom_training)
       for more details.
     name: Optional name for the op.
   """
@@ -222,8 +222,8 @@ class LossFunctionWrapper(Loss):
       this defaults to `SUM_OVER_BATCH_SIZE`. When used with
       `tf.distribute.Strategy`, outside of built-in training loops such as
       `tf.keras` `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE`
-      will raise an error. Please see this custom training [tutorial]
-      (https://www.tensorflow.org/tutorials/distribute/custom_training)
+      will raise an error. Please see this custom training [tutorial](
+      https://www.tensorflow.org/tutorials/distribute/custom_training)
      for more details.
     name: (Optional) name for the loss.
     **kwargs: The keyword arguments that are passed on to `fn`.
@@ -305,8 +305,8 @@ class MeanSquaredError(LossFunctionWrapper):
      this defaults to `SUM_OVER_BATCH_SIZE`. When used with
      `tf.distribute.Strategy`, outside of built-in training loops such as
      `tf.keras` `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE`
-      will raise an error. Please see this custom training [tutorial]
-      (https://www.tensorflow.org/tutorials/distribute/custom_training)
+      will raise an error. Please see this custom training [tutorial](
+      https://www.tensorflow.org/tutorials/distribute/custom_training)
      for more details.
     name: Optional name for the op. Defaults to 'mean_squared_error'.
   """
@@ -364,8 +364,8 @@ class MeanAbsoluteError(LossFunctionWrapper):
      this defaults to `SUM_OVER_BATCH_SIZE`. When used with
      `tf.distribute.Strategy`, outside of built-in training loops such as
      `tf.keras` `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE`
-      will raise an error. Please see this custom training [tutorial]
-      (https://www.tensorflow.org/tutorials/distribute/custom_training)
+      will raise an error. Please see this custom training [tutorial](
+      https://www.tensorflow.org/tutorials/distribute/custom_training)
      for more details.
     name: Optional name for the op. Defaults to 'mean_absolute_error'.
   """
@@ -424,8 +424,8 @@ class MeanAbsolutePercentageError(LossFunctionWrapper):
      this defaults to `SUM_OVER_BATCH_SIZE`. When used with
      `tf.distribute.Strategy`, outside of built-in training loops such as
      `tf.keras` `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE`
-      will raise an error. Please see this custom training [tutorial]
-      (https://www.tensorflow.org/tutorials/distribute/custom_training)
+      will raise an error. Please see this custom training [tutorial](
+      https://www.tensorflow.org/tutorials/distribute/custom_training)
      for more details.
     name: Optional name for the op. Defaults to
       'mean_absolute_percentage_error'.
@@ -485,8 +485,8 @@ class MeanSquaredLogarithmicError(LossFunctionWrapper):
      this defaults to `SUM_OVER_BATCH_SIZE`. When used with
      `tf.distribute.Strategy`, outside of built-in training loops such as
      `tf.keras` `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE`
-      will raise an error. Please see this custom training [tutorial]
-      (https://www.tensorflow.org/tutorials/distribute/custom_training)
+      will raise an error. Please see this custom training [tutorial](
+      https://www.tensorflow.org/tutorials/distribute/custom_training)
      for more details.
     name: Optional name for the op. Defaults to
       'mean_squared_logarithmic_error'.
@@ -561,8 +561,8 @@ class BinaryCrossentropy(LossFunctionWrapper):
      this defaults to `SUM_OVER_BATCH_SIZE`. When used with
      `tf.distribute.Strategy`, outside of built-in training loops such as
      `tf.keras` `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE`
-      will raise an error. Please see this custom training [tutorial]
-      (https://www.tensorflow.org/tutorials/distribute/custom_training)
+      will raise an error. Please see this custom training [tutorial](
+      https://www.tensorflow.org/tutorials/distribute/custom_training)
      for more details.
     name: (Optional) Name for the op. Defaults to 'binary_crossentropy'.
   """
@@ -641,8 +641,8 @@ class CategoricalCrossentropy(LossFunctionWrapper):
      this defaults to `SUM_OVER_BATCH_SIZE`. When used with
      `tf.distribute.Strategy`, outside of built-in training loops such as
      `tf.keras` `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE`
-      will raise an error. Please see this custom training [tutorial]
-      (https://www.tensorflow.org/tutorials/distribute/custom_training)
+      will raise an error. Please see this custom training [tutorial](
+      https://www.tensorflow.org/tutorials/distribute/custom_training)
      for more details.
     name: Optional name for the op. Defaults to 'categorical_crossentropy'.
   """
@@ -718,8 +718,8 @@ class SparseCategoricalCrossentropy(LossFunctionWrapper):
      this defaults to `SUM_OVER_BATCH_SIZE`. When used with
      `tf.distribute.Strategy`, outside of built-in training loops such as
      `tf.keras` `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE`
-      will raise an error. Please see this custom training [tutorial]
-      (https://www.tensorflow.org/tutorials/distribute/custom_training)
+      will raise an error. Please see this custom training [tutorial](
+      https://www.tensorflow.org/tutorials/distribute/custom_training)
      for more details.
     name: Optional name for the op. Defaults to
       'sparse_categorical_crossentropy'.
@@ -782,8 +782,8 @@ class Hinge(LossFunctionWrapper):
      this defaults to `SUM_OVER_BATCH_SIZE`. When used with
      `tf.distribute.Strategy`, outside of built-in training loops such as
      `tf.keras` `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE`
-      will raise an error. Please see this custom training [tutorial]
-      (https://www.tensorflow.org/tutorials/distribute/custom_training)
+      will raise an error. Please see this custom training [tutorial](
+      https://www.tensorflow.org/tutorials/distribute/custom_training)
      for more details.
     name: Optional name for the op. Defaults to 'hinge'.
   """
@@ -843,8 +843,8 @@ class SquaredHinge(LossFunctionWrapper):
      this defaults to `SUM_OVER_BATCH_SIZE`. When used with
      `tf.distribute.Strategy`, outside of built-in training loops such as
      `tf.keras` `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE`
-      will raise an error. Please see this custom training [tutorial]
-      (https://www.tensorflow.org/tutorials/distribute/custom_training)
+      will raise an error. Please see this custom training [tutorial](
+      https://www.tensorflow.org/tutorials/distribute/custom_training)
      for more details.
     name: Optional name for the op. Defaults to 'squared_hinge'.
   """
@@ -903,8 +903,8 @@ class CategoricalHinge(LossFunctionWrapper):
      this defaults to `SUM_OVER_BATCH_SIZE`. When used with
      `tf.distribute.Strategy`, outside of built-in training loops such as
      `tf.keras` `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE`
-      will raise an error. Please see this custom training [tutorial]
-      (https://www.tensorflow.org/tutorials/distribute/custom_training)
+      will raise an error. Please see this custom training [tutorial](
+      https://www.tensorflow.org/tutorials/distribute/custom_training)
      for more details.
     name: Optional name for the op. Defaults to 'categorical_hinge'.
   """
@@ -960,8 +960,8 @@ class Poisson(LossFunctionWrapper):
      this defaults to `SUM_OVER_BATCH_SIZE`. When used with
      `tf.distribute.Strategy`, outside of built-in training loops such as
      `tf.keras` `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE`
-      will raise an error. Please see this custom training [tutorial]
-      (https://www.tensorflow.org/tutorials/distribute/custom_training)
+      will raise an error. Please see this custom training [tutorial](
+      https://www.tensorflow.org/tutorials/distribute/custom_training)
      for more details.
     name: Optional name for the op. Defaults to 'poisson'.
   """
@@ -1017,8 +1017,8 @@ class LogCosh(LossFunctionWrapper):
      this defaults to `SUM_OVER_BATCH_SIZE`. When used with
      `tf.distribute.Strategy`, outside of built-in training loops such as
      `tf.keras` `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE`
-      will raise an error. Please see this custom training [tutorial]
-      (https://www.tensorflow.org/tutorials/distribute/custom_training)
+      will raise an error. Please see this custom training [tutorial](
+      https://www.tensorflow.org/tutorials/distribute/custom_training)
      for more details.
     name: Optional name for the op. Defaults to 'log_cosh'.
   """
@@ -1077,8 +1077,8 @@ class KLDivergence(LossFunctionWrapper):
      this defaults to `SUM_OVER_BATCH_SIZE`. When used with
      `tf.distribute.Strategy`, outside of built-in training loops such as
      `tf.keras` `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE`
-      will raise an error. Please see this custom training [tutorial]
-      (https://www.tensorflow.org/tutorials/distribute/custom_training)
+      will raise an error. Please see this custom training [tutorial](
+      https://www.tensorflow.org/tutorials/distribute/custom_training)
      for more details.
     name: Optional name for the op. Defaults to 'kl_divergence'.
   """
@@ -1145,8 +1145,8 @@ class Huber(LossFunctionWrapper):
      this defaults to `SUM_OVER_BATCH_SIZE`. When used with
      `tf.distribute.Strategy`, outside of built-in training loops such as
      `tf.keras` `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE`
-      will raise an error. Please see this custom training [tutorial]
-      (https://www.tensorflow.org/tutorials/distribute/custom_training)
+      will raise an error. Please see this custom training [tutorial](
+      https://www.tensorflow.org/tutorials/distribute/custom_training)
      for more details.
     name: Optional name for the op. Defaults to 'huber_loss'.
   """

View File

@@ -1416,8 +1416,8 @@ class Recall(Metric):
 class SensitivitySpecificityBase(Metric):
   """Abstract base class for computing sensitivity and specificity.

-  For additional information about specificity and sensitivity, see the
-  following: https://en.wikipedia.org/wiki/Sensitivity_and_specificity
+  For additional information about specificity and sensitivity, see
+  [the following](https://en.wikipedia.org/wiki/Sensitivity_and_specificity).
   """

   def __init__(self, value, num_thresholds=200, name=None, dtype=None):
@@ -1523,8 +1523,8 @@ class SensitivityAtSpecificity(SensitivitySpecificityBase):
   If `sample_weight` is `None`, weights default to 1.
   Use `sample_weight` of 0 to mask values.

-  For additional information about specificity and sensitivity, see the
-  following: https://en.wikipedia.org/wiki/Sensitivity_and_specificity
+  For additional information about specificity and sensitivity, see
+  [the following](https://en.wikipedia.org/wiki/Sensitivity_and_specificity).

   Args:
     specificity: A scalar value in range `[0, 1]`.
@@ -1598,8 +1598,8 @@ class SpecificityAtSensitivity(SensitivitySpecificityBase):
   If `sample_weight` is `None`, weights default to 1.
   Use `sample_weight` of 0 to mask values.

-  For additional information about specificity and sensitivity, see the
-  following: https://en.wikipedia.org/wiki/Sensitivity_and_specificity
+  For additional information about specificity and sensitivity, see
+  [the following](https://en.wikipedia.org/wiki/Sensitivity_and_specificity).

   Args:
     sensitivity: A scalar value in range `[0, 1]`.
@@ -1828,13 +1828,14 @@ class AUC(Metric):
       use when discretizing the roc curve. Values must be > 1.
     curve: (Optional) Specifies the name of the curve to be computed, 'ROC'
       [default] or 'PR' for the Precision-Recall-curve.
-    summation_method: (Optional) Specifies the Riemann summation method used
-      (https://en.wikipedia.org/wiki/Riemann_sum): 'interpolation' [default],
-      applies mid-point summation scheme for `ROC`. For PR-AUC, interpolates
-      (true/false) positives but not the ratio that is precision (see Davis
-      & Goadrich 2006 for details); 'minoring' that applies left summation
+    summation_method: (Optional) Specifies the [Riemann summation method](
+      https://en.wikipedia.org/wiki/Riemann_sum) used.
+      'interpolation' (default) applies mid-point summation scheme for `ROC`.
+      For PR-AUC, interpolates (true/false) positives but not the ratio that
+      is precision (see Davis & Goadrich 2006 for details);
+      'minoring' applies left summation
       for increasing intervals and right summation for decreasing intervals;
-      'majoring' that does the opposite.
+      'majoring' does the opposite.
     name: (Optional) string name of the metric instance.
     dtype: (Optional) data type of the metric result.
     thresholds: (Optional) A list of floating point values to use as the
@@ -2226,8 +2227,9 @@ class AUC(Metric):
 class CosineSimilarity(MeanMetricWrapper):
   """Computes the cosine similarity between the labels and predictions.

-  cosine similarity = (a . b) / ||a|| ||b||
-  [Cosine Similarity](https://en.wikipedia.org/wiki/Cosine_similarity)
+  `cosine similarity = (a . b) / ||a|| ||b||`
+
+  See: [Cosine Similarity](https://en.wikipedia.org/wiki/Cosine_similarity).

   This metric keeps the average cosine similarity between `predictions` and
   `labels` over a stream of data.
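A sketch of the two metrics touched by these hunks, including the `summation_method` argument re-documented above:

```python
import tensorflow as tf

# PR-AUC with the interpolated Riemann summation described above.
auc = tf.keras.metrics.AUC(num_thresholds=200, curve='PR',
                           summation_method='interpolation')
auc.update_state([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
print(float(auc.result()))

# Streaming cosine similarity: (a . b) / (||a|| ||b||), averaged over batches.
cos = tf.keras.metrics.CosineSimilarity(axis=-1)
cos.update_state([[0., 1.], [1., 1.]], [[1., 0.], [1., 1.]])
print(float(cos.result()))  # 0.5: mean of 0.0 and 1.0
```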

View File

@@ -15,8 +15,8 @@
 """Keras mixed precision API.

-See [the mixed precision
-guide](https://www.tensorflow.org/guide/keras/mixed_precision) to learn how to
+See [the mixed precision guide](
+https://www.tensorflow.org/guide/keras/mixed_precision) to learn how to
 use the API.
 """

 from __future__ import absolute_import

View File

@@ -57,8 +57,8 @@ class Policy(object):
   not have a single dtype. When the variable dtype does not match the compute
   dtype, variables will be automatically casted to the compute dtype to avoid
   type errors. In this case, `tf.keras.layers.Layer.dtype` refers to the
-  variable dtype, not the compute dtype. See [the mixed precision
-  guide](https://www.tensorflow.org/guide/keras/mixed_precision) for more
+  variable dtype, not the compute dtype. See [the mixed precision guide](
+  https://www.tensorflow.org/guide/keras/mixed_precision) for more
   information on how to use mixed precision.

   Certain policies also have a `tf.mixed_precision.experimental.LossScale`
@@ -119,8 +119,8 @@ class Policy(object):
   `'mixed_bfloat16'`, no loss scaling is done and loss scaling never needs to be
   manually applied.

-  See [the mixed precision
-  guide](https://www.tensorflow.org/guide/keras/mixed_precision) for more
+  See [the mixed precision guide](
+  https://www.tensorflow.org/guide/keras/mixed_precision) for more
   information on using mixed precision

   ### How to use float64 in a Keras model
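The compute-dtype/variable-dtype split described in this docstring, as a sketch:

```python
import tensorflow as tf

# A policy pairs a compute dtype with a variable dtype; under
# 'mixed_float16' the two differ, and Layer.dtype reports the
# variable dtype, as the hunk above explains.
policy = tf.keras.mixed_precision.experimental.Policy('mixed_float16')
print(policy.compute_dtype)   # float16
print(policy.variable_dtype)  # float32

dense = tf.keras.layers.Dense(4, dtype=policy)
print(dense.dtype)  # 'float32' -- the variable dtype
```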