Fix various links in Keras docstrings.

PiperOrigin-RevId: 306769967
Change-Id: I21491ccbe9679f34bbd4b28aa27c46c6ba01ac4e
Author: Francois Chollet, 2020-04-15 19:49:27 -07:00 (committed by TensorFlower Gardener)
parent 40a351a745
commit 310465b783
14 changed files with 103 additions and 105 deletions

View File

@@ -86,8 +86,8 @@ def set_floatx(value):
likely cause numeric stability issues. Instead, mixed precision, which is
using a mix of float16 and float32, can be used by calling
`tf.keras.mixed_precision.experimental.set_policy('mixed_float16')`. See the
- [mixed precision
- guide](https://www.tensorflow.org/guide/keras/mixed_precision) for details.
+ [mixed precision guide](
+ https://www.tensorflow.org/guide/keras/mixed_precision) for details.
Arguments:
value: String; `'float16'`, `'float32'`, or `'float64'`.
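
A minimal sketch of both options, assuming a TF 2.x runtime where the experimental mixed precision API is available:

```python
import tensorflow as tf

# Global default float type for Keras layers and backend ops.
tf.keras.backend.set_floatx('float64')

# For float16 compute, prefer a mixed precision policy over
# set_floatx('float16'): math runs in float16 while variables stay
# in float32 for numeric stability.
tf.keras.mixed_precision.experimental.set_policy('mixed_float16')
```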

View File

@@ -83,15 +83,16 @@ class TensorBoard(callbacks.TensorBoard):
embeddings_layer_names: a list of names of layers to keep eye on. If None
or empty list all the embedding layer will be watched.
embeddings_metadata: a dictionary which maps layer name to a file name in
- which metadata for this embedding layer is saved. See the
- [details](https://www.tensorflow.org/how_tos/embedding_viz/#metadata_optional)
+ which metadata for this embedding layer is saved.
+ [Here are details](
+ https://www.tensorflow.org/how_tos/embedding_viz/#metadata_optional)
about metadata files format. In case if the same metadata file is
used for all embedding layers, string can be passed.
embeddings_data: data to be embedded at layers specified in
`embeddings_layer_names`. Numpy array (if the model has a single input)
- or list of Numpy arrays (if the model has multiple inputs). Learn [more
- about
- embeddings](https://www.tensorflow.org/programmers_guide/embedding)
+ or list of Numpy arrays (if the model has multiple inputs). Learn more
+ about embeddings [in this guide](
+ https://www.tensorflow.org/programmers_guide/embedding).
update_freq: `'batch'` or `'epoch'` or integer. When using `'batch'`,
writes the losses and metrics to TensorBoard after each batch. The same
applies for `'epoch'`. If using an integer, let's say `1000`, the
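
As a rough usage sketch of `update_freq` (the model and data below are illustrative, not from this commit):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer='adam', loss='mse')

# update_freq=1000: write losses/metrics every 1000 samples, rather than
# after every batch ('batch') or every epoch ('epoch').
tb = tf.keras.callbacks.TensorBoard(log_dir='./logs', update_freq=1000)
model.fit(np.random.rand(64, 4), np.random.rand(64, 1),
          epochs=2, callbacks=[tb])
```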

View File

@@ -80,7 +80,7 @@ class InputLayer(base_layer.Layer):
ragged: Boolean, whether the placeholder created is meant to be ragged.
In this case, values of 'None' in the 'shape' argument represent
ragged dimensions. For more information about RaggedTensors, see
- https://www.tensorflow.org/guide/ragged_tensors.
+ [this guide](https://www.tensorflow.org/guide/ragged_tensors).
Default to False.
name: Optional name of the layer (string).
"""
@@ -231,7 +231,7 @@ def Input( # pylint: disable=invalid-name
ragged. Only one of 'ragged' and 'sparse' can be True. In this case,
values of 'None' in the 'shape' argument represent ragged dimensions.
For more information about RaggedTensors, see
- https://www.tensorflow.org/guide/ragged_tensors.
+ [this guide](https://www.tensorflow.org/guide/ragged_tensors).
**kwargs: deprecated arguments support. Supports `batch_shape` and
`batch_input_shape`.
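
A small sketch of a ragged `Input`, assuming TF 2.x ragged support in the functional API (the `Lambda` reduction is just an illustration):

```python
import tensorflow as tf

# `None` in `shape` marks the ragged (variable-length) dimension.
inputs = tf.keras.Input(shape=(None,), ragged=True)
# Reduce over the ragged axis so downstream layers see a dense tensor.
outputs = tf.keras.layers.Lambda(lambda t: tf.reduce_mean(t, axis=1))(inputs)
model = tf.keras.Model(inputs, outputs)

print(model(tf.ragged.constant([[1., 2., 3.], [4.]])))  # [2. 4.]
```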

View File

@@ -163,9 +163,6 @@ class Model(network.Network, version_utils.ModelVersionSelector):
Once the model is created, you can config the model with losses and metrics
with `model.compile()`, train the model with `model.fit()`, or use the model
to do prediction with `model.predict()`.
- Checkout [guide](https://www.tensorflow.org/guide/keras/overview) for
- additional details.
"""
_TF_MODULE_IGNORED_PROPERTIES = frozenset(
itertools.chain(('_train_counter', '_test_counter', '_predict_counter',
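
A minimal sketch of the compile/fit/predict workflow the docstring describes (toy data, illustrative sizes):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse', metrics=['mae'])  # configure
model.fit(np.random.rand(32, 4), np.random.rand(32, 1), epochs=2)  # train
preds = model.predict(np.random.rand(8, 4))  # predict
```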

View File

@@ -46,8 +46,8 @@ def model_to_estimator(
model to an Estimator for use with downstream systems.
For usage example, please see:
- [Creating estimators from Keras
- Models](https://www.tensorflow.org/guide/estimators#creating_estimators_from_keras_models).
+ [Creating estimators from Keras Models](
+ https://www.tensorflow.org/guide/estimators#creating_estimators_from_keras_models).
Sample Weights:
Estimators returned by `model_to_estimator` are configured so that they can
@@ -144,8 +144,8 @@ def model_to_estimator_v2(keras_model=None,
model to an Estimator for use with downstream systems.
For usage example, please see:
- [Creating estimators from Keras
- Models](https://www.tensorflow.org/guide/estimators#creating_estimators_from_keras_models).
+ [Creating estimators from Keras Models](
+ https://www.tensorflow.org/guide/estimators#creating_estimators_from_keras_models).
Sample Weights:
Estimators returned by `model_to_estimator` are configured so that they can
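
A usage sketch, assuming a compiled Keras model (layer sizes are illustrative):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer='adam', loss='mse')

# Convert the compiled Keras model to an Estimator for downstream systems.
estimator = tf.keras.estimator.model_to_estimator(keras_model=model)
```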

View File

@@ -458,8 +458,8 @@ class ConvLSTM2DCell(DropoutRNNCellMixin, Layer):
unit_forget_bias: Boolean.
If True, add 1 to the bias of the forget gate at initialization.
Use in combination with `bias_initializer="zeros"`.
- This is recommended in [Jozefowicz et al.]
- (http://www.jmlr.org/proceedings/papers/v37/jozefowicz15.pdf)
+ This is recommended in [Jozefowicz et al., 2015](
+ http://www.jmlr.org/proceedings/papers/v37/jozefowicz15.pdf)
kernel_regularizer: Regularizer function applied to
the `kernel` weights matrix.
recurrent_regularizer: Regularizer function applied to
@@ -739,8 +739,8 @@ class ConvLSTM2D(ConvRNN2D):
unit_forget_bias: Boolean.
If True, add 1 to the bias of the forget gate at initialization.
Use in combination with `bias_initializer="zeros"`.
- This is recommended in [Jozefowicz et al.]
- (http://www.jmlr.org/proceedings/papers/v37/jozefowicz15.pdf)
+ This is recommended in [Jozefowicz et al., 2015](
+ http://www.jmlr.org/proceedings/papers/v37/jozefowicz15.pdf)
kernel_regularizer: Regularizer function applied to
the `kernel` weights matrix.
recurrent_regularizer: Regularizer function applied to
@@ -807,10 +807,9 @@ class ConvLSTM2D(ConvRNN2D):
ValueError: in case of invalid constructor arguments.
References:
- - [Convolutional LSTM Network: A Machine Learning Approach for
- Precipitation Nowcasting](http://arxiv.org/abs/1506.04214v1)
- The current implementation does not include the feedback loop on the
- cells output.
+ - [Shi et al., 2015](http://arxiv.org/abs/1506.04214v1)
+ (the current implementation does not include the feedback loop on the
+ cells output).
"""
def __init__(self,
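
A usage sketch of `ConvLSTM2D` with the recommended forget-gate setup (shapes are illustrative):

```python
import tensorflow as tf

# unit_forget_bias=True with bias_initializer='zeros' is the combination
# recommended above (Jozefowicz et al., 2015).
layer = tf.keras.layers.ConvLSTM2D(
    filters=8, kernel_size=(3, 3), padding='same',
    unit_forget_bias=True, bias_initializer='zeros')
x = tf.random.normal((2, 4, 16, 16, 1))  # (batch, time, rows, cols, channels)
print(layer(x).shape)  # (2, 16, 16, 8); return_sequences defaults to False
```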

View File

@@ -92,8 +92,8 @@ class Masking(Layer):
# The time step 3 and 5 will be skipped from LSTM calculation.
```
- See [the masking and padding
- guide](https://www.tensorflow.org/guide/keras/masking_and_padding)
+ See [the masking and padding guide](
+ https://www.tensorflow.org/guide/keras/masking_and_padding)
for more details.
"""
@@ -734,15 +734,15 @@ class Lambda(Layer):
The `Lambda` layer exists so that arbitrary TensorFlow functions
can be used when constructing `Sequential` and Functional API
models. `Lambda` layers are best suited for simple operations or
- quick experimentation. For more advanced usecases, follow
+ quick experimentation. For more advanced usecases, follow
[this guide](https://www.tensorflow.org/guide/keras/custom_layers_and_models)
- for subclassing `tf.keras.layers.Layer`.
- The main reason to subclass `tf.keras.layers.Layer` instead of using a
- `Lambda` layer is saving and inspecting a Model. `Lambda` layers
- are saved by serializing the Python bytecode, whereas subclassed
- Layers can be saved via overriding their `get_config` method. Overriding
- `get_config` improves the portability of Models. Models that rely on
+ for subclassing `tf.keras.layers.Layer`.
+ The main reason to subclass `tf.keras.layers.Layer` instead of using a
+ `Lambda` layer is saving and inspecting a Model. `Lambda` layers
+ are saved by serializing the Python bytecode, whereas subclassed
+ Layers can be saved via overriding their `get_config` method. Overriding
+ `get_config` improves the portability of Models. Models that rely on
subclassed Layers are also often easier to visualize and reason about.
Examples:
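
A sketch contrasting the two approaches described above (the `Scale` layer is a hypothetical example, not part of Keras):

```python
import tensorflow as tf

# Quick experimentation: a Lambda layer (saved via Python bytecode).
double = tf.keras.layers.Lambda(lambda x: x * 2.0)

# More portable: a subclassed Layer that overrides get_config().
class Scale(tf.keras.layers.Layer):

  def __init__(self, factor, **kwargs):
    super(Scale, self).__init__(**kwargs)
    self.factor = factor

  def call(self, inputs):
    return inputs * self.factor

  def get_config(self):
    config = super(Scale, self).get_config()
    config.update({'factor': self.factor})
    return config
```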

View File

@@ -88,8 +88,8 @@ class BatchNormalizationBase(Layer):
gamma_regularizer: Optional regularizer for the gamma weight.
beta_constraint: Optional constraint for the beta weight.
gamma_constraint: Optional constraint for the gamma weight.
- renorm: Whether to use Batch Renormalization
- (https://arxiv.org/abs/1702.03275). This adds extra variables during
+ renorm: Whether to use [Batch Renormalization](
+ https://arxiv.org/abs/1702.03275). This adds extra variables during
training. The inference is the same for either value of this parameter.
renorm_clipping: A dictionary that may map keys 'rmax', 'rmin', 'dmax' to
scalar `Tensors` used to clip the renorm correction. The correction
@@ -164,9 +164,9 @@ class BatchNormalizationBase(Layer):
\\({y_i} = {\gamma * \hat{x_i} + \beta}\\)
- References:
- - [Batch Normalization: Accelerating Deep Network Training by Reducing
- Internal Covariate Shift](https://arxiv.org/abs/1502.03167)
+ Reference:
+ - [Ioffe and Szegedy, 2015](https://arxiv.org/abs/1502.03167).
"""
# By default, the base class uses V2 behavior. The BatchNormalization V1
@@ -998,8 +998,8 @@ class LayerNormalization(Layer):
Output shape:
Same shape as input.
- References:
- - [Layer Normalization](https://arxiv.org/abs/1607.06450)
+ Reference:
+ - [Lei Ba et al., 2016](https://arxiv.org/abs/1607.06450).
"""
def __init__(self,
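
A usage sketch of batch renormalization with clipped corrections (the clipping values are illustrative):

```python
import tensorflow as tf

# renorm=True enables Batch Renormalization; renorm_clipping bounds the
# correction terms used during training (inference is unchanged).
bn = tf.keras.layers.BatchNormalization(
    renorm=True,
    renorm_clipping={'rmax': 3.0, 'rmin': 1.0 / 3.0, 'dmax': 5.0})
y = bn(tf.random.normal((16, 8)), training=True)
```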

View File

@@ -76,8 +76,8 @@ class SyncBatchNormalization(normalization.BatchNormalizationBase):
gamma_regularizer: Optional regularizer for the gamma weight.
beta_constraint: Optional constraint for the beta weight.
gamma_constraint: Optional constraint for the gamma weight.
- renorm: Whether to use Batch Renormalization
- (https://arxiv.org/abs/1702.03275). This adds extra variables during
+ renorm: Whether to use [Batch Renormalization](
+ https://arxiv.org/abs/1702.03275). This adds extra variables during
training. The inference is the same for either value of this parameter.
renorm_clipping: A dictionary that may map keys 'rmax', 'rmin', 'dmax' to
scalar `Tensors` used to clip the renorm correction. The correction

View File

@@ -2233,8 +2233,8 @@ class LSTMCell(DropoutRNNCellMixin, Layer):
unit_forget_bias: Boolean.
If True, add 1 to the bias of the forget gate at initialization.
Setting it to true will also force `bias_initializer="zeros"`.
- This is recommended in [Jozefowicz et
- al.](http://www.jmlr.org/proceedings/papers/v37/jozefowicz15.pdf)
+ This is recommended in [Jozefowicz et al., 2015](
+ http://www.jmlr.org/proceedings/papers/v37/jozefowicz15.pdf)
kernel_regularizer: Regularizer function applied to
the `kernel` weights matrix.
recurrent_regularizer: Regularizer function applied to
@@ -2503,7 +2503,8 @@ class PeepholeLSTMCell(LSTMCell):
well as the previous hidden state (which is what LSTMCell is limited to).
This allows PeepholeLSTMCell to better learn precise timings over LSTMCell.
- From [Gers et al.](http://www.jmlr.org/papers/volume3/gers02a/gers02a.pdf):
+ From [Gers et al., 2002](
+ http://www.jmlr.org/papers/volume3/gers02a/gers02a.pdf):
"We find that LSTM augmented by 'peephole connections' from its internal
cells to its multiplicative gates can learn the fine distinction between
@@ -2512,9 +2513,7 @@ class PeepholeLSTMCell(LSTMCell):
The peephole implementation is based on:
- [Long short-term memory recurrent neural network architectures for
- large scale acoustic modeling.
- ](https://research.google.com/pubs/archive/43905.pdf)
+ [Sak et al., 2014](https://research.google.com/pubs/archive/43905.pdf)
Example:
@@ -2601,8 +2600,8 @@ class LSTM(RNN):
unit_forget_bias: Boolean.
If True, add 1 to the bias of the forget gate at initialization.
Setting it to true will also force `bias_initializer="zeros"`.
- This is recommended in [Jozefowicz et
- al.](http://www.jmlr.org/proceedings/papers/v37/jozefowicz15.pdf).
+ This is recommended in [Jozefowicz et al., 2015](
+ http://www.jmlr.org/proceedings/papers/v37/jozefowicz15.pdf).
kernel_regularizer: Regularizer function applied to
the `kernel` weights matrix.
recurrent_regularizer: Regularizer function applied to
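
A usage sketch of the peephole variant, assuming `PeepholeLSTMCell` is exported under `tf.keras.experimental` in this TF version:

```python
import tensorflow as tf

# Peephole connections let the gates read the cell state directly,
# which helps the cell learn precise timings.
cell = tf.keras.experimental.PeepholeLSTMCell(16)
rnn = tf.keras.layers.RNN(cell)
x = tf.random.normal((2, 10, 4))  # (batch, timesteps, features)
print(rnn(x).shape)               # (2, 16)
```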

View File

@@ -63,8 +63,8 @@ class Loss(object):
types, and reduce losses explicitly in your training loop. Using 'AUTO' or
'SUM_OVER_BATCH_SIZE' will raise an error.
- Please see this custom training [tutorial]
- (https://www.tensorflow.org/tutorials/distribute/custom_training) for more
+ Please see this custom training [tutorial](
+ https://www.tensorflow.org/tutorials/distribute/custom_training) for more
details on this.
You can implement 'SUM_OVER_BATCH_SIZE' using global batch size like:
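
A sketch of that pattern, assuming a custom `tf.distribute` training loop (the global batch size is illustrative):

```python
import tensorflow as tf

GLOBAL_BATCH_SIZE = 64  # total batch size across all replicas

loss_obj = tf.keras.losses.CategoricalCrossentropy(
    reduction=tf.keras.losses.Reduction.NONE)

def compute_loss(labels, predictions):
  per_example_loss = loss_obj(labels, predictions)
  # Sum, then divide by the global batch size, not the per-replica size.
  return tf.reduce_sum(per_example_loss) * (1.0 / GLOBAL_BATCH_SIZE)
```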
@@ -88,8 +88,8 @@ class Loss(object):
this defaults to `SUM_OVER_BATCH_SIZE`. When used with
`tf.distribute.Strategy`, outside of built-in training loops such as
`tf.keras` `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE`
- will raise an error. Please see this custom training [tutorial]
- (https://www.tensorflow.org/tutorials/distribute/custom_training)
+ will raise an error. Please see this custom training [tutorial](
+ https://www.tensorflow.org/tutorials/distribute/custom_training)
for more details.
name: Optional name for the op.
"""
@@ -222,8 +222,8 @@ class LossFunctionWrapper(Loss):
this defaults to `SUM_OVER_BATCH_SIZE`. When used with
`tf.distribute.Strategy`, outside of built-in training loops such as
`tf.keras` `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE`
- will raise an error. Please see this custom training [tutorial]
- (https://www.tensorflow.org/tutorials/distribute/custom_training)
+ will raise an error. Please see this custom training [tutorial](
+ https://www.tensorflow.org/tutorials/distribute/custom_training)
for more details.
name: (Optional) name for the loss.
**kwargs: The keyword arguments that are passed on to `fn`.
@@ -305,8 +305,8 @@ class MeanSquaredError(LossFunctionWrapper):
this defaults to `SUM_OVER_BATCH_SIZE`. When used with
`tf.distribute.Strategy`, outside of built-in training loops such as
`tf.keras` `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE`
- will raise an error. Please see this custom training [tutorial]
- (https://www.tensorflow.org/tutorials/distribute/custom_training)
+ will raise an error. Please see this custom training [tutorial](
+ https://www.tensorflow.org/tutorials/distribute/custom_training)
for more details.
name: Optional name for the op. Defaults to 'mean_squared_error'.
"""
@@ -364,8 +364,8 @@ class MeanAbsoluteError(LossFunctionWrapper):
this defaults to `SUM_OVER_BATCH_SIZE`. When used with
`tf.distribute.Strategy`, outside of built-in training loops such as
`tf.keras` `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE`
- will raise an error. Please see this custom training [tutorial]
- (https://www.tensorflow.org/tutorials/distribute/custom_training)
+ will raise an error. Please see this custom training [tutorial](
+ https://www.tensorflow.org/tutorials/distribute/custom_training)
for more details.
name: Optional name for the op. Defaults to 'mean_absolute_error'.
"""
@@ -424,8 +424,8 @@ class MeanAbsolutePercentageError(LossFunctionWrapper):
this defaults to `SUM_OVER_BATCH_SIZE`. When used with
`tf.distribute.Strategy`, outside of built-in training loops such as
`tf.keras` `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE`
- will raise an error. Please see this custom training [tutorial]
- (https://www.tensorflow.org/tutorials/distribute/custom_training)
+ will raise an error. Please see this custom training [tutorial](
+ https://www.tensorflow.org/tutorials/distribute/custom_training)
for more details.
name: Optional name for the op. Defaults to
'mean_absolute_percentage_error'.
@@ -485,8 +485,8 @@ class MeanSquaredLogarithmicError(LossFunctionWrapper):
this defaults to `SUM_OVER_BATCH_SIZE`. When used with
`tf.distribute.Strategy`, outside of built-in training loops such as
`tf.keras` `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE`
- will raise an error. Please see this custom training [tutorial]
- (https://www.tensorflow.org/tutorials/distribute/custom_training)
+ will raise an error. Please see this custom training [tutorial](
+ https://www.tensorflow.org/tutorials/distribute/custom_training)
for more details.
name: Optional name for the op. Defaults to
'mean_squared_logarithmic_error'.
@@ -561,8 +561,8 @@ class BinaryCrossentropy(LossFunctionWrapper):
this defaults to `SUM_OVER_BATCH_SIZE`. When used with
`tf.distribute.Strategy`, outside of built-in training loops such as
`tf.keras` `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE`
- will raise an error. Please see this custom training [tutorial]
- (https://www.tensorflow.org/tutorials/distribute/custom_training)
+ will raise an error. Please see this custom training [tutorial](
+ https://www.tensorflow.org/tutorials/distribute/custom_training)
for more details.
name: (Optional) Name for the op. Defaults to 'binary_crossentropy'.
"""
@@ -641,8 +641,8 @@ class CategoricalCrossentropy(LossFunctionWrapper):
this defaults to `SUM_OVER_BATCH_SIZE`. When used with
`tf.distribute.Strategy`, outside of built-in training loops such as
`tf.keras` `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE`
- will raise an error. Please see this custom training [tutorial]
- (https://www.tensorflow.org/tutorials/distribute/custom_training)
+ will raise an error. Please see this custom training [tutorial](
+ https://www.tensorflow.org/tutorials/distribute/custom_training)
for more details.
name: Optional name for the op. Defaults to 'categorical_crossentropy'.
"""
@@ -718,8 +718,8 @@ class SparseCategoricalCrossentropy(LossFunctionWrapper):
this defaults to `SUM_OVER_BATCH_SIZE`. When used with
`tf.distribute.Strategy`, outside of built-in training loops such as
`tf.keras` `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE`
- will raise an error. Please see this custom training [tutorial]
- (https://www.tensorflow.org/tutorials/distribute/custom_training)
+ will raise an error. Please see this custom training [tutorial](
+ https://www.tensorflow.org/tutorials/distribute/custom_training)
for more details.
name: Optional name for the op. Defaults to
'sparse_categorical_crossentropy'.
@@ -782,8 +782,8 @@ class Hinge(LossFunctionWrapper):
this defaults to `SUM_OVER_BATCH_SIZE`. When used with
`tf.distribute.Strategy`, outside of built-in training loops such as
`tf.keras` `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE`
- will raise an error. Please see this custom training [tutorial]
- (https://www.tensorflow.org/tutorials/distribute/custom_training)
+ will raise an error. Please see this custom training [tutorial](
+ https://www.tensorflow.org/tutorials/distribute/custom_training)
for more details.
name: Optional name for the op. Defaults to 'hinge'.
"""
@@ -843,8 +843,8 @@ class SquaredHinge(LossFunctionWrapper):
this defaults to `SUM_OVER_BATCH_SIZE`. When used with
`tf.distribute.Strategy`, outside of built-in training loops such as
`tf.keras` `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE`
- will raise an error. Please see this custom training [tutorial]
- (https://www.tensorflow.org/tutorials/distribute/custom_training)
+ will raise an error. Please see this custom training [tutorial](
+ https://www.tensorflow.org/tutorials/distribute/custom_training)
for more details.
name: Optional name for the op. Defaults to 'squared_hinge'.
"""
@@ -903,8 +903,8 @@ class CategoricalHinge(LossFunctionWrapper):
this defaults to `SUM_OVER_BATCH_SIZE`. When used with
`tf.distribute.Strategy`, outside of built-in training loops such as
`tf.keras` `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE`
- will raise an error. Please see this custom training [tutorial]
- (https://www.tensorflow.org/tutorials/distribute/custom_training)
+ will raise an error. Please see this custom training [tutorial](
+ https://www.tensorflow.org/tutorials/distribute/custom_training)
for more details.
name: Optional name for the op. Defaults to 'categorical_hinge'.
"""
@@ -960,8 +960,8 @@ class Poisson(LossFunctionWrapper):
this defaults to `SUM_OVER_BATCH_SIZE`. When used with
`tf.distribute.Strategy`, outside of built-in training loops such as
`tf.keras` `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE`
- will raise an error. Please see this custom training [tutorial]
- (https://www.tensorflow.org/tutorials/distribute/custom_training)
+ will raise an error. Please see this custom training [tutorial](
+ https://www.tensorflow.org/tutorials/distribute/custom_training)
for more details.
name: Optional name for the op. Defaults to 'poisson'.
"""
@@ -1017,8 +1017,8 @@ class LogCosh(LossFunctionWrapper):
this defaults to `SUM_OVER_BATCH_SIZE`. When used with
`tf.distribute.Strategy`, outside of built-in training loops such as
`tf.keras` `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE`
- will raise an error. Please see this custom training [tutorial]
- (https://www.tensorflow.org/tutorials/distribute/custom_training)
+ will raise an error. Please see this custom training [tutorial](
+ https://www.tensorflow.org/tutorials/distribute/custom_training)
for more details.
name: Optional name for the op. Defaults to 'log_cosh'.
"""
@@ -1077,8 +1077,8 @@ class KLDivergence(LossFunctionWrapper):
this defaults to `SUM_OVER_BATCH_SIZE`. When used with
`tf.distribute.Strategy`, outside of built-in training loops such as
`tf.keras` `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE`
- will raise an error. Please see this custom training [tutorial]
- (https://www.tensorflow.org/tutorials/distribute/custom_training)
+ will raise an error. Please see this custom training [tutorial](
+ https://www.tensorflow.org/tutorials/distribute/custom_training)
for more details.
name: Optional name for the op. Defaults to 'kl_divergence'.
"""
@@ -1145,8 +1145,8 @@ class Huber(LossFunctionWrapper):
this defaults to `SUM_OVER_BATCH_SIZE`. When used with
`tf.distribute.Strategy`, outside of built-in training loops such as
`tf.keras` `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE`
- will raise an error. Please see this custom training [tutorial]
- (https://www.tensorflow.org/tutorials/distribute/custom_training)
+ will raise an error. Please see this custom training [tutorial](
+ https://www.tensorflow.org/tutorials/distribute/custom_training)
for more details.
name: Optional name for the op. Defaults to 'huber_loss'.
"""

View File

@@ -1416,8 +1416,8 @@ class Recall(Metric):
class SensitivitySpecificityBase(Metric):
"""Abstract base class for computing sensitivity and specificity.
- For additional information about specificity and sensitivity, see the
- following: https://en.wikipedia.org/wiki/Sensitivity_and_specificity
+ For additional information about specificity and sensitivity, see
+ [the following](https://en.wikipedia.org/wiki/Sensitivity_and_specificity).
"""
def __init__(self, value, num_thresholds=200, name=None, dtype=None):
@@ -1523,8 +1523,8 @@ class SensitivityAtSpecificity(SensitivitySpecificityBase):
If `sample_weight` is `None`, weights default to 1.
Use `sample_weight` of 0 to mask values.
- For additional information about specificity and sensitivity, see the
- following: https://en.wikipedia.org/wiki/Sensitivity_and_specificity
+ For additional information about specificity and sensitivity, see
+ [the following](https://en.wikipedia.org/wiki/Sensitivity_and_specificity).
Args:
specificity: A scalar value in range `[0, 1]`.
@@ -1598,8 +1598,8 @@ class SpecificityAtSensitivity(SensitivitySpecificityBase):
If `sample_weight` is `None`, weights default to 1.
Use `sample_weight` of 0 to mask values.
- For additional information about specificity and sensitivity, see the
- following: https://en.wikipedia.org/wiki/Sensitivity_and_specificity
+ For additional information about specificity and sensitivity, see
+ [the following](https://en.wikipedia.org/wiki/Sensitivity_and_specificity).
Args:
sensitivity: A scalar value in range `[0, 1]`.
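
A usage sketch for these threshold-based metrics (toy data; the exact result depends on `num_thresholds`):

```python
import tensorflow as tf

# Best sensitivity subject to specificity >= 0.5.
m = tf.keras.metrics.SensitivityAtSpecificity(0.5, num_thresholds=200)
m.update_state([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
print(m.result().numpy())  # ~0.5 on this toy data
```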
@@ -1828,13 +1828,14 @@ class AUC(Metric):
use when discretizing the roc curve. Values must be > 1.
curve: (Optional) Specifies the name of the curve to be computed, 'ROC'
[default] or 'PR' for the Precision-Recall-curve.
- summation_method: (Optional) Specifies the Riemann summation method used
- (https://en.wikipedia.org/wiki/Riemann_sum): 'interpolation' [default],
- applies mid-point summation scheme for `ROC`. For PR-AUC, interpolates
- (true/false) positives but not the ratio that is precision (see Davis
- & Goadrich 2006 for details); 'minoring' that applies left summation
+ summation_method: (Optional) Specifies the [Riemann summation method](
+ https://en.wikipedia.org/wiki/Riemann_sum) used.
+ 'interpolation' (default) applies mid-point summation scheme for `ROC`.
+ For PR-AUC, interpolates (true/false) positives but not the ratio that
+ is precision (see Davis & Goadrich 2006 for details);
+ 'minoring' applies left summation
for increasing intervals and right summation for decreasing intervals;
- 'majoring' that does the opposite.
+ 'majoring' does the opposite.
name: (Optional) string name of the metric instance.
dtype: (Optional) data type of the metric result.
thresholds: (Optional) A list of floating point values to use as the
@@ -2226,8 +2227,9 @@ class AUC(Metric):
class CosineSimilarity(MeanMetricWrapper):
"""Computes the cosine similarity between the labels and predictions.
- cosine similarity = (a . b) / ||a|| ||b||
- [Cosine Similarity](https://en.wikipedia.org/wiki/Cosine_similarity)
+ `cosine similarity = (a . b) / ||a|| ||b||`
+ See: [Cosine Similarity](https://en.wikipedia.org/wiki/Cosine_similarity).
This metric keeps the average cosine similarity between `predictions` and
`labels` over a stream of data.
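
A usage sketch of the streamed metric (toy data):

```python
import tensorflow as tf

m = tf.keras.metrics.CosineSimilarity(axis=1)
m.update_state([[0., 1.], [1., 1.]], [[1., 0.], [1., 1.]])
# First pair is orthogonal (0), second identical (1); streamed mean ~0.5.
print(m.result().numpy())
```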

View File

@@ -15,8 +15,8 @@
"""Keras mixed precision API.
- See [the mixed precision
- guide](https://www.tensorflow.org/guide/keras/mixed_precision) to learn how to
+ See [the mixed precision guide](
+ https://www.tensorflow.org/guide/keras/mixed_precision) to learn how to
use the API.
"""
from __future__ import absolute_import

View File

@@ -57,8 +57,8 @@ class Policy(object):
not have a single dtype. When the variable dtype does not match the compute
dtype, variables will be automatically casted to the compute dtype to avoid
type errors. In this case, `tf.keras.layers.Layer.dtype` refers to the
- variable dtype, not the compute dtype. See [the mixed precision
- guide](https://www.tensorflow.org/guide/keras/mixed_precision) for more
+ variable dtype, not the compute dtype. See [the mixed precision guide](
+ https://www.tensorflow.org/guide/keras/mixed_precision) for more
information on how to use mixed precision.
Certain policies also have a `tf.mixed_precision.experimental.LossScale`
@@ -119,8 +119,8 @@ class Policy(object):
`'mixed_bfloat16'`, no loss scaling is done and loss scaling never needs to be
manually applied.
- See [the mixed precision
- guide](https://www.tensorflow.org/guide/keras/mixed_precision) for more
+ See [the mixed precision guide](
+ https://www.tensorflow.org/guide/keras/mixed_precision) for more
information on using mixed precision
### How to use float64 in a Keras model
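
A minimal sketch of the two float64 options this heading introduces (standard Keras APIs):

```python
import tensorflow as tf

# Option 1: make float64 the global default for all layers.
tf.keras.backend.set_floatx('float64')
layer = tf.keras.layers.Dense(4)  # variables and compute in float64

# Option 2: request float64 on a specific layer only.
explicit = tf.keras.layers.Dense(4, dtype='float64')
```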