Before this change, we had two loss scale base classes that did exactly the same thing:

* tf.keras.mixed_precision.experimental.LossScale, which only worked with the Keras OptimizerV2
* An unexposed LossScale in tensorflow/python/training/experimental, which only worked with the V1 Optimizer

This change removes the Keras LossScale and merges it into the training LossScale, which now works with both optimizers. The training LossScale is exposed as tf.train.experimental.LossScale. I moved some functionality, comments, and style conventions from the Keras LossScale over to the training LossScale. Because the LossScale class cannot depend on Keras, the Keras OptimizerV2 now calls backend.track_variable on the LossScale's variables itself (see the usage sketch below).

Note: I intend to cherrypick this into TF 1.14.

PiperOrigin-RevId: 248213961
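With the merged hierarchy, one LossScale implementation can drive either optimizer path. A minimal sketch, assuming the TF 1.14-era wrapper APIs (tf.train.experimental.MixedPrecisionLossScaleOptimizer, which this change exports, and the Keras-side tf.keras.mixed_precision.experimental.LossScaleOptimizer, which is not part of this diff); the constructor arguments shown are illustrative:

import tensorflow as tf

# V1 Optimizer path: a dynamic loss scale adjusts itself when gradients
# overflow. The wrapper scales the loss before computing gradients and
# unscales the gradients before applying them.
v1_opt = tf.train.experimental.MixedPrecisionLossScaleOptimizer(
    tf.train.GradientDescentOptimizer(0.01),
    tf.train.experimental.DynamicLossScale())

# Keras OptimizerV2 path: the same tf.train.experimental.LossScale
# classes are accepted, and OptimizerV2 tracks the loss scale's
# variables via backend.track_variable.
keras_opt = tf.keras.mixed_precision.experimental.LossScaleOptimizer(
    tf.keras.optimizers.SGD(0.01),
    tf.train.experimental.FixedLossScale(128))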
path: "tensorflow.train.experimental"
|
|
tf_module {
|
|
member {
|
|
name: "DynamicLossScale"
|
|
mtype: "<type \'type\'>"
|
|
}
|
|
member {
|
|
name: "FixedLossScale"
|
|
mtype: "<type \'type\'>"
|
|
}
|
|
member {
|
|
name: "LossScale"
|
|
mtype: "<type \'type\'>"
|
|
}
|
|
member {
|
|
name: "MixedPrecisionLossScaleOptimizer"
|
|
mtype: "<type \'type\'>"
|
|
}
|
|
member {
|
|
name: "PythonState"
|
|
mtype: "<type \'type\'>"
|
|
}
|
|
member_method {
|
|
name: "disable_mixed_precision_graph_rewrite"
|
|
argspec: "args=[], varargs=None, keywords=None, defaults=None"
|
|
}
|
|
member_method {
|
|
name: "enable_mixed_precision_graph_rewrite"
|
|
argspec: "args=[\'opt\', \'loss_scale\'], varargs=None, keywords=None, defaults=[\'dynamic\'], "
|
|
}
|
|
}
|
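Per the argspecs above, enable_mixed_precision_graph_rewrite takes an optimizer plus a loss_scale that defaults to 'dynamic'. A minimal sketch of the two graph-rewrite entry points (the optimizer choice is illustrative, not part of this change):

import tensorflow as tf

opt = tf.train.MomentumOptimizer(learning_rate=0.01, momentum=0.9)

# Enables the mixed precision graph rewrite and wraps the optimizer
# with loss scaling; loss_scale defaults to 'dynamic' per the argspec.
opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(
    opt, loss_scale='dynamic')

# The rewrite can be turned off again, e.g. between tests.
tf.train.experimental.disable_mixed_precision_graph_rewrite()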