fix typo in Adam optimizer docstring

PiperOrigin-RevId: 233451176
A. Unique TensorFlower 2019-02-11 11:59:55 -08:00 committed by TensorFlower Gardener
parent f5593ff762
commit 4978ec3b45
2 changed files with 3 additions and 3 deletions


@@ -64,7 +64,7 @@ class Adam(optimizer_v2.OptimizerV2):
 $$t := 0 \text{(Initialize timestep)}$$
 The update rule for `variable` with gradient `g` uses an optimization
-described at the end of section2 of the paper:
+described at the end of section 2 of the paper:
 $$t := t + 1$$
 $$lr_t := \text{learning\_rate} * \sqrt{1 - beta_2^t} / (1 - beta_1^t)$$
@@ -82,7 +82,7 @@ class Adam(optimizer_v2.OptimizerV2):
 $$t := 0 \text{(Initialize timestep)}$$
 The update rule for `variable` with gradient `g` uses an optimization
-described at the end of section2 of the paper:
+described at the end of section 2 of the paper:
 $$t := t + 1$$
 $$lr_t := \text{learning\_rate} * \sqrt{1 - beta_2^t} / (1 - beta_1^t)$$


@@ -52,7 +52,7 @@ class AdamOptimizer(optimizer.Optimizer):
 $$t := 0 \text{(Initialize timestep)}$$
 The update rule for `variable` with gradient `g` uses an optimization
-described at the end of section2 of the paper:
+described at the end of section 2 of the paper:
 $$t := t + 1$$
 $$lr_t := \text{learning\_rate} * \sqrt{1 - beta_2^t} / (1 - beta_1^t)$$
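All three hunks document the same bias-corrected step size $lr_t$. As a quick illustration of that formula, here is a minimal standalone NumPy sketch (not TensorFlow's implementation; the function name and the default hyperparameter values are assumptions for the example):

```python
import numpy as np

def adam_lr_t(learning_rate, beta_1, beta_2, t):
  """Bias-corrected step size from the docstring:

  lr_t = learning_rate * sqrt(1 - beta_2^t) / (1 - beta_1^t)
  """
  return learning_rate * np.sqrt(1.0 - beta_2**t) / (1.0 - beta_1**t)

# Illustrative values at the first timestep (t = 1), using commonly cited
# Adam defaults (assumed here): learning_rate=0.001, beta_1=0.9, beta_2=0.999.
print(adam_lr_t(0.001, 0.9, 0.999, 1))  # ~0.000316
```

At t = 1 the correction shrinks the effective step because the first-moment bias term (1 - beta_1^t) dominates; as t grows, both correction factors approach 1 and lr_t approaches the base learning rate.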