Merge pull request #17057 from deroneriksson/low_level_intro_typos
Fix typos in low-level introduction documentation
Commit 61f64e825c
@@ -295,7 +295,7 @@ the same input. @{tf.layers$Layers} are the preferred way to add trainable
 parameters to a graph.
 
 Layers package together both the variables and the operations that act
-on them, . For example a
+on them. For example a
 [densely-connected layer](https://developers.google.com/machine-learning/glossary/#fully_connected_layer)
 performs a weighted sum across all inputs
 for each output and applies an optional
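The densely-connected layer this hunk describes computes a weighted sum across all inputs for each output and then applies an optional activation. A minimal NumPy sketch of that idea (an illustration only, not the @{tf.layers} implementation):

```python
import numpy as np

def dense(inputs, weights, bias, activation=None):
    """Weighted sum across all inputs for each output unit,
    plus an optional activation function."""
    outputs = inputs @ weights + bias  # one weighted sum per output
    if activation is not None:
        outputs = activation(outputs)
    return outputs

x = np.array([[1.0, 2.0, 3.0]])      # one example, three features
w = np.array([[0.1], [0.2], [0.3]])  # three inputs -> one output
b = np.array([0.5])
y = dense(x, w, b)                   # 1*0.1 + 2*0.2 + 3*0.3 + 0.5 = 1.9
```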
@@ -478,7 +478,7 @@ good. Here's what we got; your own output will almost certainly differ:
 [ 0.10527515]]
 ```
 
-### loss
+### Loss
 
 To optimize a model, you first need to define the loss. We'll use the mean
 square error, a standard loss for regression problems.
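The mean square error mentioned in this hunk is just the average of the squared differences between labels and predictions. A minimal sketch in plain NumPy (not the TensorFlow op itself):

```python
import numpy as np

def mean_squared_error(labels, predictions):
    # average squared difference over all elements
    return np.mean((labels - predictions) ** 2)

loss = mean_squared_error(np.array([1.0, 2.0]),
                          np.array([1.5, 1.5]))
# ((−0.5)^2 + (0.5)^2) / 2 = 0.25
```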
@@ -504,7 +504,7 @@ TensorFlow provides
 [**optimizers**](https://developers.google.com/machine-learning/glossary/#optimizer)
 implementing standard optimization algorithms. These are implemented as
 sub-classes of @{tf.train.Optimizer}. They incrementally change each
-variable in order to minimizethe loss. The simplest optimization algorithm is
+variable in order to minimize the loss. The simplest optimization algorithm is
 [**gradient descent**](https://developers.google.com/machine-learning/glossary/#gradient_descent),
 implemented by @{tf.train.GradientDescentOptimizer}. It modifies each
 variable according to the magnitude of the derivative of loss with respect to
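As the corrected sentence says, gradient descent changes each variable according to the derivative of the loss with respect to it. A hand-rolled one-variable sketch of that update rule (an illustration, not @{tf.train.GradientDescentOptimizer} itself):

```python
def gradient_descent_step(w, grad, learning_rate=0.1):
    # move the variable opposite the loss derivative
    return w - learning_rate * grad

# minimize loss(w) = (w - 3)^2, whose derivative is 2 * (w - 3)
w = 0.0
for _ in range(100):
    w = gradient_descent_step(w, 2.0 * (w - 3.0))
# w converges toward the minimizer, 3.0
```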