Merge pull request #17057 from deroneriksson/low_level_intro_typos

Fix typos in low-level introduction documentation
Martin Wicke 2018-02-15 17:36:02 -08:00 committed by GitHub
commit 61f64e825c


@@ -295,7 +295,7 @@ the same input. @{tf.layers$Layers} are the preferred way to add trainable
 parameters to a graph.
 Layers package together both the variables and the operations that act
-on them, . For example a
+on them. For example a
 [densely-connected layer](https://developers.google.com/machine-learning/glossary/#fully_connected_layer)
 performs a weighted sum across all inputs
 for each output and applies an optional
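The densely-connected layer this hunk describes can be sketched in plain NumPy. This is an illustrative sketch only (the function and variable names here are mine, not TensorFlow's): each output unit is a weighted sum over all inputs, plus a bias.

```python
import numpy as np

def dense(x, w, b):
    # Weighted sum across all inputs for each output unit:
    # x has shape (batch, n_in), w has shape (n_in, n_out), b has shape (n_out,).
    return x @ w + b

x = np.array([[1.0, 2.0, 3.0]])  # one example with 3 input features
w = np.ones((3, 2))              # 3 inputs -> 2 outputs, all weights 1
b = np.zeros(2)
print(dense(x, w, b))            # [[6. 6.]]
```

A real `tf.layers` dense layer additionally creates and tracks `w` and `b` as trainable variables and can apply an activation function to the result.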
@@ -478,7 +478,7 @@ good. Here's what we got; your own output will almost certainly differ:
 [ 0.10527515]]
 ```
-### loss
+### Loss
 To optimize a model, you first need to define the loss. We'll use the mean
 square error, a standard loss for regression problems.
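The mean squared error named in this hunk is simple enough to sketch directly. A minimal NumPy version (illustrative; TensorFlow exposes this as a graph op rather than a plain function):

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean of the squared differences between targets and predictions.
    return np.mean((y_true - y_pred) ** 2)

y_true = np.array([0.0, -1.0, -2.0])
y_pred = np.array([0.1, -1.1, -2.1])
print(mse(y_true, y_pred))  # ≈ 0.01
```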
@@ -504,7 +504,7 @@ TensorFlow provides
 [**optimizers**](https://developers.google.com/machine-learning/glossary/#optimizer)
 implementing standard optimization algorithms. These are implemented as
 sub-classes of @{tf.train.Optimizer}. They incrementally change each
-variable in order to minimizethe loss. The simplest optimization algorithm is
+variable in order to minimize the loss. The simplest optimization algorithm is
 [**gradient descent**](https://developers.google.com/machine-learning/glossary/#gradient_descent),
 implemented by @{tf.train.GradientDescentOptimizer}. It modifies each
 variable according to the magnitude of the derivative of loss with respect to
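The gradient-descent update this hunk describes — change each variable by a step proportional to the derivative of the loss with respect to it — can be shown on a toy one-parameter model. This is a hand-written sketch, not the TensorFlow API: `tf.train.GradientDescentOptimizer` computes the gradient automatically, whereas here it is written out by hand for a linear model `y = w * x` under mean squared error.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x          # targets, so w should converge toward 2.0
w = 0.0              # the single trainable variable
learning_rate = 0.01

for _ in range(500):
    # d/dw of mean((w*x - y)^2), written out by hand.
    grad = np.mean(2.0 * (w * x - y) * x)
    # The gradient-descent update: step against the gradient.
    w -= learning_rate * grad

print(w)  # close to 2.0
```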