More simplifications to the text.

Mark Daoust 2018-06-18 16:33:33 -07:00 committed by GitHub
parent d7c971c156
commit ace209ce76


@@ -475,20 +475,13 @@ loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
 Let's take a closer look at what's happening above.
-Our `labels` tensor contains a list of predictions for our examples, e.g. `[1,
-9, ...]`. By using `tf.losses.sparse_softmax_cross_entropy()` we do not need to convert `labels`
-to the corresponding
-[one-hot encoding](https://www.quora.com/What-is-one-hot-encoding-and-when-is-it-used-in-data-science)
-that is commonly used in machine learning applications.
-
-Next, we compute cross-entropy of `labels` and the softmax of the
-predictions from our logits layer. `tf.losses.sparse_softmax_cross_entropy()` takes
-`labels` and `logits` as arguments, performs softmax activation on
-`logits`, calculates cross-entropy, and returns our `loss` as a scalar `Tensor`:
-```python
-loss = tf.losses.sparse_softmax_cross_entropy(labels=onehot_labels, logits=logits)
-```
+Our `labels` tensor contains a list of prediction indices for our examples, e.g. `[1,
+9, ...]`. `logits` contains the linear outputs of our last layer.
+
+`tf.losses.sparse_softmax_cross_entropy` calculates the softmax cross entropy
+(aka categorical cross entropy, or negative log-likelihood) from these two inputs
+in an efficient, numerically stable way.
 
 ### Configure the Training Op
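
For context on the change above, here is a small sketch (not part of the diff) comparing the sparse loss against the explicit one-hot pattern that the removed paragraph described. It uses the TF 1.x `tf.losses` API from the tutorial; the batch values, the 10-class depth, and the session setup are made up for illustration.

```python
import tensorflow as tf  # TF 1.x, matching the tutorial's tf.losses API

# Made-up mini-batch: two examples, ten classes (e.g. MNIST digits).
labels = tf.constant([1, 9])  # integer class indices, as in `[1, 9, ...]`
logits = tf.constant([[0.0, 2.0, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.5],
                      [0.3, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.7]])

# Sparse form: takes the integer indices directly, applies softmax to the
# logits internally, and returns the mean cross entropy as a scalar Tensor.
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)

# Older pattern from the removed text: convert labels to one-hot first.
onehot_labels = tf.one_hot(labels, depth=10)
loss_onehot = tf.losses.softmax_cross_entropy(onehot_labels=onehot_labels,
                                              logits=logits)

with tf.Session() as sess:
    # Both losses evaluate to the same scalar value.
    print(sess.run([loss, loss_onehot]))
```

Because the two calls produce the same value, the one-hot conversion step (and the link explaining it) could be dropped from the text without losing information.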