Update the misleading comment for cifar10.py's softmax_linear layer (#5259)

* Update the misleading comment for cifar10.py

A fix for issue #5251; makes the comment more meaningful.

* Update comment to be a bit more precise.

* wrap to 80
a7744hsc 2016-11-04 01:55:27 +08:00 committed by Vijay Vasudevan
parent 799162580c
commit d131c5d35b


@@ -256,7 +256,10 @@ def inference(images):
     local4 = tf.nn.relu(tf.matmul(local3, weights) + biases, name=scope.name)
     _activation_summary(local4)

-  # softmax, i.e. softmax(WX + b)
+  # linear layer(WX + b),
+  # We don't apply softmax here because
+  # tf.nn.sparse_softmax_cross_entropy_with_logits accepts the unscaled logits
+  # and performs the softmax internally for efficiency.
   with tf.variable_scope('softmax_linear') as scope:
     weights = _variable_with_weight_decay('weights', [192, NUM_CLASSES],
                                           stddev=1/192.0, wd=0.0)
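The "for efficiency" part of the new comment can be illustrated without TensorFlow at all: a loss op that takes unscaled logits can fold the softmax and the log into a single log-sum-exp, which is both cheaper and numerically stable, whereas applying softmax first and then taking the log overflows for large logits. The sketch below is a plain-Python illustration of that idea, not the actual TensorFlow kernel; the function names are made up for this example.

```python
import math

def sparse_softmax_cross_entropy(logits, label):
    # Fused form, as losses that accept unscaled logits compute internally:
    # -log softmax(logits)[label] == logsumexp(logits) - logits[label].
    # Shifting by the max keeps exp() from overflowing.
    m = max(logits)
    log_sum_exp = math.log(sum(math.exp(x - m) for x in logits))
    return log_sum_exp - (logits[label] - m)

def naive_cross_entropy(logits, label):
    # Unfused form: build the softmax explicitly, then take -log.
    denom = sum(math.exp(x) for x in logits)
    return -math.log(math.exp(logits[label]) / denom)

logits = [2.0, 1.0, 0.1]
# Both forms agree on well-behaved inputs.
assert math.isclose(sparse_softmax_cross_entropy(logits, 0),
                    naive_cross_entropy(logits, 0))

# With large logits, math.exp(1000.0) overflows in the naive form,
# while the fused log-sum-exp form stays finite.
assert math.isfinite(sparse_softmax_cross_entropy([1000.0, 10.0, 0.0], 0))
```

This is why the layer in the diff stops at `WX + b`: handing the raw logits to `tf.nn.sparse_softmax_cross_entropy_with_logits` lets the loss do the softmax in this fused, stable form instead of exponentiating twice.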