Fix shapes in comments of nmt_with_attention.ipynb
The decoder's output shape is currently commented as `(batch_size * max_length, vocab)`, which is misleading and confusing. The correct shape is `(batch_size * 1, vocab)`, since the input x to the GRU layer has shape == `(batch_size, 1, embedding_dim + hidden_size)`.
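For reference, a minimal sketch of the shape flow through one decoder step (with made-up values for batch_size, embedding_dim, hidden_size, and vocab, not the tutorial's hyperparameters). Because the GRU receives a single time step, the flattened leading dimension is `batch_size * 1`, not `batch_size * max_length`:

```python
import tensorflow as tf

# Illustrative sizes only (assumed, not the tutorial's values).
batch_size, embedding_dim, hidden_size, vocab = 64, 256, 1024, 5000

gru = tf.keras.layers.GRU(hidden_size,
                          return_sequences=True,
                          return_state=True)
fc = tf.keras.layers.Dense(vocab)

# x is the embedded target token concatenated with the context vector;
# the decoder consumes exactly one time step per call.
x = tf.random.normal((batch_size, 1, embedding_dim + hidden_size))

output, state = gru(x)                           # (batch_size, 1, hidden_size)
output = tf.reshape(output, (-1, output.shape[2]))
print(output.shape)                              # (batch_size * 1, hidden_size)

logits = fc(output)
print(logits.shape)                              # (batch_size * 1, vocab)
```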
parent abd645085b
commit 842cd17c10
@@ -552,10 +552,10 @@
 " # passing the concatenated vector to the GRU\n",
 " output, state = self.gru(x)\n",
 " \n",
-" # output shape == (batch_size * max_length, hidden_size)\n",
+" # output shape == (batch_size * 1, hidden_size)\n",
 " output = tf.reshape(output, (-1, output.shape[2]))\n",
 " \n",
-" # output shape == (batch_size * max_length, vocab)\n",
+" # output shape == (batch_size * 1, vocab)\n",
 " x = self.fc(output)\n",
 " \n",
 " return x, state, attention_weights\n",