Fix shapes in comments of nmt_with_attention.ipynb

It is misleading that the decoder's output shape is currently commented as `(batch_size * max_length, vocab)`. The correct shape is `(batch_size * 1, vocab)`, since the input x to the GRU layer has shape == `(batch_size, 1, embedding_dim + hidden_size)` — the decoder processes a single time step at a time, not the full `max_length` sequence.
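The shape argument above can be checked with a minimal sketch (assuming TensorFlow 2.x; the small dimension values below are illustrative, not the tutorial's actual hyperparameters):

```python
import tensorflow as tf

# illustrative sizes, not the tutorial's real hyperparameters
batch_size, embedding_dim, hidden_size, vocab = 4, 8, 16, 10

gru = tf.keras.layers.GRU(hidden_size, return_sequences=True, return_state=True)
fc = tf.keras.layers.Dense(vocab)

# decoder input: one time step of the concatenated context + embedding vector
x = tf.random.normal((batch_size, 1, embedding_dim + hidden_size))

output, state = gru(x)
# output shape == (batch_size, 1, hidden_size) -- one time step, not max_length
assert output.shape == (batch_size, 1, hidden_size)

output = tf.reshape(output, (-1, output.shape[2]))
# output shape == (batch_size * 1, hidden_size)
assert output.shape == (batch_size, hidden_size)

x = fc(output)
# x shape == (batch_size * 1, vocab)
assert x.shape == (batch_size, vocab)
```

Because the time dimension is 1, flattening it with `tf.reshape` leaves the batch dimension as `batch_size * 1`, which is why `max_length` never appears in these shapes.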
Ruizhi 2018-08-01 16:55:33 +08:00 committed by GitHub
parent abd645085b
commit 842cd17c10


@@ -552,10 +552,10 @@
 "        # passing the concatenated vector to the GRU\n",
 "        output, state = self.gru(x)\n",
 "        \n",
-"        # output shape == (batch_size * max_length, hidden_size)\n",
+"        # output shape == (batch_size * 1, hidden_size)\n",
 "        output = tf.reshape(output, (-1, output.shape[2]))\n",
 "        \n",
-"        # output shape == (batch_size * max_length, vocab)\n",
+"        # output shape == (batch_size * 1, vocab)\n",
 "        x = self.fc(output)\n",
 "        \n",
 "        return x, state, attention_weights\n",