diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.batch.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.batch.md
index e1cd8aa7c07..9112cf531d4 100644
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.batch.md
+++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.batch.md
@@ -15,7 +15,7 @@ with shape `[batch_size, x, y, z]`.
If `enqueue_many` is `True`, `tensors` is assumed to represent a batch of
examples, where the first dimension is indexed by example, and all members of
-`tensor_list` should have the same size in the first dimension. If an input
+`tensors` should have the same size in the first dimension. If an input
tensor has shape `[*, x, y, z]`, the output will have shape `[batch_size, x,
-y, z]`. The `capacity` argument controls the how long the prefetching is
+y, z]`. The `capacity` argument controls how long the prefetching is
allowed to grow the queues.
@@ -51,11 +51,11 @@ operations that depend on fixed batch_size would fail.
* `tensors`: The list or dictionary of tensors to enqueue.
* `batch_size`: The new batch size pulled from the queue.
-* `num_threads`: The number of threads enqueuing `tensor_list`.
+* `num_threads`: The number of threads enqueuing `tensors`.
* `capacity`: An integer. The maximum number of elements in the queue.
-* `enqueue_many`: Whether each tensor in `tensor_list` is a single example.
+* `enqueue_many`: Whether each tensor in `tensors` is a single example.
* `shapes`: (Optional) The shapes for each example. Defaults to the
- inferred shapes for `tensor_list`.
+ inferred shapes for `tensors`.
* `dynamic_pad`: Boolean. Allow variable dimensions in input shapes.
The given dimensions are padded upon dequeue so that tensors within a
batch have the same shapes.
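
A minimal usage sketch of the single-example case documented above (the default `enqueue_many=False`), assuming a hypothetical TFRecord file `images.tfrecords` whose records hold a fixed-size `[32, 32, 3]` float image and an int64 label; the file name, feature keys, and shapes are illustrative only:

```python
import tensorflow as tf

# Hypothetical input pipeline: read one serialized example at a time.
filename_queue = tf.train.string_input_producer(["images.tfrecords"])
reader = tf.TFRecordReader()
_, serialized = reader.read(filename_queue)
features = tf.parse_single_example(
    serialized,
    features={
        "image": tf.FixedLenFeature([32, 32, 3], tf.float32),
        "label": tf.FixedLenFeature([], tf.int64),
    })

# Each input tensor is a single example, so tf.train.batch adds a leading
# batch dimension: image_batch has shape [64, 32, 32, 3], label_batch [64].
image_batch, label_batch = tf.train.batch(
    [features["image"], features["label"]],
    batch_size=64, num_threads=4, capacity=256)

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    images, labels = sess.run([image_batch, label_batch])
    coord.request_stop()
    coord.join(threads)
```
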
diff --git a/tensorflow/g3doc/api_docs/python/io_ops.md b/tensorflow/g3doc/api_docs/python/io_ops.md
index af6479723a4..b5c799a0367 100644
--- a/tensorflow/g3doc/api_docs/python/io_ops.md
+++ b/tensorflow/g3doc/api_docs/python/io_ops.md
@@ -2434,7 +2434,7 @@ with shape `[batch_size, x, y, z]`.
If `enqueue_many` is `True`, `tensors` is assumed to represent a batch of
examples, where the first dimension is indexed by example, and all members of
-`tensor_list` should have the same size in the first dimension. If an input
+`tensors` should have the same size in the first dimension. If an input
tensor has shape `[*, x, y, z]`, the output will have shape `[batch_size, x,
-y, z]`. The `capacity` argument controls the how long the prefetching is
+y, z]`. The `capacity` argument controls how long the prefetching is
allowed to grow the queues.
@@ -2470,11 +2470,11 @@ operations that depend on fixed batch_size would fail.
* `tensors`: The list or dictionary of tensors to enqueue.
* `batch_size`: The new batch size pulled from the queue.
-* `num_threads`: The number of threads enqueuing `tensor_list`.
+* `num_threads`: The number of threads enqueuing `tensors`.
* `capacity`: An integer. The maximum number of elements in the queue.
-* `enqueue_many`: Whether each tensor in `tensor_list` is a single example.
+* `enqueue_many`: Whether each tensor in `tensors` is a single example.
* `shapes`: (Optional) The shapes for each example. Defaults to the
- inferred shapes for `tensor_list`.
+ inferred shapes for `tensors`.
* `dynamic_pad`: Boolean. Allow variable dimensions in input shapes.
The given dimensions are padded upon dequeue so that tensors within a
batch have the same shapes.
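
A companion sketch for the `enqueue_many=True` case documented above: the input tensor already carries a leading example dimension, and `tf.train.batch` regroups those examples into batches of `batch_size`. The `tf.random_uniform` input and its shape are a hypothetical stand-in for a real pipeline:

```python
import tensorflow as tf

# Hypothetical input: 10 examples of shape [2, 3] already stacked along the
# first dimension, i.e. an input tensor of shape [*, x, y] with * == 10.
examples = tf.random_uniform([10, 2, 3])

# With enqueue_many=True each slice along dimension 0 is enqueued as one
# example; the dequeued batch has shape [batch_size, 2, 3].
(batch,) = tf.train.batch([examples], batch_size=4,
                          enqueue_many=True, capacity=32)

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    out = sess.run(batch)  # out.shape == (4, 2, 3)
    coord.request_stop()
    coord.join(threads)
```
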