diff --git a/tensorflow/core/ops/array_ops.cc b/tensorflow/core/ops/array_ops.cc
index bbdcfd337e2..e528ae47aa7 100644
--- a/tensorflow/core/ops/array_ops.cc
+++ b/tensorflow/core/ops/array_ops.cc
@@ -209,7 +209,7 @@ The input tensors are all required to have size 1 in the first dimension.
For example:
-```prettyprint
+```
# 'x' is [[1, 4]]
# 'y' is [[2, 5]]
# 'z' is [[3, 6]]
@@ -277,7 +277,7 @@ Etc.
For example:
-```prettyprint
+```
# 'x' is [1, 4]
# 'y' is [2, 5]
# 'z' is [3, 6]
@@ -432,7 +432,7 @@ Computes offsets of concat inputs within its output.
For example:
-```prettyprint
+```
# 'x' is [2, 2, 7]
# 'y' is [2, 3, 7]
# 'z' is [2, 5, 7]
@@ -670,7 +670,7 @@ rank 2k with dimensions [D1,..., Dk, D1,..., Dk] where:
For example:
-```prettyprint
+```
# 'diagonal' is [1, 2, 3, 4]
tf.diag(diagonal) ==> [[1, 0, 0, 0]
[0, 2, 0, 0]
@@ -722,7 +722,7 @@ tensor of rank `k` with dimensions `[D1,..., Dk]` where:
For example:
-```prettyprint
+```
# 'input' is [[1, 0, 0, 0]
[0, 2, 0, 0]
[0, 0, 3, 0]
@@ -768,7 +768,7 @@ tensor of rank `k+1` with dimensions `[I, J, K, ..., N, N]` where:
For example:
-```prettyprint
+```
# 'diagonal' is [[1, 2, 3, 4], [5, 6, 7, 8]]
and diagonal.shape = (2, 4)
@@ -880,7 +880,7 @@ The input must be at least a matrix.
For example:
-```prettyprint
+```
# 'input' is [[[1, 0, 0, 0]
[0, 2, 0, 0]
[0, 0, 3, 0]
@@ -927,7 +927,7 @@ The indicator function
For example:
-```prettyprint
+```
# if 'input' is [[ 0, 1, 2, 3]
[-1, 0, 1, 2]
[-2, -1, 0, 1]
@@ -946,7 +946,7 @@ tf.matrix_band_part(input, 2, 1) ==> [[ 0, 1, 0, 0]
Useful special cases:
-```prettyprint
+```
tf.matrix_band_part(input, 0, -1) ==> Upper triangular part.
tf.matrix_band_part(input, -1, 0) ==> Lower triangular part.
tf.matrix_band_part(input, 0, 0) ==> Diagonal.
@@ -998,7 +998,7 @@ of `tensor` must equal the number of elements in `dims`. In other words:
For example:
-```prettyprint
+```
# tensor 't' is [[[[ 0, 1, 2, 3],
# [ 4, 5, 6, 7],
# [ 8, 9, 10, 11]],
@@ -1074,7 +1074,7 @@ once, an InvalidArgument error is raised.
For example:
-```prettyprint
+```
# tensor 't' is [[[[ 0, 1, 2, 3],
# [ 4, 5, 6, 7],
# [ 8, 9, 10, 11]],
@@ -1245,7 +1245,7 @@ This operation creates a tensor of shape `dims` and fills it with `value`.
For example:
-```prettyprint
+```
# Output tensor has shape [2, 3].
fill([2, 3], 9) ==> [[9, 9, 9]
[9, 9, 9]]
@@ -1354,7 +1354,7 @@ out-of-bound indices result in safe but unspecified behavior, which may include
raising an error.
-
+
)doc");
@@ -1610,7 +1610,7 @@ implied by `shape` must be the same as the number of elements in `tensor`.
For example:
-```prettyprint
+```
# tensor 't' is [1, 2, 3, 4, 5, 6, 7, 8, 9]
# tensor 't' has shape [9]
reshape(t, [3, 3]) ==> [[1, 2, 3],
@@ -1697,7 +1697,7 @@ The values must include 0. There can be no duplicate values or negative values.
For example:
-```prettyprint
+```
# tensor `x` is [3, 4, 0, 2, 1]
invert_permutation(x) ==> [2, 4, 3, 0, 1]
```
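As a quick sanity check of the example above, here is a minimal runnable sketch (assuming the TensorFlow 1.x Python API, where this op is exposed as `tf.invert_permutation`):

```
import tensorflow as tf

x = tf.constant([3, 4, 0, 2, 1])
inv = tf.invert_permutation(x)  # y[x[i]] = i for each i

with tf.Session() as sess:
  print(sess.run(inv))  # [2 4 3 0 1]
```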
@@ -1802,7 +1802,7 @@ in the unique output `y`. In other words:
For example:
-```prettyprint
+```
# tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8]
y, idx = unique(x)
y ==> [1, 2, 4, 7, 8]
@@ -1842,7 +1842,7 @@ contains the count of each element of `y` in `x`. In other words:
For example:
-```prettyprint
+```
# tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8]
y, idx, count = unique_with_counts(x)
y ==> [1, 2, 4, 7, 8]
@@ -1887,7 +1887,7 @@ This operation returns a 1-D integer tensor representing the shape of `input`.
For example:
-```prettyprint
+```
# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
shape(t) ==> [2, 2, 3]
```
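For reference, a minimal sketch of the `Shape` example (assuming the TensorFlow 1.x Python API; `tf.shape` returns the dynamic shape at run time, while `Tensor.get_shape()` returns the statically known shape):

```
import tensorflow as tf

t = tf.constant([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]])

print(t.get_shape())            # (2, 2, 3), known at graph-construction time
with tf.Session() as sess:
  print(sess.run(tf.shape(t)))  # [2 2 3], computed at run time
```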
@@ -1968,7 +1968,7 @@ slice `i`, with the first `seq_lengths[i]` slices along dimension
For example:
-```prettyprint
+```
# Given this:
batch_dim = 0
seq_dim = 1
@@ -1990,7 +1990,7 @@ output[3, 2:, :, ...] = input[3, 2:, :, ...]
In contrast, if:
-```prettyprint
+```
# Given this:
batch_dim = 2
seq_dim = 0
@@ -2031,7 +2031,7 @@ This operation returns an integer representing the rank of `input`.
For example:
-```prettyprint
+```
# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
# shape of tensor 't' is [2, 2, 3]
rank(t) ==> 3
@@ -2057,7 +2057,7 @@ This operation returns an integer representing the number of elements in
For example:
-```prettyprint
+```
# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
size(t) ==> 12
```
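The `Rank` and `Size` examples above can be exercised the same way (a sketch, assuming the TensorFlow 1.x Python API):

```
import tensorflow as tf

t = tf.constant([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]])

with tf.Session() as sess:
  print(sess.run(tf.rank(t)))  # 3
  print(sess.run(tf.size(t)))  # 12
```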
@@ -2290,7 +2290,7 @@ encoding is best understood by considering a non-trivial example. In
particular,
`foo[1, 2:4, None, ..., :-3:-1, :]` will be encoded as
-```prettyprint
+```
begin = [1, 2, x, x, 0, x] # x denotes don't care (usually 0)
end = [2, 4, x, x, -3, x]
strides = [1, 1, x, x, -1, 1]
@@ -2512,7 +2512,7 @@ the output tensor can vary depending on how many true values there are in
For example:
-```prettyprint
+```
# 'input' tensor is [[True, False]
# [True, False]]
# 'input' has two true values, so output has two coordinates.
@@ -2616,7 +2616,7 @@ The padded size of each dimension D of the output is:
For example:
-```prettyprint
+```
# 't' is [[1, 1], [2, 2]]
# 'paddings' is [[1, 1], [2, 2]]
# rank of 't' is 2
@@ -2655,7 +2655,7 @@ The padded size of each dimension D of the output is:
For example:
-```prettyprint
+```
# 't' is [[1, 2, 3], [4, 5, 6]].
# 'paddings' is [[1, 1], [2, 2]].
# 'mode' is SYMMETRIC.
@@ -2751,7 +2751,7 @@ The folded size of each dimension D of the output is:
For example:
-```prettyprint
+```
# 't' is [[1, 2, 3], [4, 5, 6], [7, 8, 9]].
# 'paddings' is [[0, 1], [0, 1]].
# 'mode' is SYMMETRIC.
@@ -2927,7 +2927,7 @@ which will make the shape `[1, height, width, channels]`.
Other examples:
-```prettyprint
+```
# 't' is a tensor of shape [2]
shape(expand_dims(t, 0)) ==> [1, 2]
shape(expand_dims(t, 1)) ==> [2, 1]
@@ -3029,14 +3029,14 @@ dimensions, you can remove specific size 1 dimensions by specifying
For example:
-```prettyprint
+```
# 't' is a tensor of shape [1, 2, 1, 3, 1, 1]
shape(squeeze(t)) ==> [2, 3]
```
Or, to remove specific size 1 dimensions:
-```prettyprint
+```
# 't' is a tensor of shape [1, 2, 1, 3, 1, 1]
shape(squeeze(t, [2, 4])) ==> [1, 2, 3, 1]
```
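A runnable sketch of the `Squeeze` examples above, together with its rough inverse `ExpandDims` (assuming the TensorFlow 1.x Python API; the second argument to `tf.squeeze` has been named `squeeze_dims` or `axis` depending on the release):

```
import tensorflow as tf

t = tf.ones([1, 2, 1, 3, 1, 1])
u = tf.ones([2])

with tf.Session() as sess:
  print(sess.run(tf.shape(tf.squeeze(t))))          # [2 3]
  print(sess.run(tf.shape(tf.squeeze(t, [2, 4]))))  # [1 2 3 1]
  print(sess.run(tf.shape(tf.expand_dims(u, 0))))   # [1 2]
```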
@@ -3079,14 +3079,14 @@ position of each `out` element in `x`. In other words:
For example, given this input:
-```prettyprint
+```
x = [1, 2, 3, 4, 5, 6]
y = [1, 3, 5]
```
This operation would return:
-```prettyprint
+```
out ==> [2, 4, 6]
idx ==> [1, 3, 5]
```
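A minimal sketch of the `ListDiff` example above (assuming the TensorFlow 1.x Python API, where the op is exposed as `tf.setdiff1d`):

```
import tensorflow as tf

x = tf.constant([1, 2, 3, 4, 5, 6])
y = tf.constant([1, 3, 5])
out, idx = tf.setdiff1d(x, y)

with tf.Session() as sess:
  print(sess.run(out))  # [2 4 6]
  print(sess.run(idx))  # [1 3 5]
```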
@@ -3345,34 +3345,34 @@ Some examples:
(1) For the following input of shape `[1, 2, 2, 1]`, `block_shape = [2, 2]`, and
`paddings = [[0, 0], [0, 0]]`:
-```prettyprint
+```
x = [[[[1], [2]], [[3], [4]]]]
```
The output tensor has shape `[4, 1, 1, 1]` and value:
-```prettyprint
+```
[[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
```
(2) For the following input of shape `[1, 2, 2, 3]`, `block_shape = [2, 2]`, and
`paddings = [[0, 0], [0, 0]]`:
-```prettyprint
+```
x = [[[[1, 2, 3], [4, 5, 6]],
[[7, 8, 9], [10, 11, 12]]]]
```
The output tensor has shape `[4, 1, 1, 3]` and value:
-```prettyprint
+```
[[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]
```
(3) For the following input of shape `[1, 4, 4, 1]`, `block_shape = [2, 2]`, and
`paddings = [[0, 0], [0, 0]]`:
-```prettyprint
+```
x = [[[[1], [2], [3], [4]],
[[5], [6], [7], [8]],
[[9], [10], [11], [12]],
@@ -3381,7 +3381,7 @@ x = [[[[1], [2], [3], [4]],
The output tensor has shape `[4, 2, 2, 1]` and value:
-```prettyprint
+```
x = [[[[1], [3]], [[9], [11]]],
[[[2], [4]], [[10], [12]]],
[[[5], [7]], [[13], [15]]],
@@ -3391,7 +3391,7 @@ x = [[[[1], [3]], [[9], [11]]],
(4) For the following input of shape `[2, 2, 4, 1]`, block_shape = `[2, 2]`, and
paddings = `[[0, 0], [2, 0]]`:
-```prettyprint
+```
x = [[[[1], [2], [3], [4]],
[[5], [6], [7], [8]]],
[[[9], [10], [11], [12]],
@@ -3400,7 +3400,7 @@ x = [[[[1], [2], [3], [4]],
The output tensor has shape `[8, 1, 3, 1]` and value:
-```prettyprint
+```
x = [[[[0], [1], [3]]], [[[0], [9], [11]]],
[[[0], [2], [4]]], [[[0], [10], [12]]],
[[[0], [5], [7]]], [[[0], [13], [15]]],
@@ -3474,32 +3474,32 @@ Some examples:
(1) For the following input of shape `[1, 2, 2, 1]` and block_size of 2:
-```prettyprint
+```
x = [[[[1], [2]], [[3], [4]]]]
```
The output tensor has shape `[4, 1, 1, 1]` and value:
-```prettyprint
+```
[[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
```
(2) For the following input of shape `[1, 2, 2, 3]` and block_size of 2:
-```prettyprint
+```
x = [[[[1, 2, 3], [4, 5, 6]],
[[7, 8, 9], [10, 11, 12]]]]
```
The output tensor has shape `[4, 1, 1, 3]` and value:
-```prettyprint
+```
[[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]
```
(3) For the following input of shape `[1, 4, 4, 1]` and block_size of 2:
-```prettyprint
+```
x = [[[[1], [2], [3], [4]],
[[5], [6], [7], [8]],
[[9], [10], [11], [12]],
@@ -3508,7 +3508,7 @@ x = [[[[1], [2], [3], [4]],
The output tensor has shape `[4, 2, 2, 1]` and value:
-```prettyprint
+```
x = [[[[1], [3]], [[9], [11]]],
[[[2], [4]], [[10], [12]]],
[[[5], [7]], [[13], [15]]],
@@ -3517,7 +3517,7 @@ x = [[[[1], [3]], [[9], [11]]],
(4) For the following input of shape `[2, 2, 4, 1]` and block_size of 2:
-```prettyprint
+```
x = [[[[1], [2], [3], [4]],
[[5], [6], [7], [8]]],
[[[9], [10], [11], [12]],
@@ -3526,7 +3526,7 @@ x = [[[[1], [2], [3], [4]],
The output tensor has shape `[8, 1, 2, 1]` and value:
-```prettyprint
+```
x = [[[[1], [3]]], [[[9], [11]]], [[[2], [4]]], [[[10], [12]]],
[[[5], [7]]], [[[13], [15]]], [[[6], [8]]], [[[14], [16]]]]
```
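To see example (1) above end to end, here is a hedged sketch of a `SpaceToBatch`/`BatchToSpace` round trip (assuming the TensorFlow 1.x Python API, where the ops are exposed as `tf.space_to_batch` and `tf.batch_to_space`):

```
import tensorflow as tf

x = tf.constant([[[[1], [2]], [[3], [4]]]])                        # shape [1, 2, 2, 1]
y = tf.space_to_batch(x, paddings=[[0, 0], [0, 0]], block_size=2)  # shape [4, 1, 1, 1]
z = tf.batch_to_space(y, crops=[[0, 0], [0, 0]], block_size=2)     # back to the input

with tf.Session() as sess:
  print(sess.run(tf.shape(y)))  # [4 1 1 1]
  print(sess.run(tf.shape(z)))  # [1 2 2 1]
```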
@@ -3612,26 +3612,26 @@ Some examples:
(1) For the following input of shape `[4, 1, 1, 1]`, `block_shape = [2, 2]`, and
`crops = [[0, 0], [0, 0]]`:
-```prettyprint
+```
[[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
```
The output tensor has shape `[1, 2, 2, 1]` and value:
-```prettyprint
+```
x = [[[[1], [2]], [[3], [4]]]]
```
(2) For the following input of shape `[4, 1, 1, 3]`, `block_shape = [2, 2]`, and
`crops = [[0, 0], [0, 0]]`:
-```prettyprint
+```
[[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]
```
The output tensor has shape `[1, 2, 2, 3]` and value:
-```prettyprint
+```
x = [[[[1, 2, 3], [4, 5, 6]],
[[7, 8, 9], [10, 11, 12]]]]
```
@@ -3639,7 +3639,7 @@ x = [[[[1, 2, 3], [4, 5, 6]],
(3) For the following input of shape `[4, 2, 2, 1]`, `block_shape = [2, 2]`, and
`crops = [[0, 0], [0, 0]]`:
-```prettyprint
+```
x = [[[[1], [3]], [[9], [11]]],
[[[2], [4]], [[10], [12]]],
[[[5], [7]], [[13], [15]]],
@@ -3648,7 +3648,7 @@ x = [[[[1], [3]], [[9], [11]]],
The output tensor has shape `[1, 4, 4, 1]` and value:
-```prettyprint
+```
x = [[[1], [2], [3], [4]],
[[5], [6], [7], [8]],
[[9], [10], [11], [12]],
@@ -3658,7 +3658,7 @@ x = [[[1], [2], [3], [4]],
(4) For the following input of shape `[8, 1, 3, 1]`, `block_shape = [2, 2]`, and
`crops = [[0, 0], [2, 0]]`:
-```prettyprint
+```
x = [[[[0], [1], [3]]], [[[0], [9], [11]]],
[[[0], [2], [4]]], [[[0], [10], [12]]],
[[[0], [5], [7]]], [[[0], [13], [15]]],
@@ -3667,7 +3667,7 @@ x = [[[[0], [1], [3]]], [[[0], [9], [11]]],
The output tensor has shape `[2, 2, 4, 1]` and value:
-```prettyprint
+```
x = [[[[1], [2], [3], [4]],
[[5], [6], [7], [8]]],
[[[9], [10], [11], [12]],
@@ -3732,32 +3732,32 @@ Some examples:
(1) For the following input of shape `[4, 1, 1, 1]` and block_size of 2:
-```prettyprint
+```
[[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
```
The output tensor has shape `[1, 2, 2, 1]` and value:
-```prettyprint
+```
x = [[[[1], [2]], [[3], [4]]]]
```
(2) For the following input of shape `[4, 1, 1, 3]` and block_size of 2:
-```prettyprint
+```
[[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]
```
The output tensor has shape `[1, 2, 2, 3]` and value:
-```prettyprint
+```
x = [[[[1, 2, 3], [4, 5, 6]],
[[7, 8, 9], [10, 11, 12]]]]
```
(3) For the following input of shape `[4, 2, 2, 1]` and block_size of 2:
-```prettyprint
+```
x = [[[[1], [3]], [[9], [11]]],
[[[2], [4]], [[10], [12]]],
[[[5], [7]], [[13], [15]]],
@@ -3766,7 +3766,7 @@ x = [[[[1], [3]], [[9], [11]]],
The output tensor has shape `[1, 4, 4, 1]` and value:
-```prettyprint
+```
x = [[[1], [2], [3], [4]],
[[5], [6], [7], [8]],
[[9], [10], [11], [12]],
@@ -3775,14 +3775,14 @@ x = [[[1], [2], [3], [4]],
(4) For the following input of shape `[8, 1, 2, 1]` and block_size of 2:
-```prettyprint
+```
x = [[[[1], [3]]], [[[9], [11]]], [[[2], [4]]], [[[10], [12]]],
[[[5], [7]]], [[[13], [15]]], [[[6], [8]]], [[[14], [16]]]]
```
The output tensor has shape `[2, 2, 4, 1]` and value:
-```prettyprint
+```
x = [[[[1], [3]], [[5], [7]]],
[[[2], [4]], [[10], [12]]],
[[[5], [7]], [[13], [15]]],
@@ -3848,14 +3848,14 @@ purely convolutional models.
For example, given this input of shape `[1, 2, 2, 1]`, and block_size of 2:
-```prettyprint
+```
x = [[[[1], [2]],
[[3], [4]]]]
```
This operation will output a tensor of shape `[1, 1, 1, 4]`:
-```prettyprint
+```
[[[[1, 2, 3, 4]]]]
```
@@ -3866,7 +3866,7 @@ The output element shape is `[1, 1, 4]`.
For an input tensor with larger depth, here of shape `[1, 2, 2, 3]`, e.g.
-```prettyprint
+```
x = [[[[1, 2, 3], [4, 5, 6]],
[[7, 8, 9], [10, 11, 12]]]]
```
@@ -3874,13 +3874,13 @@ x = [[[[1, 2, 3], [4, 5, 6]],
This operation, for block_size of 2, will return the following tensor of shape
`[1, 1, 1, 12]`
-```prettyprint
+```
[[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]]
```
Similarly, for the following input of shape `[1 4 4 1]`, and a block size of 2:
-```prettyprint
+```
x = [[[[1], [2], [5], [6]],
[[3], [4], [7], [8]],
[[9], [10], [13], [14]],
@@ -3889,7 +3889,7 @@ x = [[[[1], [2], [5], [6]],
the operator will return the following tensor of shape `[1 2 2 4]`:
-```prettyprint
+```
x = [[[[1, 2, 3, 4],
[5, 6, 7, 8]],
[[9, 10, 11, 12],
@@ -3958,14 +3958,14 @@ purely convolutional models.
For example, given this input of shape `[1, 1, 1, 4]`, and a block size of 2:
-```prettyprint
+```
x = [[[[1, 2, 3, 4]]]]
```
This operation will output a tensor of shape `[1, 2, 2, 1]`:
-```prettyprint
+```
[[[[1], [2]],
[[3], [4]]]]
```
@@ -3977,14 +3977,14 @@ The output element shape is `[2, 2, 1]`.
For an input tensor with larger depth, here of shape `[1, 1, 1, 12]`, e.g.
-```prettyprint
+```
x = [[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]]
```
This operation, for block size of 2, will return the following tensor of shape
`[1, 2, 2, 3]`
-```prettyprint
+```
[[[[1, 2, 3], [4, 5, 6]],
[[7, 8, 9], [10, 11, 12]]]]
@@ -3992,7 +3992,7 @@ This operation, for block size of 2, will return the following tensor of shape
Similarly, for the following input of shape `[1 2 2 4]`, and a block size of 2:
-```prettyprint
+```
x = [[[[1, 2, 3, 4],
[5, 6, 7, 8]],
[[9, 10, 11, 12],
@@ -4001,7 +4001,7 @@ x = [[[[1, 2, 3, 4],
the operator will return the following tensor of shape `[1 4 4 1]`:
-```prettyprint
+```
x = [[ [1], [2], [5], [6]],
[ [3], [4], [7], [8]],
[ [9], [10], [13], [14]],
@@ -4775,7 +4775,7 @@ index. For example, say we want to insert 4 scattered elements in a rank-1
tensor with 8 elements.
-
+
In Python, this scatter operation would look like this:
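The Python block the docstring refers to lies outside this hunk; a hedged reconstruction for the rank-1 case (the concrete indices and values are illustrative, assuming `tf.scatter_nd` in the TensorFlow 1.x Python API):

```
import tensorflow as tf

indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
shape = tf.constant([8])
scatter = tf.scatter_nd(indices, updates, shape)

with tf.Session() as sess:
  print(sess.run(scatter))  # [ 0 11  0 10  9  0  0 12]
```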
@@ -4798,7 +4798,7 @@ example, if we wanted to insert two slices in the first dimension of a
rank-3 tensor with two matrices of new values.
-
+
In Python, this scatter operation would look like this:
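Again the referenced Python block is outside this hunk; a hedged sketch for the slice-insertion case (the `[4, 4, 4]` output shape and the update values are illustrative):

```
import tensorflow as tf

indices = tf.constant([[0], [2]])                 # insert at slices 0 and 2
updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6],
                        [7, 7, 7, 7], [8, 8, 8, 8]],
                       [[5, 5, 5, 5], [6, 6, 6, 6],
                        [7, 7, 7, 7], [8, 8, 8, 8]]])
shape = tf.constant([4, 4, 4])
scatter = tf.scatter_nd(indices, updates, shape)  # slices 1 and 3 stay all zeros

with tf.Session() as sess:
  print(sess.run(scatter))
```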
diff --git a/tensorflow/core/ops/data_flow_ops.cc b/tensorflow/core/ops/data_flow_ops.cc
index 8b9c92859d7..b34dd4ae90b 100644
--- a/tensorflow/core/ops/data_flow_ops.cc
+++ b/tensorflow/core/ops/data_flow_ops.cc
@@ -102,7 +102,7 @@ For example:
```
-
+
partitions: Any shape. Indices in the range `[0, num_partitions)`.
@@ -190,7 +190,7 @@ For example:
```
-
+
)doc");
diff --git a/tensorflow/core/ops/resource_variable_ops.cc b/tensorflow/core/ops/resource_variable_ops.cc
index c190b81dde3..c060aa6be91 100644
--- a/tensorflow/core/ops/resource_variable_ops.cc
+++ b/tensorflow/core/ops/resource_variable_ops.cc
@@ -295,7 +295,7 @@ the same location, their contributions add.
Requires `updates.shape = indices.shape + ref.shape[1:]`.
-
+
resource: Should be from a `Variable` node.
diff --git a/tensorflow/core/ops/state_ops.cc b/tensorflow/core/ops/state_ops.cc
index cfb3ea71411..0890d5fc7c7 100644
--- a/tensorflow/core/ops/state_ops.cc
+++ b/tensorflow/core/ops/state_ops.cc
@@ -288,7 +288,7 @@ for each value is undefined.
Requires `updates.shape = indices.shape + ref.shape[1:]`.
-
+
ref: Should be from a `Variable` node.
@@ -332,7 +332,7 @@ the same location, their contributions add.
Requires `updates.shape = indices.shape + ref.shape[1:]`.
-
+
ref: Should be from a `Variable` node.
@@ -376,7 +376,7 @@ the same location, their (negated) contributions add.
Requires `updates.shape = indices.shape + ref.shape[1:]`.
-
+
ref: Should be from a `Variable` node.
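These three hunks cover the `ScatterUpdate`/`ScatterAdd`/`ScatterSub` family; a minimal sketch for the additive case (assuming the TensorFlow 1.x Python API, where the op is exposed as `tf.scatter_add`):

```
import tensorflow as tf

ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
indices = tf.constant([4, 3, 1, 7])
updates = tf.constant([9, 10, 11, 12])  # shape = indices.shape + ref.shape[1:]

add = tf.scatter_add(ref, indices, updates)

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  print(sess.run(add))  # [ 1 13  3 14 14  6  7 20]
```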
diff --git a/tensorflow/docs_src/api_guides/python/contrib.integrate.md b/tensorflow/docs_src/api_guides/python/contrib.integrate.md
index e6b730b2035..e95b5a2e686 100644
--- a/tensorflow/docs_src/api_guides/python/contrib.integrate.md
+++ b/tensorflow/docs_src/api_guides/python/contrib.integrate.md
@@ -33,7 +33,7 @@ plt.plot(x, z)
```
-
+
## Ops
diff --git a/tensorflow/docs_src/extend/architecture.md b/tensorflow/docs_src/extend/architecture.md
index 42721eb488c..21816502ace 100644
--- a/tensorflow/docs_src/extend/architecture.md
+++ b/tensorflow/docs_src/extend/architecture.md
@@ -25,7 +25,7 @@ The TensorFlow runtime is a cross-platform library. Figure 1 illustrates its
general architecture. A C API separates user level code in different languages
from the core runtime.
-{: width="300"}
+{: width="300"}
**Figure 1**
@@ -57,7 +57,7 @@ Other tasks send updates to these parameters as they work on optimizing the
parameters. This particular division of labor between tasks is not required, but
it is common for distributed training.
-{: width="500"}
+{: width="500"}
**Figure 2**
@@ -91,7 +91,7 @@ In Figure 3, the client has built a graph that applies weights (w) to a
feature vector (x), adds a bias term (b) and saves the result in a variable
(s).
-{: width="700"}
+{: width="700"}
**Figure 3**
@@ -114,7 +114,7 @@ a step, it applies standard optimizations such as common subexpression
elimination and constant folding. It then coordinates execution of the
optimized subgraphs across a set of tasks.
-{: width="700"}
+{: width="700"}
**Figure 4**
@@ -123,7 +123,7 @@ Figure 5 shows a possible partition of our example graph. The distributed
master has grouped the model parameters in order to place them together on the
parameter server.
-{: width="700"}
+{: width="700"}
**Figure 5**
@@ -132,14 +132,14 @@ Where graph edges are cut by the partition, the distributed master inserts
send and receive nodes to pass information between the distributed tasks
(Figure 6).
-{: width="700"}
+{: width="700"}
**Figure 6**
The distributed master then ships the graph pieces to the distributed tasks.
-{: width="700"}
+{: width="700"}
**Figure 7**
@@ -181,7 +181,7 @@ We also have preliminary support for NVIDIA's NCCL library for multi-GPU
communication (see [`tf.contrib.nccl`](
https://www.tensorflow.org/code/tensorflow/contrib/nccl/python/ops/nccl_ops.py)).
-{: width="700"}
+{: width="700"}
**Figure 8**
diff --git a/tensorflow/docs_src/extend/estimators.md b/tensorflow/docs_src/extend/estimators.md
index 28f62e01ab0..c5444c59cab 100644
--- a/tensorflow/docs_src/extend/estimators.md
+++ b/tensorflow/docs_src/extend/estimators.md
@@ -72,7 +72,7 @@ for abalone:
The label to predict is the number of rings, as a proxy for abalone age.
- **[“Abalone
+ **[“Abalone
shell”](https://www.flickr.com/photos/thenickster/16641048623/) (by [Nicki Dugan
Pogue](https://www.flickr.com/photos/thenickster/), CC BY-SA 2.0)**
diff --git a/tensorflow/docs_src/get_started/embedding_viz.md b/tensorflow/docs_src/get_started/embedding_viz.md
index f512d5d809b..84245b11bea 100644
--- a/tensorflow/docs_src/get_started/embedding_viz.md
+++ b/tensorflow/docs_src/get_started/embedding_viz.md
@@ -21,7 +21,7 @@ interested in word embeddings,
gives a good introduction.
@@ -173,7 +173,7 @@ last data point in the bottom right:
Note in the example above that the last row doesn't have to be filled. For a
concrete example of a sprite, see
-[this sprite image](../images/mnist_10k_sprite.png) of 10,000 MNIST digits
+[this sprite image](https://www.tensorflow.org/images/mnist_10k_sprite.png) of 10,000 MNIST digits
(100x100).
Note: We currently support sprites up to 8192px X 8192px.
@@ -247,7 +247,7 @@ further analysis on their own with the "Isolate Points" button in the Inspector
pane on the right hand side.
-
+
*Selection of the nearest neighbors of “important” in a word embedding dataset.*
The combination of filtering with custom projection can be powerful. Below, we filtered
@@ -260,10 +260,10 @@ You can see that on the right side we have “ideas”, “science”, “perspe
-
+
-
+
@@ -284,4 +284,4 @@ projection) as a small file. The Projector can then be pointed to a set of one
or more of these files, producing the panel below. Other users can then walk
through a sequence of bookmarks.
-
+
diff --git a/tensorflow/docs_src/get_started/get_started.md b/tensorflow/docs_src/get_started/get_started.md
index 6bee7529d0a..b52adc3790a 100644
--- a/tensorflow/docs_src/get_started/get_started.md
+++ b/tensorflow/docs_src/get_started/get_started.md
@@ -123,7 +123,7 @@ TensorFlow provides a utility called TensorBoard that can display a picture of
the computational graph. Here is a screenshot showing how TensorBoard
visualizes the graph:
-
+
As it stands, this graph is not especially interesting because it always
produces a constant result. A graph can be parameterized to accept external
@@ -154,7 +154,7 @@ resulting in the output
In TensorBoard, the graph looks like this:
-
+
We can make the computational graph more complex by adding another operation.
For example,
@@ -170,7 +170,7 @@ produces the output
The preceding computational graph would look as follows in TensorBoard:
-
+
In machine learning we will typically want a model that can take arbitrary
inputs, such as the one above. To make the model trainable, we need to be able
@@ -336,7 +336,7 @@ program your loss will not be exactly the same, because the model is initialized
with random values.
This more complicated program can still be visualized in TensorBoard
-
+
## `tf.contrib.learn`
diff --git a/tensorflow/docs_src/get_started/graph_viz.md b/tensorflow/docs_src/get_started/graph_viz.md
index b69103299ea..06ec427b757 100644
--- a/tensorflow/docs_src/get_started/graph_viz.md
+++ b/tensorflow/docs_src/get_started/graph_viz.md
@@ -2,7 +2,7 @@
TensorFlow computation graphs are powerful but complicated. The graph visualization can help you understand and debug them. Here's an example of the visualization at work.
-
+
*Visualization of a TensorFlow graph.*
To see your own graph, run TensorBoard pointing it to the log directory of the job, click on the graph tab on the top pane and select the appropriate run using the menu at the upper left corner. For in depth information on how to run TensorBoard and make sure you are logging all the necessary information, see @{$summaries_and_tensorboard$TensorBoard: Visualizing Learning}.
@@ -43,10 +43,10 @@ expanded states.
-
+
-
+
@@ -87,10 +87,10 @@ and the auxiliary area.
-
+
-
+
@@ -114,10 +114,10 @@ specific set of nodes.
-
+
-
+
@@ -135,15 +135,15 @@ for constants and summary nodes. To summarize, here's a table of node symbols:
Symbol | Meaning
--- | ---
- | *High-level* node representing a name scope. Double-click to expand a high-level node.
- | Sequence of numbered nodes that are not connected to each other.
- | Sequence of numbered nodes that are connected to each other.
- | An individual operation node.
- | A constant.
- | A summary node.
- | Edge showing the data flow between operations.
- | Edge showing the control dependency between operations.
- | A reference edge showing that the outgoing operation node can mutate the incoming tensor.
+ | *High-level* node representing a name scope. Double-click to expand a high-level node.
+ | Sequence of numbered nodes that are not connected to each other.
+ | Sequence of numbered nodes that are connected to each other.
+ | An individual operation node.
+ | A constant.
+ | A summary node.
+ | Edge showing the data flow between operations.
+ | Edge showing the control dependency between operations.
+ | A reference edge showing that the outgoing operation node can mutate the incoming tensor.
## Interaction {#interaction}
@@ -161,10 +161,10 @@ right corner of the visualization.
-
+
-
+
@@ -207,10 +207,10 @@ The images below give an illustration for a piece of a real-life graph.
-
+
-
+
@@ -233,7 +233,7 @@ The images below show the CIFAR-10 model with tensor shape information:
-
+
@@ -303,13 +303,13 @@ tensor output sizes.
-
+
-
+
-
+
diff --git a/tensorflow/docs_src/get_started/mnist/beginners.md b/tensorflow/docs_src/get_started/mnist/beginners.md
index 2da2c19ea60..624d9164748 100644
--- a/tensorflow/docs_src/get_started/mnist/beginners.md
+++ b/tensorflow/docs_src/get_started/mnist/beginners.md
@@ -15,7 +15,7 @@ MNIST is a simple computer vision dataset. It consists of images of handwritten
digits like these:
-
+
It also includes labels for each image, telling us which digit it is. For
@@ -88,7 +88,7 @@ Each image is 28 pixels by 28 pixels. We can interpret this as a big array of
numbers:
-
+
We can flatten this array into a vector of 28x28 = 784 numbers. It doesn't
@@ -110,7 +110,7 @@ Each entry in the tensor is a pixel intensity between 0 and 1, for a particular
pixel in a particular image.
-
+
Each image in MNIST has a corresponding label, a number between 0 and 9
@@ -124,7 +124,7 @@ vector which is 1 in the \\(n\\)th dimension. For example, 3 would be
`[55000, 10]` array of floats.
-
+
We're now ready to actually make our model!
@@ -157,7 +157,7 @@ classes. Red represents negative weights, while blue represents positive
weights.
-
+
We also add some extra evidence called a bias. Basically, we want to be able
@@ -202,13 +202,13 @@ although with a lot more \\(x\\)s. For each output, we compute a weighted sum of
the \\(x\\)s, add a bias, and then apply softmax.
-
+
If we write that out as equations, we get:
-
@@ -217,7 +217,7 @@ and vector addition. This is helpful for computational efficiency. (It's also
a useful way to think.)
-
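In code, the matrix form described above corresponds to a single line of the tutorial's model; a sketch (variable names assumed, following the tutorial's setup of 784 flattened pixels and 10 classes):

```
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 784])  # flattened 28x28 images
W = tf.Variable(tf.zeros([784, 10]))         # weights
b = tf.Variable(tf.zeros([10]))              # bias
y = tf.nn.softmax(tf.matmul(x, W) + b)       # evidence -> probabilities
```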
diff --git a/tensorflow/docs_src/get_started/mnist/mechanics.md b/tensorflow/docs_src/get_started/mnist/mechanics.md
index b55a5c19ff9..48d9a395f28 100644
--- a/tensorflow/docs_src/get_started/mnist/mechanics.md
+++ b/tensorflow/docs_src/get_started/mnist/mechanics.md
@@ -34,7 +34,7 @@ MNIST is a classic problem in machine learning. The problem is to look at
greyscale 28x28 pixel images of handwritten digits and determine which digit
the image represents, for all the digits from zero to nine.
-
+
For more information, refer to [Yann LeCun's MNIST page](http://yann.lecun.com/exdb/mnist/)
or [Chris Olah's visualizations of MNIST](http://colah.github.io/posts/2014-10-Visualizing-MNIST/).
@@ -90,7 +90,7 @@ loss.
and apply gradients.
-
+
### Inference
@@ -384,7 +384,7 @@ summary_writer.add_summary(summary_str, step)
When the events files are written, TensorBoard may be run against the training
folder to display the values from the summaries.
-
+
**NOTE**: For more info about how to build and run Tensorboard, please see the accompanying tutorial @{$summaries_and_tensorboard$Tensorboard: Visualizing Learning}.
diff --git a/tensorflow/docs_src/get_started/monitors.md b/tensorflow/docs_src/get_started/monitors.md
index 7db88c89812..cb4ef70eebf 100644
--- a/tensorflow/docs_src/get_started/monitors.md
+++ b/tensorflow/docs_src/get_started/monitors.md
@@ -401,6 +401,6 @@ Then navigate to `http://0.0.0.0:`*``* in your browser, where
If you click on the accuracy field, you'll see an image like the following,
which shows accuracy plotted against step count:
-
+
For more on using TensorBoard, see @{$summaries_and_tensorboard$TensorBoard: Visualizing Learning} and @{$graph_viz$TensorBoard: Graph Visualization}.
diff --git a/tensorflow/docs_src/get_started/summaries_and_tensorboard.md b/tensorflow/docs_src/get_started/summaries_and_tensorboard.md
index 6e06c9e41e4..45d43e7a6e7 100644
--- a/tensorflow/docs_src/get_started/summaries_and_tensorboard.md
+++ b/tensorflow/docs_src/get_started/summaries_and_tensorboard.md
@@ -8,7 +8,7 @@ your TensorFlow graph, plot quantitative metrics about the execution of your
graph, and show additional data like images that pass through it. When
TensorBoard is fully configured, it looks like this:
-
+
XLA comes with several optimizations and analyses that are target-independent,
diff --git a/tensorflow/docs_src/performance/xla/jit.md b/tensorflow/docs_src/performance/xla/jit.md
index 4d2a643b7f8..d4dc3e57c8f 100644
--- a/tensorflow/docs_src/performance/xla/jit.md
+++ b/tensorflow/docs_src/performance/xla/jit.md
@@ -124,7 +124,7 @@ open the timeline file created when the script finishes: `timeline.ctf.json`.
The rendered timeline should look similar to the picture below with multiple
green boxes labeled `MatMul`, possibly across multiple CPUs.
-
+
### Step #3 Run with XLA
@@ -139,7 +139,7 @@ TF_XLA_FLAGS=--xla_generate_hlo_graph=.* python mnist_softmax_xla.py
Open the timeline file created (`timeline.ctf.json`). The rendered timeline
should look similar to the picture below with one long bar labeled `_XlaLaunch`.
-
+
To understand what is happening in `_XlaLaunch`, look at the console output for
@@ -165,5 +165,5 @@ dot -Tpng hlo_graph_80.dot -o hlo_graph_80.png
The result will look like the following:
## ConvertElementType
@@ -707,7 +707,7 @@ are all 0. Figure below shows examples of different `edge_padding` and
`interior_padding` values for a two dimensional array.
-
+
## Reduce
@@ -781,13 +781,13 @@ Here's an example of reducing a 2D array (matrix). The shape has rank 2,
dimension 0 of size 2 and dimension 1 of size 3:
-
+
Results of reducing dimensions 0 or 1 with an "add" function:
-
+
Note that both reduction results are 1D arrays. The diagram shows one as column
@@ -798,7 +798,7 @@ size 4, dimension 1 of size 2 and dimension 2 of size 3. For simplicity, the
values 1 to 6 are replicated across dimension 0.
-
+
Similarly to the 2D example, we can reduce just one dimension. If we reduce
@@ -890,7 +890,7 @@ builder.ReduceWindow(
```
-
+
Stride of 1 in a dimension specifies that the position of a window in the
@@ -902,7 +902,7 @@ are the same as though the input came in with the dimensions it has after
padding.
-
+
The evaluation order of the reduction function is arbitrary and may be
@@ -1144,7 +1144,7 @@ addition `scatter` function produces the output element of value 8 (2 + 6).
The evaluation order of the `scatter` function is arbitrary and may be
@@ -1482,5 +1482,5 @@ while (result(0) < 1000) {
```
-
+
diff --git a/tensorflow/docs_src/programmers_guide/debugger.md b/tensorflow/docs_src/programmers_guide/debugger.md
index 7801fadb475..78819969b71 100644
--- a/tensorflow/docs_src/programmers_guide/debugger.md
+++ b/tensorflow/docs_src/programmers_guide/debugger.md
@@ -24,7 +24,7 @@ This code trains a simple NN for MNIST digit image recognition. Notice that the
accuracy increases slightly after the first training step, but then gets stuck
at a low (near-chance) level:
-
+
Scratching your head, you suspect that certain nodes in the training graph
generated bad numeric values such as `inf`s and `nan`s. The computation-graph
@@ -89,7 +89,7 @@ The debug wrapper session will prompt you when it is about to execute the first
`run()` call, with information regarding the fetched tensor and feed
dictionaries displayed on the screen.
-
+
This is what we refer to as the *run-start UI*. If the screen size is
too small to display the content of the message in its entirety, you can resize
@@ -108,7 +108,7 @@ intermediate tensors from the run. (These tensors can also be obtained by
running the command `lt` after you executed `run`.) This is called the
**run-end UI**:
-
+
### tfdbg CLI Frequently-Used Commands
@@ -181,7 +181,7 @@ screen with a red-colored title line indicating **tfdbg** stopped immediately
after a `run()` call generated intermediate tensors that passed the specified
filter `has_inf_or_nan`:
-
+
As the screen display indicates, the `has_inf_or_nan` filter is first passed
during the fourth `run()` call: an [Adam optimizer](https://arxiv.org/abs/1412.6980)
@@ -220,7 +220,7 @@ item on the top or entering the equivalent command:
tfdbg> ni cross_entropy/Log
```
-
+
You can see that this node has the op type `Log`
and that its input is the node `softmax/Softmax`. Run the following command to
@@ -263,7 +263,7 @@ simply click the underlined line numbers in the stack trace output of the
`ni -t ` commands, or use the `ps` (or `print_source`) command such as:
`ps /path/to/source.py`. See the screenshot below for an example of `ps` output:
-
+
Apply a value clipping on the input to @{tf.log}
to resolve this problem:
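The clipping fix referenced here is not part of the hunk; it amounts to something like the sketch below (the clipping bounds are illustrative):

```
# keep the softmax output away from 0 before taking the log,
# so the cross-entropy term cannot produce inf/nan
diff = y_ * tf.log(tf.clip_by_value(y, 1e-8, 1.0))
```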
diff --git a/tensorflow/docs_src/programmers_guide/reading_data.md b/tensorflow/docs_src/programmers_guide/reading_data.md
index c0a50a15184..088724337e4 100644
--- a/tensorflow/docs_src/programmers_guide/reading_data.md
+++ b/tensorflow/docs_src/programmers_guide/reading_data.md
@@ -309,7 +309,7 @@ operations, so that our training loop can dequeue examples from the example
queue.
-
+
The helpers in `tf.train` that create these queues and enqueuing operations add
diff --git a/tensorflow/docs_src/programmers_guide/threading_and_queues.md b/tensorflow/docs_src/programmers_guide/threading_and_queues.md
index 1999cf69410..835e8060466 100644
--- a/tensorflow/docs_src/programmers_guide/threading_and_queues.md
+++ b/tensorflow/docs_src/programmers_guide/threading_and_queues.md
@@ -14,7 +14,7 @@ that takes an item off the queue, adds one to that item, and puts it back on the
end of the queue. Slowly, the numbers on the queue increase.
-
+
`Enqueue`, `EnqueueMany`, and `Dequeue` are special nodes. They take a pointer
diff --git a/tensorflow/docs_src/tutorials/deep_cnn.md b/tensorflow/docs_src/tutorials/deep_cnn.md
index bcdeb90ebec..d6a136fee47 100644
--- a/tensorflow/docs_src/tutorials/deep_cnn.md
+++ b/tensorflow/docs_src/tutorials/deep_cnn.md
@@ -141,7 +141,7 @@ so that we may visualize them in @{$summaries_and_tensorboard$TensorBoard}.
This is a good practice to verify that inputs are built correctly.
-
+
Reading images from disk and distorting them can use a non-trivial amount of
@@ -170,7 +170,7 @@ Layer Name | Description
Here is a graph generated from TensorBoard describing the inference operation:
-
+
> **EXERCISE**: The output of `inference` are un-normalized logits. Try editing
@@ -205,7 +205,7 @@ loss and all these weight decay terms, as returned by the `loss()` function.
We visualize it in TensorBoard with a @{tf.summary.scalar}:
-
+
We train the model using standard
[gradient descent](https://en.wikipedia.org/wiki/Gradient_descent)
@@ -214,7 +214,7 @@ with a learning rate that
@{tf.train.exponential_decay$exponentially decays}
over time.
-
+
The `train()` function adds the operations needed to minimize the objective by
calculating the gradient and updating the learned variables (see
@@ -295,8 +295,8 @@ For instance, we can watch how the distribution of activations and degree of
sparsity in `local3` features evolve during training:
-
-
+
+
Individual loss functions, as well as the total loss, are particularly
@@ -378,7 +378,7 @@ processing a batch of data.
Here is a diagram of this model:
-
+
Note that each GPU computes inference as well as the gradients for a unique
diff --git a/tensorflow/docs_src/tutorials/image_recognition.md b/tensorflow/docs_src/tutorials/image_recognition.md
index bf03427fc5b..88ae451cd53 100644
--- a/tensorflow/docs_src/tutorials/image_recognition.md
+++ b/tensorflow/docs_src/tutorials/image_recognition.md
@@ -36,7 +36,7 @@ images into [1000 classes], like "Zebra", "Dalmatian", and "Dishwasher".
For example, here are the results from [AlexNet] classifying some images:
-
+
To compare models, we examine how often the model fails to predict the
@@ -75,7 +75,7 @@ Start by cloning the [TensorFlow models repo](https://github.com/tensorflow/mode
The above command will classify a supplied image of a panda bear.
-
+
If the model runs correctly, the script will produce the following output:
@@ -137,7 +137,7 @@ score of 0.8.
-
+
Next, try it out on your own images by supplying the --image= argument, e.g.
diff --git a/tensorflow/docs_src/tutorials/image_retraining.md b/tensorflow/docs_src/tutorials/image_retraining.md
index c42bb8a023e..a65b5845cf5 100644
--- a/tensorflow/docs_src/tutorials/image_retraining.md
+++ b/tensorflow/docs_src/tutorials/image_retraining.md
@@ -18,7 +18,7 @@ to help control the training process.
## Training on Flowers
-
+
[Image by Kelly Sikkema](https://www.flickr.com/photos/95072945@N05/9922116524/)
Before you start any training, you'll need a set of images to teach the network
@@ -174,7 +174,7 @@ you do that and pass the root folder of the subdirectories as the argument to
Here's what the folder structure of the flowers archive looks like, to give you
an example of the kind of layout the script is looking for:
-
+
In practice it may take some work to get the accuracy you want. I'll try to
guide you through some of the common problems you might encounter below.
diff --git a/tensorflow/docs_src/tutorials/layers.md b/tensorflow/docs_src/tutorials/layers.md
index 0429c7f3463..aa8e2cc8399 100644
--- a/tensorflow/docs_src/tutorials/layers.md
+++ b/tensorflow/docs_src/tutorials/layers.md
@@ -7,7 +7,7 @@ activation functions, and applying dropout regularization. In this tutorial,
you'll learn how to use `layers` to build a convolutional neural network model
to recognize the handwritten digits in the MNIST data set.
-
+
**The [MNIST dataset](http://yann.lecun.com/exdb/mnist/) comprises 60,000
training examples and 10,000 test examples of the handwritten digits 0–9,
diff --git a/tensorflow/docs_src/tutorials/mandelbrot.md b/tensorflow/docs_src/tutorials/mandelbrot.md
index 7d8abbdcba6..1c0a548129c 100755
--- a/tensorflow/docs_src/tutorials/mandelbrot.md
+++ b/tensorflow/docs_src/tutorials/mandelbrot.md
@@ -109,7 +109,7 @@ Let's see what we've got.
DisplayFractal(ns.eval())
```
-
+
Not bad!
diff --git a/tensorflow/docs_src/tutorials/pdes.md b/tensorflow/docs_src/tutorials/pdes.md
index ec6915074ba..425e8d7084e 100755
--- a/tensorflow/docs_src/tutorials/pdes.md
+++ b/tensorflow/docs_src/tutorials/pdes.md
@@ -93,7 +93,7 @@ for n in range(40):
DisplayArray(u_init, rng=[-0.1, 0.1])
```
-
+
Now let's specify the details of the differential equation.
diff --git a/tensorflow/docs_src/tutorials/seq2seq.md b/tensorflow/docs_src/tutorials/seq2seq.md
index a3db3e51cfd..6ffe3e8b037 100644
--- a/tensorflow/docs_src/tutorials/seq2seq.md
+++ b/tensorflow/docs_src/tutorials/seq2seq.md
@@ -40,7 +40,7 @@ networks (RNNs): an *encoder* that processes the input and a *decoder* that
generates the output. This basic architecture is depicted below.
-
+
Each box in the picture above represents a cell of the RNN, most commonly
@@ -62,7 +62,7 @@ decoding step. A multi-layer sequence-to-sequence network with LSTM cells and
attention mechanism in the decoder looks like this.
-
+
## TensorFlow seq2seq library
diff --git a/tensorflow/docs_src/tutorials/wide_and_deep.md b/tensorflow/docs_src/tutorials/wide_and_deep.md
index bae934b3f4c..fda78f47c4a 100644
--- a/tensorflow/docs_src/tutorials/wide_and_deep.md
+++ b/tensorflow/docs_src/tutorials/wide_and_deep.md
@@ -17,8 +17,7 @@ large-scale regression and classification problems with sparse input features
you're interested in learning more about how Wide & Deep Learning works, please
check out our [research paper](http://arxiv.org/abs/1606.07792).
-![Wide & Deep Spectrum of Models]
-(../images/wide_n_deep.svg "Wide & Deep")
+
The figure above shows a comparison of a wide model (logistic regression with
sparse features and transformations), a deep model (feed-forward neural network
diff --git a/tensorflow/docs_src/tutorials/word2vec.md b/tensorflow/docs_src/tutorials/word2vec.md
index 1b25cb9e89f..348e069ed6d 100644
--- a/tensorflow/docs_src/tutorials/word2vec.md
+++ b/tensorflow/docs_src/tutorials/word2vec.md
@@ -51,7 +51,7 @@ means that we may need more data in order to successfully train statistical
models. Using vector representations can overcome some of these obstacles.
-
+
[Vector space models](https://en.wikipedia.org/wiki/Vector_space_model) (VSMs)
@@ -125,7 +125,7 @@ probability using the score for all other \\(V\\) words \\(w'\\) in the current
context \\(h\\), *at every training step*.
-
+
On the other hand, for feature learning in word2vec we do not need a full
@@ -136,7 +136,7 @@ same context. We illustrate this below for a CBOW model. For skip-gram the
direction is simply inverted.
-
+
Mathematically, the objective (for each example) is to maximize
@@ -233,7 +233,7 @@ below (see also for example
[Mikolov et al., 2013](http://www.aclweb.org/anthology/N13-1090)).
-
+
This explains why these vectors are also useful as features for many canonical
@@ -335,7 +335,7 @@ After training has finished we can visualize the learned embeddings using
t-SNE.
-
+
Et voila! As expected, words that are similar end up clustering nearby each
diff --git a/tensorflow/go/op/wrappers.go b/tensorflow/go/op/wrappers.go
index 8a34791fffe..eff67194671 100644
--- a/tensorflow/go/op/wrappers.go
+++ b/tensorflow/go/op/wrappers.go
@@ -57,7 +57,7 @@ func makeOutputList(op *tf.Operation, start int, output string) ([]tf.Output, in
// Requires `updates.shape = indices.shape + ref.shape[1:]`.
//
//
//
// In Python, this scatter operation would look like this:
@@ -3184,7 +3184,7 @@ func SpaceToDepth(scope *Scope, input tf.Output, block_size int64) (output tf.Ou
// rank-3 tensor with two matrices of new values.
//
//
-//
+//
//
//
// In Python, this scatter operation would look like this:
@@ -4940,7 +4940,7 @@ func TensorArrayGatherV2(scope *Scope, handle tf.Output, indices tf.Output, flow
// ```
//
//
-//
+//
//
func DynamicStitch(scope *Scope, indices []tf.Output, data []tf.Output) (merged tf.Output) {
if scope.Err() != nil {
@@ -13758,7 +13758,7 @@ func GatherValidateIndices(value bool) GatherAttr {
// raising an error.
//
//