Fix broken links to images, make all image links absolute.

Fixes #8064, fixes #7685.

(after docs republish)
Change: 154614227
Martin Wicke 2017-04-28 21:26:21 -08:00 committed by TensorFlower Gardener
parent 2d264f38fd
commit 1d679a0476
32 changed files with 195 additions and 196 deletions
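The change is mechanical: relative image paths (`../images/…`, `../../images/…`, `../../../images/…`) are rewritten to absolute `https://www.tensorflow.org/images/…` URLs. A minimal sketch of that rewrite — the helper name and regex are illustrative, not part of this commit:

```python
import re

SITE = "https://www.tensorflow.org"

def absolutize_image_links(text):
    # Collapse any run of "../" segments before "images/" into the
    # absolute site URL; matches both <img src="..."> and ![...](...) forms.
    pattern = re.compile(r'(?:\.\./)+images/')
    return pattern.sub(SITE + "/images/", text)

print(absolutize_image_links('<img src="../../../images/Gather.png" alt>'))
# -> <img src="https://www.tensorflow.org/images/Gather.png" alt>
```

Applied across a docs tree, a substitution like this reproduces the bulk of the edits in the diff.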

@@ -209,7 +209,7 @@ The input tensors are all required to have size 1 in the first dimension.
 For example:
-```prettyprint
+```
 # 'x' is [[1, 4]]
 # 'y' is [[2, 5]]
 # 'z' is [[3, 6]]
@@ -277,7 +277,7 @@ Etc.
 For example:
-```prettyprint
+```
 # 'x' is [1, 4]
 # 'y' is [2, 5]
 # 'z' is [3, 6]
@@ -432,7 +432,7 @@ Computes offsets of concat inputs within its output.
 For example:
-```prettyprint
+```
 # 'x' is [2, 2, 7]
 # 'y' is [2, 3, 7]
 # 'z' is [2, 5, 7]
@@ -670,7 +670,7 @@ rank 2k with dimensions [D1,..., Dk, D1,..., Dk] where:
 For example:
-```prettyprint
+```
 # 'diagonal' is [1, 2, 3, 4]
 tf.diag(diagonal) ==> [[1, 0, 0, 0]
                        [0, 2, 0, 0]
@@ -722,7 +722,7 @@ tensor of rank `k` with dimensions `[D1,..., Dk]` where:
 For example:
-```prettyprint
+```
 # 'input' is [[1, 0, 0, 0]
               [0, 2, 0, 0]
               [0, 0, 3, 0]
@@ -768,7 +768,7 @@ tensor of rank `k+1` with dimensions [I, J, K, ..., N, N]` where:
 For example:
-```prettyprint
+```
 # 'diagonal' is [[1, 2, 3, 4], [5, 6, 7, 8]]
 and diagonal.shape = (2, 4)
@@ -880,7 +880,7 @@ The input must be at least a matrix.
 For example:
-```prettyprint
+```
 # 'input' is [[[1, 0, 0, 0]
                [0, 2, 0, 0]
                [0, 0, 3, 0]
@@ -927,7 +927,7 @@ The indicator function
 For example:
-```prettyprint
+```
 # if 'input' is [[ 0, 1, 2, 3]
                  [-1, 0, 1, 2]
                  [-2, -1, 0, 1]
@@ -946,7 +946,7 @@ tf.matrix_band_part(input, 2, 1) ==> [[ 0, 1, 0, 0]
 Useful special cases:
-```prettyprint
+```
 tf.matrix_band_part(input, 0, -1) ==> Upper triangular part.
 tf.matrix_band_part(input, -1, 0) ==> Lower triangular part.
 tf.matrix_band_part(input, 0, 0) ==> Diagonal.
@@ -998,7 +998,7 @@ of `tensor` must equal the number of elements in `dims`. In other words:
 For example:
-```prettyprint
+```
 # tensor 't' is [[[[ 0, 1, 2, 3],
 #                  [ 4, 5, 6, 7],
 #                  [ 8, 9, 10, 11]],
@@ -1074,7 +1074,7 @@ once, a InvalidArgument error is raised.
 For example:
-```prettyprint
+```
 # tensor 't' is [[[[ 0, 1, 2, 3],
 #                  [ 4, 5, 6, 7],
 #                  [ 8, 9, 10, 11]],
@@ -1245,7 +1245,7 @@ This operation creates a tensor of shape `dims` and fills it with `value`.
 For example:
-```prettyprint
+```
 # Output tensor has shape [2, 3].
 fill([2, 3], 9) ==> [[9, 9, 9]
                      [9, 9, 9]]
@@ -1354,7 +1354,7 @@ out-of-bound indices result in safe but unspecified behavior, which may include
 raising an error.
 <div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../../images/Gather.png" alt>
+<img style="width:100%" src="https://www.tensorflow.org/images/Gather.png" alt>
 </div>
 )doc");
@@ -1610,7 +1610,7 @@ implied by `shape` must be the same as the number of elements in `tensor`.
 For example:
-```prettyprint
+```
 # tensor 't' is [1, 2, 3, 4, 5, 6, 7, 8, 9]
 # tensor 't' has shape [9]
 reshape(t, [3, 3]) ==> [[1, 2, 3],
@@ -1697,7 +1697,7 @@ The values must include 0. There can be no duplicate values or negative values.
 For example:
-```prettyprint
+```
 # tensor `x` is [3, 4, 0, 2, 1]
 invert_permutation(x) ==> [2, 4, 3, 0, 1]
 ```
@@ -1802,7 +1802,7 @@ in the unique output `y`. In other words:
 For example:
-```prettyprint
+```
 # tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8]
 y, idx = unique(x)
 y ==> [1, 2, 4, 7, 8]
@@ -1842,7 +1842,7 @@ contains the count of each element of `y` in `x`. In other words:
 For example:
-```prettyprint
+```
 # tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8]
 y, idx, count = unique_with_counts(x)
 y ==> [1, 2, 4, 7, 8]
@@ -1887,7 +1887,7 @@ This operation returns a 1-D integer tensor representing the shape of `input`.
 For example:
-```prettyprint
+```
 # 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
 shape(t) ==> [2, 2, 3]
 ```
@@ -1968,7 +1968,7 @@ slice `i`, with the first `seq_lengths[i]` slices along dimension
 For example:
-```prettyprint
+```
 # Given this:
 batch_dim = 0
 seq_dim = 1
@@ -1990,7 +1990,7 @@ output[3, 2:, :, ...] = input[3, 2:, :, ...]
 In contrast, if:
-```prettyprint
+```
 # Given this:
 batch_dim = 2
 seq_dim = 0
@@ -2031,7 +2031,7 @@ This operation returns an integer representing the rank of `input`.
 For example:
-```prettyprint
+```
 # 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
 # shape of tensor 't' is [2, 2, 3]
 rank(t) ==> 3
@@ -2057,7 +2057,7 @@ This operation returns an integer representing the number of elements in
 For example:
-```prettyprint
+```
 # 't' is [[[1, 1,, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]]
 size(t) ==> 12
 ```
@@ -2290,7 +2290,7 @@ encoding is best understand by considering a non-trivial example. In
 particular,
 `foo[1, 2:4, None, ..., :-3:-1, :]` will be encoded as
-```prettyprint
+```
 begin = [1, 2, x, x, 0, x] # x denotes don't care (usually 0)
 end = [2, 4, x, x, -3, x]
 strides = [1, 1, x, x, -1, 1]
@@ -2512,7 +2512,7 @@ the output tensor can vary depending on how many true values there are in
 For example:
-```prettyprint
+```
 # 'input' tensor is [[True, False]
 #                    [True, False]]
 # 'input' has two true values, so output has two coordinates.
@@ -2616,7 +2616,7 @@ The padded size of each dimension D of the output is:
 For example:
-```prettyprint
+```
 # 't' is [[1, 1], [2, 2]]
 # 'paddings' is [[1, 1], [2, 2]]
 # rank of 't' is 2
@@ -2655,7 +2655,7 @@ The padded size of each dimension D of the output is:
 For example:
-```prettyprint
+```
 # 't' is [[1, 2, 3], [4, 5, 6]].
 # 'paddings' is [[1, 1]], [2, 2]].
 # 'mode' is SYMMETRIC.
@@ -2751,7 +2751,7 @@ The folded size of each dimension D of the output is:
 For example:
-```prettyprint
+```
 # 't' is [[1, 2, 3], [4, 5, 6], [7, 8, 9]].
 # 'paddings' is [[0, 1]], [0, 1]].
 # 'mode' is SYMMETRIC.
@@ -2927,7 +2927,7 @@ which will make the shape `[1, height, width, channels]`.
 Other examples:
-```prettyprint
+```
 # 't' is a tensor of shape [2]
 shape(expand_dims(t, 0)) ==> [1, 2]
 shape(expand_dims(t, 1)) ==> [2, 1]
@@ -3029,14 +3029,14 @@ dimensions, you can remove specific size 1 dimensions by specifying
 For example:
-```prettyprint
+```
 # 't' is a tensor of shape [1, 2, 1, 3, 1, 1]
 shape(squeeze(t)) ==> [2, 3]
 ```
 Or, to remove specific size 1 dimensions:
-```prettyprint
+```
 # 't' is a tensor of shape [1, 2, 1, 3, 1, 1]
 shape(squeeze(t, [2, 4])) ==> [1, 2, 3, 1]
 ```
@@ -3079,14 +3079,14 @@ position of each `out` element in `x`. In other words:
 For example, given this input:
-```prettyprint
+```
 x = [1, 2, 3, 4, 5, 6]
 y = [1, 3, 5]
 ```
 This operation would return:
-```prettyprint
+```
 out ==> [2, 4, 6]
 idx ==> [1, 3, 5]
 ```
@@ -3345,34 +3345,34 @@ Some examples:
 (1) For the following input of shape `[1, 2, 2, 1]`, `block_shape = [2, 2]`, and
 `paddings = [[0, 0], [0, 0]]`:
-```prettyprint
+```
 x = [[[[1], [2]], [[3], [4]]]]
 ```
 The output tensor has shape `[4, 1, 1, 1]` and value:
-```prettyprint
+```
 [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
 ```
 (2) For the following input of shape `[1, 2, 2, 3]`, `block_shape = [2, 2]`, and
 `paddings = [[0, 0], [0, 0]]`:
-```prettyprint
+```
 x = [[[[1, 2, 3], [4, 5, 6]],
       [[7, 8, 9], [10, 11, 12]]]]
 ```
 The output tensor has shape `[4, 1, 1, 3]` and value:
-```prettyprint
+```
 [[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]
 ```
 (3) For the following input of shape `[1, 4, 4, 1]`, `block_shape = [2, 2]`, and
 `paddings = [[0, 0], [0, 0]]`:
-```prettyprint
+```
 x = [[[[1], [2], [3], [4]],
       [[5], [6], [7], [8]],
       [[9], [10], [11], [12]],
@@ -3381,7 +3381,7 @@ x = [[[[1], [2], [3], [4]],
 The output tensor has shape `[4, 2, 2, 1]` and value:
-```prettyprint
+```
 x = [[[[1], [3]], [[9], [11]]],
      [[[2], [4]], [[10], [12]]],
      [[[5], [7]], [[13], [15]]],
@@ -3391,7 +3391,7 @@ x = [[[[1], [3]], [[9], [11]]],
 (4) For the following input of shape `[2, 2, 4, 1]`, block_shape = `[2, 2]`, and
 paddings = `[[0, 0], [2, 0]]`:
-```prettyprint
+```
 x = [[[[1], [2], [3], [4]],
       [[5], [6], [7], [8]]],
      [[[9], [10], [11], [12]],
@@ -3400,7 +3400,7 @@ x = [[[[1], [2], [3], [4]],
 The output tensor has shape `[8, 1, 3, 1]` and value:
-```prettyprint
+```
 x = [[[[0], [1], [3]]], [[[0], [9], [11]]],
      [[[0], [2], [4]]], [[[0], [10], [12]]],
     [[[0], [5], [7]]], [[[0], [13], [15]]],
@@ -3474,32 +3474,32 @@ Some examples:
 (1) For the following input of shape `[1, 2, 2, 1]` and block_size of 2:
-```prettyprint
+```
 x = [[[[1], [2]], [[3], [4]]]]
 ```
 The output tensor has shape `[4, 1, 1, 1]` and value:
-```prettyprint
+```
 [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
 ```
 (2) For the following input of shape `[1, 2, 2, 3]` and block_size of 2:
-```prettyprint
+```
 x = [[[[1, 2, 3], [4, 5, 6]],
      [[7, 8, 9], [10, 11, 12]]]]
 ```
 The output tensor has shape `[4, 1, 1, 3]` and value:
-```prettyprint
+```
 [[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]
 ```
 (3) For the following input of shape `[1, 4, 4, 1]` and block_size of 2:
-```prettyprint
+```
 x = [[[[1], [2], [3], [4]],
       [[5], [6], [7], [8]],
       [[9], [10], [11], [12]],
@@ -3508,7 +3508,7 @@ x = [[[[1], [2], [3], [4]],
 The output tensor has shape `[4, 2, 2, 1]` and value:
-```prettyprint
+```
 x = [[[[1], [3]], [[9], [11]]],
      [[[2], [4]], [[10], [12]]],
     [[[5], [7]], [[13], [15]]],
@@ -3517,7 +3517,7 @@ x = [[[[1], [3]], [[9], [11]]],
 (4) For the following input of shape `[2, 2, 4, 1]` and block_size of 2:
-```prettyprint
+```
 x = [[[[1], [2], [3], [4]],
       [[5], [6], [7], [8]]],
     [[[9], [10], [11], [12]],
@@ -3526,7 +3526,7 @@ x = [[[[1], [2], [3], [4]],
 The output tensor has shape `[8, 1, 2, 1]` and value:
-```prettyprint
+```
 x = [[[[1], [3]]], [[[9], [11]]], [[[2], [4]]], [[[10], [12]]],
      [[[5], [7]]], [[[13], [15]]], [[[6], [8]]], [[[14], [16]]]]
 ```
@@ -3612,26 +3612,26 @@ Some examples:
 (1) For the following input of shape `[4, 1, 1, 1]`, `block_shape = [2, 2]`, and
 `crops = [[0, 0], [0, 0]]`:
-```prettyprint
+```
 [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
 ```
 The output tensor has shape `[1, 2, 2, 1]` and value:
-```prettyprint
+```
 x = [[[[1], [2]], [[3], [4]]]]
 ```
 (2) For the following input of shape `[4, 1, 1, 3]`, `block_shape = [2, 2]`, and
 `crops = [[0, 0], [0, 0]]`:
-```prettyprint
+```
 [[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]
 ```
 The output tensor has shape `[1, 2, 2, 3]` and value:
-```prettyprint
+```
 x = [[[[1, 2, 3], [4, 5, 6]],
       [[7, 8, 9], [10, 11, 12]]]]
 ```
@@ -3639,7 +3639,7 @@ x = [[[[1, 2, 3], [4, 5, 6]],
 (3) For the following input of shape `[4, 2, 2, 1]`, `block_shape = [2, 2]`, and
 `crops = [[0, 0], [0, 0]]`:
-```prettyprint
+```
 x = [[[[1], [3]], [[9], [11]]],
      [[[2], [4]], [[10], [12]]],
     [[[5], [7]], [[13], [15]]],
@@ -3648,7 +3648,7 @@ x = [[[[1], [3]], [[9], [11]]],
 The output tensor has shape `[1, 4, 4, 1]` and value:
-```prettyprint
+```
 x = [[[1], [2], [3], [4]],
      [[5], [6], [7], [8]],
     [[9], [10], [11], [12]],
@@ -3658,7 +3658,7 @@ x = [[[1], [2], [3], [4]],
 (4) For the following input of shape `[8, 1, 3, 1]`, `block_shape = [2, 2]`, and
 `crops = [[0, 0], [2, 0]]`:
-```prettyprint
+```
 x = [[[[0], [1], [3]]], [[[0], [9], [11]]],
      [[[0], [2], [4]]], [[[0], [10], [12]]],
     [[[0], [5], [7]]], [[[0], [13], [15]]],
@@ -3667,7 +3667,7 @@ x = [[[[0], [1], [3]]], [[[0], [9], [11]]],
 The output tensor has shape `[2, 2, 4, 1]` and value:
-```prettyprint
+```
 x = [[[[1], [2], [3], [4]],
       [[5], [6], [7], [8]]],
     [[[9], [10], [11], [12]],
@@ -3732,32 +3732,32 @@ Some examples:
 (1) For the following input of shape `[4, 1, 1, 1]` and block_size of 2:
-```prettyprint
+```
 [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
 ```
 The output tensor has shape `[1, 2, 2, 1]` and value:
-```prettyprint
+```
 x = [[[[1], [2]], [[3], [4]]]]
 ```
 (2) For the following input of shape `[4, 1, 1, 3]` and block_size of 2:
-```prettyprint
+```
 [[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]
 ```
 The output tensor has shape `[1, 2, 2, 3]` and value:
-```prettyprint
+```
 x = [[[[1, 2, 3], [4, 5, 6]],
       [[7, 8, 9], [10, 11, 12]]]]
 ```
 (3) For the following input of shape `[4, 2, 2, 1]` and block_size of 2:
-```prettyprint
+```
 x = [[[[1], [3]], [[9], [11]]],
      [[[2], [4]], [[10], [12]]],
     [[[5], [7]], [[13], [15]]],
@@ -3766,7 +3766,7 @@ x = [[[[1], [3]], [[9], [11]]],
 The output tensor has shape `[1, 4, 4, 1]` and value:
-```prettyprint
+```
 x = [[[1], [2], [3], [4]],
      [[5], [6], [7], [8]],
     [[9], [10], [11], [12]],
@@ -3775,14 +3775,14 @@ x = [[[1], [2], [3], [4]],
 (4) For the following input of shape `[8, 1, 2, 1]` and block_size of 2:
-```prettyprint
+```
 x = [[[[1], [3]]], [[[9], [11]]], [[[2], [4]]], [[[10], [12]]],
      [[[5], [7]]], [[[13], [15]]], [[[6], [8]]], [[[14], [16]]]]
 ```
 The output tensor has shape `[2, 2, 4, 1]` and value:
-```prettyprint
+```
 x = [[[[1], [3]], [[5], [7]]],
      [[[2], [4]], [[10], [12]]],
     [[[5], [7]], [[13], [15]]],
@@ -3848,14 +3848,14 @@ purely convolutional models.
 For example, given this input of shape `[1, 2, 2, 1]`, and block_size of 2:
-```prettyprint
+```
 x = [[[[1], [2]],
       [[3], [4]]]]
 ```
 This operation will output a tensor of shape `[1, 1, 1, 4]`:
-```prettyprint
+```
 [[[[1, 2, 3, 4]]]]
 ```
@@ -3866,7 +3866,7 @@ The output element shape is `[1, 1, 4]`.
 For an input tensor with larger depth, here of shape `[1, 2, 2, 3]`, e.g.
-```prettyprint
+```
 x = [[[[1, 2, 3], [4, 5, 6]],
       [[7, 8, 9], [10, 11, 12]]]]
 ```
@@ -3874,13 +3874,13 @@ x = [[[[1, 2, 3], [4, 5, 6]],
 This operation, for block_size of 2, will return the following tensor of shape
 `[1, 1, 1, 12]`
-```prettyprint
+```
 [[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]]
 ```
 Similarly, for the following input of shape `[1 4 4 1]`, and a block size of 2:
-```prettyprint
+```
 x = [[[[1], [2], [5], [6]],
       [[3], [4], [7], [8]],
       [[9], [10], [13], [14]],
@@ -3889,7 +3889,7 @@ x = [[[[1], [2], [5], [6]],
 the operator will return the following tensor of shape `[1 2 2 4]`:
-```prettyprint
+```
 x = [[[[1, 2, 3, 4],
        [5, 6, 7, 8]],
       [[9, 10, 11, 12],
@@ -3958,14 +3958,14 @@ purely convolutional models.
 For example, given this input of shape `[1, 1, 1, 4]`, and a block size of 2:
-```prettyprint
+```
 x = [[[[1, 2, 3, 4]]]]
 ```
 This operation will output a tensor of shape `[1, 2, 2, 1]`:
-```prettyprint
+```
 [[[[1], [2]],
   [[3], [4]]]]
 ```
@@ -3977,14 +3977,14 @@ The output element shape is `[2, 2, 1]`.
 For an input tensor with larger depth, here of shape `[1, 1, 1, 12]`, e.g.
-```prettyprint
+```
 x = [[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]]
 ```
 This operation, for block size of 2, will return the following tensor of shape
 `[1, 2, 2, 3]`
-```prettyprint
+```
 [[[[1, 2, 3], [4, 5, 6]],
   [[7, 8, 9], [10, 11, 12]]]]
@@ -3992,7 +3992,7 @@ This operation, for block size of 2, will return the following tensor of shape
 Similarly, for the following input of shape `[1 2 2 4]`, and a block size of 2:
-```prettyprint
+```
 x = [[[[1, 2, 3, 4],
        [5, 6, 7, 8]],
       [[9, 10, 11, 12],
@@ -4001,7 +4001,7 @@ x = [[[[1, 2, 3, 4],
 the operator will return the following tensor of shape `[1 4 4 1]`:
-```prettyprint
+```
 x = [[ [1], [2], [5], [6]],
      [ [3], [4], [7], [8]],
      [ [9], [10], [13], [14]],
@@ -4775,7 +4775,7 @@ index. For example, say we want to insert 4 scattered elements in a rank-1
 tensor with 8 elements.
 <div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/ScatterNd1.png" alt>
+<img style="width:100%" src="https://www.tensorflow.org/images/ScatterNd1.png" alt>
 </div>
 In Python, this scatter operation would look like this:
@@ -4798,7 +4798,7 @@ example, if we wanted to insert two slices in the first dimension of a
 rank-3 tensor with two matrices of new values.
 <div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/ScatterNd2.png" alt>
+<img style="width:100%" src="https://www.tensorflow.org/images/ScatterNd2.png" alt>
 </div>
 In Python, this scatter operation would look like this:

@@ -102,7 +102,7 @@ For example:
 ```
 <div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/DynamicPartition.png" alt>
+<img style="width:100%" src="https://www.tensorflow.org/images/DynamicPartition.png" alt>
 </div>
 partitions: Any shape. Indices in the range `[0, num_partitions)`.
@@ -190,7 +190,7 @@ For example:
 ```
 <div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/DynamicStitch.png" alt>
+<img style="width:100%" src="https://www.tensorflow.org/images/DynamicStitch.png" alt>
 </div>
 )doc");

@@ -295,7 +295,7 @@ the same location, their contributions add.
 Requires `updates.shape = indices.shape + ref.shape[1:]`.
 <div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/ScatterAdd.png" alt>
+<img style="width:100%" src="https://www.tensorflow.org/images/ScatterAdd.png" alt>
 </div>
 resource: Should be from a `Variable` node.

@@ -288,7 +288,7 @@ for each value is undefined.
 Requires `updates.shape = indices.shape + ref.shape[1:]`.
 <div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/ScatterUpdate.png" alt>
+<img style="width:100%" src="https://www.tensorflow.org/images/ScatterUpdate.png" alt>
 </div>
 ref: Should be from a `Variable` node.
@@ -332,7 +332,7 @@ the same location, their contributions add.
 Requires `updates.shape = indices.shape + ref.shape[1:]`.
 <div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/ScatterAdd.png" alt>
+<img style="width:100%" src="https://www.tensorflow.org/images/ScatterAdd.png" alt>
 </div>
 ref: Should be from a `Variable` node.
@@ -376,7 +376,7 @@ the same location, their (negated) contributions add.
 Requires `updates.shape = indices.shape + ref.shape[1:]`.
 <div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/ScatterSub.png" alt>
+<img style="width:100%" src="https://www.tensorflow.org/images/ScatterSub.png" alt>
 </div>
 ref: Should be from a `Variable` node.

@@ -33,7 +33,7 @@ plt.plot(x, z)
 ```
 <div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/lorenz_attractor.png" alt>
+<img style="width:100%" src="https://www.tensorflow.org/images/lorenz_attractor.png" alt>
 </div>
 ## Ops

@@ -25,7 +25,7 @@ The TensorFlow runtime is a cross-platform library. Figure 1 illustrates its
 general architecture. A C API separates user level code in different languages
 from the core runtime.
-![TensorFlow Layers](../images/layers.png){: width="300"}
+![TensorFlow Layers](https://www.tensorflow.org/images/layers.png){: width="300"}
 **Figure 1**
@@ -57,7 +57,7 @@ Other tasks send updates to these parameters as they work on optimizing the
 parameters. This particular division of labor between tasks is not required, but
 it is common for distributed training.
-![TensorFlow Architecture Diagram](../images/diag1.svg){: width="500"}
+![TensorFlow Architecture Diagram](https://www.tensorflow.org/images/diag1.svg){: width="500"}
 **Figure 2**
@@ -91,7 +91,7 @@ In Figure 3, the client has built a graph that applies weights (w) to a
 feature vector (x), adds a bias term (b) and saves the result in a variable
 (s).
-![TensorFlow Architecture Diagram: Client](../images/graph_client.svg){: width="700"}
+![TensorFlow Architecture Diagram: Client](https://www.tensorflow.org/images/graph_client.svg){: width="700"}
 **Figure 3**
@@ -114,7 +114,7 @@ a step, it applies standard optimizations such as common subexpression
 elimination and constant folding. It then coordinates execution of the
 optimized subgraphs across a set of tasks.
-![TensorFlow Architecture Diagram: Master](../images/graph_master_cln.svg){: width="700"}
+![TensorFlow Architecture Diagram: Master](https://www.tensorflow.org/images/graph_master_cln.svg){: width="700"}
 **Figure 4**
@@ -123,7 +123,7 @@ Figure 5 shows a possible partition of our example graph. The distributed
 master has grouped the model parameters in order to place them together on the
 parameter server.
-![Partitioned Graph](../images/graph_split1.svg){: width="700"}
+![Partitioned Graph](https://www.tensorflow.org/images/graph_split1.svg){: width="700"}
 **Figure 5**
@@ -132,14 +132,14 @@ Where graph edges are cut by the partition, the distributed master inserts
 send and receive nodes to pass information between the distributed tasks
 (Figure 6).
-![Partitioned Graph](../images/graph_split2.svg){: width="700"}
+![Partitioned Graph](https://www.tensorflow.org/images/graph_split2.svg){: width="700"}
 **Figure 6**
 The distributed master then ships the graph pieces to the distributed tasks.
-![Partitioned Graph](../images/graph_workers_cln.svg){: width="700"}
+![Partitioned Graph](https://www.tensorflow.org/images/graph_workers_cln.svg){: width="700"}
 **Figure 7**
@@ -181,7 +181,7 @@ We also have preliminary support for NVIDIA's NCCL library for multi-GPU
 communication (see [`tf.contrib.nccl`](
 https://www.tensorflow.org/code/tensorflow/contrib/nccl/python/ops/nccl_ops.py)).
-![Partitioned Graph](../images/graph_send_recv.svg){: width="700"}
+![Partitioned Graph](https://www.tensorflow.org/images/graph_send_recv.svg){: width="700"}
 **Figure 8**


@@ -72,7 +72,7 @@ for abalone:
 The label to predict is number of rings, as a proxy for abalone age.
-![Abalone shell](../images/abalone_shell.jpg) **[“Abalone
+![Abalone shell](https://www.tensorflow.org/abalone_shell.jpg) **[“Abalone
 shell”](https://www.flickr.com/photos/thenickster/16641048623/) (by [Nicki Dugan
 Pogue](https://www.flickr.com/photos/thenickster/), CC BY-SA 2.0)**


@@ -21,7 +21,7 @@ interested in word embeddings,
 gives a good introduction.
 <video autoplay loop style="max-width: 100%;">
-<source src="../images/embedding-mnist.mp4" type="video/mp4">
+<source src="https://www.tensorflow.org/images/embedding-mnist.mp4" type="video/mp4">
 Sorry, your browser doesn't support HTML5 video in MP4 format.
 </video>
@@ -173,7 +173,7 @@ last data point in the bottom right:
 Note in the example above that the last row doesn't have to be filled. For a
 concrete example of a sprite, see
-[this sprite image](../images/mnist_10k_sprite.png) of 10,000 MNIST digits
+[this sprite image](https://www.tensorflow.org/images/mnist_10k_sprite.png) of 10,000 MNIST digits
 (100x100).
 Note: We currently support sprites up to 8192px X 8192px.
@@ -247,7 +247,7 @@ further analysis on their own with the "Isolate Points" button in the Inspector
 pane on the right hand side.
-![Selection of nearest neighbors](../images/embedding-nearest-points.png "Selection of nearest neighbors")
+![Selection of nearest neighbors](https://www.tensorflow.org/images/embedding-nearest-points.png "Selection of nearest neighbors")
 *Selection of the nearest neighbors of “important” in a word embedding dataset.*
 The combination of filtering with custom projection can be powerful. Below, we filtered
@@ -260,10 +260,10 @@ You can see that on the right side we have “ideas”, “science”, “perspe
 <table width="100%;">
 <tr>
 <td style="width: 30%;">
-<img src="../images/embedding-custom-controls.png" alt="Custom controls panel" title="Custom controls panel" />
+<img src="https://www.tensorflow.org/images/embedding-custom-controls.png" alt="Custom controls panel" title="Custom controls panel" />
 </td>
 <td style="width: 70%;">
-<img src="../images/embedding-custom-projection.png" alt="Custom projection" title="Custom projection" />
+<img src="https://www.tensorflow.org/images/embedding-custom-projection.png" alt="Custom projection" title="Custom projection" />
 </td>
 </tr>
 <tr>
@@ -284,4 +284,4 @@ projection) as a small file. The Projector can then be pointed to a set of one
 or more of these files, producing the panel below. Other users can then walk
 through a sequence of bookmarks.
-<img src="../images/embedding-bookmark.png" alt="Bookmark panel" style="width:300px;">
+<img src="https://www.tensorflow.org/images/embedding-bookmark.png" alt="Bookmark panel" style="width:300px;">


@@ -123,7 +123,7 @@ TensorFlow provides a utility called TensorBoard that can display a picture of
 the computational graph. Here is a screenshot showing how TensorBoard
 visualizes the graph:
-![TensorBoard screenshot](../images/getting_started_add.png)
+![TensorBoard screenshot](https://www.tensorflow.org/images/getting_started_add.png)
 As it stands, this graph is not especially interesting because it always
 produces a constant result. A graph can be parameterized to accept external
@@ -154,7 +154,7 @@ resulting in the output
 In TensorBoard, the graph looks like this:
-![TensorBoard screenshot](../images/getting_started_adder.png)
+![TensorBoard screenshot](https://www.tensorflow.org/images/getting_started_adder.png)
 We can make the computational graph more complex by adding another operation.
 For example,
@@ -170,7 +170,7 @@ produces the output
 The preceding computational graph would look as follows in TensorBoard:
-![TensorBoard screenshot](../images/getting_started_triple.png)
+![TensorBoard screenshot](https://www.tensorflow.org/images/getting_started_triple.png)
 In machine learning we will typically want a model that can take arbitrary
 inputs, such as the one above. To make the model trainable, we need to be able
@@ -336,7 +336,7 @@ program your loss will not be exactly the same, because the model is initialized
 with random values.
 This more complicated program can still be visualized in TensorBoard
-![TensorBoard final model visualization](../images/getting_started_final.png)
+![TensorBoard final model visualization](https://www.tensorflow.org/images/getting_started_final.png)
 ## `tf.contrib.learn`


@@ -2,7 +2,7 @@
 TensorFlow computation graphs are powerful but complicated. The graph visualization can help you understand and debug them. Here's an example of the visualization at work.
-![Visualization of a TensorFlow graph](../images/graph_vis_animation.gif "Visualization of a TensorFlow graph")
+![Visualization of a TensorFlow graph](https://www.tensorflow.org/images/graph_vis_animation.gif "Visualization of a TensorFlow graph")
 *Visualization of a TensorFlow graph.*
 To see your own graph, run TensorBoard pointing it to the log directory of the job, click on the graph tab on the top pane and select the appropriate run using the menu at the upper left corner. For in depth information on how to run TensorBoard and make sure you are logging all the necessary information, see @{$summaries_and_tensorboard$TensorBoard: Visualizing Learning}.
@@ -43,10 +43,10 @@ expanded states.
 <table width="100%;">
 <tr>
 <td style="width: 50%;">
-<img src="../images/pool1_collapsed.png" alt="Unexpanded name scope" title="Unexpanded name scope" />
+<img src="https://www.tensorflow.org/images/pool1_collapsed.png" alt="Unexpanded name scope" title="Unexpanded name scope" />
 </td>
 <td style="width: 50%;">
-<img src="../images/pool1_expanded.png" alt="Expanded name scope" title="Expanded name scope" />
+<img src="https://www.tensorflow.org/images/pool1_expanded.png" alt="Expanded name scope" title="Expanded name scope" />
 </td>
 </tr>
 <tr>
@@ -87,10 +87,10 @@ and the auxiliary area.
 <table width="100%;">
 <tr>
 <td style="width: 50%;">
-<img src="../images/conv_1.png" alt="conv_1 is part of the main graph" title="conv_1 is part of the main graph" />
+<img src="https://www.tensorflow.org/images/conv_1.png" alt="conv_1 is part of the main graph" title="conv_1 is part of the main graph" />
 </td>
 <td style="width: 50%;">
-<img src="../images/save.png" alt="save is extracted as auxiliary node" title="save is extracted as auxiliary node" />
+<img src="https://www.tensorflow.org/images/save.png" alt="save is extracted as auxiliary node" title="save is extracted as auxiliary node" />
 </td>
 </tr>
 <tr>
@@ -114,10 +114,10 @@ specific set of nodes.
 <table width="100%;">
 <tr>
 <td style="width: 50%;">
-<img src="../images/series.png" alt="Sequence of nodes" title="Sequence of nodes" />
+<img src="https://www.tensorflow.org/images/series.png" alt="Sequence of nodes" title="Sequence of nodes" />
 </td>
 <td style="width: 50%;">
-<img src="../images/series_expanded.png" alt="Expanded sequence of nodes" title="Expanded sequence of nodes" />
+<img src="https://www.tensorflow.org/images/series_expanded.png" alt="Expanded sequence of nodes" title="Expanded sequence of nodes" />
 </td>
 </tr>
 <tr>
@@ -135,15 +135,15 @@ for constants and summary nodes. To summarize, here's a table of node symbols:
 Symbol | Meaning
 --- | ---
-![Name scope](../images/namespace_node.png "Name scope") | *High-level* node representing a name scope. Double-click to expand a high-level node.
+![Name scope](https://www.tensorflow.org/images/namespace_node.png "Name scope") | *High-level* node representing a name scope. Double-click to expand a high-level node.
-![Sequence of unconnected nodes](../images/horizontal_stack.png "Sequence of unconnected nodes") | Sequence of numbered nodes that are not connected to each other.
+![Sequence of unconnected nodes](https://www.tensorflow.org/images/horizontal_stack.png "Sequence of unconnected nodes") | Sequence of numbered nodes that are not connected to each other.
-![Sequence of connected nodes](../images/vertical_stack.png "Sequence of connected nodes") | Sequence of numbered nodes that are connected to each other.
+![Sequence of connected nodes](https://www.tensorflow.org/images/vertical_stack.png "Sequence of connected nodes") | Sequence of numbered nodes that are connected to each other.
-![Operation node](../images/op_node.png "Operation node") | An individual operation node.
+![Operation node](https://www.tensorflow.org/images/op_node.png "Operation node") | An individual operation node.
-![Constant node](../images/constant.png "Constant node") | A constant.
+![Constant node](https://www.tensorflow.org/images/constant.png "Constant node") | A constant.
-![Summary node](../images/summary.png "Summary node") | A summary node.
+![Summary node](https://www.tensorflow.org/images/summary.png "Summary node") | A summary node.
-![Data flow edge](../images/dataflow_edge.png "Data flow edge") | Edge showing the data flow between operations.
+![Data flow edge](https://www.tensorflow.org/images/dataflow_edge.png "Data flow edge") | Edge showing the data flow between operations.
-![Control dependency edge](../images/control_edge.png "Control dependency edge") | Edge showing the control dependency between operations.
+![Control dependency edge](https://www.tensorflow.org/images/control_edge.png "Control dependency edge") | Edge showing the control dependency between operations.
-![Reference edge](../images/reference_edge.png "Reference edge") | A reference edge showing that the outgoing operation node can mutate the incoming tensor.
+![Reference edge](https://www.tensorflow.org/images/reference_edge.png "Reference edge") | A reference edge showing that the outgoing operation node can mutate the incoming tensor.
 ## Interaction {#interaction}
@@ -161,10 +161,10 @@ right corner of the visualization.
 <table width="100%;">
 <tr>
 <td style="width: 50%;">
-<img src="../images/infocard.png" alt="Info card of a name scope" title="Info card of a name scope" />
+<img src="https://www.tensorflow.org/images/infocard.png" alt="Info card of a name scope" title="Info card of a name scope" />
 </td>
 <td style="width: 50%;">
-<img src="../images/infocard_op.png" alt="Info card of operation node" title="Info card of operation node" />
+<img src="https://www.tensorflow.org/images/infocard_op.png" alt="Info card of operation node" title="Info card of operation node" />
 </td>
 </tr>
 <tr>
@@ -207,10 +207,10 @@ The images below give an illustration for a piece of a real-life graph.
 <table width="100%;">
 <tr>
 <td style="width: 50%;">
-<img src="../images/colorby_structure.png" alt="Color by structure" title="Color by structure" />
+<img src="https://www.tensorflow.org/images/colorby_structure.png" alt="Color by structure" title="Color by structure" />
 </td>
 <td style="width: 50%;">
-<img src="../images/colorby_device.png" alt="Color by device" title="Color by device" />
+<img src="https://www.tensorflow.org/images/colorby_device.png" alt="Color by device" title="Color by device" />
 </td>
 </tr>
 <tr>
@@ -233,7 +233,7 @@ The images below show the CIFAR-10 model with tensor shape information:
 <table width="100%;">
 <tr>
 <td style="width: 100%;">
-<img src="../images/tensor_shapes.png" alt="CIFAR-10 model with tensor shape information" title="CIFAR-10 model with tensor shape information" />
+<img src="https://www.tensorflow.org/images/tensor_shapes.png" alt="CIFAR-10 model with tensor shape information" title="CIFAR-10 model with tensor shape information" />
 </td>
 </tr>
 <tr>
@@ -303,13 +303,13 @@ tensor output sizes.
 <table width="100%;">
 <tr style="height: 380px">
 <td>
-<img src="../images/colorby_compute_time.png" alt="Color by compute time" title="Color by compute time"/>
+<img src="https://www.tensorflow.org/images/colorby_compute_time.png" alt="Color by compute time" title="Color by compute time"/>
 </td>
 <td>
-<img src="../images/run_metadata_graph.png" alt="Run metadata graph" title="Run metadata graph" />
+<img src="https://www.tensorflow.org/images/run_metadata_graph.png" alt="Run metadata graph" title="Run metadata graph" />
 </td>
 <td>
-<img src="../images/run_metadata_infocard.png" alt="Run metadata info card" title="Run metadata info card" />
+<img src="https://www.tensorflow.org/images/run_metadata_infocard.png" alt="Run metadata info card" title="Run metadata info card" />
 </td>
 </tr>
 </table>


@@ -15,7 +15,7 @@ MNIST is a simple computer vision dataset. It consists of images of handwritten
 digits like these:
 <div style="width:40%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/MNIST.png">
+<img style="width:100%" src="https://www.tensorflow.org/images/MNIST.png">
 </div>
 It also includes labels for each image, telling us which digit it is. For
@@ -88,7 +88,7 @@ Each image is 28 pixels by 28 pixels. We can interpret this as a big array of
 numbers:
 <div style="width:50%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/MNIST-Matrix.png">
+<img style="width:100%" src="https://www.tensorflow.org/images/MNIST-Matrix.png">
 </div>
 We can flatten this array into a vector of 28x28 = 784 numbers. It doesn't
@@ -110,7 +110,7 @@ Each entry in the tensor is a pixel intensity between 0 and 1, for a particular
 pixel in a particular image.
 <div style="width:40%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/mnist-train-xs.png">
+<img style="width:100%" src="https://www.tensorflow.org/images/mnist-train-xs.png">
 </div>
 Each image in MNIST has a corresponding label, a number between 0 and 9
@@ -124,7 +124,7 @@ vector which is 1 in the \\(n\\)th dimension. For example, 3 would be
 `[55000, 10]` array of floats.
 <div style="width:40%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/mnist-train-ys.png">
+<img style="width:100%" src="https://www.tensorflow.org/images/mnist-train-ys.png">
 </div>
 We're now ready to actually make our model!
@@ -157,7 +157,7 @@ classes. Red represents negative weights, while blue represents positive
 weights.
 <div style="width:40%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/softmax-weights.png">
+<img style="width:100%" src="https://www.tensorflow.org/images/softmax-weights.png">
 </div>
 We also add some extra evidence called a bias. Basically, we want to be able
@@ -202,13 +202,13 @@ although with a lot more \\(x\\)s. For each output, we compute a weighted sum of
 the \\(x\\)s, add a bias, and then apply softmax.
 <div style="width:55%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/softmax-regression-scalargraph.png">
+<img style="width:100%" src="https://www.tensorflow.org/images/softmax-regression-scalargraph.png">
 </div>
 If we write that out as equations, we get:
 <div style="width:52%; margin-left:25%; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/softmax-regression-scalarequation.png"
+<img style="width:100%" src="https://www.tensorflow.org/images/softmax-regression-scalarequation.png"
 alt="[y1, y2, y3] = softmax(W11*x1 + W12*x2 + W13*x3 + b1, W21*x1 + W22*x2 + W23*x3 + b2, W31*x1 + W32*x2 + W33*x3 + b3)">
 </div>
@@ -217,7 +217,7 @@ and vector addition. This is helpful for computational efficiency. (It's also
 a useful way to think.)
 <div style="width:50%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/softmax-regression-vectorequation.png"
+<img style="width:100%" src="https://www.tensorflow.org/images/softmax-regression-vectorequation.png"
 alt="[y1, y2, y3] = softmax([[W11, W12, W13], [W21, W22, W23], [W31, W32, W33]]*[x1, x2, x3] + [b1, b2, b3])">
 </div>


@@ -34,7 +34,7 @@ MNIST is a classic problem in machine learning. The problem is to look at
 greyscale 28x28 pixel images of handwritten digits and determine which digit
 the image represents, for all the digits from zero to nine.
-![MNIST Digits](../../images/mnist_digits.png "MNIST Digits")
+![MNIST Digits](https://www.tensorflow.org/images/mnist_digits.png "MNIST Digits")
 For more information, refer to [Yann LeCun's MNIST page](http://yann.lecun.com/exdb/mnist/)
 or [Chris Olah's visualizations of MNIST](http://colah.github.io/posts/2014-10-Visualizing-MNIST/).
@@ -90,7 +90,7 @@ loss.
 and apply gradients.
 <div style="width:95%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/mnist_subgraph.png">
+<img style="width:100%" src="https://www.tensorflow.org/images/mnist_subgraph.png">
 </div>
 ### Inference
@@ -384,7 +384,7 @@ summary_writer.add_summary(summary_str, step)
 When the events files are written, TensorBoard may be run against the training
 folder to display the values from the summaries.
-![MNIST TensorBoard](../../images/mnist_tensorboard.png "MNIST TensorBoard")
+![MNIST TensorBoard](https://www.tensorflow.org/images/mnist_tensorboard.png "MNIST TensorBoard")
 **NOTE**: For more info about how to build and run Tensorboard, please see the accompanying tutorial @{$summaries_and_tensorboard$Tensorboard: Visualizing Learning}.


@@ -401,6 +401,6 @@ Then navigate to `http://0.0.0.0:`*`<port_number>`* in your browser, where
 If you click on the accuracy field, you'll see an image like the following,
 which shows accuracy plotted against step count:
-![Accuracy over step count in TensorBoard](../images/validation_monitor_tensorboard_accuracy.png "Accuracy over step count in TensorBoard")
+![Accuracy over step count in TensorBoard](https://www.tensorflow.org/images/validation_monitor_tensorboard_accuracy.png "Accuracy over step count in TensorBoard")
 For more on using TensorBoard, see @{$summaries_and_tensorboard$TensorBoard: Visualizing Learning} and @{$graph_viz$TensorBoard: Graph Visualization}.


@@ -8,7 +8,7 @@ your TensorFlow graph, plot quantitative metrics about the execution of your
 graph, and show additional data like images that pass through it. When
 TensorBoard is fully configured, it looks like this:
-![MNIST TensorBoard](../images/mnist_tensorboard.png "MNIST TensorBoard")
+![MNIST TensorBoard](https://www.tensorflow.org/images/mnist_tensorboard.png "MNIST TensorBoard")
 <div class="video-wrapper">
 <iframe class="devsite-embedded-youtube-video" data-video-id="eBbEDRsCmv4"


@@ -118,7 +118,7 @@ The [Iris data set](https://en.wikipedia.org/wiki/Iris_flower_data_set) contains
 150 rows of data, comprising 50 samples from each of three related Iris species:
 *Iris setosa*, *Iris virginica*, and *Iris versicolor*.
-![Petal geometry compared for three iris species: Iris setosa, Iris virginica, and Iris versicolor](../images/iris_three_species.jpg) **From left to right,
+![Petal geometry compared for three iris species: Iris setosa, Iris virginica, and Iris versicolor](https://www.tensorflow.org/images/iris_three_species.jpg) **From left to right,
 [*Iris setosa*](https://commons.wikimedia.org/w/index.php?curid=170298) (by
 [Radomil](https://commons.wikimedia.org/wiki/User:Radomil), CC BY-SA 3.0),
 [*Iris versicolor*](https://commons.wikimedia.org/w/index.php?curid=248095) (by


@@ -143,13 +143,13 @@ conversion functions before and after to move the data between float and
 eight-bit. Below is an example of what they look like. First here's the original
 Relu operation, with float inputs and outputs:
-![Relu Diagram](https://www.tensorflow.org/../images/quantization0.png)
+![Relu Diagram](https://www.tensorflow.org/images/quantization0.png)
 Then, this is the equivalent converted subgraph, still with float inputs and
 outputs, but with internal conversions so the calculations are done in eight
 bit.
-![Converted Diagram](https://www.tensorflow.org/../images/quantization1.png)
+![Converted Diagram](https://www.tensorflow.org/images/quantization1.png)
 The min and max operations actually look at the values in the input float
 tensor, and then feeds them into the Dequantize operation that converts the
@@ -162,7 +162,7 @@ operations that all have float equivalents, then there will be a lot of adjacent
 Dequantize/Quantize ops. This stage spots that pattern, recognizes that they
 cancel each other out, and removes them, like this:
-![Stripping Diagram](https://www.tensorflow.org/../images/quantization2.png)
+![Stripping Diagram](https://www.tensorflow.org/images/quantization2.png)
 Applied on a large scale to models where all of the operations have quantized
 equivalents, this gives a graph where all of the tensor calculations are done in


@@ -62,7 +62,7 @@ well as the NVIDIA GPU backend are in the TensorFlow source tree.
 The following diagram shows the compilation process in XLA:
 <div style="width:95%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img src="../../images/how-does-xla-work.png">
+<img src="https://www.tensorflow.org/images/how-does-xla-work.png">
 </div>
 XLA comes with several optimizations and analyses that are target-independent,


@@ -124,7 +124,7 @@ open the timeline file created when the script finishes: `timeline.ctf.json`.
 The rendered timeline should look similar to the picture below with multiple
 green boxes labeled `MatMul`, possibly across multiple CPUs.
 <div style="width:95%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/jit_timeline_gpu.png">
+<img style="width:100%" src="https://www.tensorflow.org/images/jit_timeline_gpu.png">
 </div>
 ### Step #3 Run with XLA
@@ -139,7 +139,7 @@ TF_XLA_FLAGS=--xla_generate_hlo_graph=.* python mnist_softmax_xla.py
 Open the timeline file created (`timeline.ctf.json`). The rendered timeline
 should look similar to the picture below with one long bar labeled `_XlaLaunch`.
 <div style="width:95%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/jit_timeline_gpu_xla.png">
+<img style="width:100%" src="https://www.tensorflow.org/images/jit_timeline_gpu_xla.png">
 </div>
 To understand what is happening in `_XlaLaunch`, look at the console output for
@@ -165,5 +165,5 @@ dot -Tpng hlo_graph_80.dot -o hlo_graph_80.png
 The result will look like the following:
 <div style="width:95%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/jit_gpu_xla_graph.png">
+<img style="width:100%" src="https://www.tensorflow.org/images/jit_gpu_xla_graph.png">
 </div>


@ -178,7 +178,7 @@ Concat({a, b}, 0)
Diagram: Diagram:
<div style="width:95%; margin:auto; margin-bottom:10px; margin-top:20px;"> <div style="width:95%; margin:auto; margin-bottom:10px; margin-top:20px;">
<img style="width:100%" src="../../images/ops_concatenate.png"> <img style="width:100%" src="https://www.tensorflow.org/images/ops_concatenate.png">
</div> </div>
## ConvertElementType ## ConvertElementType
@ -707,7 +707,7 @@ are all 0. Figure below shows examples of different `edge_padding` and
`interior_padding` values for a two dimensional array. `interior_padding` values for a two dimensional array.
<div style="width:95%; margin:auto; margin-bottom:10px; margin-top:20px;"> <div style="width:95%; margin:auto; margin-bottom:10px; margin-top:20px;">
<img style="width:100%" src="../../images/ops_pad.png"> <img style="width:100%" src="https://www.tensorflow.org/images/ops_pad.png">
</div> </div>
## Reduce ## Reduce
@@ -781,13 +781,13 @@ Here's an example of reducing a 2D array (matrix). The shape has rank 2,
 dimension 0 of size 2 and dimension 1 of size 3:
 <div style="width:95%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:35%" src="../../images/ops_2d_matrix.png">
+<img style="width:35%" src="https://www.tensorflow.org/images/ops_2d_matrix.png">
 </div>
 Results of reducing dimensions 0 or 1 with an "add" function:
 <div style="width:95%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:35%" src="../../images/ops_reduce_from_2d_matrix.png">
+<img style="width:35%" src="https://www.tensorflow.org/images/ops_reduce_from_2d_matrix.png">
 </div>
 Note that both reduction results are 1D arrays. The diagram shows one as column
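The 2x3 "add" reduction described above can be modelled in plain Python (semantics only, not the XLA Reduce API; the matrix values are illustrative):

```python
# Reducing a 2x3 matrix with an "add" computation: reducing dimension 0
# sums down each column (result of length 3); reducing dimension 1 sums
# across each row (result of length 2).

matrix = [[1, 2, 3],
          [4, 5, 6]]

reduce_dim0 = [sum(col) for col in zip(*matrix)]  # collapse dimension 0
reduce_dim1 = [sum(row) for row in matrix]        # collapse dimension 1

print(reduce_dim0)  # [5, 7, 9]
print(reduce_dim1)  # [6, 15]
```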
@@ -798,7 +798,7 @@ size 4, dimension 1 of size 2 and dimension 2 of size 3. For simplicity, the
 values 1 to 6 are replicated across dimension 0.
 <div style="width:95%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:35%" src="../../images/ops_reduce_from_3d_matrix.png">
+<img style="width:35%" src="https://www.tensorflow.org/images/ops_reduce_from_3d_matrix.png">
 </div>
 Similarly to the 2D example, we can reduce just one dimension. If we reduce
@@ -890,7 +890,7 @@ builder.ReduceWindow(
 ```
 <div style="width:95%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:35%" src="../../images/ops_reduce_window.png">
+<img style="width:35%" src="https://www.tensorflow.org/images/ops_reduce_window.png">
 </div>
 Stride of 1 in a dimension specifies that the position of a window in the
@@ -902,7 +902,7 @@ are the same as though the input came in with the dimensions it has after
 padding.
 <div style="width:95%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:75%" src="../../images/ops_reduce_window_stride.png">
+<img style="width:75%" src="https://www.tensorflow.org/images/ops_reduce_window_stride.png">
 </div>
 The evaluation order of the reduction function is arbitrary and may be
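The window/stride behavior pictured above can be sketched for the 1-D case in plain Python (a model of the semantics under the assumption of no padding, not the real `ReduceWindow` API):

```python
# Illustrative 1-D ReduceWindow: apply a reduction (here: max) to each
# window of size `window` placed at multiples of `stride`, with no
# padding. Hypothetical helper, not the XLA builder call.

def reduce_window_1d(values, window, stride):
    out = []
    start = 0
    while start + window <= len(values):
        out.append(max(values[start:start + window]))
        start += stride
    return out

data = [1, 3, 2, 5, 4, 0]
print(reduce_window_1d(data, window=2, stride=1))  # [3, 3, 5, 5, 4]
print(reduce_window_1d(data, window=2, stride=2))  # [3, 5, 4]
```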
@@ -1144,7 +1144,7 @@ addition `scatter` function produces the output element of value 8 (2 + 6).
 <div style="width:95%; margin:auto; margin-bottom:10px; margin-top:20px;">
 <img style="width:100%"
-src="../../images/ops_scatter_to_selected_window_element.png">
+src="https://www.tensorflow.org/images/ops_scatter_to_selected_window_element.png">
 </div>
 The evaluation order of the `scatter` function is arbitrary and may be
@@ -1482,5 +1482,5 @@ while (result(0) < 1000) {
 ```
 <div style="width:95%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/ops_while.png">
+<img style="width:100%" src="https://www.tensorflow.org/images/ops_while.png">
 </div>
@@ -24,7 +24,7 @@ This code trains a simple NN for MNIST digit image recognition. Notice that the
 accuracy increases slightly after the first training step, but then gets stuck
 at a low (near-chance) level:
-![debug_mnist training fails](../images/tfdbg_screenshot_mnist_symptom.png)
+![debug_mnist training fails](https://www.tensorflow.org/images/tfdbg_screenshot_mnist_symptom.png)
 Scratching your head, you suspect that certain nodes in the training graph
 generated bad numeric values such as `inf`s and `nan`s. The computation-graph
@@ -89,7 +89,7 @@ The debug wrapper session will prompt you when it is about to execute the first
 `run()` call, with information regarding the fetched tensor and feed
 dictionaries displayed on the screen.
-![tfdbg run-start UI](../images/tfdbg_screenshot_run_start.png)
+![tfdbg run-start UI](https://www.tensorflow.org/images/tfdbg_screenshot_run_start.png)
 This is what we refer to as the *run-start UI*. If the screen size is
 too small to display the content of the message in its entirety, you can resize
@@ -108,7 +108,7 @@ intermediate tensors from the run. (These tensors can also be obtained by
 running the command `lt` after you executed `run`.) This is called the
 **run-end UI**:
-![tfdbg run-end UI: accuracy](../images/tfdbg_screenshot_run_end_accuracy.png)
+![tfdbg run-end UI: accuracy](https://www.tensorflow.org/images/tfdbg_screenshot_run_end_accuracy.png)
 ### tfdbg CLI Frequently-Used Commands
@@ -181,7 +181,7 @@ screen with a red-colored title line indicating **tfdbg** stopped immediately
 after a `run()` call generated intermediate tensors that passed the specified
 filter `has_inf_or_nan`:
-![tfdbg run-end UI: infs and nans](../images/tfdbg_screenshot_run_end_inf_nan.png)
+![tfdbg run-end UI: infs and nans](https://www.tensorflow.org/images/tfdbg_screenshot_run_end_inf_nan.png)
 As the screen display indicates, the `has_inf_or_nan` filter is first passed
 during the fourth `run()` call: an [Adam optimizer](https://arxiv.org/abs/1412.6980)
@@ -220,7 +220,7 @@ item on the top or entering the equivalent command:
 tfdbg> ni cross_entropy/Log
 ```
-![tfdbg run-end UI: infs and nans](../images/tfdbg_screenshot_run_end_node_info.png)
+![tfdbg run-end UI: infs and nans](https://www.tensorflow.org/images/tfdbg_screenshot_run_end_node_info.png)
 You can see that this node has the op type `Log`
 and that its input is the node `softmax/Softmax`. Run the following command to
@@ -263,7 +263,7 @@ simply click the underlined line numbers in the stack trace output of the
 `ni -t <op_name>` commands, or use the `ps` (or `print_source`) command such as:
 `ps /path/to/source.py`. See the screenshot below for an example of `ps` output:
-![tfdbg run-end UI: annotated Python source file](../images/tfdbg_screenshot_run_end_annotated_source.png)
+![tfdbg run-end UI: annotated Python source file](https://www.tensorflow.org/images/tfdbg_screenshot_run_end_annotated_source.png)
 Apply a value clipping on the input to @{tf.log}
 to resolve this problem:
@@ -309,7 +309,7 @@ operations, so that our training loop can dequeue examples from the example
 queue.
 <div style="width:70%; margin-left:12%; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../images/AnimatedFileQueues.gif">
+<img style="width:100%" src="https://www.tensorflow.org/images/AnimatedFileQueues.gif">
 </div>
 The helpers in `tf.train` that create these queues and enqueuing operations add
@@ -14,7 +14,7 @@ that takes an item off the queue, adds one to that item, and puts it back on the
 end of the queue. Slowly, the numbers on the queue increase.
 <div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../images/IncremeterFifoQueue.gif">
+<img style="width:100%" src="https://www.tensorflow.org/images/IncremeterFifoQueue.gif">
 </div>
 `Enqueue`, `EnqueueMany`, and `Dequeue` are special nodes. They take a pointer
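The animated incrementer described above can be modelled in plain Python (semantics only; in TensorFlow it is a `FIFOQueue` driven by Dequeue, Add, and Enqueue ops):

```python
from collections import deque

# A FIFO queue where each step takes a value off the front, adds one,
# and puts the result back on the end; the numbers slowly increase.

q = deque([0, 0, 0])
for _ in range(6):
    q.append(q.popleft() + 1)  # dequeue, increment, enqueue

print(list(q))  # [2, 2, 2]
```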
@@ -141,7 +141,7 @@ so that we may visualize them in @{$summaries_and_tensorboard$TensorBoard}.
 This is a good practice to verify that inputs are built correctly.
 <div style="width:50%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:70%" src="../images/cifar_image_summary.png">
+<img style="width:70%" src="https://www.tensorflow.org/images/cifar_image_summary.png">
 </div>
 Reading images from disk and distorting them can use a non-trivial amount of
@@ -170,7 +170,7 @@ Layer Name | Description
 Here is a graph generated from TensorBoard describing the inference operation:
 <div style="width:15%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../images/cifar_graph.png">
+<img style="width:100%" src="https://www.tensorflow.org/images/cifar_graph.png">
 </div>
 > **EXERCISE**: The output of `inference` are un-normalized logits. Try editing
@@ -205,7 +205,7 @@ loss and all these weight decay terms, as returned by the `loss()` function.
 We visualize it in TensorBoard with a @{tf.summary.scalar}:
-![CIFAR-10 Loss](../images/cifar_loss.png "CIFAR-10 Total Loss")
+![CIFAR-10 Loss](https://www.tensorflow.org/images/cifar_loss.png "CIFAR-10 Total Loss")
 We train the model using standard
 [gradient descent](https://en.wikipedia.org/wiki/Gradient_descent)
@@ -214,7 +214,7 @@ with a learning rate that
 @{tf.train.exponential_decay$exponentially decays}
 over time.
-![CIFAR-10 Learning Rate Decay](../images/cifar_lr_decay.png "CIFAR-10 Learning Rate Decay")
+![CIFAR-10 Learning Rate Decay](https://www.tensorflow.org/images/cifar_lr_decay.png "CIFAR-10 Learning Rate Decay")
 The `train()` function adds the operations needed to minimize the objective by
 calculating the gradient and updating the learned variables (see
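The exponential decay schedule referenced above can be sketched in plain Python. This models what `tf.train.exponential_decay` computes with `staircase=True`; the base rate and decay values here are illustrative, not the tutorial's actual hyperparameters:

```python
# Staircase exponential decay: the learning rate is multiplied by
# decay_rate once every decay_steps global steps.

def decayed_lr(base_lr, global_step, decay_steps, decay_rate):
    return base_lr * decay_rate ** (global_step // decay_steps)

for step in (0, 100, 200, 300):
    print(step, decayed_lr(0.1, step, decay_steps=100, decay_rate=0.5))
```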
@@ -295,8 +295,8 @@ For instance, we can watch how the distribution of activations and degree of
 sparsity in `local3` features evolve during training:
 <div style="width:100%; margin:auto; margin-bottom:10px; margin-top:20px; display: flex; flex-direction: row">
-<img style="flex-grow:1; flex-shrink:1;" src="../images/cifar_sparsity.png">
-<img style="flex-grow:1; flex-shrink:1;" src="../images/cifar_activations.png">
+<img style="flex-grow:1; flex-shrink:1;" src="https://www.tensorflow.org/images/cifar_sparsity.png">
+<img style="flex-grow:1; flex-shrink:1;" src="https://www.tensorflow.org/images/cifar_activations.png">
 </div>
 Individual loss functions, as well as the total loss, are particularly
@@ -378,7 +378,7 @@ processing a batch of data.
 Here is a diagram of this model:
 <div style="width:40%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../images/Parallelism.png">
+<img style="width:100%" src="https://www.tensorflow.org/images/Parallelism.png">
 </div>
 Note that each GPU computes inference as well as the gradients for a unique
@@ -36,7 +36,7 @@ images into [1000 classes], like "Zebra", "Dalmatian", and "Dishwasher".
 For example, here are the results from [AlexNet] classifying some images:
 <div style="width:50%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../images/AlexClassification.png">
+<img style="width:100%" src="https://www.tensorflow.org/images/AlexClassification.png">
 </div>
 To compare models, we examine how often the model fails to predict the
@@ -75,7 +75,7 @@ Start by cloning the [TensorFlow models repo](https://github.com/tensorflow/mode
 The above command will classify a supplied image of a panda bear.
 <div style="width:15%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../images/cropped_panda.jpg">
+<img style="width:100%" src="https://www.tensorflow.org/images/cropped_panda.jpg">
 </div>
 If the model runs correctly, the script will produce the following output:
@@ -137,7 +137,7 @@ score of 0.8.
 <div style="width:45%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../images/grace_hopper.jpg">
+<img style="width:100%" src="https://www.tensorflow.org/images/grace_hopper.jpg">
 </div>
 Next, try it out on your own images by supplying the --image= argument, e.g.
@@ -18,7 +18,7 @@ to help control the training process.
 ## Training on Flowers
-![Daisies by Kelly Sikkema](../images/daisies.jpg)
+![Daisies by Kelly Sikkema](https://www.tensorflow.org/images/daisies.jpg)
 [Image by Kelly Sikkema](https://www.flickr.com/photos/95072945@N05/9922116524/)
 Before you start any training, you'll need a set of images to teach the network
@@ -174,7 +174,7 @@ you do that and pass the root folder of the subdirectories as the argument to
 Here's what the folder structure of the flowers archive looks like, to give you
 and example of the kind of layout the script is looking for:
-![Folder Structure](../images/folder_structure.png)
+![Folder Structure](https://www.tensorflow.org/images/folder_structure.png)
 In practice it may take some work to get the accuracy you want. I'll try to
 guide you through some of the common problems you might encounter below.
@@ -7,7 +7,7 @@ activation functions, and applying dropout regularization. In this tutorial,
 you'll learn how to use `layers` to build a convolutional neural network model
 to recognize the handwritten digits in the MNIST data set.
-![handwritten digits 0–9 from the MNIST data set](../images/mnist_0-9.png)
+![handwritten digits 0–9 from the MNIST data set](https://www.tensorflow.org/images/mnist_0-9.png)
 **The [MNIST dataset](http://yann.lecun.com/exdb/mnist/) comprises 60,000
 training examples and 10,000 test examples of the handwritten digits 0–9,
@@ -109,7 +109,7 @@ Let's see what we've got.
 DisplayFractal(ns.eval())
 ```
-![jpeg](../images/mandelbrot_output.jpg)
+![jpeg](https://www.tensorflow.org/images/mandelbrot_output.jpg)
 Not bad!
@@ -93,7 +93,7 @@ for n in range(40):
 DisplayArray(u_init, rng=[-0.1, 0.1])
 ```
-![jpeg](../images/pde_output_1.jpg)
+![jpeg](https://www.tensorflow.org/images/pde_output_1.jpg)
 Now let's specify the details of the differential equation.
@@ -40,7 +40,7 @@ networks (RNNs): an *encoder* that processes the input and a *decoder* that
 generates the output. This basic architecture is depicted below.
 <div style="width:80%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../images/basic_seq2seq.png" />
+<img style="width:100%" src="https://www.tensorflow.org/images/basic_seq2seq.png" />
 </div>
 Each box in the picture above represents a cell of the RNN, most commonly
@@ -62,7 +62,7 @@ decoding step. A multi-layer sequence-to-sequence network with LSTM cells and
 attention mechanism in the decoder looks like this.
 <div style="width:80%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../images/attention_seq2seq.png" />
+<img style="width:100%" src="https://www.tensorflow.org/images/attention_seq2seq.png" />
 </div>
 ## TensorFlow seq2seq library
@@ -17,8 +17,7 @@ large-scale regression and classification problems with sparse input features
 you're interested in learning more about how Wide & Deep Learning works, please
 check out our [research paper](http://arxiv.org/abs/1606.07792).
-![Wide & Deep Spectrum of Models]
-(../images/wide_n_deep.svg "Wide & Deep")
+![Wide & Deep Spectrum of Models](https://www.tensorflow.org/images/wide_n_deep.svg "Wide & Deep")
 The figure above shows a comparison of a wide model (logistic regression with
 sparse features and transformations), a deep model (feed-forward neural network
@@ -51,7 +51,7 @@ means that we may need more data in order to successfully train statistical
 models. Using vector representations can overcome some of these obstacles.
 <div style="width:100%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../images/audio-image-text.png" alt>
+<img style="width:100%" src="https://www.tensorflow.org/images/audio-image-text.png" alt>
 </div>
 [Vector space models](https://en.wikipedia.org/wiki/Vector_space_model) (VSMs)
@@ -125,7 +125,7 @@ probability using the score for all other \\(V\\) words \\(w'\\) in the current
 context \\(h\\), *at every training step*.
 <div style="width:60%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../images/softmax-nplm.png" alt>
+<img style="width:100%" src="https://www.tensorflow.org/images/softmax-nplm.png" alt>
 </div>
 On the other hand, for feature learning in word2vec we do not need a full
@@ -136,7 +136,7 @@ same context. We illustrate this below for a CBOW model. For skip-gram the
 direction is simply inverted.
 <div style="width:60%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../images/nce-nplm.png" alt>
+<img style="width:100%" src="https://www.tensorflow.org/images/nce-nplm.png" alt>
 </div>
 Mathematically, the objective (for each example) is to maximize
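As a hedged sketch of the noise-contrastive idea described above: score the true (context, target) pair high and a handful of sampled noise words low, via a binary logistic objective. Toy numbers in plain Python, not the tutorial's `tf.nn.nce_loss` implementation:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def neg_sampling_objective(pos_score, noise_scores):
    # maximize: log sigma(s_pos) + sum_j log sigma(-s_noise_j)
    return math.log(sigmoid(pos_score)) + sum(
        math.log(sigmoid(-s)) for s in noise_scores)

# A model that scores the true pair high and noise low does better
# (objective closer to 0) than one that does the opposite.
print(neg_sampling_objective(2.0, [-1.5, -0.3]))
print(neg_sampling_objective(-2.0, [1.5, 0.3]))
```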
@@ -233,7 +233,7 @@ below (see also for example
 [Mikolov et al., 2013](http://www.aclweb.org/anthology/N13-1090)).
 <div style="width:100%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../images/linear-relationships.png" alt>
+<img style="width:100%" src="https://www.tensorflow.org/images/linear-relationships.png" alt>
 </div>
 This explains why these vectors are also useful as features for many canonical
@@ -335,7 +335,7 @@ After training has finished we can visualize the learned embeddings using
 t-SNE.
 <div style="width:100%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../images/tsne.png" alt>
+<img style="width:100%" src="https://www.tensorflow.org/images/tsne.png" alt>
 </div>
 Et voila! As expected, words that are similar end up clustering nearby each
@@ -57,7 +57,7 @@ func makeOutputList(op *tf.Operation, start int, output string) ([]tf.Output, in
 // Requires `updates.shape = indices.shape + ref.shape[1:]`.
 //
 // <div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-// <img style="width:100%" src="../../images/ScatterAdd.png" alt>
+// <img style="width:100%" src="https://www.tensorflow.org/images/ScatterAdd.png" alt>
 // </div>
 //
 // Arguments:
@@ -3161,7 +3161,7 @@ func SpaceToDepth(scope *Scope, input tf.Output, block_size int64) (output tf.Ou
 // tensor with 8 elements.
 //
 // <div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-// <img style="width:100%" src="../../images/ScatterNd1.png" alt>
+// <img style="width:100%" src="https://www.tensorflow.org/images/ScatterNd1.png" alt>
 // </div>
 //
 // In Python, this scatter operation would look like this:
@@ -3184,7 +3184,7 @@ func SpaceToDepth(scope *Scope, input tf.Output, block_size int64) (output tf.Ou
 // rank-3 tensor with two matrices of new values.
 //
 // <div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-// <img style="width:100%" src="../../images/ScatterNd2.png" alt>
+// <img style="width:100%" src="https://www.tensorflow.org/images/ScatterNd2.png" alt>
 // </div>
 //
 // In Python, this scatter operation would look like this:
@@ -4940,7 +4940,7 @@ func TensorArrayGatherV2(scope *Scope, handle tf.Output, indices tf.Output, flow
 // ```
 //
 // <div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-// <img style="width:100%" src="../../images/DynamicStitch.png" alt>
+// <img style="width:100%" src="https://www.tensorflow.org/images/DynamicStitch.png" alt>
 // </div>
 func DynamicStitch(scope *Scope, indices []tf.Output, data []tf.Output) (merged tf.Output) {
 	if scope.Err() != nil {
@@ -13758,7 +13758,7 @@ func GatherValidateIndices(value bool) GatherAttr {
 // raising an error.
 //
 // <div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-// <img style="width:100%" src="../../../images/Gather.png" alt>
+// <img style="width:100%" src="https://www.tensorflow.org/images/Gather.png" alt>
 // </div>
 func Gather(scope *Scope, params tf.Output, indices tf.Output, optional ...GatherAttr) (output tf.Output) {
 	if scope.Err() != nil {
@@ -19994,7 +19994,7 @@ func Sum(scope *Scope, input tf.Output, reduction_indices tf.Output, optional ..
 // ```
 //
 // <div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-// <img style="width:100%" src="../../images/DynamicPartition.png" alt>
+// <img style="width:100%" src="https://www.tensorflow.org/images/DynamicPartition.png" alt>
 // </div>
 //
 // Arguments: