From c915f338429b4ffcd430acb845194a54b9629291 Mon Sep 17 00:00:00 2001 From: "A. Unique TensorFlower" Date: Thu, 12 Jan 2017 18:50:55 -0800 Subject: [PATCH 01/51] Update generated Python Op docs. Change: 144398858 --- tensorflow/g3doc/api_docs/python/array_ops.md | 2 +- .../g3doc/api_docs/python/contrib.layers.md | 6 +- .../shard0/tf.depth_to_space.md | 2 +- .../shard0/tf.summary.TaggedRunMetadata.md | 244 -------- .../shard1/tf.merge_all_summaries.md | 17 - .../shard2/tf.image_summary.md | 49 -- .../shard2/tf.summary.SummaryDescription.md | 237 -------- .../shard2/tf.test.TestCase.md | 521 +++++++++++++++++- .../shard3/tf.scalar_summary.md | 22 - .../shard4/tf.contrib.layers.batch_norm.md | 3 +- ...ry.SummaryDescription.RegisterExtension.md | 4 - .../shard5/tf.histogram_summary.md | 26 - .../shard5/tf.image.total_variation.md | 40 ++ .../shard5/tf.merge_summary.md | 27 - ...f.summary.SummaryDescription.FromString.md | 4 - ...ary.TaggedRunMetadata.RegisterExtension.md | 4 - .../shard7/tf.contrib.layers.layer_norm.md | 3 +- .../shard7/tf.train.SummaryWriter.md | 207 ------- .../shard9/tf.audio_summary.md | 37 -- ...tf.summary.TaggedRunMetadata.FromString.md | 4 - tensorflow/g3doc/api_docs/python/image.md | 46 ++ tensorflow/g3doc/api_docs/python/index.md | 1 + tensorflow/g3doc/api_docs/python/summary.md | 481 ---------------- tensorflow/g3doc/api_docs/python/test.md | 521 +++++++++++++++++- 24 files changed, 1119 insertions(+), 1389 deletions(-) delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.merge_all_summaries.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.image_summary.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.scalar_summary.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.summary.SummaryDescription.RegisterExtension.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.histogram_summary.md create mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.total_variation.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.merge_summary.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.summary.SummaryDescription.FromString.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.summary.TaggedRunMetadata.RegisterExtension.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.SummaryWriter.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.audio_summary.md delete mode 100644 tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.summary.TaggedRunMetadata.FromString.md diff --git a/tensorflow/g3doc/api_docs/python/array_ops.md b/tensorflow/g3doc/api_docs/python/array_ops.md index a6d950dc6ed..2dcf6bcca6f 100644 --- a/tensorflow/g3doc/api_docs/python/array_ops.md +++ b/tensorflow/g3doc/api_docs/python/array_ops.md @@ -2044,7 +2044,7 @@ The attr `block_size` indicates the input block size and how the data is moved. * Chunks of data of size `block_size * block_size` from depth are rearranged into non-overlapping blocks of size `block_size x block_size` - * The width the output tensor is `input_width * block_size`, whereas the + * The width the output tensor is `input_depth * block_size`, whereas the height is `input_height * block_size`. 
* The depth of the input tensor must be divisible by `block_size * block_size`. diff --git a/tensorflow/g3doc/api_docs/python/contrib.layers.md b/tensorflow/g3doc/api_docs/python/contrib.layers.md index c9bbabdd4fd..d2751e8febc 100644 --- a/tensorflow/g3doc/api_docs/python/contrib.layers.md +++ b/tensorflow/g3doc/api_docs/python/contrib.layers.md @@ -83,7 +83,8 @@ can have speed penalty, specially in distributed settings. Lower `decay` value (recommend trying `decay`=0.9) if model experiences reasonably good training performance but poor validation and/or test performance. Try zero_debias_moving_mean=True for improved stability. -* `center`: If True, subtract `beta`. If False, `beta` is ignored. +* `center`: If True, add offset of `beta` to normalized tensor. If False, `beta` + is ignored. * `scale`: If True, multiply by `gamma`. If False, `gamma` is not used. When the next layer is linear (also e.g. `nn.relu`), this can be disabled since the scaling can be done by the next layer. @@ -411,7 +412,8 @@ Can be used as a normalizer function for conv2d and fully_connected. * `inputs`: a tensor with 2 or more dimensions. The normalization occurs over all but the first dimension. -* `center`: If True, subtract `beta`. If False, `beta` is ignored. +* `center`: If True, add offset of `beta` to normalized tensor. If False, `beta` + is ignored. * `scale`: If True, multiply by `gamma`. If False, `gamma` is not used. When the next layer is linear (also e.g. `nn.relu`), this can be disabled since the scaling can be done by the next layer. diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.depth_to_space.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.depth_to_space.md index ef74b4d54a4..03dc6bb3b0d 100644 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.depth_to_space.md +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.depth_to_space.md @@ -10,7 +10,7 @@ The attr `block_size` indicates the input block size and how the data is moved. * Chunks of data of size `block_size * block_size` from depth are rearranged into non-overlapping blocks of size `block_size x block_size` - * The width the output tensor is `input_width * block_size`, whereas the + * The width the output tensor is `input_depth * block_size`, whereas the height is `input_height * block_size`. * The depth of the input tensor must be divisible by `block_size * block_size`. diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.summary.TaggedRunMetadata.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.summary.TaggedRunMetadata.md index 8dc62c4c18c..788d2066ad7 100644 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.summary.TaggedRunMetadata.md +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.summary.TaggedRunMetadata.md @@ -1,185 +1,4 @@ -- - - - -#### `tf.summary.TaggedRunMetadata.ByteSize()` {#TaggedRunMetadata.ByteSize} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.Clear()` {#TaggedRunMetadata.Clear} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.ClearExtension(extension_handle)` {#TaggedRunMetadata.ClearExtension} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.ClearField(field_name)` {#TaggedRunMetadata.ClearField} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.CopyFrom(other_msg)` {#TaggedRunMetadata.CopyFrom} - -Copies the content of the specified message into the current message. 
- -The method clears the current message and then merges the specified -message using MergeFrom. - -##### Args: - - -* `other_msg`: Message to copy into the current one. - - -- - - - -#### `tf.summary.TaggedRunMetadata.DiscardUnknownFields()` {#TaggedRunMetadata.DiscardUnknownFields} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.FindInitializationErrors()` {#TaggedRunMetadata.FindInitializationErrors} - -Finds required fields which are not initialized. - -##### Returns: - - A list of strings. Each string is a path to an uninitialized field from - the top-level message, e.g. "foo.bar[5].baz". - - -- - - - -#### `tf.summary.TaggedRunMetadata.FromString(s)` {#TaggedRunMetadata.FromString} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.HasExtension(extension_handle)` {#TaggedRunMetadata.HasExtension} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.HasField(field_name)` {#TaggedRunMetadata.HasField} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.IsInitialized(errors=None)` {#TaggedRunMetadata.IsInitialized} - -Checks if all required fields of a message are set. - -##### Args: - - -* `errors`: A list which, if provided, will be populated with the field - paths of all missing required fields. - -##### Returns: - - True iff the specified message has all required fields set. - - -- - - - -#### `tf.summary.TaggedRunMetadata.ListFields()` {#TaggedRunMetadata.ListFields} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.MergeFrom(msg)` {#TaggedRunMetadata.MergeFrom} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.MergeFromString(serialized)` {#TaggedRunMetadata.MergeFromString} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.ParseFromString(serialized)` {#TaggedRunMetadata.ParseFromString} - -Parse serialized protocol buffer data into this message. - -Like MergeFromString(), except we clear the object first and -do not return the value that MergeFromString returns. - - -- - - - -#### `tf.summary.TaggedRunMetadata.RegisterExtension(extension_handle)` {#TaggedRunMetadata.RegisterExtension} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.SerializePartialToString()` {#TaggedRunMetadata.SerializePartialToString} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.SerializeToString()` {#TaggedRunMetadata.SerializeToString} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.SetInParent()` {#TaggedRunMetadata.SetInParent} - -Sets the _cached_byte_size_dirty bit to true, -and propagates this to our listener iff this was a state change. - - -- - - - -#### `tf.summary.TaggedRunMetadata.WhichOneof(oneof_name)` {#TaggedRunMetadata.WhichOneof} - -Returns the name of the currently set field inside a oneof, or None. - - -- - - - -#### `tf.summary.TaggedRunMetadata.__deepcopy__(memo=None)` {#TaggedRunMetadata.__deepcopy__} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.__eq__(other)` {#TaggedRunMetadata.__eq__} - - - - - - - #### `tf.summary.TaggedRunMetadata.__getstate__()` {#TaggedRunMetadata.__getstate__} @@ -187,66 +6,3 @@ Returns the name of the currently set field inside a oneof, or None. Support the pickle protocol. 
-- - - - -#### `tf.summary.TaggedRunMetadata.__hash__()` {#TaggedRunMetadata.__hash__} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.__init__(**kwargs)` {#TaggedRunMetadata.__init__} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.__ne__(other_msg)` {#TaggedRunMetadata.__ne__} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.__repr__()` {#TaggedRunMetadata.__repr__} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.__setstate__(state)` {#TaggedRunMetadata.__setstate__} - -Support the pickle protocol. - - -- - - - -#### `tf.summary.TaggedRunMetadata.__str__()` {#TaggedRunMetadata.__str__} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.__unicode__()` {#TaggedRunMetadata.__unicode__} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.run_metadata` {#TaggedRunMetadata.run_metadata} - -Magic attribute generated for "run_metadata" proto field. - - -- - - - -#### `tf.summary.TaggedRunMetadata.tag` {#TaggedRunMetadata.tag} - -Magic attribute generated for "tag" proto field. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.merge_all_summaries.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.merge_all_summaries.md deleted file mode 100644 index bf17320a5a3..00000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.merge_all_summaries.md +++ /dev/null @@ -1,17 +0,0 @@ -### `tf.merge_all_summaries(*args, **kwargs)` {#merge_all_summaries} - -Merges all summaries collected in the default graph. (deprecated) - -THIS FUNCTION IS DEPRECATED. It will be removed after 2016-11-30. -Instructions for updating: -Please switch to tf.summary.merge_all. - - Args: - key: `GraphKey` used to collect the summaries. Defaults to - `GraphKeys.SUMMARIES`. - - Returns: - If no summaries were collected, returns None. Otherwise returns a scalar - `Tensor` of type `string` containing the serialized `Summary` protocol - buffer resulting from the merging. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.image_summary.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.image_summary.md deleted file mode 100644 index 6220d3641bc..00000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.image_summary.md +++ /dev/null @@ -1,49 +0,0 @@ -### `tf.image_summary(*args, **kwargs)` {#image_summary} - -Outputs a `Summary` protocol buffer with images. (deprecated) - -THIS FUNCTION IS DEPRECATED. It will be removed after 2016-11-30. -Instructions for updating: -Please switch to tf.summary.image. Note that tf.summary.histogram uses the node name instead of the tag. This means that TensorFlow will automatically de-duplicate summary names based on the scope they are created in. Also, the max_images argument was renamed to max_outputs. - - The summary has up to `max_images` summary values containing images. The - images are built from `tensor` which must be 4-D with shape `[batch_size, - height, width, channels]` and where `channels` can be: - - * 1: `tensor` is interpreted as Grayscale. - * 3: `tensor` is interpreted as RGB. - * 4: `tensor` is interpreted as RGBA. - - The images have the same number of channels as the input tensor. For float - input, the values are normalized one image at a time to fit in the range - `[0, 255]`. `uint8` values are unchanged. The op uses two different - normalization algorithms: - - * If the input values are all positive, they are rescaled so the largest one - is 255. 
- - * If any input value is negative, the values are shifted so input value 0.0 - is at 127. They are then rescaled so that either the smallest value is 0, - or the largest one is 255. - - The `tag` argument is a scalar `Tensor` of type `string`. It is used to - build the `tag` of the summary values: - - * If `max_images` is 1, the summary value tag is '*tag*/image'. - * If `max_images` is greater than 1, the summary value tags are - generated sequentially as '*tag*/image/0', '*tag*/image/1', etc. - - Args: - tag: A scalar `Tensor` of type `string`. Used to build the `tag` - of the summary values. - tensor: A 4-D `uint8` or `float32` `Tensor` of shape `[batch_size, height, - width, channels]` where `channels` is 1, 3, or 4. - max_images: Max number of batch elements to generate images for. - collections: Optional list of ops.GraphKeys. The collections to add the - summary to. Defaults to [ops.GraphKeys.SUMMARIES] - name: A name for the operation (optional). - - Returns: - A scalar `Tensor` of type `string`. The serialized `Summary` protocol - buffer. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.summary.SummaryDescription.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.summary.SummaryDescription.md index bce704ef4f2..19532f7cc33 100644 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.summary.SummaryDescription.md +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.summary.SummaryDescription.md @@ -1,185 +1,4 @@ -- - - - -#### `tf.summary.SummaryDescription.ByteSize()` {#SummaryDescription.ByteSize} - - - - -- - - - -#### `tf.summary.SummaryDescription.Clear()` {#SummaryDescription.Clear} - - - - -- - - - -#### `tf.summary.SummaryDescription.ClearExtension(extension_handle)` {#SummaryDescription.ClearExtension} - - - - -- - - - -#### `tf.summary.SummaryDescription.ClearField(field_name)` {#SummaryDescription.ClearField} - - - - -- - - - -#### `tf.summary.SummaryDescription.CopyFrom(other_msg)` {#SummaryDescription.CopyFrom} - -Copies the content of the specified message into the current message. - -The method clears the current message and then merges the specified -message using MergeFrom. - -##### Args: - - -* `other_msg`: Message to copy into the current one. - - -- - - - -#### `tf.summary.SummaryDescription.DiscardUnknownFields()` {#SummaryDescription.DiscardUnknownFields} - - - - -- - - - -#### `tf.summary.SummaryDescription.FindInitializationErrors()` {#SummaryDescription.FindInitializationErrors} - -Finds required fields which are not initialized. - -##### Returns: - - A list of strings. Each string is a path to an uninitialized field from - the top-level message, e.g. "foo.bar[5].baz". - - -- - - - -#### `tf.summary.SummaryDescription.FromString(s)` {#SummaryDescription.FromString} - - - - -- - - - -#### `tf.summary.SummaryDescription.HasExtension(extension_handle)` {#SummaryDescription.HasExtension} - - - - -- - - - -#### `tf.summary.SummaryDescription.HasField(field_name)` {#SummaryDescription.HasField} - - - - -- - - - -#### `tf.summary.SummaryDescription.IsInitialized(errors=None)` {#SummaryDescription.IsInitialized} - -Checks if all required fields of a message are set. - -##### Args: - - -* `errors`: A list which, if provided, will be populated with the field - paths of all missing required fields. - -##### Returns: - - True iff the specified message has all required fields set. 
- - -- - - - -#### `tf.summary.SummaryDescription.ListFields()` {#SummaryDescription.ListFields} - - - - -- - - - -#### `tf.summary.SummaryDescription.MergeFrom(msg)` {#SummaryDescription.MergeFrom} - - - - -- - - - -#### `tf.summary.SummaryDescription.MergeFromString(serialized)` {#SummaryDescription.MergeFromString} - - - - -- - - - -#### `tf.summary.SummaryDescription.ParseFromString(serialized)` {#SummaryDescription.ParseFromString} - -Parse serialized protocol buffer data into this message. - -Like MergeFromString(), except we clear the object first and -do not return the value that MergeFromString returns. - - -- - - - -#### `tf.summary.SummaryDescription.RegisterExtension(extension_handle)` {#SummaryDescription.RegisterExtension} - - - - -- - - - -#### `tf.summary.SummaryDescription.SerializePartialToString()` {#SummaryDescription.SerializePartialToString} - - - - -- - - - -#### `tf.summary.SummaryDescription.SerializeToString()` {#SummaryDescription.SerializeToString} - - - - -- - - - -#### `tf.summary.SummaryDescription.SetInParent()` {#SummaryDescription.SetInParent} - -Sets the _cached_byte_size_dirty bit to true, -and propagates this to our listener iff this was a state change. - - -- - - - -#### `tf.summary.SummaryDescription.WhichOneof(oneof_name)` {#SummaryDescription.WhichOneof} - -Returns the name of the currently set field inside a oneof, or None. - - -- - - - -#### `tf.summary.SummaryDescription.__deepcopy__(memo=None)` {#SummaryDescription.__deepcopy__} - - - - -- - - - -#### `tf.summary.SummaryDescription.__eq__(other)` {#SummaryDescription.__eq__} - - - - - - - #### `tf.summary.SummaryDescription.__getstate__()` {#SummaryDescription.__getstate__} @@ -187,59 +6,3 @@ Returns the name of the currently set field inside a oneof, or None. Support the pickle protocol. -- - - - -#### `tf.summary.SummaryDescription.__hash__()` {#SummaryDescription.__hash__} - - - - -- - - - -#### `tf.summary.SummaryDescription.__init__(**kwargs)` {#SummaryDescription.__init__} - - - - -- - - - -#### `tf.summary.SummaryDescription.__ne__(other_msg)` {#SummaryDescription.__ne__} - - - - -- - - - -#### `tf.summary.SummaryDescription.__repr__()` {#SummaryDescription.__repr__} - - - - -- - - - -#### `tf.summary.SummaryDescription.__setstate__(state)` {#SummaryDescription.__setstate__} - -Support the pickle protocol. - - -- - - - -#### `tf.summary.SummaryDescription.__str__()` {#SummaryDescription.__str__} - - - - -- - - - -#### `tf.summary.SummaryDescription.__unicode__()` {#SummaryDescription.__unicode__} - - - - -- - - - -#### `tf.summary.SummaryDescription.type_hint` {#SummaryDescription.type_hint} - -Magic attribute generated for "type_hint" proto field. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.test.TestCase.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.test.TestCase.md index 4d4330488f6..e9e8a2684ca 100644 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.test.TestCase.md +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.test.TestCase.md @@ -173,6 +173,125 @@ Checks that for all elements of farray1 and farray2 * `err`: a float value. +- - - + +#### `tf.test.TestCase.assertBetween(value, minv, maxv, msg=None)` {#TestCase.assertBetween} + +Asserts that value is between minv and maxv (inclusive). 
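
For example, a minimal hypothetical test might use `assertBetween` to check that a clipped value stays inside the clipping range; the test class, op, and constants below are invented purely for illustration:

```python
import tensorflow as tf


class ClipRangeTest(tf.test.TestCase):

  def testClippedValueStaysInRange(self):
    with self.test_session():
      x = tf.constant(3.7)
      clipped = tf.clip_by_value(x, 0.0, 1.0).eval()
      # Inclusive range check provided by tf.test.TestCase.
      self.assertBetween(clipped, 0.0, 1.0)


if __name__ == '__main__':
  tf.test.main()
```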
+ + +- - - + +#### `tf.test.TestCase.assertCommandFails(command, regexes, env=None, close_fds=True, msg=None)` {#TestCase.assertCommandFails} + +Asserts a shell command fails and the error matches a regex in a list. + +##### Args: + + +* `command`: List or string representing the command to run. +* `regexes`: the list of regular expression strings. +* `env`: Dictionary of environment variable settings. +* `close_fds`: Whether or not to close all open fd's in the child after + forking. +* `msg`: Optional message to report on failure. + + +- - - + +#### `tf.test.TestCase.assertCommandSucceeds(command, regexes=('',), env=None, close_fds=True, msg=None)` {#TestCase.assertCommandSucceeds} + +Asserts that a shell command succeeds (i.e. exits with code 0). + +##### Args: + + +* `command`: List or string representing the command to run. +* `regexes`: List of regular expression byte strings that match success. +* `env`: Dictionary of environment variable settings. +* `close_fds`: Whether or not to close all open fd's in the child after + forking. +* `msg`: Optional message to report on failure. + + +- - - + +#### `tf.test.TestCase.assertContainsExactSubsequence(container, subsequence, msg=None)` {#TestCase.assertContainsExactSubsequence} + +Assert that "container" contains "subsequence" as an exact subsequence. + +Asserts that "container" contains all the elements of "subsequence", in +order, and without other elements interspersed. For example, [1, 2, 3] is an +exact subsequence of [0, 0, 1, 2, 3, 0] but not of [0, 0, 1, 2, 0, 3, 0]. + +##### Args: + + +* `container`: the list we're testing for subsequence inclusion. +* `subsequence`: the list we hope will be an exact subsequence of container. +* `msg`: Optional message to report on failure. + + +- - - + +#### `tf.test.TestCase.assertContainsInOrder(strings, target, msg=None)` {#TestCase.assertContainsInOrder} + +Asserts that the strings provided are found in the target in order. + +This may be useful for checking HTML output. + +##### Args: + + +* `strings`: A list of strings, such as [ 'fox', 'dog' ] +* `target`: A target string in which to look for the strings, such as + 'The quick brown fox jumped over the lazy dog'. +* `msg`: Optional message to report on failure. + + +- - - + +#### `tf.test.TestCase.assertContainsSubsequence(container, subsequence, msg=None)` {#TestCase.assertContainsSubsequence} + +Assert that "container" contains "subsequence" as a subsequence. + +Asserts that "container" contains all the elements of "subsequence", in +order, but possibly with other elements interspersed. For example, [1, 2, 3] +is a subsequence of [0, 0, 1, 2, 0, 3, 0] but not of [0, 0, 1, 3, 0, 2, 0]. + +##### Args: + + +* `container`: the list we're testing for subsequence inclusion. +* `subsequence`: the list we hope will be a subsequence of container. +* `msg`: Optional message to report on failure. + + +- - - + +#### `tf.test.TestCase.assertContainsSubset(expected_subset, actual_set, msg=None)` {#TestCase.assertContainsSubset} + +Checks whether actual iterable is a superset of expected iterable. + + +- - - + +#### `tf.test.TestCase.assertCountEqual(*args, **kwargs)` {#TestCase.assertCountEqual} + +An unordered sequence specific comparison. + +Equivalent to assertItemsEqual(). This method is a compatibility layer +for Python 3k, since 2to3 does not convert assertItemsEqual() calls into +assertCountEqual() calls. + +##### Args: + + +* `expected_seq`: A sequence containing elements we are expecting. +* `actual_seq`: The sequence that we are testing. 
+* `msg`: The message to be printed if the test fails. + + - - - #### `tf.test.TestCase.assertDeviceEqual(device1, device2)` {#TestCase.assertDeviceEqual} @@ -195,9 +314,48 @@ Checks whether actual is a superset of expected. - - - -#### `tf.test.TestCase.assertDictEqual(d1, d2, msg=None)` {#TestCase.assertDictEqual} +#### `tf.test.TestCase.assertDictEqual(a, b, msg=None)` {#TestCase.assertDictEqual} + +Raises AssertionError if a and b are not equal dictionaries. + +##### Args: +* `a`: A dict, the expected value. +* `b`: A dict, the actual value. +* `msg`: An optional str, the associated message. + +##### Raises: + + +* `AssertionError`: if the dictionaries are not equal. + + +- - - + +#### `tf.test.TestCase.assertEmpty(container, msg=None)` {#TestCase.assertEmpty} + +Assert that an object has zero length. + +##### Args: + + +* `container`: Anything that implements the collections.Sized interface. +* `msg`: Optional message to report on failure. + + +- - - + +#### `tf.test.TestCase.assertEndsWith(actual, expected_end, msg=None)` {#TestCase.assertEndsWith} + +Assert that actual.endswith(expected_end) is True. + +##### Args: + + +* `actual`: str +* `expected_end`: str +* `msg`: Optional message to report on failure. - - - @@ -282,10 +440,11 @@ Included for symmetry with assertIsNone. - - - -#### `tf.test.TestCase.assertItemsEqual(expected_seq, actual_seq, msg=None)` {#TestCase.assertItemsEqual} +#### `tf.test.TestCase.assertItemsEqual(*args, **kwargs)` {#TestCase.assertItemsEqual} -An unordered sequence specific comparison. It asserts that -actual_seq and expected_seq have the same element counts. +An unordered sequence specific comparison. + +It asserts that actual_seq and expected_seq have the same element counts. Equivalent to:: self.assertEqual(Counter(iter(actual_seq)), @@ -298,6 +457,30 @@ Asserts that each element has the same count in both sequences. - [0, 1, 1] and [1, 0, 1] compare equal. - [0, 0, 1] and [0, 1] compare unequal. +##### Args: + + +* `expected_seq`: A sequence containing elements we are expecting. +* `actual_seq`: The sequence that we are testing. +* `msg`: The message to be printed if the test fails. + + +- - - + +#### `tf.test.TestCase.assertJsonEqual(first, second, msg=None)` {#TestCase.assertJsonEqual} + +Asserts that the JSON objects defined in two strings are equal. + +A summary of the differences will be included in the failure message +using assertSameStructure. + +##### Args: + + +* `first`: A string contining JSON to decode and compare to second. +* `second`: A string contining JSON to decode and compare to first. +* `msg`: Additional text to include in the failure message. + - - - @@ -367,6 +550,13 @@ if not. * `msg`: An optional string message to append to the failure message. +- - - + +#### `tf.test.TestCase.assertNoCommonElements(expected_seq, actual_seq, msg=None)` {#TestCase.assertNoCommonElements} + +Checks whether actual iterable and expected iterable are disjoint. + + - - - #### `tf.test.TestCase.assertNotAlmostEqual(first, second, places=None, msg=None, delta=None)` {#TestCase.assertNotAlmostEqual} @@ -397,6 +587,33 @@ as significant digits (measured from the most signficant digit). Objects that are equal automatically fail. +- - - + +#### `tf.test.TestCase.assertNotEmpty(container, msg=None)` {#TestCase.assertNotEmpty} + +Assert that an object has non-zero length. + +##### Args: + + +* `container`: Anything that implements the collections.Sized interface. +* `msg`: Optional message to report on failure. 
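
As a quick illustration, a minimal hypothetical test exercising `assertEmpty`, `assertNotEmpty`, and `assertDictEqual` on plain Python containers might look like the sketch below; the class name and values are invented:

```python
import tensorflow as tf


class ContainerAssertionsTest(tf.test.TestCase):

  def testContainerChecks(self):
    # assertEmpty / assertNotEmpty accept anything implementing __len__
    # (lists, dicts, sets, ...).
    self.assertEmpty([])
    self.assertNotEmpty({'a': 1})
    # assertDictEqual reports a readable diff of the two dicts on failure.
    self.assertDictEqual({'lr': 0.1}, {'lr': 0.1})


if __name__ == '__main__':
  tf.test.main()
```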
+ + +- - - + +#### `tf.test.TestCase.assertNotEndsWith(actual, unexpected_end, msg=None)` {#TestCase.assertNotEndsWith} + +Assert that actual.endswith(unexpected_end) is False. + +##### Args: + + +* `actual`: str +* `unexpected_end`: str +* `msg`: Optional message to report on failure. + + - - - #### `tf.test.TestCase.assertNotEqual(first, second, msg=None)` {#TestCase.assertNotEqual} @@ -434,6 +651,20 @@ Included for symmetry with assertIsInstance. Fail the test if the text matches the regular expression. +- - - + +#### `tf.test.TestCase.assertNotStartsWith(actual, unexpected_start, msg=None)` {#TestCase.assertNotStartsWith} + +Assert that actual.startswith(unexpected_start) is False. + +##### Args: + + +* `actual`: str +* `unexpected_start`: str +* `msg`: Optional message to report on failure. + + - - - #### `tf.test.TestCase.assertProtoEquals(expected_message_maybe_ascii, message)` {#TestCase.assertProtoEquals} @@ -508,6 +739,38 @@ Asserts that the message in a raised exception matches a regexp. * `kwargs`: Extra kwargs. +- - - + +#### `tf.test.TestCase.assertRaisesWithLiteralMatch(expected_exception, expected_exception_message, callable_obj=None, *args, **kwargs)` {#TestCase.assertRaisesWithLiteralMatch} + +Asserts that the message in a raised exception equals the given string. + +Unlike assertRaisesRegexp, this method takes a literal string, not +a regular expression. + +with self.assertRaisesWithLiteralMatch(ExType, 'message'): + DoSomething() + +##### Args: + + +* `expected_exception`: Exception class expected to be raised. +* `expected_exception_message`: String message expected in the raised + exception. For a raise exception e, expected_exception_message must + equal str(e). +* `callable_obj`: Function to be called, or None to return a context. +* `args`: Extra args. +* `kwargs`: Extra kwargs. + +##### Returns: + + A context manager if callable_obj is None. Otherwise, None. + +##### Raises: + + self.failureException if callable_obj does not raise a macthing exception. + + - - - #### `tf.test.TestCase.assertRaisesWithPredicateMatch(exception_type, expected_err_re_or_predicate)` {#TestCase.assertRaisesWithPredicateMatch} @@ -532,6 +795,71 @@ predicate search. exception. +- - - + +#### `tf.test.TestCase.assertRaisesWithRegexpMatch(expected_exception, expected_regexp, callable_obj=None, *args, **kwargs)` {#TestCase.assertRaisesWithRegexpMatch} + +Asserts that the message in a raised exception matches the given regexp. + +This is just a wrapper around assertRaisesRegexp. Please use +assertRaisesRegexp instead of assertRaisesWithRegexpMatch. + +##### Args: + + +* `expected_exception`: Exception class expected to be raised. +* `expected_regexp`: Regexp (re pattern object or string) expected to be + found in error message. +* `callable_obj`: Function to be called, or None to return a context. +* `args`: Extra args. +* `kwargs`: Extra keyword args. + +##### Returns: + + A context manager if callable_obj is None. Otherwise, None. + +##### Raises: + + self.failureException if callable_obj does not raise a macthing exception. + + +- - - + +#### `tf.test.TestCase.assertRegexMatch(actual_str, regexes, message=None)` {#TestCase.assertRegexMatch} + +Asserts that at least one regex in regexes matches str. + + If possible you should use assertRegexpMatches, which is a simpler + version of this method. assertRegexpMatches takes a single regular + expression (a string or re compiled object) instead of a list. + + Notes: + 1. This function uses substring matching, i.e. 
the matching + succeeds if *any* substring of the error message matches *any* + regex in the list. This is more convenient for the user than + full-string matching. + + 2. If regexes is the empty list, the matching will always fail. + + 3. Use regexes=[''] for a regex that will always pass. + + 4. '.' matches any single character *except* the newline. To + match any character, use '(.| +)'. + + 5. '^' matches the beginning of each line, not just the beginning + of the string. Similarly, '$' matches the end of each line. + + 6. An exception will be thrown if regexes contains an invalid + regex. + + Args: + actual_str: The string we try to match with the items in regexes. + regexes: The regular expressions we want to match against str. + See "Notes" above for detailed notes on how this is interpreted. + message: The message to be printed if the test fails. + + - - - #### `tf.test.TestCase.assertRegexpMatches(text, expected_regexp, msg=None)` {#TestCase.assertRegexpMatches} @@ -539,6 +867,79 @@ predicate search. Fail the test unless the text matches the regular expression. +- - - + +#### `tf.test.TestCase.assertSameElements(expected_seq, actual_seq, msg=None)` {#TestCase.assertSameElements} + +Assert that two sequences have the same elements (in any order). + +This method, unlike assertItemsEqual, doesn't care about any +duplicates in the expected and actual sequences. + + >> assertSameElements([1, 1, 1, 0, 0, 0], [0, 1]) + # Doesn't raise an AssertionError + +If possible, you should use assertItemsEqual instead of +assertSameElements. + +##### Args: + + +* `expected_seq`: A sequence containing elements we are expecting. +* `actual_seq`: The sequence that we are testing. +* `msg`: The message to be printed if the test fails. + + +- - - + +#### `tf.test.TestCase.assertSameStructure(a, b, aname='a', bname='b', msg=None)` {#TestCase.assertSameStructure} + +Asserts that two values contain the same structural content. + +The two arguments should be data trees consisting of trees of dicts and +lists. They will be deeply compared by walking into the contents of dicts +and lists; other items will be compared using the == operator. +If the two structures differ in content, the failure message will indicate +the location within the structures where the first difference is found. +This may be helpful when comparing large structures. + +##### Args: + + +* `a`: The first structure to compare. +* `b`: The second structure to compare. +* `aname`: Variable name to use for the first structure in assertion messages. +* `bname`: Variable name to use for the second structure. +* `msg`: Additional text to include in the failure message. + + +- - - + +#### `tf.test.TestCase.assertSequenceAlmostEqual(expected_seq, actual_seq, places=None, msg=None, delta=None)` {#TestCase.assertSequenceAlmostEqual} + +An approximate equality assertion for ordered sequences. + +Fail if the two sequences are unequal as determined by their value +differences rounded to the given number of decimal places (default 7) and +comparing to zero, or by comparing that the difference between each value +in the two sequences is more than the given delta. + +Note that decimal places (from zero) are usually not the same as significant +digits (measured from the most signficant digit). + +If the two sequences compare equal then they will automatically compare +almost equal. + +##### Args: + + +* `expected_seq`: A sequence containing elements we are expecting. +* `actual_seq`: The sequence that we are testing. 
+* `places`: The number of decimal places to compare. +* `msg`: The message to be printed if the test fails. +* `delta`: The OK difference between compared values. + + - - - #### `tf.test.TestCase.assertSequenceEqual(seq1, seq2, msg=None, seq_type=None)` {#TestCase.assertSequenceEqual} @@ -559,6 +960,26 @@ which can be indexed, has a length, and has an equality operator. differences. +- - - + +#### `tf.test.TestCase.assertSequenceStartsWith(prefix, whole, msg=None)` {#TestCase.assertSequenceStartsWith} + +An equality assertion for the beginning of ordered sequences. + +If prefix is an empty sequence, it will raise an error unless whole is also +an empty sequence. + +If prefix is not a sequence, it will raise an error if the first element of +whole does not match. + +##### Args: + + +* `prefix`: A sequence expected at the beginning of the whole parameter. +* `whole`: The sequence in which to look for prefix. +* `msg`: Optional message to report on failure. + + - - - #### `tf.test.TestCase.assertSetEqual(set1, set2, msg=None)` {#TestCase.assertSetEqual} @@ -610,6 +1031,51 @@ Assert that actual.startswith(expected_start) is True. * `msg`: Optional message to report on failure. +- - - + +#### `tf.test.TestCase.assertTotallyOrdered(*groups, **kwargs)` {#TestCase.assertTotallyOrdered} + +Asserts that total ordering has been implemented correctly. + +For example, say you have a class A that compares only on its attribute x. +Comparators other than __lt__ are omitted for brevity. + +class A(object): + def __init__(self, x, y): + self.x = x + self.y = y + + def __hash__(self): + return hash(self.x) + + def __lt__(self, other): + try: + return self.x < other.x + except AttributeError: + return NotImplemented + +assertTotallyOrdered will check that instances can be ordered correctly. +For example, + +self.assertTotallyOrdered( + [None], # None should come before everything else. + [1], # Integers sort earlier. + [A(1, 'a')], + [A(2, 'b')], # 2 is after 1. + [A(3, 'c'), A(3, 'd')], # The second argument is irrelevant. + [A(4, 'z')], + ['foo']) # Strings sort last. + +##### Args: + + +* `*groups`: A list of groups of elements. Each group of elements is a list + of objects that are equal. The elements in each group must be less than + the elements in the group after it. For example, these groups are + totally ordered: [None], [1], [2, 2], [3]. +* `**kwargs`: optional msg keyword argument can be passed. + + - - - #### `tf.test.TestCase.assertTrue(expr, msg=None)` {#TestCase.assertTrue} @@ -632,6 +1098,13 @@ A tuple-specific equality assertion. differences. +- - - + +#### `tf.test.TestCase.assertUrlEqual(a, b, msg=None)` {#TestCase.assertUrlEqual} + +Asserts that urls are equal, ignoring ordering of query params. + + - - - #### `tf.test.TestCase.assert_(expr, msg=None)` {#TestCase.assert_} @@ -693,9 +1166,9 @@ tearDown. - - - -#### `tf.test.TestCase.fail(msg=None)` {#TestCase.fail} +#### `tf.test.TestCase.fail(msg=None, prefix=None)` {#TestCase.fail} -Fail immediately, with the given message. +Fail immediately with the given message, optionally prefixed. - - - @@ -747,6 +1220,13 @@ Fail immediately, with the given message. +- - - + +#### `tf.test.TestCase.getRecordedProperties()` {#TestCase.getRecordedProperties} + +Return any properties that the user has recorded. + + - - - #### `tf.test.TestCase.get_temp_dir()` {#TestCase.get_temp_dir} @@ -769,6 +1249,20 @@ pollute each others environment. 
+- - - + +#### `tf.test.TestCase.recordProperty(property_name, property_value)` {#TestCase.recordProperty} + +Record an arbitrary property for later use. + +##### Args: + + +* `property_name`: str, name of property to record; must be a valid XML + attribute name +* `property_value`: value of property; must be valid XML attribute value + + - - - #### `tf.test.TestCase.run(result=None)` {#TestCase.run} @@ -794,11 +1288,18 @@ Hook method for setting up class fixture before running tests in the class. #### `tf.test.TestCase.shortDescription()` {#TestCase.shortDescription} -Returns a one-line description of the test, or None if no -description has been provided. +Format both the test method name and the first line of its docstring. -The default implementation of this method returns the first line of -the specified test method's docstring. +If no docstring is given, only returns the method name. + +This method overrides unittest.TestCase.shortDescription(), which +only returns the first line of the docstring, obscuring the name +of the test upon failure. + +##### Returns: + + +* `desc`: A short description of a test method. - - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.scalar_summary.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.scalar_summary.md deleted file mode 100644 index 3ffd9260c7b..00000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.scalar_summary.md +++ /dev/null @@ -1,22 +0,0 @@ -### `tf.scalar_summary(*args, **kwargs)` {#scalar_summary} - -Outputs a `Summary` protocol buffer with scalar values. (deprecated) - -THIS FUNCTION IS DEPRECATED. It will be removed after 2016-11-30. -Instructions for updating: -Please switch to tf.summary.scalar. Note that tf.summary.scalar uses the node name instead of the tag. This means that TensorFlow will automatically de-duplicate summary names based on the scope they are created in. Also, passing a tensor or list of tags to a scalar summary op is no longer supported. - - The input `tags` and `values` must have the same shape. The generated - summary has a summary value for each tag-value pair in `tags` and `values`. - - Args: - tags: A `string` `Tensor`. Tags for the summaries. - values: A real numeric Tensor. Values for the summaries. - collections: Optional list of graph collections keys. The new summary op is - added to these collections. Defaults to `[GraphKeys.SUMMARIES]`. - name: A name for the operation (optional). - - Returns: - A scalar `Tensor` of type `string`. The serialized `Summary` protocol - buffer. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.batch_norm.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.batch_norm.md index 2b23d99de2c..386d3a357c2 100644 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.batch_norm.md +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.batch_norm.md @@ -33,7 +33,8 @@ can have speed penalty, specially in distributed settings. Lower `decay` value (recommend trying `decay`=0.9) if model experiences reasonably good training performance but poor validation and/or test performance. Try zero_debias_moving_mean=True for improved stability. -* `center`: If True, subtract `beta`. If False, `beta` is ignored. +* `center`: If True, add offset of `beta` to normalized tensor. If False, `beta` + is ignored. * `scale`: If True, multiply by `gamma`. 
If False, `gamma` is not used. When the next layer is linear (also e.g. `nn.relu`), this can be disabled since the scaling can be done by the next layer. diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.summary.SummaryDescription.RegisterExtension.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.summary.SummaryDescription.RegisterExtension.md deleted file mode 100644 index 3cfd7103d7e..00000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.summary.SummaryDescription.RegisterExtension.md +++ /dev/null @@ -1,4 +0,0 @@ -#### `tf.summary.SummaryDescription.RegisterExtension(extension_handle)` {#SummaryDescription.RegisterExtension} - - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.histogram_summary.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.histogram_summary.md deleted file mode 100644 index 570d7b712c6..00000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.histogram_summary.md +++ /dev/null @@ -1,26 +0,0 @@ -### `tf.histogram_summary(*args, **kwargs)` {#histogram_summary} - -Outputs a `Summary` protocol buffer with a histogram. (deprecated) - -THIS FUNCTION IS DEPRECATED. It will be removed after 2016-11-30. -Instructions for updating: -Please switch to tf.summary.histogram. Note that tf.summary.histogram uses the node name instead of the tag. This means that TensorFlow will automatically de-duplicate summary names based on their scope. - - The generated - [`Summary`](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto) - has one summary value containing a histogram for `values`. - - This op reports an `InvalidArgument` error if any value is not finite. - - Args: - tag: A `string` `Tensor`. 0-D. Tag to use for the summary value. - values: A real numeric `Tensor`. Any shape. Values to use to - build the histogram. - collections: Optional list of graph collections keys. The new summary op is - added to these collections. Defaults to `[GraphKeys.SUMMARIES]`. - name: A name for the operation (optional). - - Returns: - A scalar `Tensor` of type `string`. The serialized `Summary` protocol - buffer. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.total_variation.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.total_variation.md new file mode 100644 index 00000000000..03fec86c85e --- /dev/null +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.total_variation.md @@ -0,0 +1,40 @@ +### `tf.image.total_variation(images, name=None)` {#total_variation} + +Calculate and return the total variation for one or more images. + +The total variation is the sum of the absolute differences for neighboring +pixel-values in the input images. This measures how much noise is in the +images. + +This can be used as a loss-function during optimization so as to suppress +noise in images. If you have a batch of images, then you should calculate +the scalar loss-value as the sum: +`loss = tf.reduce_sum(tf.image.total_variation(images))` + +This implements the anisotropic 2-D version of the formula described here: + +https://en.wikipedia.org/wiki/Total_variation_denoising + +##### Args: + + +* `images`: 4-D Tensor of shape `[batch, height, width, channels]` or + 3-D Tensor of shape `[height, width, channels]`. + + +* `name`: A name for the operation (optional). + +##### Raises: + + +* `ValueError`: if images.shape is not a 3-D or 4-D vector. 
+ +##### Returns: + + The total variation of `images`. + + If `images` was 4-D, return a 1-D float Tensor of shape `[batch]` with the + total variation for each image in the batch. + If `images` was 3-D, return a scalar float with the total variation for + that image. + diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.merge_summary.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.merge_summary.md deleted file mode 100644 index ccb984f5abe..00000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.merge_summary.md +++ /dev/null @@ -1,27 +0,0 @@ -### `tf.merge_summary(*args, **kwargs)` {#merge_summary} - -Merges summaries. (deprecated) - -THIS FUNCTION IS DEPRECATED. It will be removed after 2016-11-30. -Instructions for updating: -Please switch to tf.summary.merge. - - This op creates a - [`Summary`](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto) - protocol buffer that contains the union of all the values in the input - summaries. - - When the Op is run, it reports an `InvalidArgument` error if multiple values - in the summaries to merge use the same tag. - - Args: - inputs: A list of `string` `Tensor` objects containing serialized `Summary` - protocol buffers. - collections: Optional list of graph collections keys. The new summary op is - added to these collections. Defaults to `[GraphKeys.SUMMARIES]`. - name: A name for the operation (optional). - - Returns: - A scalar `Tensor` of type `string`. The serialized `Summary` protocol - buffer resulting from the merging. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.summary.SummaryDescription.FromString.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.summary.SummaryDescription.FromString.md deleted file mode 100644 index 24a3b3f10c3..00000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.summary.SummaryDescription.FromString.md +++ /dev/null @@ -1,4 +0,0 @@ -#### `tf.summary.SummaryDescription.FromString(s)` {#SummaryDescription.FromString} - - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.summary.TaggedRunMetadata.RegisterExtension.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.summary.TaggedRunMetadata.RegisterExtension.md deleted file mode 100644 index f2d0c042d77..00000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.summary.TaggedRunMetadata.RegisterExtension.md +++ /dev/null @@ -1,4 +0,0 @@ -#### `tf.summary.TaggedRunMetadata.RegisterExtension(extension_handle)` {#TaggedRunMetadata.RegisterExtension} - - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.layer_norm.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.layer_norm.md index c2d6c88d2e8..726426d9a90 100644 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.layer_norm.md +++ b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.layer_norm.md @@ -13,7 +13,8 @@ Can be used as a normalizer function for conv2d and fully_connected. * `inputs`: a tensor with 2 or more dimensions. The normalization occurs over all but the first dimension. -* `center`: If True, subtract `beta`. If False, `beta` is ignored. +* `center`: If True, add offset of `beta` to normalized tensor. If False, `beta` + is ignored. * `scale`: If True, multiply by `gamma`. If False, `gamma` is not used. 
When the next layer is linear (also e.g. `nn.relu`), this can be disabled since the scaling can be done by the next layer. diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.SummaryWriter.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.SummaryWriter.md deleted file mode 100644 index e9bdda200f9..00000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.SummaryWriter.md +++ /dev/null @@ -1,207 +0,0 @@ - -- - - - -#### `tf.train.SummaryWriter.__init__(*args, **kwargs)` {#SummaryWriter.__init__} - -Creates a `SummaryWriter` and an event file. (deprecated) - -THIS FUNCTION IS DEPRECATED. It will be removed after 2016-11-30. -Instructions for updating: -Please switch to tf.summary.FileWriter. The interface and behavior is the same; this is just a rename. - - This class is deprecated, and should be replaced with tf.summary.FileWriter. - - On construction the summary writer creates a new event file in `logdir`. - This event file will contain `Event` protocol buffers constructed when you - call one of the following functions: `add_summary()`, `add_session_log()`, - `add_event()`, or `add_graph()`. - - If you pass a `Graph` to the constructor it is added to - the event file. (This is equivalent to calling `add_graph()` later). - - TensorBoard will pick the graph from the file and display it graphically so - you can interactively explore the graph you built. You will usually pass - the graph from the session in which you launched it: - - ```python - ...create a graph... - # Launch the graph in a session. - sess = tf.Session() - # Create a summary writer, add the 'graph' to the event file. - writer = tf.train.SummaryWriter(, sess.graph) - ``` - - The other arguments to the constructor control the asynchronous writes to - the event file: - - * `flush_secs`: How often, in seconds, to flush the added summaries - and events to disk. - * `max_queue`: Maximum number of summaries or events pending to be - written to disk before one of the 'add' calls block. - - Args: - logdir: A string. Directory where event file will be written. - graph: A `Graph` object, such as `sess.graph`. - max_queue: Integer. Size of the queue for pending events and summaries. - flush_secs: Number. How often, in seconds, to flush the - pending events and summaries to disk. - graph_def: DEPRECATED: Use the `graph` argument instead. - - -- - - - -#### `tf.train.SummaryWriter.add_event(event)` {#SummaryWriter.add_event} - -Adds an event to the event file. - -##### Args: - - -* `event`: An `Event` protocol buffer. - - -- - - - -#### `tf.train.SummaryWriter.add_graph(graph, global_step=None, graph_def=None)` {#SummaryWriter.add_graph} - -Adds a `Graph` to the event file. - -The graph described by the protocol buffer will be displayed by -TensorBoard. Most users pass a graph in the constructor instead. - -##### Args: - - -* `graph`: A `Graph` object, such as `sess.graph`. -* `global_step`: Number. Optional global step counter to record with the - graph. -* `graph_def`: DEPRECATED. Use the `graph` parameter instead. - -##### Raises: - - -* `ValueError`: If both graph and graph_def are passed to the method. - - -- - - - -#### `tf.train.SummaryWriter.add_meta_graph(meta_graph_def, global_step=None)` {#SummaryWriter.add_meta_graph} - -Adds a `MetaGraphDef` to the event file. - -The `MetaGraphDef` allows running the given graph via -`saver.import_meta_graph()`. 
- -##### Args: - - -* `meta_graph_def`: A `MetaGraphDef` object, often as retured by - `saver.export_meta_graph()`. -* `global_step`: Number. Optional global step counter to record with the - graph. - -##### Raises: - - -* `TypeError`: If both `meta_graph_def` is not an instance of `MetaGraphDef`. - - -- - - - -#### `tf.train.SummaryWriter.add_run_metadata(run_metadata, tag, global_step=None)` {#SummaryWriter.add_run_metadata} - -Adds a metadata information for a single session.run() call. - -##### Args: - - -* `run_metadata`: A `RunMetadata` protobuf object. -* `tag`: The tag name for this metadata. -* `global_step`: Number. Optional global step counter to record with the - StepStats. - -##### Raises: - - -* `ValueError`: If the provided tag was already used for this type of event. - - -- - - - -#### `tf.train.SummaryWriter.add_session_log(session_log, global_step=None)` {#SummaryWriter.add_session_log} - -Adds a `SessionLog` protocol buffer to the event file. - -This method wraps the provided session in an `Event` protocol buffer -and adds it to the event file. - -##### Args: - - -* `session_log`: A `SessionLog` protocol buffer. -* `global_step`: Number. Optional global step value to record with the - summary. - - -- - - - -#### `tf.train.SummaryWriter.add_summary(summary, global_step=None)` {#SummaryWriter.add_summary} - -Adds a `Summary` protocol buffer to the event file. - -This method wraps the provided summary in an `Event` protocol buffer -and adds it to the event file. - -You can pass the result of evaluating any summary op, using -[`Session.run()`](client.md#Session.run) or -[`Tensor.eval()`](framework.md#Tensor.eval), to this -function. Alternatively, you can pass a `tf.Summary` protocol -buffer that you populate with your own data. The latter is -commonly done to report evaluation results in event files. - -##### Args: - - -* `summary`: A `Summary` protocol buffer, optionally serialized as a string. -* `global_step`: Number. Optional global step value to record with the - summary. - - -- - - - -#### `tf.train.SummaryWriter.close()` {#SummaryWriter.close} - -Flushes the event file to disk and close the file. - -Call this method when you do not need the summary writer anymore. - - -- - - - -#### `tf.train.SummaryWriter.flush()` {#SummaryWriter.flush} - -Flushes the event file to disk. - -Call this method to make sure that all pending events have been written to -disk. - - -- - - - -#### `tf.train.SummaryWriter.get_logdir()` {#SummaryWriter.get_logdir} - -Returns the directory where event file will be written. - - -- - - - -#### `tf.train.SummaryWriter.reopen()` {#SummaryWriter.reopen} - -Reopens the EventFileWriter. - -Can be called after `close()` to add more events in the same directory. -The events will go into a new events file. - -Does nothing if the EventFileWriter was not closed. - - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.audio_summary.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.audio_summary.md deleted file mode 100644 index c5830ab5504..00000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.audio_summary.md +++ /dev/null @@ -1,37 +0,0 @@ -### `tf.audio_summary(*args, **kwargs)` {#audio_summary} - -Outputs a `Summary` protocol buffer with audio. (deprecated) - -THIS FUNCTION IS DEPRECATED. It will be removed after 2016-11-30. -Instructions for updating: -Please switch to tf.summary.audio. Note that tf.summary.histogram uses the node name instead of the tag. 
This means that TensorFlow will automatically de-duplicate summary names based on the scope they are created in. - - The summary has up to `max_outputs` summary values containing audio. The - audio is built from `tensor` which must be 3-D with shape `[batch_size, - frames, channels]` or 2-D with shape `[batch_size, frames]`. The values are - assumed to be in the range of `[-1.0, 1.0]` with a sample rate of - `sample_rate`. - - The `tag` argument is a scalar `Tensor` of type `string`. It is used to - build the `tag` of the summary values: - - * If `max_outputs` is 1, the summary value tag is '*tag*/audio'. - * If `max_outputs` is greater than 1, the summary value tags are - generated sequentially as '*tag*/audio/0', '*tag*/audio/1', etc. - - Args: - tag: A scalar `Tensor` of type `string`. Used to build the `tag` - of the summary values. - tensor: A 3-D `float32` `Tensor` of shape `[batch_size, frames, channels]` - or a 2-D `float32` `Tensor` of shape `[batch_size, frames]`. - sample_rate: A Scalar `float32` `Tensor` indicating the sample rate of the - signal in hertz. - max_outputs: Max number of batch elements to generate audio for. - collections: Optional list of ops.GraphKeys. The collections to add the - summary to. Defaults to [ops.GraphKeys.SUMMARIES] - name: A name for the operation (optional). - - Returns: - A scalar `Tensor` of type `string`. The serialized `Summary` protocol - buffer. - diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.summary.TaggedRunMetadata.FromString.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.summary.TaggedRunMetadata.FromString.md deleted file mode 100644 index 613f4ebd73d..00000000000 --- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.summary.TaggedRunMetadata.FromString.md +++ /dev/null @@ -1,4 +0,0 @@ -#### `tf.summary.TaggedRunMetadata.FromString(s)` {#TaggedRunMetadata.FromString} - - - diff --git a/tensorflow/g3doc/api_docs/python/image.md b/tensorflow/g3doc/api_docs/python/image.md index d218ba024a5..baef42db057 100644 --- a/tensorflow/g3doc/api_docs/python/image.md +++ b/tensorflow/g3doc/api_docs/python/image.md @@ -1474,3 +1474,49 @@ false and no bounding boxes are supplied, an error is raised. Provide as input to `tf.image.draw_bounding_boxes`. + +## Denoising + +- - - + +### `tf.image.total_variation(images, name=None)` {#total_variation} + +Calculate and return the total variation for one or more images. + +The total variation is the sum of the absolute differences for neighboring +pixel-values in the input images. This measures how much noise is in the +images. + +This can be used as a loss-function during optimization so as to suppress +noise in images. If you have a batch of images, then you should calculate +the scalar loss-value as the sum: +`loss = tf.reduce_sum(tf.image.total_variation(images))` + +This implements the anisotropic 2-D version of the formula described here: + +https://en.wikipedia.org/wiki/Total_variation_denoising + +##### Args: + + +* `images`: 4-D Tensor of shape `[batch, height, width, channels]` or + 3-D Tensor of shape `[height, width, channels]`. + + +* `name`: A name for the operation (optional). + +##### Raises: + + +* `ValueError`: if images.shape is not a 3-D or 4-D vector. + +##### Returns: + + The total variation of `images`. + + If `images` was 4-D, return a 1-D float Tensor of shape `[batch]` with the + total variation for each image in the batch. + If `images` was 3-D, return a scalar float with the total variation for + that image. 
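
For example, a minimal sketch of using `total_variation` as a smoothness regularizer on a random batch; the shapes, the stand-in task loss, and the `1e-4` weight are arbitrary choices for illustration:

```python
import tensorflow as tf

# A batch of two 8x8 RGB "images" filled with random values.
images = tf.random_uniform([2, 8, 8, 3])

# Per-image total variation; shape [2] because the input is 4-D.
tv = tf.image.total_variation(images)

# Typical use: add the summed total variation to a task loss so the
# optimizer is encouraged to produce smoother images.
task_loss = tf.reduce_mean(tf.square(images))  # stand-in for a real loss
loss = task_loss + 1e-4 * tf.reduce_sum(tv)

with tf.Session() as sess:
  print(sess.run([tv, loss]))
```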
+ + diff --git a/tensorflow/g3doc/api_docs/python/index.md b/tensorflow/g3doc/api_docs/python/index.md index 449e582d190..5f3fe3b25e0 100644 --- a/tensorflow/g3doc/api_docs/python/index.md +++ b/tensorflow/g3doc/api_docs/python/index.md @@ -415,6 +415,7 @@ * [`rgb_to_hsv`](../../api_docs/python/image.md#rgb_to_hsv) * [`rot90`](../../api_docs/python/image.md#rot90) * [`sample_distorted_bounding_box`](../../api_docs/python/image.md#sample_distorted_bounding_box) + * [`total_variation`](../../api_docs/python/image.md#total_variation) * [`transpose_image`](../../api_docs/python/image.md#transpose_image) * **[Sparse Tensors](../../api_docs/python/sparse_ops.md)**: diff --git a/tensorflow/g3doc/api_docs/python/summary.md b/tensorflow/g3doc/api_docs/python/summary.md index be029f42906..8d344036dbc 100644 --- a/tensorflow/g3doc/api_docs/python/summary.md +++ b/tensorflow/g3doc/api_docs/python/summary.md @@ -485,187 +485,6 @@ metadata is stored in its NodeDef. This method retrieves the description. ### `class tf.summary.SummaryDescription` {#SummaryDescription} -- - - - -#### `tf.summary.SummaryDescription.ByteSize()` {#SummaryDescription.ByteSize} - - - - -- - - - -#### `tf.summary.SummaryDescription.Clear()` {#SummaryDescription.Clear} - - - - -- - - - -#### `tf.summary.SummaryDescription.ClearExtension(extension_handle)` {#SummaryDescription.ClearExtension} - - - - -- - - - -#### `tf.summary.SummaryDescription.ClearField(field_name)` {#SummaryDescription.ClearField} - - - - -- - - - -#### `tf.summary.SummaryDescription.CopyFrom(other_msg)` {#SummaryDescription.CopyFrom} - -Copies the content of the specified message into the current message. - -The method clears the current message and then merges the specified -message using MergeFrom. - -##### Args: - - -* `other_msg`: Message to copy into the current one. - - -- - - - -#### `tf.summary.SummaryDescription.DiscardUnknownFields()` {#SummaryDescription.DiscardUnknownFields} - - - - -- - - - -#### `tf.summary.SummaryDescription.FindInitializationErrors()` {#SummaryDescription.FindInitializationErrors} - -Finds required fields which are not initialized. - -##### Returns: - - A list of strings. Each string is a path to an uninitialized field from - the top-level message, e.g. "foo.bar[5].baz". - - -- - - - -#### `tf.summary.SummaryDescription.FromString(s)` {#SummaryDescription.FromString} - - - - -- - - - -#### `tf.summary.SummaryDescription.HasExtension(extension_handle)` {#SummaryDescription.HasExtension} - - - - -- - - - -#### `tf.summary.SummaryDescription.HasField(field_name)` {#SummaryDescription.HasField} - - - - -- - - - -#### `tf.summary.SummaryDescription.IsInitialized(errors=None)` {#SummaryDescription.IsInitialized} - -Checks if all required fields of a message are set. - -##### Args: - - -* `errors`: A list which, if provided, will be populated with the field - paths of all missing required fields. - -##### Returns: - - True iff the specified message has all required fields set. - - -- - - - -#### `tf.summary.SummaryDescription.ListFields()` {#SummaryDescription.ListFields} - - - - -- - - - -#### `tf.summary.SummaryDescription.MergeFrom(msg)` {#SummaryDescription.MergeFrom} - - - - -- - - - -#### `tf.summary.SummaryDescription.MergeFromString(serialized)` {#SummaryDescription.MergeFromString} - - - - -- - - - -#### `tf.summary.SummaryDescription.ParseFromString(serialized)` {#SummaryDescription.ParseFromString} - -Parse serialized protocol buffer data into this message. 
- -Like MergeFromString(), except we clear the object first and -do not return the value that MergeFromString returns. - - -- - - - -#### `tf.summary.SummaryDescription.RegisterExtension(extension_handle)` {#SummaryDescription.RegisterExtension} - - - - -- - - - -#### `tf.summary.SummaryDescription.SerializePartialToString()` {#SummaryDescription.SerializePartialToString} - - - - -- - - - -#### `tf.summary.SummaryDescription.SerializeToString()` {#SummaryDescription.SerializeToString} - - - - -- - - - -#### `tf.summary.SummaryDescription.SetInParent()` {#SummaryDescription.SetInParent} - -Sets the _cached_byte_size_dirty bit to true, -and propagates this to our listener iff this was a state change. - - -- - - - -#### `tf.summary.SummaryDescription.WhichOneof(oneof_name)` {#SummaryDescription.WhichOneof} - -Returns the name of the currently set field inside a oneof, or None. - - -- - - - -#### `tf.summary.SummaryDescription.__deepcopy__(memo=None)` {#SummaryDescription.__deepcopy__} - - - - -- - - - -#### `tf.summary.SummaryDescription.__eq__(other)` {#SummaryDescription.__eq__} - - - - - - - #### `tf.summary.SummaryDescription.__getstate__()` {#SummaryDescription.__getstate__} @@ -673,249 +492,12 @@ Returns the name of the currently set field inside a oneof, or None. Support the pickle protocol. -- - - - -#### `tf.summary.SummaryDescription.__hash__()` {#SummaryDescription.__hash__} - - - - -- - - - -#### `tf.summary.SummaryDescription.__init__(**kwargs)` {#SummaryDescription.__init__} - - - - -- - - - -#### `tf.summary.SummaryDescription.__ne__(other_msg)` {#SummaryDescription.__ne__} - - - - -- - - - -#### `tf.summary.SummaryDescription.__repr__()` {#SummaryDescription.__repr__} - - - - -- - - - -#### `tf.summary.SummaryDescription.__setstate__(state)` {#SummaryDescription.__setstate__} - -Support the pickle protocol. - - -- - - - -#### `tf.summary.SummaryDescription.__str__()` {#SummaryDescription.__str__} - - - - -- - - - -#### `tf.summary.SummaryDescription.__unicode__()` {#SummaryDescription.__unicode__} - - - - -- - - - -#### `tf.summary.SummaryDescription.type_hint` {#SummaryDescription.type_hint} - -Magic attribute generated for "type_hint" proto field. - - - - - ### `class tf.summary.TaggedRunMetadata` {#TaggedRunMetadata} -- - - - -#### `tf.summary.TaggedRunMetadata.ByteSize()` {#TaggedRunMetadata.ByteSize} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.Clear()` {#TaggedRunMetadata.Clear} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.ClearExtension(extension_handle)` {#TaggedRunMetadata.ClearExtension} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.ClearField(field_name)` {#TaggedRunMetadata.ClearField} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.CopyFrom(other_msg)` {#TaggedRunMetadata.CopyFrom} - -Copies the content of the specified message into the current message. - -The method clears the current message and then merges the specified -message using MergeFrom. - -##### Args: - - -* `other_msg`: Message to copy into the current one. - - -- - - - -#### `tf.summary.TaggedRunMetadata.DiscardUnknownFields()` {#TaggedRunMetadata.DiscardUnknownFields} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.FindInitializationErrors()` {#TaggedRunMetadata.FindInitializationErrors} - -Finds required fields which are not initialized. - -##### Returns: - - A list of strings. Each string is a path to an uninitialized field from - the top-level message, e.g. "foo.bar[5].baz". 
- - -- - - - -#### `tf.summary.TaggedRunMetadata.FromString(s)` {#TaggedRunMetadata.FromString} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.HasExtension(extension_handle)` {#TaggedRunMetadata.HasExtension} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.HasField(field_name)` {#TaggedRunMetadata.HasField} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.IsInitialized(errors=None)` {#TaggedRunMetadata.IsInitialized} - -Checks if all required fields of a message are set. - -##### Args: - - -* `errors`: A list which, if provided, will be populated with the field - paths of all missing required fields. - -##### Returns: - - True iff the specified message has all required fields set. - - -- - - - -#### `tf.summary.TaggedRunMetadata.ListFields()` {#TaggedRunMetadata.ListFields} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.MergeFrom(msg)` {#TaggedRunMetadata.MergeFrom} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.MergeFromString(serialized)` {#TaggedRunMetadata.MergeFromString} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.ParseFromString(serialized)` {#TaggedRunMetadata.ParseFromString} - -Parse serialized protocol buffer data into this message. - -Like MergeFromString(), except we clear the object first and -do not return the value that MergeFromString returns. - - -- - - - -#### `tf.summary.TaggedRunMetadata.RegisterExtension(extension_handle)` {#TaggedRunMetadata.RegisterExtension} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.SerializePartialToString()` {#TaggedRunMetadata.SerializePartialToString} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.SerializeToString()` {#TaggedRunMetadata.SerializeToString} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.SetInParent()` {#TaggedRunMetadata.SetInParent} - -Sets the _cached_byte_size_dirty bit to true, -and propagates this to our listener iff this was a state change. - - -- - - - -#### `tf.summary.TaggedRunMetadata.WhichOneof(oneof_name)` {#TaggedRunMetadata.WhichOneof} - -Returns the name of the currently set field inside a oneof, or None. - - -- - - - -#### `tf.summary.TaggedRunMetadata.__deepcopy__(memo=None)` {#TaggedRunMetadata.__deepcopy__} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.__eq__(other)` {#TaggedRunMetadata.__eq__} - - - - - - - #### `tf.summary.TaggedRunMetadata.__getstate__()` {#TaggedRunMetadata.__getstate__} @@ -923,67 +505,4 @@ Returns the name of the currently set field inside a oneof, or None. Support the pickle protocol. -- - - - -#### `tf.summary.TaggedRunMetadata.__hash__()` {#TaggedRunMetadata.__hash__} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.__init__(**kwargs)` {#TaggedRunMetadata.__init__} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.__ne__(other_msg)` {#TaggedRunMetadata.__ne__} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.__repr__()` {#TaggedRunMetadata.__repr__} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.__setstate__(state)` {#TaggedRunMetadata.__setstate__} - -Support the pickle protocol. - - -- - - - -#### `tf.summary.TaggedRunMetadata.__str__()` {#TaggedRunMetadata.__str__} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.__unicode__()` {#TaggedRunMetadata.__unicode__} - - - - -- - - - -#### `tf.summary.TaggedRunMetadata.run_metadata` {#TaggedRunMetadata.run_metadata} - -Magic attribute generated for "run_metadata" proto field. - - -- - - - -#### `tf.summary.TaggedRunMetadata.tag` {#TaggedRunMetadata.tag} - -Magic attribute generated for "tag" proto field. 
- - diff --git a/tensorflow/g3doc/api_docs/python/test.md b/tensorflow/g3doc/api_docs/python/test.md index 265e4028d0f..c95f9718894 100644 --- a/tensorflow/g3doc/api_docs/python/test.md +++ b/tensorflow/g3doc/api_docs/python/test.md @@ -213,6 +213,125 @@ Checks that for all elements of farray1 and farray2 * `err`: a float value. +- - - + +#### `tf.test.TestCase.assertBetween(value, minv, maxv, msg=None)` {#TestCase.assertBetween} + +Asserts that value is between minv and maxv (inclusive). + + +- - - + +#### `tf.test.TestCase.assertCommandFails(command, regexes, env=None, close_fds=True, msg=None)` {#TestCase.assertCommandFails} + +Asserts a shell command fails and the error matches a regex in a list. + +##### Args: + + +* `command`: List or string representing the command to run. +* `regexes`: the list of regular expression strings. +* `env`: Dictionary of environment variable settings. +* `close_fds`: Whether or not to close all open fd's in the child after + forking. +* `msg`: Optional message to report on failure. + + +- - - + +#### `tf.test.TestCase.assertCommandSucceeds(command, regexes=('',), env=None, close_fds=True, msg=None)` {#TestCase.assertCommandSucceeds} + +Asserts that a shell command succeeds (i.e. exits with code 0). + +##### Args: + + +* `command`: List or string representing the command to run. +* `regexes`: List of regular expression byte strings that match success. +* `env`: Dictionary of environment variable settings. +* `close_fds`: Whether or not to close all open fd's in the child after + forking. +* `msg`: Optional message to report on failure. + + +- - - + +#### `tf.test.TestCase.assertContainsExactSubsequence(container, subsequence, msg=None)` {#TestCase.assertContainsExactSubsequence} + +Assert that "container" contains "subsequence" as an exact subsequence. + +Asserts that "container" contains all the elements of "subsequence", in +order, and without other elements interspersed. For example, [1, 2, 3] is an +exact subsequence of [0, 0, 1, 2, 3, 0] but not of [0, 0, 1, 2, 0, 3, 0]. + +##### Args: + + +* `container`: the list we're testing for subsequence inclusion. +* `subsequence`: the list we hope will be an exact subsequence of container. +* `msg`: Optional message to report on failure. + + +- - - + +#### `tf.test.TestCase.assertContainsInOrder(strings, target, msg=None)` {#TestCase.assertContainsInOrder} + +Asserts that the strings provided are found in the target in order. + +This may be useful for checking HTML output. + +##### Args: + + +* `strings`: A list of strings, such as [ 'fox', 'dog' ] +* `target`: A target string in which to look for the strings, such as + 'The quick brown fox jumped over the lazy dog'. +* `msg`: Optional message to report on failure. + + +- - - + +#### `tf.test.TestCase.assertContainsSubsequence(container, subsequence, msg=None)` {#TestCase.assertContainsSubsequence} + +Assert that "container" contains "subsequence" as a subsequence. + +Asserts that "container" contains all the elements of "subsequence", in +order, but possibly with other elements interspersed. For example, [1, 2, 3] +is a subsequence of [0, 0, 1, 2, 0, 3, 0] but not of [0, 0, 1, 3, 0, 2, 0]. + +##### Args: + + +* `container`: the list we're testing for subsequence inclusion. +* `subsequence`: the list we hope will be a subsequence of container. +* `msg`: Optional message to report on failure. 
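A minimal sketch contrasting the two subsequence assertions documented above; the test class and values are hypothetical:

```python
import tensorflow as tf


class SubsequenceExampleTest(tf.test.TestCase):
  """Hypothetical test illustrating the two subsequence assertions."""

  def testSubsequenceAssertions(self):
    container = [0, 0, 1, 2, 0, 3, 0]

    # Passes: 1, 2, 3 appear in order; other elements may be interspersed.
    self.assertContainsSubsequence(container, [1, 2, 3])

    # The exact variant requires a contiguous run, so [1, 2] qualifies here...
    self.assertContainsExactSubsequence(container, [1, 2])

    # ...but [1, 2, 3] does not, because a 0 sits between 2 and 3.
    with self.assertRaises(AssertionError):
      self.assertContainsExactSubsequence(container, [1, 2, 3])


if __name__ == '__main__':
  tf.test.main()
```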
+ + +- - - + +#### `tf.test.TestCase.assertContainsSubset(expected_subset, actual_set, msg=None)` {#TestCase.assertContainsSubset} + +Checks whether actual iterable is a superset of expected iterable. + + +- - - + +#### `tf.test.TestCase.assertCountEqual(*args, **kwargs)` {#TestCase.assertCountEqual} + +An unordered sequence specific comparison. + +Equivalent to assertItemsEqual(). This method is a compatibility layer +for Python 3k, since 2to3 does not convert assertItemsEqual() calls into +assertCountEqual() calls. + +##### Args: + + +* `expected_seq`: A sequence containing elements we are expecting. +* `actual_seq`: The sequence that we are testing. +* `msg`: The message to be printed if the test fails. + + - - - #### `tf.test.TestCase.assertDeviceEqual(device1, device2)` {#TestCase.assertDeviceEqual} @@ -235,9 +354,48 @@ Checks whether actual is a superset of expected. - - - -#### `tf.test.TestCase.assertDictEqual(d1, d2, msg=None)` {#TestCase.assertDictEqual} +#### `tf.test.TestCase.assertDictEqual(a, b, msg=None)` {#TestCase.assertDictEqual} + +Raises AssertionError if a and b are not equal dictionaries. + +##### Args: +* `a`: A dict, the expected value. +* `b`: A dict, the actual value. +* `msg`: An optional str, the associated message. + +##### Raises: + + +* `AssertionError`: if the dictionaries are not equal. + + +- - - + +#### `tf.test.TestCase.assertEmpty(container, msg=None)` {#TestCase.assertEmpty} + +Assert that an object has zero length. + +##### Args: + + +* `container`: Anything that implements the collections.Sized interface. +* `msg`: Optional message to report on failure. + + +- - - + +#### `tf.test.TestCase.assertEndsWith(actual, expected_end, msg=None)` {#TestCase.assertEndsWith} + +Assert that actual.endswith(expected_end) is True. + +##### Args: + + +* `actual`: str +* `expected_end`: str +* `msg`: Optional message to report on failure. - - - @@ -322,10 +480,11 @@ Included for symmetry with assertIsNone. - - - -#### `tf.test.TestCase.assertItemsEqual(expected_seq, actual_seq, msg=None)` {#TestCase.assertItemsEqual} +#### `tf.test.TestCase.assertItemsEqual(*args, **kwargs)` {#TestCase.assertItemsEqual} -An unordered sequence specific comparison. It asserts that -actual_seq and expected_seq have the same element counts. +An unordered sequence specific comparison. + +It asserts that actual_seq and expected_seq have the same element counts. Equivalent to:: self.assertEqual(Counter(iter(actual_seq)), @@ -338,6 +497,30 @@ Asserts that each element has the same count in both sequences. - [0, 1, 1] and [1, 0, 1] compare equal. - [0, 0, 1] and [0, 1] compare unequal. +##### Args: + + +* `expected_seq`: A sequence containing elements we are expecting. +* `actual_seq`: The sequence that we are testing. +* `msg`: The message to be printed if the test fails. + + +- - - + +#### `tf.test.TestCase.assertJsonEqual(first, second, msg=None)` {#TestCase.assertJsonEqual} + +Asserts that the JSON objects defined in two strings are equal. + +A summary of the differences will be included in the failure message +using assertSameStructure. + +##### Args: + + +* `first`: A string contining JSON to decode and compare to second. +* `second`: A string contining JSON to decode and compare to first. +* `msg`: Additional text to include in the failure message. + - - - @@ -407,6 +590,13 @@ if not. * `msg`: An optional string message to append to the failure message. 
+- - - + +#### `tf.test.TestCase.assertNoCommonElements(expected_seq, actual_seq, msg=None)` {#TestCase.assertNoCommonElements} + +Checks whether actual iterable and expected iterable are disjoint. + + - - - #### `tf.test.TestCase.assertNotAlmostEqual(first, second, places=None, msg=None, delta=None)` {#TestCase.assertNotAlmostEqual} @@ -437,6 +627,33 @@ as significant digits (measured from the most signficant digit). Objects that are equal automatically fail. +- - - + +#### `tf.test.TestCase.assertNotEmpty(container, msg=None)` {#TestCase.assertNotEmpty} + +Assert that an object has non-zero length. + +##### Args: + + +* `container`: Anything that implements the collections.Sized interface. +* `msg`: Optional message to report on failure. + + +- - - + +#### `tf.test.TestCase.assertNotEndsWith(actual, unexpected_end, msg=None)` {#TestCase.assertNotEndsWith} + +Assert that actual.endswith(unexpected_end) is False. + +##### Args: + + +* `actual`: str +* `unexpected_end`: str +* `msg`: Optional message to report on failure. + + - - - #### `tf.test.TestCase.assertNotEqual(first, second, msg=None)` {#TestCase.assertNotEqual} @@ -474,6 +691,20 @@ Included for symmetry with assertIsInstance. Fail the test if the text matches the regular expression. +- - - + +#### `tf.test.TestCase.assertNotStartsWith(actual, unexpected_start, msg=None)` {#TestCase.assertNotStartsWith} + +Assert that actual.startswith(unexpected_start) is False. + +##### Args: + + +* `actual`: str +* `unexpected_start`: str +* `msg`: Optional message to report on failure. + + - - - #### `tf.test.TestCase.assertProtoEquals(expected_message_maybe_ascii, message)` {#TestCase.assertProtoEquals} @@ -548,6 +779,38 @@ Asserts that the message in a raised exception matches a regexp. * `kwargs`: Extra kwargs. +- - - + +#### `tf.test.TestCase.assertRaisesWithLiteralMatch(expected_exception, expected_exception_message, callable_obj=None, *args, **kwargs)` {#TestCase.assertRaisesWithLiteralMatch} + +Asserts that the message in a raised exception equals the given string. + +Unlike assertRaisesRegexp, this method takes a literal string, not +a regular expression. + +with self.assertRaisesWithLiteralMatch(ExType, 'message'): + DoSomething() + +##### Args: + + +* `expected_exception`: Exception class expected to be raised. +* `expected_exception_message`: String message expected in the raised + exception. For a raise exception e, expected_exception_message must + equal str(e). +* `callable_obj`: Function to be called, or None to return a context. +* `args`: Extra args. +* `kwargs`: Extra kwargs. + +##### Returns: + + A context manager if callable_obj is None. Otherwise, None. + +##### Raises: + + self.failureException if callable_obj does not raise a macthing exception. + + - - - #### `tf.test.TestCase.assertRaisesWithPredicateMatch(exception_type, expected_err_re_or_predicate)` {#TestCase.assertRaisesWithPredicateMatch} @@ -572,6 +835,71 @@ predicate search. exception. +- - - + +#### `tf.test.TestCase.assertRaisesWithRegexpMatch(expected_exception, expected_regexp, callable_obj=None, *args, **kwargs)` {#TestCase.assertRaisesWithRegexpMatch} + +Asserts that the message in a raised exception matches the given regexp. + +This is just a wrapper around assertRaisesRegexp. Please use +assertRaisesRegexp instead of assertRaisesWithRegexpMatch. + +##### Args: + + +* `expected_exception`: Exception class expected to be raised. +* `expected_regexp`: Regexp (re pattern object or string) expected to be + found in error message. 
+* `callable_obj`: Function to be called, or None to return a context. +* `args`: Extra args. +* `kwargs`: Extra keyword args. + +##### Returns: + + A context manager if callable_obj is None. Otherwise, None. + +##### Raises: + + self.failureException if callable_obj does not raise a macthing exception. + + +- - - + +#### `tf.test.TestCase.assertRegexMatch(actual_str, regexes, message=None)` {#TestCase.assertRegexMatch} + +Asserts that at least one regex in regexes matches str. + + If possible you should use assertRegexpMatches, which is a simpler + version of this method. assertRegexpMatches takes a single regular + expression (a string or re compiled object) instead of a list. + + Notes: + 1. This function uses substring matching, i.e. the matching + succeeds if *any* substring of the error message matches *any* + regex in the list. This is more convenient for the user than + full-string matching. + + 2. If regexes is the empty list, the matching will always fail. + + 3. Use regexes=[''] for a regex that will always pass. + + 4. '.' matches any single character *except* the newline. To + match any character, use '(.| +)'. + + 5. '^' matches the beginning of each line, not just the beginning + of the string. Similarly, '$' matches the end of each line. + + 6. An exception will be thrown if regexes contains an invalid + regex. + + Args: + actual_str: The string we try to match with the items in regexes. + regexes: The regular expressions we want to match against str. + See "Notes" above for detailed notes on how this is interpreted. + message: The message to be printed if the test fails. + + - - - #### `tf.test.TestCase.assertRegexpMatches(text, expected_regexp, msg=None)` {#TestCase.assertRegexpMatches} @@ -579,6 +907,79 @@ predicate search. Fail the test unless the text matches the regular expression. +- - - + +#### `tf.test.TestCase.assertSameElements(expected_seq, actual_seq, msg=None)` {#TestCase.assertSameElements} + +Assert that two sequences have the same elements (in any order). + +This method, unlike assertItemsEqual, doesn't care about any +duplicates in the expected and actual sequences. + + >> assertSameElements([1, 1, 1, 0, 0, 0], [0, 1]) + # Doesn't raise an AssertionError + +If possible, you should use assertItemsEqual instead of +assertSameElements. + +##### Args: + + +* `expected_seq`: A sequence containing elements we are expecting. +* `actual_seq`: The sequence that we are testing. +* `msg`: The message to be printed if the test fails. + + +- - - + +#### `tf.test.TestCase.assertSameStructure(a, b, aname='a', bname='b', msg=None)` {#TestCase.assertSameStructure} + +Asserts that two values contain the same structural content. + +The two arguments should be data trees consisting of trees of dicts and +lists. They will be deeply compared by walking into the contents of dicts +and lists; other items will be compared using the == operator. +If the two structures differ in content, the failure message will indicate +the location within the structures where the first difference is found. +This may be helpful when comparing large structures. + +##### Args: + + +* `a`: The first structure to compare. +* `b`: The second structure to compare. +* `aname`: Variable name to use for the first structure in assertion messages. +* `bname`: Variable name to use for the second structure. +* `msg`: Additional text to include in the failure message. 
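A minimal sketch of the structural comparison described above; the nested data and test class are hypothetical:

```python
import tensorflow as tf


class SameStructureExampleTest(tf.test.TestCase):
  """Hypothetical test illustrating assertSameStructure."""

  def testNestedStructuresMatch(self):
    expected = {'weights': [1.0, 2.0], 'meta': {'step': 10}}
    actual = {'weights': [1.0, 2.0], 'meta': {'step': 10}}

    # Walks into nested dicts and lists, comparing leaf values with ==.
    self.assertSameStructure(expected, actual,
                             aname='expected', bname='actual')

    # A difference deep inside the structure causes a failure whose message
    # points at the location of the first mismatch.
    with self.assertRaises(AssertionError):
      self.assertSameStructure(expected,
                               {'weights': [1.0, 2.0], 'meta': {'step': 11}})


if __name__ == '__main__':
  tf.test.main()
```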
+ + +- - - + +#### `tf.test.TestCase.assertSequenceAlmostEqual(expected_seq, actual_seq, places=None, msg=None, delta=None)` {#TestCase.assertSequenceAlmostEqual} + +An approximate equality assertion for ordered sequences. + +Fail if the two sequences are unequal as determined by their value +differences rounded to the given number of decimal places (default 7) and +comparing to zero, or by comparing that the difference between each value +in the two sequences is more than the given delta. + +Note that decimal places (from zero) are usually not the same as significant +digits (measured from the most signficant digit). + +If the two sequences compare equal then they will automatically compare +almost equal. + +##### Args: + + +* `expected_seq`: A sequence containing elements we are expecting. +* `actual_seq`: The sequence that we are testing. +* `places`: The number of decimal places to compare. +* `msg`: The message to be printed if the test fails. +* `delta`: The OK difference between compared values. + + - - - #### `tf.test.TestCase.assertSequenceEqual(seq1, seq2, msg=None, seq_type=None)` {#TestCase.assertSequenceEqual} @@ -599,6 +1000,26 @@ which can be indexed, has a length, and has an equality operator. differences. +- - - + +#### `tf.test.TestCase.assertSequenceStartsWith(prefix, whole, msg=None)` {#TestCase.assertSequenceStartsWith} + +An equality assertion for the beginning of ordered sequences. + +If prefix is an empty sequence, it will raise an error unless whole is also +an empty sequence. + +If prefix is not a sequence, it will raise an error if the first element of +whole does not match. + +##### Args: + + +* `prefix`: A sequence expected at the beginning of the whole parameter. +* `whole`: The sequence in which to look for prefix. +* `msg`: Optional message to report on failure. + + - - - #### `tf.test.TestCase.assertSetEqual(set1, set2, msg=None)` {#TestCase.assertSetEqual} @@ -650,6 +1071,51 @@ Assert that actual.startswith(expected_start) is True. * `msg`: Optional message to report on failure. +- - - + +#### `tf.test.TestCase.assertTotallyOrdered(*groups, **kwargs)` {#TestCase.assertTotallyOrdered} + +Asserts that total ordering has been implemented correctly. + +For example, say you have a class A that compares only on its attribute x. +Comparators other than __lt__ are omitted for brevity. + +class A(object): + def __init__(self, x, y): + self.x = x + self.y = y + + def __hash__(self): + return hash(self.x) + + def __lt__(self, other): + try: + return self.x < other.x + except AttributeError: + return NotImplemented + +assertTotallyOrdered will check that instances can be ordered correctly. +For example, + +self.assertTotallyOrdered( + [None], # None should come before everything else. + [1], # Integers sort earlier. + [A(1, 'a')], + [A(2, 'b')], # 2 is after 1. + [A(3, 'c'), A(3, 'd')], # The second argument is irrelevant. + [A(4, 'z')], + ['foo']) # Strings sort last. + +##### Args: + + +* `*groups`: A list of groups of elements. Each group of elements is a list + of objects that are equal. The elements in each group must be less than + the elements in the group after it. For example, these groups are + totally ordered: [None], [1], [2, 2], [3]. +* `**kwargs`: optional msg keyword argument can be passed. + + - - - #### `tf.test.TestCase.assertTrue(expr, msg=None)` {#TestCase.assertTrue} @@ -672,6 +1138,13 @@ A tuple-specific equality assertion. differences. 
+- - - + +#### `tf.test.TestCase.assertUrlEqual(a, b, msg=None)` {#TestCase.assertUrlEqual} + +Asserts that urls are equal, ignoring ordering of query params. + + - - - #### `tf.test.TestCase.assert_(expr, msg=None)` {#TestCase.assert_} @@ -733,9 +1206,9 @@ tearDown. - - - -#### `tf.test.TestCase.fail(msg=None)` {#TestCase.fail} +#### `tf.test.TestCase.fail(msg=None, prefix=None)` {#TestCase.fail} -Fail immediately, with the given message. +Fail immediately with the given message, optionally prefixed. - - - @@ -787,6 +1260,13 @@ Fail immediately, with the given message. +- - - + +#### `tf.test.TestCase.getRecordedProperties()` {#TestCase.getRecordedProperties} + +Return any properties that the user has recorded. + + - - - #### `tf.test.TestCase.get_temp_dir()` {#TestCase.get_temp_dir} @@ -809,6 +1289,20 @@ pollute each others environment. +- - - + +#### `tf.test.TestCase.recordProperty(property_name, property_value)` {#TestCase.recordProperty} + +Record an arbitrary property for later use. + +##### Args: + + +* `property_name`: str, name of property to record; must be a valid XML + attribute name +* `property_value`: value of property; must be valid XML attribute value + + - - - #### `tf.test.TestCase.run(result=None)` {#TestCase.run} @@ -834,11 +1328,18 @@ Hook method for setting up class fixture before running tests in the class. #### `tf.test.TestCase.shortDescription()` {#TestCase.shortDescription} -Returns a one-line description of the test, or None if no -description has been provided. +Format both the test method name and the first line of its docstring. -The default implementation of this method returns the first line of -the specified test method's docstring. +If no docstring is given, only returns the method name. + +This method overrides unittest.TestCase.shortDescription(), which +only returns the first line of the docstring, obscuring the name +of the test upon failure. + +##### Returns: + + +* `desc`: A short description of a test method. - - - From 202d00b4e3cf726d7934e9cc85cc985b7200b1c3 Mon Sep 17 00:00:00 2001 From: "A. Unique TensorFlower" Date: Thu, 12 Jan 2017 18:57:21 -0800 Subject: [PATCH 02/51] Task VariableV2 into account. Change: 144399224 --- tensorflow/tools/tfprof/README.md | 2 +- tensorflow/tools/tfprof/tfprof_main.cc | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/tensorflow/tools/tfprof/README.md b/tensorflow/tools/tfprof/README.md index 865a21d6a09..02eca8af6a2 100644 --- a/tensorflow/tools/tfprof/README.md +++ b/tensorflow/tools/tfprof/README.md @@ -152,7 +152,7 @@ tfprof> -min_float_ops 0 -device_regexes .* -order_by name --account_type_regexes Variable +-account_type_regexes Variable,VariableV2 -start_name_regexes .* -trim_name_regexes -show_name_regexes .* diff --git a/tensorflow/tools/tfprof/tfprof_main.cc b/tensorflow/tools/tfprof/tfprof_main.cc index 92e9510ea82..a8ed6e38132 100644 --- a/tensorflow/tools/tfprof/tfprof_main.cc +++ b/tensorflow/tools/tfprof/tfprof_main.cc @@ -75,7 +75,7 @@ int main(int argc, char** argv) { tensorflow::int64 FLAGS_min_float_ops = 0; tensorflow::string FLAGS_device_regexes = ".*"; tensorflow::string FLAGS_order_by = "name"; - tensorflow::string FLAGS_account_type_regexes = "Variable"; + tensorflow::string FLAGS_account_type_regexes = "Variable,VariableV2"; tensorflow::string FLAGS_start_name_regexes = ".*"; tensorflow::string FLAGS_trim_name_regexes = ""; tensorflow::string FLAGS_show_name_regexes = ".*"; From 04b30700fea43a8a5f47e6d189333f5b38644116 Mon Sep 17 00:00:00 2001 From: "A. 
Unique TensorFlower" Date: Thu, 12 Jan 2017 20:05:39 -0800 Subject: [PATCH 03/51] [XLA] Add a flag do_prefix to hlo_graph_dumper::DumpText() Change: 144402914 --- tensorflow/compiler/xla/service/hlo_graph_dumper.cc | 6 ++++-- tensorflow/compiler/xla/service/hlo_graph_dumper.h | 6 +++++- 2 files changed, 9 insertions(+), 3 deletions(-) diff --git a/tensorflow/compiler/xla/service/hlo_graph_dumper.cc b/tensorflow/compiler/xla/service/hlo_graph_dumper.cc index 4865a8fb45c..990173e2e52 100644 --- a/tensorflow/compiler/xla/service/hlo_graph_dumper.cc +++ b/tensorflow/compiler/xla/service/hlo_graph_dumper.cc @@ -495,11 +495,13 @@ string DumpGraph(const HloComputation& computation, const string& label, } void DumpText(const HloModule& module, const string& label, - const string& directory_path) { + const string& directory_path, bool do_prefix) { Env* env = Env::Default(); TF_CHECK_OK(env->RecursivelyCreateDir(directory_path)); string prefix = StrCat(env->NowMicros()); - string path = JoinPath(directory_path, StrCat(prefix, "-", label, ".txt")); + string filename = + do_prefix ? StrCat(prefix, "-", label, ".txt") : StrCat(label, ".txt"); + string path = JoinPath(directory_path, filename); TF_CHECK_OK(WriteStringToFile(env, path, module.ToString())); } diff --git a/tensorflow/compiler/xla/service/hlo_graph_dumper.h b/tensorflow/compiler/xla/service/hlo_graph_dumper.h index 45fd46352f9..5f841da1f35 100644 --- a/tensorflow/compiler/xla/service/hlo_graph_dumper.h +++ b/tensorflow/compiler/xla/service/hlo_graph_dumper.h @@ -33,8 +33,12 @@ string DumpGraph(const HloComputation& computation, const string& label, // Dumps the HloModule::ToString() as a file into the provided directory path // suffixed with the provided label. +// +// If do_prefix is true, a timestamp will be prepended onto the label to +// construct a filename in the directory path; otherwise, the label is used +// as the filename directly. void DumpText(const HloModule& module, const string& label, - const string& directory_path); + const string& directory_path, bool do_prefix = true); // Abstract interface for classes that render DOT graphs. class GraphRendererInterface { From 1e1e59892c9454d7773a21326f0d8e5fb39349c0 Mon Sep 17 00:00:00 2001 From: "A. Unique TensorFlower" Date: Thu, 12 Jan 2017 21:00:20 -0800 Subject: [PATCH 04/51] Change comment to use C++, not Python, syntax. Complete list of types you can pass to the Input constructor. Change: 144405478 --- tensorflow/cc/framework/ops.h | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/tensorflow/cc/framework/ops.h b/tensorflow/cc/framework/ops.h index 71bfc6617c1..a9c9dee82f1 100644 --- a/tensorflow/cc/framework/ops.h +++ b/tensorflow/cc/framework/ops.h @@ -193,6 +193,7 @@ class Input { // * A scalar, or a multi-dimensional tensor specified as a recursive // initializer list. This enables directly passing constants as // inputs to op wrappers. + // * A Tensor object. Input(const Output& o) : output_(o) {} // NOLINT(runtime/explicit) template OutputList; class InputList { public: // Implicitly convert a list of outputs to a list of inputs. This is useful to - // write code such as tf.Concat(tf.Split(x, 4)). + // write code such as ops::Concat(ops::Split(x, 4)). 
InputList(const OutputList& out) { // NOLINT(runtime/explicit) for (auto const& x : out) { inputs_.push_back(x); From 19d41f02a864065c4000a56ba5ce1c49dc2f92f0 Mon Sep 17 00:00:00 2001 From: Asim Shankar Date: Thu, 12 Jan 2017 21:47:00 -0800 Subject: [PATCH 05/51] Docs: Tweaks to the versioning guarantees before the 1.0 release. Change: 144407717 --- tensorflow/g3doc/resources/versions.md | 70 +++++++++++--------------- 1 file changed, 28 insertions(+), 42 deletions(-) diff --git a/tensorflow/g3doc/resources/versions.md b/tensorflow/g3doc/resources/versions.md index 34a8e6bc308..a8e211b5c8b 100644 --- a/tensorflow/g3doc/resources/versions.md +++ b/tensorflow/g3doc/resources/versions.md @@ -2,10 +2,9 @@ ## Semantic Versioning 2.0 -Once we reach version 1.0, TensorFlow will follow Semantic Versioning 2.0 -([semver](http://semver.org)) for its public API. Each release version of -TensorFlow has the form `MAJOR.MINOR.PATCH`.  Changes to the each number have -the following meaning: +TensorFlow follows Semantic Versioning 2.0 ([semver](http://semver.org)) for its +public API. Each release version of TensorFlow has the form `MAJOR.MINOR.PATCH`. +Changes to the each number have the following meaning: * **MAJOR**:  Backwards incompatible changes.  Code and data that worked with a previous major release will not necessarily work with a new release. @@ -20,23 +19,23 @@ the following meaning: * **PATCH**: Backwards compatible bug fixes. -Before 1.0, semver allows backwards incompatible changes at any time.  However, -to support users now, we will use the format `0.MAJOR.MINOR` (shifted one step -to the right).  Thus 0.5.0 to 0.6.0 may be backwards incompatible, but 0.6.0 to -0.6.1 will include only backwards compatible features and bug fixes. - -At some point (especially as we approach 1.0) we will likely use prerelease -versions such as X.Y.Z-alpha.1, but we do not yet have specific plans (beyond -the restrictions of semver). - - ## Public API -Only the C, C++, and Python public APIs of TensorFlow are backwards compatible -across minor and patch versions.  The public APIs consist of +Only the public APIs of TensorFlow are backwards compatible across minor and +patch versions.  The public APIs consist of -* The documented [Python](../api_docs/python), [C++](../api_docs/cc) and - the [C](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/c/c_api.h) APIs. +* The documented public [Python](../api_docs/python) API, excluding `tf.contrib`. + This includes all public functions and classes (with names not starting with + `_`) in the tensorflow module and its submodules. Note that the code in + the `examples/` to `tools/` directories is not reachable through the + tensorflow Python module and is thus not covered by the compatibility + guarantee. + + If a symbol is available through the tensorflow Python module or its + submodules, but is not documented, then it is _not_ considered part of the + public API. + +* The [C API](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/c/c_api.h). * The following protocol buffer files: [`attr_value`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/framework/attr_value.proto), @@ -50,23 +49,18 @@ across minor and patch versions.  The public APIs consist of [`tensor_shape`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/framework/tensor_shape.proto), and [`types`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/framework/types.proto). 
-The public C++ API is exposed through the header files in -[`tensorflow/core/public`](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/core/public). +## Other Languages -The public Python API is unfortunately **not** everything available through the -tensorflow python module and its submodules, since we do not yet use `__all__` -everywhere ([#421](https://github.com/tensorflow/tensorflow/issues/421)). -Please refer to the documentation to determine whether a given Python feature -is part of the public API. For now, the protocol buffers are defined in -[`tensorflow/core/framework/*.proto`](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/core/framework) -([#484](https://github.com/tensorflow/tensorflow/issues/484)). +In addition to Python and C, TensorFlow also provides APIs for: -> The [Java](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/java) -> ([#5](https://github.com/tensorflow/tensorflow/issues/5)) and -> [Go](https://godoc.org/github.com/tensorflow/tensorflow/tensorflow/go) APIs -> are experimental and are **not** covered by the versioning scheme at this time. -> They are not guaranteed to backward compatible between releases. +- [C++](../api_docs/cc) (exposed through header files in +[`tensorflow/cc`](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/cc). +- [Java](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/java) +([#5](https://github.com/tensorflow/tensorflow/issues/5)), and +- [Go](https://godoc.org/github.com/tensorflow/tensorflow/tensorflow/go) +However, these three are **not** covered by the versioning scheme at this time +and can be changed in backward incompatible ways between releases. ## Details That Are Not Public @@ -76,7 +70,7 @@ fixes require it: * **Details of composite ops:**  Many public functions in Python expand to several primitive ops in the graph, and these details will be part of any - graphs saved to disk as GraphDefs.  These details are allowed to change for + graphs saved to disk as `GraphDef`s.  These details are allowed to change for minor releases. In particular, regressions tests that check for exact matching between graphs are likely to break across minor releases, even though the behavior of the graph should be unchanged and existing checkpoints will @@ -98,7 +92,7 @@ fixes require it: such intended changes will be documented. -## Compatibility for Graphs and Checkpoints {#graphs} +## Compatibility for Graphs and Checkpoints Many users of TensorFlow will be saving graphs and trained models to disk for later evaluation or more training, often changing versions of TensorFlow in the @@ -145,11 +139,3 @@ provide tools for automatically converting graphs to a newer supported For developer-level details about `GraphDef` versioning, including how to evolve the versions to account for changes, see [TensorFlow Data Versioning](data_versions.md). - - -## C++ ABI Compatibility - -Only patch releases will be binary compatible at the C++ level.  That is, minor -releases are backwards compatible in terms of behavior but may require a -recompile for downstream C++ code.  As always, backwards compatibility is only -provided for the public C++ API. From c94aece691e45a6de503343d6601bfad439a14e1 Mon Sep 17 00:00:00 2001 From: "A. 
Unique TensorFlower" Date: Thu, 12 Jan 2017 22:32:04 -0800 Subject: [PATCH 06/51] Automated rollback of change 144270020 Change: 144409845 --- .../tf_image_dashboard/tf-image-loader.html | 150 +++--------------- 1 file changed, 24 insertions(+), 126 deletions(-) diff --git a/tensorflow/tensorboard/components/tf_image_dashboard/tf-image-loader.html b/tensorflow/tensorboard/components/tf_image_dashboard/tf-image-loader.html index 357655e2582..fdf2c4494f7 100644 --- a/tensorflow/tensorboard/components/tf_image_dashboard/tf-image-loader.html +++ b/tensorflow/tensorboard/components/tf_image_dashboard/tf-image-loader.html @@ -16,7 +16,6 @@ limitations under the License. --> - @@ -29,29 +28,15 @@ future for loading older images.