Merge changes from github.
Change: 147897309
This commit is contained in:
parent eb9624017a
commit 93a975e114
.gitignore (vendored): 1 change
@@ -13,3 +13,4 @@ node_modules
 *.pyc
 __pycache__
 *.swp
+.vscode/
@@ -21,7 +21,7 @@ If you have improvements to TensorFlow, send us your pull requests! For those
 just getting started, Github has a [howto](https://help.github.com/articles/using-pull-requests/).
 
 If you want to contribute but you're not sure where to start, take a look at the
-[issues with the "contributions welcome" label](https://github.com/tensorflow/tensorflow/labels/contributions%20welcome).
+[issues with the "contributions welcome" label](https://github.com/tensorflow/tensorflow/labels/stat%3Acontributions%20welcome).
 These are issues that we believe are particularly well suited for outside
 contributions, often because we probably won't get to them right now. If you
 decide to start on an issue, leave a comment so that other people know that
@@ -33,10 +33,10 @@ and discussion.**
 
 People who are a little more adventurous can also try our nightly binaries:
 
-* Linux CPU-only: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=cpu-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.0.0rc1-cp27-none-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=cpu-slave)) / [Python 3.4](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=cpu-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.0.0rc1-cp34-cp34m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=cpu-slave/)) / [Python 3.5](https://ci.tensorflow.org/view/Nightly/job/nightly-python35-linux-cpu/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.0.0rc1-cp35-cp35m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-python35-linux-cpu/))
-* Linux GPU: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-linux/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.0.0rc1-cp27-none-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-linux/)) / [Python 3.4](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-linux/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.0.0rc1-cp34-cp34m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-linux/)) / [Python 3.5](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3.5,label=gpu-linux/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.0.0rc1-cp35-cp35m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3.5,label=gpu-linux/))
-* Mac CPU-only: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=mac-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.0.0rc1-py2-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=mac-slave/)) / [Python 3](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=mac-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.0.0rc1-py3-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=mac-slave/))
-* Mac GPU: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-mac/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.0.0rc1-py2-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-mac/)) / [Python 3](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-mac/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.0.0rc1-py3-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-mac/))
-* [Android](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-android/TF_BUILD_CONTAINER_TYPE=ANDROID,TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=NO_PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=android-slave/lastSuccessfulBuild/artifact/bazel-out/local_linux/bin/tensorflow/examples/android/tensorflow_demo.apk) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-android/TF_BUILD_CONTAINER_TYPE=ANDROID,TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=NO_PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=android-slave/))
+* Linux CPU-only: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=cpu-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.0.0rc2-cp27-none-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=cpu-slave)) / [Python 3.4](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=cpu-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.0.0rc2-cp34-cp34m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=cpu-slave/)) / [Python 3.5](https://ci.tensorflow.org/view/Nightly/job/nightly-python35-linux-cpu/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.0.0rc2-cp35-cp35m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-python35-linux-cpu/))
+* Linux GPU: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-linux/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.0.0rc2-cp27-none-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-linux/)) / [Python 3.4](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-linux/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.0.0rc2-cp34-cp34m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-linux/)) / [Python 3.5](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3.5,label=gpu-linux/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.0.0rc2-cp35-cp35m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3.5,label=gpu-linux/))
+* Mac CPU-only: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=mac-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.0.0rc2-py2-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=mac-slave/)) / [Python 3](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=mac-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.0.0rc2-py3-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=mac-slave/))
+* Mac GPU: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-mac/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.0.0rc2-py2-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-mac/)) / [Python 3](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-mac/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.0.0rc2-py3-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-mac/))
+* Android: [demo APK](https://ci.tensorflow.org/view/Nightly/job/nightly-android/lastSuccessfulBuild/artifact/out/tensorflow_demo.apk), [native libs](http://ci.tensorflow.org/view/Nightly/job/nightly-android/lastSuccessfulBuild/artifact/out/native/)
+([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-android/))
@@ -193,7 +193,7 @@ answered questions, and were part of inspiring discussions.
   indexing now starts from 1 instead of 0, and `bus_id==0` is used where
   previously `BUS_ANY` was used.
 * `Env::FileExists` and `FileSystem::FileExists` now return a tensorflow::Status
-  intead of a bool. Any callers to this function can be converted to a bool
+  instead of a bool. Any callers to this function can be converted to a bool
   by adding .ok() to the call.
 * The C API type `TF_SessionWithGraph` has been renamed to `TF_Session`,
   indicating its preferred use in language bindings for TensorFlow.
@@ -212,7 +212,7 @@ answered questions, and were part of inspiring discussions.
 * `SparseTensor.shape` has been renamed to `SparseTensor.dense_shape`. Same for
   `SparseTensorValue.shape`.
 * `Env::FileExists` and `FileSystem::FileExists` now return a
-  `tensorflow::Status` intead of a bool. Any callers to this function can be
+  `tensorflow::Status` instead of a bool. Any callers to this function can be
   converted to a bool by adding `.ok()` to the call.
 * C API: Type `TF_SessionWithGraph` has been renamed to `TF_Session`, indicating
   its preferred use in language bindings for TensorFlow. What was previously
@@ -480,7 +480,7 @@ new_http_archive(
 )
 
 new_http_archive(
-    name = "polymer_archive",
+    name = "polymer",
     build_file = "bower.BUILD",
     url = "https://github.com/polymer/polymer/archive/v1.7.0.tar.gz",
     strip_prefix = "polymer-1.7.0",
configure (vendored): 7 changes
@@ -418,7 +418,12 @@ while true; do
   fi
 
   if is_linux; then
-    CUDNN_PATH_FROM_LDCONFIG="$(ldconfig -p | sed -n 's/.*libcudnn.so .* => \(.*\)/\1/p')"
+    if ! type ldconfig > /dev/null 2>&1; then
+      LDCONFIG_BIN=/sbin/ldconfig
+    else
+      LDCONFIG_BIN=ldconfig
+    fi
+    CUDNN_PATH_FROM_LDCONFIG="$($LDCONFIG_BIN -p | sed -n 's/.*libcudnn.so .* => \(.*\)/\1/p')"
     if [ -e "${CUDNN_PATH_FROM_LDCONFIG}${TF_CUDNN_EXT}" ]; then
       export TF_CUDNN_VERSION
       export CUDNN_INSTALL_PATH="$(dirname ${CUDNN_PATH_FROM_LDCONFIG})"
@@ -104,6 +104,12 @@ config_setting(
     visibility = ["//visibility:public"],
 )
 
+config_setting(
+    name = "freebsd",
+    values = {"cpu": "freebsd"},
+    visibility = ["//visibility:public"],
+)
+
 package_group(
     name = "internal",
     packages = ["//tensorflow/..."],
@@ -164,6 +170,7 @@ filegroup(
         "//tensorflow/contrib/framework:all_files",
         "//tensorflow/contrib/graph_editor:all_files",
         "//tensorflow/contrib/grid_rnn:all_files",
+        "//tensorflow/contrib/hooks:all_files",
         "//tensorflow/contrib/image:all_files",
         "//tensorflow/contrib/imperative:all_files",
         "//tensorflow/contrib/input_pipeline:all_files",
@@ -25,6 +25,7 @@ py_library(
        "//tensorflow/contrib/framework:framework_py",
        "//tensorflow/contrib/graph_editor:graph_editor_py",
        "//tensorflow/contrib/grid_rnn:grid_rnn_py",
+       "//tensorflow/contrib/hooks",
        "//tensorflow/contrib/image:image_py",
        "//tensorflow/contrib/imperative",
        "//tensorflow/contrib/input_pipeline:input_pipeline_py",
@@ -264,6 +264,7 @@ add_python_module("tensorflow/contrib/grid_rnn")
 add_python_module("tensorflow/contrib/grid_rnn/python")
 add_python_module("tensorflow/contrib/grid_rnn/python/kernel_tests")
 add_python_module("tensorflow/contrib/grid_rnn/python/ops")
+add_python_module("tensorflow/contrib/hooks")
 add_python_module("tensorflow/contrib/image")
 add_python_module("tensorflow/contrib/image/python")
 add_python_module("tensorflow/contrib/image/python/ops")
@@ -448,7 +448,7 @@ def remove_control_inputs(op, cops):
 
 
 def add_control_inputs(op, cops):
-  """Add the control inputs cops to co.
+  """Add the control inputs cops to op.
 
   Warning: this function is directly manipulating the internals of the tf.Graph.
 
@@ -464,8 +464,8 @@ def add_control_inputs(op, cops):
   cops = _util.make_list_of_op(cops, allow_graph=False)
   for cop in cops:
     if cop in op.control_inputs:
-      raise ValueError("{} is already a control_input of {}".format(op.name,
-                                                                    cop.name))
+      raise ValueError("{} is already a control_input of {}".format(cop.name,
+                                                                    op.name))
   # pylint: disable=protected-access
   op._control_inputs += cops
   op._recompute_node_def()
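Beyond the docstring typo, the second hunk swaps the names in the error message so it reads correctly. A minimal usage sketch, assuming `add_control_inputs` is exported at the `tf.contrib.graph_editor` package level as in this tree (the op names are illustrative):

```python
import tensorflow as tf
from tensorflow.contrib import graph_editor as ge

a = tf.constant(1.0, name="a")
b = tf.constant(2.0, name="b")
c = tf.add(a, b, name="c")

# Make c.op wait on a.op. Calling this twice with the same pair now reports
# "a is already a control_input of c", with the names the right way around.
ge.add_control_inputs(c.op, [a.op])
```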
tensorflow/contrib/hooks/BUILD (new file, 54 lines)
@@ -0,0 +1,54 @@
+# Description:
+#   Contains `SessionRunHook`s for use with `MonitoredSession` and the
+#   wrappers around it.
+
+licenses(["notice"])  # Apache 2.0
+
+exports_files(["LICENSE"])
+
+package(default_visibility = ["//tensorflow:__subpackages__"])
+
+load("//tensorflow:tensorflow.bzl", "py_test")
+
+py_library(
+    name = "hooks",
+    srcs = [
+        "__init__.py",
+        "python/training/__init__.py",
+        "python/training/profiler_hook.py",
+    ],
+    srcs_version = "PY2AND3",
+    deps = [
+        "//tensorflow/contrib/framework:framework_py",
+        "//tensorflow/python:framework",
+        "//tensorflow/python:framework_for_generated_wrappers",
+        "//tensorflow/python:state_ops",
+        "//tensorflow/python:training",
+        "//tensorflow/python:variables",
+    ],
+)
+
+py_test(
+    name = "profiler_hook_test",
+    size = "small",
+    srcs = ["python/training/profiler_hook_test.py"],
+    srcs_version = "PY2AND3",
+    deps = [
+        ":hooks",
+        "//tensorflow/python:client_testlib",
+        "//tensorflow/python:framework_for_generated_wrappers",
+        "//tensorflow/python:framework_test_lib",
+        "//tensorflow/python:platform_test",
+    ],
+)
+
+filegroup(
+    name = "all_files",
+    srcs = glob(
+        ["**/*"],
+        exclude = [
+            "**/METADATA",
+            "**/OWNERS",
+        ],
+    ),
+)
tensorflow/contrib/hooks/README.md (new file, 30 lines)
@@ -0,0 +1,30 @@
+# TensorFlow Experimental SessionRunHooks
+
+These hooks complement those in tensorflow/python/training. They are instances
+of `SessionRunHook` and are to be used with helpers like `MonitoredSession`
+and `learn.Estimator` that wrap `tensorflow.Session`.
+
+The hooks are called between invocations of `Session.run()` to perform custom
+behaviour.
+
+For example the `ProfilerHook` periodically collects `RunMetadata` after
+`Session.run()` and saves profiling information that can be viewed in a
+neat timeline through a Chromium-based web browser (via
+[about:tracing](chrome://tracing)) or the standalone [Catapult](https://github.com/catapult-project/catapult/blob/master/tracing/README.md) tool.
+
+```python
+from tensorflow.contrib.hooks import ProfilerHook
+
+hooks = [ProfilerHook(save_secs=30, output_dir="profiling")]
+with SingularMonitoredSession(hooks=hooks) as sess:
+  while not sess.should_stop():
+    sess.run(some_op)
+```
+
+Or similarly with contrib.learn:
+
+```python
+hooks = [ProfilerHook(save_steps=10, output_dir="profiling")]
+estimator = learn.Estimator(...)
+estimator.fit(input_fn, monitors=hooks)
+```
tensorflow/contrib/hooks/__init__.py (new file, 32 lines)
@@ -0,0 +1,32 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""hooks: A module containing `SessionRunHook`s for use with `MonitoredSession`.
+
+@@ProfilerHook
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+# pylint: disable=wildcard-import
+from tensorflow.contrib.hooks.python.training import *
+# pylint: enable=wildcard-import
+
+from tensorflow.python.util.all_util import remove_undocumented
+
+_allowed_symbols = ['ProfilerHook']
+
+remove_undocumented(__name__, _allowed_symbols)
tensorflow/contrib/hooks/python/__init__.py (new file, 24 lines)
@@ -0,0 +1,24 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Experimental `SessionRunHooks` for use with `MonitoredSession`."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+# pylint: disable=wildcard-import
+from tensorflow.contrib.hooks.python.training import *
+# pylint: enable=wildcard-import
tensorflow/contrib/hooks/python/training/__init__.py (new file, 22 lines)
@@ -0,0 +1,22 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""hooks: A module containing `SessionRunHook`s for use with `MonitoredSession`.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from tensorflow.contrib.hooks.python.training.profiler_hook import ProfilerHook
tensorflow/contrib/hooks/python/training/profiler_hook.py (new file, 104 lines)
@@ -0,0 +1,104 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Additional `SessionRunHook` implementations to complement those in
+tensorflow/python/training.
+
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os.path
+
+from tensorflow.core.protobuf import config_pb2
+from tensorflow.python.client import timeline
+from tensorflow.python.platform import gfile
+from tensorflow.python.platform import tf_logging as logging
+from tensorflow.python.training.basic_session_run_hooks import SecondOrStepTimer
+from tensorflow.python.training.session_run_hook import SessionRunArgs
+from tensorflow.python.training import session_run_hook
+from tensorflow.python.training import training_util
+
+
+class ProfilerHook(session_run_hook.SessionRunHook):
+  """Captures CPU/GPU profiling information every N steps or seconds.
+
+  This produces files called "timeline-<step>.json", which are in Chrome
+  Trace format.
+
+  For more information see:
+  https://github.com/catapult-project/catapult/blob/master/tracing/README.md"""
+
+  def __init__(self,
+               save_steps=None,
+               save_secs=None,
+               output_dir="",
+               show_dataflow=True,
+               show_memory=False):
+    """Initializes a hook that takes periodic profiling snapshots.
+
+    Args:
+      save_steps: `int`, save profile traces every N steps. Exactly one of
+          `save_secs` and `save_steps` should be set.
+      save_secs: `int`, save profile traces every N seconds.
+      output_dir: `string`, the directory to save the profile traces to.
+          Defaults to the current directory.
+      show_dataflow: `bool`, if True, add flow events to the trace connecting
+          producers and consumers of tensors.
+      show_memory: `bool`, if True, add object snapshot events to the trace
+          showing the sizes and lifetimes of tensors.
+    """
+    self._output_file = os.path.join(output_dir, "timeline-{}.json")
+    self._show_dataflow = show_dataflow
+    self._show_memory = show_memory
+    self._timer = SecondOrStepTimer(every_secs=save_secs,
+                                    every_steps=save_steps)
+
+  def begin(self):
+    self._next_step = None
+    self._global_step_tensor = training_util.get_global_step()
+    if self._global_step_tensor is None:
+      raise RuntimeError(
+          "Global step should be created to use ProfilerHook.")
+
+  def before_run(self, run_context):
+    self._request_summary = (
+        self._next_step is None or
+        self._timer.should_trigger_for_step(self._next_step))
+    requests = {"global_step": self._global_step_tensor}
+    opts = (config_pb2.RunOptions(trace_level=config_pb2.RunOptions.FULL_TRACE)
+            if self._request_summary else None)
+
+    return SessionRunArgs(requests, options=opts)
+
+  def after_run(self, run_context, run_values):
+    global_step = run_values.results["global_step"]
+
+    if self._request_summary:
+      self._timer.update_last_triggered_step(global_step)
+      self._save(global_step,
+                 self._output_file.format(global_step),
+                 run_values.run_metadata.step_stats)
+
+    self._next_step = global_step + 1
+
+  def _save(self, step, save_path, step_stats):
+    logging.info("Saving timeline for %d into '%s'.", step, save_path)
+    with gfile.Open(save_path, "w") as f:
+      trace = timeline.Timeline(step_stats)
+      f.write(trace.generate_chrome_trace_format(
+          show_dataflow=self._show_dataflow,
+          show_memory=self._show_memory))
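A brief usage sketch to go with the new hook: `begin()` above raises unless a global step exists in the graph, so one must be created before the session starts. Paths and the training op below are illustrative placeholders:

```python
import tensorflow as tf
from tensorflow.contrib.hooks import ProfilerHook

# begin() raises RuntimeError without a global step, so create one first.
global_step = tf.contrib.framework.get_or_create_global_step()
train_op = tf.assign_add(global_step, 1)  # stand-in for a real training op

hook = ProfilerHook(save_steps=10, output_dir="/tmp/profiling")
with tf.train.MonitoredTrainingSession(hooks=[hook]) as sess:
  for _ in range(30):
    sess.run(train_op)  # a timeline-<step>.json is written every 10th step
```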
tensorflow/contrib/hooks/python/training/profiler_hook_test.py (new file, 122 lines)
@@ -0,0 +1,122 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for profiler_hook."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os.path
+import shutil
+import tempfile
+
+from tensorflow.contrib.framework.python.ops import variables
+from tensorflow.contrib.hooks.python.training import ProfilerHook
+from tensorflow.python.framework import ops
+from tensorflow.python.ops import state_ops
+from tensorflow.python.platform import gfile
+from tensorflow.python.platform import test
+from tensorflow.python.training import monitored_session
+
+
+class ProfilerHookTest(test.TestCase):
+
+  def setUp(self):
+    super(ProfilerHookTest, self).setUp()
+    self.output_dir = tempfile.mkdtemp()
+    self.graph = ops.Graph()
+    self.filepattern = os.path.join(self.output_dir, "timeline-*.json")
+    with self.graph.as_default():
+      self.global_step = variables.get_or_create_global_step()
+      self.train_op = state_ops.assign_add(self.global_step, 1)
+
+  def tearDown(self):
+    super(ProfilerHookTest, self).tearDown()
+    shutil.rmtree(self.output_dir, ignore_errors=True)
+
+  def _count_timeline_files(self):
+    return len(gfile.Glob(self.filepattern))
+
+  def test_raise_in_both_secs_and_steps(self):
+    with self.assertRaises(ValueError):
+      ProfilerHook(save_secs=10, save_steps=20)
+
+  def test_raise_in_none_secs_and_steps(self):
+    with self.assertRaises(ValueError):
+      ProfilerHook(save_secs=None, save_steps=None)
+
+  def test_save_secs_saves_in_first_step(self):
+    with self.graph.as_default():
+      hook = ProfilerHook(save_secs=2, output_dir=self.output_dir)
+      with monitored_session.SingularMonitoredSession(hooks=[hook]) as sess:
+        sess.run(self.train_op)
+        self.assertEqual(1, self._count_timeline_files())
+
+  @test.mock.patch('time.time')
+  def test_save_secs_saves_periodically(self, mock_time):
+    # Pick a fixed start time.
+    current_time = 1484863632.320497
+
+    with self.graph.as_default():
+      mock_time.return_value = current_time
+      hook = ProfilerHook(save_secs=2, output_dir=self.output_dir)
+      with monitored_session.SingularMonitoredSession(hooks=[hook]) as sess:
+        sess.run(self.train_op)  # Saved.
+        self.assertEqual(1, self._count_timeline_files())
+        sess.run(self.train_op)  # Not saved.
+        self.assertEqual(1, self._count_timeline_files())
+        # Simulate 2.5 seconds of sleep.
+        mock_time.return_value = current_time + 2.5
+        sess.run(self.train_op)  # Saved.
+
+        # Pretend some small amount of time has passed.
+        mock_time.return_value = current_time + 0.1
+        sess.run(self.train_op)  # Not saved.
+        # Edge test just before we should save the timeline.
+        mock_time.return_value = current_time + 1.9
+        sess.run(self.train_op)  # Not saved.
+        self.assertEqual(2, self._count_timeline_files())
+
+        mock_time.return_value = current_time + 4.5
+        sess.run(self.train_op)  # Saved.
+        self.assertEqual(3, self._count_timeline_files())
+
+  def test_save_steps_saves_in_first_step(self):
+    with self.graph.as_default():
+      hook = ProfilerHook(save_secs=2, output_dir=self.output_dir)
+      with monitored_session.SingularMonitoredSession(hooks=[hook]) as sess:
+        sess.run(self.train_op)  # Saved.
+        sess.run(self.train_op)  # Not saved.
+        self.assertEqual(1, self._count_timeline_files())
+
+  def test_save_steps_saves_periodically(self):
+    with self.graph.as_default():
+      hook = ProfilerHook(save_steps=2, output_dir=self.output_dir)
+      with monitored_session.SingularMonitoredSession(hooks=[hook]) as sess:
+        self.assertEqual(0, self._count_timeline_files())
+        sess.run(self.train_op)  # Saved.
+        self.assertEqual(1, self._count_timeline_files())
+        sess.run(self.train_op)  # Not saved.
+        self.assertEqual(1, self._count_timeline_files())
+        sess.run(self.train_op)  # Saved.
+        self.assertEqual(2, self._count_timeline_files())
+        sess.run(self.train_op)  # Not saved.
+        self.assertEqual(2, self._count_timeline_files())
+        sess.run(self.train_op)  # Saved.
+        self.assertEqual(3, self._count_timeline_files())
+
+
+if __name__ == '__main__':
+  test.main()
tensorflow/contrib/imperative/README.md (new file, 155 lines)
@@ -0,0 +1,155 @@
+## Imperative programming in TensorFlow
+
+In the standard TensorFlow library, the specification of the computation is done
+statically in terms of a computation graph, and is separate from the execution
+of the graph. This model of programming is referred to as *lazy*, *deferred*,
+*dynamic*, or, *asynchronous*. This library brings imperative style programming (à
+la [NumPy](http://www.numpy.org)) to TensorFlow. Using this library, you can:
+
+* Write code in an imperative style: the results of the computation are available
+  right after the execution of a line of code.
+* Use TensorFlow operations on tensors, and get all the benefits of GPU
+  acceleration.
+* Include any Python control flow statements like `while` and `if` when
+  specifying the computation.
+* Perform automatic differentiation on your code with the
+  standard
+  [`tf.gradients`](https://www.tensorflow.org/api_docs/python/train/gradient_computation#gradients) function.
+
+### Getting started
+
+This library is a thin wrapper over the standard TensorFlow Python library. The
+source code is
+available
+[here](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/imperative). You
+can get started on Linux by installing the nightly PIP package linked off
+[the main page](https://github.com/tensorflow/tensorflow). Please
+consult [this](https://github.com/tensorflow/tensorflow#installation) document for other platforms and the PIP package including GPU
+support.
+
+
+### Write your first imperative TensorFlow program
+
+```shell
+$ python
+```
+
+```python
+>>> import tensorflow.contrib.imperative as tf
+>>> x = tf.constant([[7.], [6]])
+>>> y = tf.constant([[6., 7]])
+>>> tf.matmul(x, y)
+array([[ 42.,  49.],
+       [ 36.,  42.]], dtype=float32)
+```
+
+Note that this code is identical in terms of the programmer's mental model to
+the following NumPy code:
+
+```python
+>>> import numpy as np
+>>> x = np.array([[7.], [6]])
+>>> y = np.array([[6., 7]])
+>>> x.dot(y)
+array([[ 42.,  49.],
+       [ 36.,  42.]])
+```
+
+The library can be imported as `import tensorflow.contrib.imperative as tf`
+(contrast with importing standard TensorFlow, which is done as `import
+tensorflow as tf`). This import statement makes all of standard TensorFlow
+available in the `tf` symbol. However, it is not necessary to create a session
+object and set it up to run and fetch tensors.
+
+
+### Features
+
+The library provides the following additional features on top of standard
+TensorFlow:
+
+* Tensors are automatically fetched when used in contexts that expect their
+  value.
+
+  - Printing
+
+  ```python
+  x = tf.constant(10)
+  y = tf.constant(32)
+  print(x + y)
+  42
+  ```
+
+  - Use in conditionals
+
+  ```python
+  x = tf.constant(30)
+  if x > 4:
+    print('Greater than 4')
+  Greater than 4
+
+  x = tf.random_normal([3])
+  y = x * 2
+  while tf.global_norm([y]) < 1000:
+    y = y * 2
+  print(y)
+  [ -213.2868042   -511.02456665  1026.66882324]
+  ```
+
+* Variables are automatically initialized, no need to run the
+  [`tf.global_variables_initializer()`](https://www.tensorflow.org/api_docs/python/state_ops/variable_helper_functions#global_variables_initializer) operation.
+
+  ```python
+  x = tf.Variable(np.random.normal(size=[2, 2]), dtype=tf.float32)
+  y = tf.constant([[1, 2.]])
+  z = tf.matmul(y, x)
+  print(z)
+  array([[-1.231673 ,  3.14744973]], dtype=float32)
+  ```
+
+* Gradients work as expected using the standard `tf.gradients` function.
+
+  ```python
+  x = tf.Variable(np.random.rand(1, 3))
+  y = tf.exp(x)
+  dy = tf.gradients(y, x)
+  # dy/dx should be equal to y (= exp(x))
+  print(y, dy)
+  (array([[ 1.79997761,  2.00581881,  2.37302414]]), [array([[ 1.79997761,  2.00581881,  2.37302414]])])
+  ```
+
+### Caveats
+
+The library is implemented on top of standard TensorFlow. It still constructs a
+graph in the background and defers op execution. But when an op executes for the
+first time, its results are cached and the cached value is returned for future
+executions, thus providing imperative semantics. Because of this implementation
+choice, this library comes with the following caveats:
+
+* **Use inside Python loops:** A graph is constructed and kept around in
+  the background, both for just executing using the standard TensorFlow runtime,
+  and also for allowing automatic differentiation via `tf.gradients`. This means
+  that the graph keeps growing when TensorFlow functions are called inside a
+  Python loop. This library provides a `tf.new_step` method that clears the
+  graph as well as the cached tensors that have been kept around for gradient
+  computation. `tf.new_step` can be used as a context manager around, say, a
+  training loop to clear the graph after each training step.
+
+  ```python
+  x = tf.Variable(constant_op.constant(1.0))
+  for i in range(10):
+    # Create a new training step
+    with tf.new_step() as step:
+      # Perform computation and variable updates
+      step.run(tf.assign_sub(x, 0.1))
+      self.assertAllClose(tf.identity(x), 1.0 - (i + 1) * 0.1)
+      # The graph within this context is cleared at this point.
+  ```
+
+* **Speed:** Redundant graph construction and caching of tensor values adds
+  overheads that are not present in standard TensorFlow, where typically the
+  graph is constructed once and executed multiple times. This library is
+  intended as a vehicle to prototype the imperative programming model in
+  TensorFlow. The runtime overheads can be alleviated with various optimizations
+  to the runtime that would equally benefit the deferred execution mode as
+  well.
@@ -193,8 +193,8 @@ class DataSet(object):
       start = 0
       self._index_in_epoch = batch_size - rest_num_examples
       end = self._index_in_epoch
-      images_new_part = self.images[start:end]
-      labels_new_part = self.labels[start:end]
+      images_new_part = self._images[start:end]
+      labels_new_part = self._labels[start:end]
       return numpy.concatenate((images_rest_part, images_new_part), axis=0) , numpy.concatenate((labels_rest_part, labels_new_part), axis=0)
     else:
       self._index_in_epoch += batch_size
@@ -171,7 +171,7 @@ def _dnn_linear_combined_model_fn(features, labels, mode, params, config=None):
   dnn_feature_columns = params.get("dnn_feature_columns")
   dnn_optimizer = params.get("dnn_optimizer") or "Adagrad"
   dnn_hidden_units = params.get("dnn_hidden_units")
-  dnn_activation_fn = params.get("dnn_activation_fn")
+  dnn_activation_fn = params.get("dnn_activation_fn") or nn.relu
   dnn_dropout = params.get("dnn_dropout")
   gradient_clip_norm = params.get("gradient_clip_norm")
   input_layer_min_slice_size = (
@@ -346,7 +346,7 @@ class _DNNLinearCombinedEstimator(estimator.Estimator):
                dnn_feature_columns=None,
                dnn_optimizer=None,
                dnn_hidden_units=None,
-               dnn_activation_fn=nn.relu,
+               dnn_activation_fn=None,
                dnn_dropout=None,
                gradient_clip_norm=None,
                config=None,
@@ -407,7 +407,8 @@ class Experiment(object):
     performing evaluation allows for the second.
 
     Returns:
-      The result of the `evaluate` call to the `Estimator`.
+      The result of the `evaluate` call to the `Estimator` as well as the
+      export results using the specified `ExportStrategy`.
     """
     # The directory to which evaluation summaries are written are determined
     # by adding a suffix to 'eval'; that suffix is the 'name' parameter to
@@ -32,7 +32,7 @@ application that will let you check your application.
 First, clone this TensorFlow repository.
 
 You will need to download all dependencies as well. We have provided a script
-that does so, to be run (as with all commands) at the root of the repository:
+that does so, to be run (as with all commands) **at the root of the repository**:
 
 ```bash
 tensorflow/contrib/makefile/download_dependencies.sh
@@ -142,6 +142,8 @@ xcode-select --install
 If this is a new install, you will need to run XCode once to agree to the
 license before continuing.
 
+(You will also need to have [Homebrew](http://brew.sh/) installed.)
+
 Then install [automake](https://en.wikipedia.org/wiki/Automake)/[libtool](https://en.wikipedia.org/wiki/GNU_Libtool):
 
 ```bash
@@ -165,7 +165,7 @@ CXXFLAGS="-frtti -fexceptions ${march_option} \
 -I${NDK_ROOT}/sources/cxx-stl/gnu-libstdc++/4.9/include \
 -I${NDK_ROOT}/sources/cxx-stl/gnu-libstdc++/4.9/libs/${ARCHITECTURE}/include" \
 LDFLAGS="-L${NDK_ROOT}/sources/cxx-stl/gnu-libstdc++/4.9/libs/${ARCHITECTURE}" \
-LIBS="-lz -lgnustl_static"
+LIBS="-llog -lz -lgnustl_static"
 
 if [ $? -ne 0 ]
 then
@@ -102,6 +102,7 @@ Status LoadJpegFile(string file_name, std::vector<tensorflow::uint8>* data,
   cinfo.client_data = &jpeg_jmpbuf;
   jerr.error_exit = CatchError;
   if (setjmp(jpeg_jmpbuf)) {
+    fclose(infile);
     return tensorflow::errors::Unknown("JPEG decoding failed");
   }
 
@@ -115,7 +115,7 @@ cuda_py_tests(
 
 cuda_py_tests(
     name = "core_rnn_test",
-    size = "medium",
+    size = "large",
     srcs = ["python/kernel_tests/core_rnn_test.py"],
     additional_deps = [
         ":rnn_py",
@@ -18,7 +18,6 @@ from __future__ import absolute_import
 from __future__ import division
 from __future__ import print_function
 
-from tensorflow.contrib.util import loader
 from tensorflow.python.framework import dtypes
 from tensorflow.python.framework import ops
 from tensorflow.python.ops import array_ops
@@ -1329,6 +1329,7 @@ def batch_sequences_with_states(input_key,
       input_key=key,
       input_sequences=sequences,
      input_context=context,
+      input_length=tf.shape(sequences["input"])[0],
       initial_states=initial_states,
       num_unroll=num_unroll,
       batch_size=batch_size,
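The added `input_length` uses `tf.shape`, the runtime shape op, rather than the static `.shape` attribute, since the sequence length is generally unknown at graph-construction time. A quick sketch of the distinction (the placeholder shape is illustrative):

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 4])  # leading dim unknown statically
print(x.shape)           # (?, 4): static shape, first dimension undetermined
length = tf.shape(x)[0]  # dynamic shape: an int32 tensor evaluated at run time
```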
@@ -1197,7 +1197,10 @@ cc_library(
     ],
     copts = tf_copts(),
     defines = tf_additional_lib_defines(),
-    linkopts = ["-ldl"],
+    linkopts = select({
+        "//tensorflow:freebsd": [],
+        "//conditions:default": ["-ldl"],
+    }),
     deps = tf_additional_lib_deps() + [
         ":lib_hash_crc32c_accelerate_internal",
         ":lib_proto_parsing",
@@ -1224,7 +1227,10 @@ cc_library(
     ],
     hdrs = ["lib/gif/gif_io.h"],
     copts = tf_copts(),
-    linkopts = ["-ldl"],
+    linkopts = select({
+        "//tensorflow:freebsd": [],
+        "//conditions:default": ["-ldl"],
+    }),
     deps = [
         ":lib",
         "//tensorflow/core/platform/default/build_config:gif",
@@ -1243,7 +1249,10 @@ cc_library(
         "lib/jpeg/jpeg_mem.h",
     ],
     copts = tf_copts(),
-    linkopts = ["-ldl"],
+    linkopts = select({
+        "//tensorflow:freebsd": [],
+        "//conditions:default": ["-ldl"],
+    }),
     deps = [
         ":lib",
         "//tensorflow/core/platform/default/build_config:jpeg",
@@ -1321,8 +1330,10 @@ tf_cuda_library(
         "util/tensor_slice_util.h",
     ],
     copts = tf_copts(),
-    linkopts = [
-        "-ldl",
+    linkopts = select({
+        "//tensorflow:freebsd": [],
+        "//conditions:default": ["-ldl"],
+    }) + [
         "-lm",
     ],
     deps = [
@@ -304,7 +304,8 @@ std::unique_ptr<Master> GrpcServer::CreateMaster(MasterEnv* master_env) {
 /* static */
 Status GrpcServer::Create(const ServerDef& server_def, Env* env,
                           std::unique_ptr<ServerInterface>* out_server) {
-  std::unique_ptr<GrpcServer> ret(new GrpcServer(server_def, Env::Default()));
+  std::unique_ptr<GrpcServer> ret(new GrpcServer(server_def,
+                                                 env == nullptr ? Env::Default() : env));
   TF_RETURN_IF_ERROR(ret->Init());
   *out_server = std::move(ret);
   return Status::OK();
@@ -254,7 +254,7 @@ struct UnsortedSegmentMaxFunctor<CPUDevice, T, Index>
                   typename TTypes<Index>::ConstFlat segment_ids,
                   const Index data_size, const T* data,
                   typename TTypes<T, 2>::Tensor output) override {
-    output.setConstant(std::numeric_limits<T>::min());
+    output.setConstant(std::numeric_limits<T>::lowest());
     if (data_size == 0) {
       return;
     }
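For context on this one-line fix: in C++, `std::numeric_limits<T>::min()` is the smallest *positive* value for floating-point `T`, while `lowest()` is the most negative, so seeding a max-reduction with `min()` breaks on all-negative segments. A NumPy sketch of the same distinction (values shown are for `float32`):

```python
import numpy as np

# np.finfo(...).tiny mirrors C++ numeric_limits<float>::min(): the smallest
# positive normal value, not the most negative one.
print(np.finfo(np.float32).tiny)  # ~1.18e-38
# np.finfo(...).min mirrors numeric_limits<float>::lowest(): the true minimum.
print(np.finfo(np.float32).min)   # ~-3.4e+38

# Seeding a running max with the wrong "min" fails for all-negative data:
data = np.array([-5.0, -2.0, -7.0], dtype=np.float32)
bad_init = np.finfo(np.float32).tiny   # analogous to the old ::min() seed
good_init = np.finfo(np.float32).min   # analogous to the new ::lowest() seed
print(max(bad_init, data.max()))   # 1.18e-38, wrong: every element is below it
print(max(good_init, data.max()))  # -2.0, correct
```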
@@ -31,6 +31,12 @@ limitations under the License.
 #endif
 #endif  /* __SSE4_2__ */
 
+// This version of Apple clang has a bug:
+// https://llvm.org/bugs/show_bug.cgi?id=25510
+#if defined(__APPLE__) && (__clang_major__ <= 8)
+#undef USE_SSE_CRC32C
+#endif
+
 #ifdef USE_SSE_CRC32C
 #include <nmmintrin.h>
 #endif
@@ -25373,6 +25373,59 @@ op {
   summary: "Computes the sum along segments of a tensor."
   description: "Read [the section on\nSegmentation](../../api_docs/python/math_ops.md#segmentation) for an explanation\nof segments.\n\nComputes a tensor such that\n`(output[i] = sum_{j...} data[j...]` where the sum is over tuples `j...` such\nthat `segment_ids[j...] == i`. Unlike `SegmentSum`, `segment_ids`\nneed not be sorted and need not cover all values in the full\nrange of valid values.\n\nIf the sum is empty for a given segment ID `i`, `output[i] = 0`.\n\n`num_segments` should equal the number of distinct segment IDs.\n\n<div style=\"width:70%; margin:auto; margin-bottom:10px; margin-top:20px;\">\n<img style=\"width:100%\" src=\"../../images/UnsortedSegmentSum.png\" alt>\n</div>"
 }
+op {
+  name: "UnsortedSegmentSum"
+  input_arg {
+    name: "data"
+    type_attr: "T"
+  }
+  input_arg {
+    name: "segment_ids"
+    description: "A tensor whose shape is a prefix of `data.shape`."
+    type_attr: "Tindices"
+  }
+  input_arg {
+    name: "num_segments"
+    type: DT_INT32
+  }
+  output_arg {
+    name: "output"
+    description: "Has same shape as data, except for the first `segment_ids.rank`\ndimensions, which are replaced with a single dimension which has size\n`num_segments`."
+    type_attr: "T"
+  }
+  attr {
+    name: "T"
+    type: "type"
+    allowed_values {
+      list {
+        type: DT_FLOAT
+        type: DT_DOUBLE
+        type: DT_INT64
+        type: DT_INT32
+        type: DT_UINT8
+        type: DT_UINT16
+        type: DT_INT16
+        type: DT_INT8
+        type: DT_QINT8
+        type: DT_QUINT8
+        type: DT_QINT32
+        type: DT_HALF
+      }
+    }
+  }
+  attr {
+    name: "Tindices"
+    type: "type"
+    allowed_values {
+      list {
+        type: DT_INT32
+        type: DT_INT64
+      }
+    }
+  }
+  summary: "Computes the max along segments of a tensor."
+  description: "Read [the section on\nSegmentation](../../api_docs/python/math_ops.md#segmentation) for an explanation\nof segments.\n\nComputes a tensor such that\n\\\\(output_i = \\sum_j data_j\\\\) where sum is over `j` such\nthat `segment_ids[j] == i`. Unlike `SegmentSum`, `segment_ids`\nneed not be sorted and need not cover all values in the full\n range of valid values.\n\nIf the sum is empty for a given segment ID `i`, `output[i] = 0`.\n\n`num_segments` should equal the number of distinct segment IDs.\n\n<div style=\"width:70%; margin:auto; margin-bottom:10px; margin-top:20px;\">\n<img style=\"width:100%\" src=\"../../images/UnsortedSegmentSum.png\" alt>\n</div>"
+}
 op {
   name: "Unstage"
   output_arg {
@@ -131,7 +131,7 @@ error::Code ErrnoToCode(int err_number) {
     case ENETUNREACH:  // Network unreachable
     case ENOLCK:       // No locks available
     case ENOLINK:      // Link has been severed
-#if !(defined(__APPLE__) || defined(_WIN32))
+#if !(defined(__APPLE__) || defined(__FreeBSD__) || defined(_WIN32))
     case ENONET:  // Machine is not on the network
 #endif
       code = error::UNAVAILABLE;
@@ -24,6 +24,7 @@ limitations under the License.
 #include <string.h>
 #include <sys/ioctl.h>
+#include <sys/syscall.h>
 #include <sys/types.h>
 #include <unistd.h>
 
 #include "tensorflow/core/lib/strings/stringprintf.h"
@@ -16,6 +16,8 @@ limitations under the License.
 #ifndef TENSORFLOW_PLATFORM_PROFILEUTILS_ANDROID_ARMV7A_CPU_UTILS_HELPER_H__
 #define TENSORFLOW_PLATFORM_PROFILEUTILS_ANDROID_ARMV7A_CPU_UTILS_HELPER_H__
 
+#include <sys/types.h>
+
 #include "tensorflow/core/platform/macros.h"
 #include "tensorflow/core/platform/profile_utils/i_cpu_utils_helper.h"
 #include "tensorflow/core/platform/types.h"
@@ -20,7 +20,7 @@ limitations under the License.
 
 #define TF_MAJOR_VERSION 1
 #define TF_MINOR_VERSION 0
-#define TF_PATCH_VERSION 0-rc1
+#define TF_PATCH_VERSION 0-rc2
 
 // TF_VERSION_SUFFIX is non-empty for pre-releases (e.g. "-alpha", "-alpha.1",
 // "-beta", "-rc", "-rc.1")
@@ -44,10 +44,11 @@ def main(unused_argv):
   # Fit
   regressor.fit(x_train, y_train, steps=5000, batch_size=1)
 
+  # Transform
+  x_transformed = scaler.transform(x_test)
+
   # Predict and score
-  y_predicted = list(
-      regressor.predict(
-          scaler.transform(x_test), as_iterable=True))
+  y_predicted = list(regressor.predict(x_transformed, as_iterable=True))
   score = metrics.mean_squared_error(y_predicted, y_test)
 
   print('MSE: {0:f}'.format(score))
@@ -43,7 +43,7 @@ def my_model(features, target):
 
   # Compute logits (1 per class) and compute loss.
   logits = layers.fully_connected(features, 3, activation_fn=None)
-  loss = tf.contrib.losses.softmax_cross_entropy(logits, target)
+  loss = tf.losses.softmax_cross_entropy(target, logits)
 
   # Create a tensor for training op.
   train_op = tf.contrib.layers.optimize_loss(
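Note that the migration in this and the following examples is not just a namespace change: the deprecated `tf.contrib.losses.softmax_cross_entropy` took `(logits, onehot_labels)`, while `tf.losses.softmax_cross_entropy` takes `(onehot_labels, logits)`. A minimal sketch of the new call with made-up values:

```python
import tensorflow as tf

onehot_labels = tf.one_hot([0, 2], depth=3)
logits = tf.constant([[2.0, 0.1, 0.3], [0.2, 0.4, 2.5]])

# Old (deprecated): tf.contrib.losses.softmax_cross_entropy(logits, onehot_labels)
# New: the labels come first.
loss = tf.losses.softmax_cross_entropy(onehot_labels, logits)
```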
@@ -67,7 +67,7 @@ def conv_model(feature, target, mode):
 
   # Compute logits (1 per class) and compute loss.
   logits = layers.fully_connected(h_fc1, 10, activation_fn=None)
-  loss = tf.contrib.losses.softmax_cross_entropy(logits, target)
+  loss = tf.losses.softmax_cross_entropy(target, logits)
 
   # Create a tensor for training op.
   train_op = layers.optimize_loss(
@@ -60,7 +60,7 @@ def my_model(features, target):
   with tf.device('/gpu:2'):
     # Compute logits (1 per class) and compute loss.
     logits = layers.fully_connected(features, 3, activation_fn=None)
-    loss = tf.contrib.losses.softmax_cross_entropy(logits, target)
+    loss = tf.losses.softmax_cross_entropy(target, logits)
 
     # Create a tensor for training op.
     train_op = tf.contrib.layers.optimize_loss(
@@ -144,7 +144,7 @@ def res_net(x, y, activation=tf.nn.relu):
 
   target = tf.one_hot(y, depth=10, dtype=tf.float32)
   logits = tf.contrib.layers.fully_connected(net, 10, activation_fn=None)
-  loss = tf.contrib.losses.softmax_cross_entropy(logits, target)
+  loss = tf.losses.softmax_cross_entropy(target, logits)
   return tf.softmax(logits), loss
 
 
@@ -49,7 +49,7 @@ def char_cnn_model(features, target):
   """Character level convolutional neural network model to predict classes."""
   target = tf.one_hot(target, 15, 1, 0)
   byte_list = tf.reshape(
-      tf.one_hot(features, 256, 1, 0), [-1, MAX_DOCUMENT_LENGTH, 256, 1])
+      tf.one_hot(features, 256), [-1, MAX_DOCUMENT_LENGTH, 256, 1])
   with tf.variable_scope('CNN_Layer1'):
     # Apply Convolution filtering on input sequence.
     conv1 = tf.contrib.layers.convolution2d(
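Dropping the explicit `on_value`/`off_value` here is likely more than cosmetic: with integer `1, 0`, `tf.one_hot` infers an integer dtype, whereas the defaults yield `float32`, which is what the convolution expects. A small sketch:

```python
import tensorflow as tf

ids = tf.constant([1, 3])
a = tf.one_hot(ids, 4, 1, 0)  # dtype inferred from on/off values: int32
b = tf.one_hot(ids, 4)        # defaults on_value=1.0, off_value=0.0: float32
print(a.dtype, b.dtype)       # <dtype: 'int32'> <dtype: 'float32'>
```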
@@ -73,7 +73,7 @@ def char_cnn_model(features, target):
 
   # Apply regular WX + B and classification.
   logits = tf.contrib.layers.fully_connected(pool2, 15, activation_fn=None)
-  loss = tf.contrib.losses.softmax_cross_entropy(logits, target)
+  loss = tf.losses.softmax_cross_entropy(target, logits)
 
   train_op = tf.contrib.layers.optimize_loss(
       loss,
@@ -112,6 +112,8 @@ def generate_batch(batch_size, num_skips, skip_window):
       labels[i * num_skips + j, 0] = buffer[target]
     buffer.append(data[data_index])
     data_index = (data_index + 1) % len(data)
+  # Backtrack a little bit to avoid skipping words in the end of a batch
+  data_index = (data_index + len(data) - span) % len(data)
   return batch, labels
 
 batch, labels = generate_batch(batch_size=8, num_skips=2, skip_window=1)
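The added backtrack simply rewinds the cursor by one window (in this script, `span = 2 * skip_window + 1`). A small worked example with hypothetical numbers:

```python
# Hypothetical values for illustration: a 16-word corpus, skip_window = 1.
span = 2 * 1 + 1  # words in the sliding window: [skip_window, target, skip_window]
data_len, data_index = 16, 10
data_index = (data_index + data_len - span) % data_len
print(data_index)  # 7: the cursor backs up by `span`, so the words consumed at
                   # the end of this batch are not skipped by the next batch
```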
@@ -219,7 +219,7 @@
     "  print('Extracting data for %s. This may take a while. Please wait.' % root)\n",
     "  tar = tarfile.open(filename)\n",
     "  sys.stdout.flush()\n",
-    "  tar.extractall()\n",
+    "  tar.extractall(data_root)\n",
     "  tar.close()\n",
     "  data_folders = [\n",
     "    os.path.join(root, d) for d in sorted(os.listdir(root))\n",
@ -1,4 +1,185 @@
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.TaggedRunMetadata.ByteSize()` {#TaggedRunMetadata.ByteSize}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.TaggedRunMetadata.Clear()` {#TaggedRunMetadata.Clear}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.TaggedRunMetadata.ClearExtension(extension_handle)` {#TaggedRunMetadata.ClearExtension}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.TaggedRunMetadata.ClearField(field_name)` {#TaggedRunMetadata.ClearField}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.TaggedRunMetadata.CopyFrom(other_msg)` {#TaggedRunMetadata.CopyFrom}
|
||||
|
||||
Copies the content of the specified message into the current message.
|
||||
|
||||
The method clears the current message and then merges the specified
|
||||
message using MergeFrom.
|
||||
|
||||
##### Args:
|
||||
|
||||
|
||||
* <b>`other_msg`</b>: Message to copy into the current one.
|
||||
|
||||
|
||||
- - -

#### `tf.summary.TaggedRunMetadata.DiscardUnknownFields()` {#TaggedRunMetadata.DiscardUnknownFields}

- - -

#### `tf.summary.TaggedRunMetadata.FindInitializationErrors()` {#TaggedRunMetadata.FindInitializationErrors}

Finds required fields which are not initialized.

##### Returns:

  A list of strings. Each string is a path to an uninitialized field from
  the top-level message, e.g. "foo.bar[5].baz".

- - -

#### `tf.summary.TaggedRunMetadata.FromString(s)` {#TaggedRunMetadata.FromString}

- - -

#### `tf.summary.TaggedRunMetadata.HasExtension(extension_handle)` {#TaggedRunMetadata.HasExtension}

- - -

#### `tf.summary.TaggedRunMetadata.HasField(field_name)` {#TaggedRunMetadata.HasField}

- - -

#### `tf.summary.TaggedRunMetadata.IsInitialized(errors=None)` {#TaggedRunMetadata.IsInitialized}

Checks if all required fields of a message are set.

##### Args:

* <b>`errors`</b>: A list which, if provided, will be populated with the field
    paths of all missing required fields.

##### Returns:

  True iff the specified message has all required fields set.

- - -

#### `tf.summary.TaggedRunMetadata.ListFields()` {#TaggedRunMetadata.ListFields}

- - -

#### `tf.summary.TaggedRunMetadata.MergeFrom(msg)` {#TaggedRunMetadata.MergeFrom}

- - -

#### `tf.summary.TaggedRunMetadata.MergeFromString(serialized)` {#TaggedRunMetadata.MergeFromString}

- - -

#### `tf.summary.TaggedRunMetadata.ParseFromString(serialized)` {#TaggedRunMetadata.ParseFromString}

Parse serialized protocol buffer data into this message.

Like MergeFromString(), except we clear the object first and
do not return the value that MergeFromString returns.

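A hedged sketch of the usual round trip (`SerializeToString` on one side, `ParseFromString` on the other):

    import tensorflow as tf

    md = tf.summary.TaggedRunMetadata(tag='step_100', run_metadata=b'...')
    wire = md.SerializeToString()    # bytes suitable for storage or transport

    restored = tf.summary.TaggedRunMetadata()
    restored.ParseFromString(wire)   # clears `restored` first, then merges
    assert restored == md
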
- - -

#### `tf.summary.TaggedRunMetadata.RegisterExtension(extension_handle)` {#TaggedRunMetadata.RegisterExtension}

- - -

#### `tf.summary.TaggedRunMetadata.SerializePartialToString()` {#TaggedRunMetadata.SerializePartialToString}

- - -

#### `tf.summary.TaggedRunMetadata.SerializeToString()` {#TaggedRunMetadata.SerializeToString}

- - -

#### `tf.summary.TaggedRunMetadata.SetInParent()` {#TaggedRunMetadata.SetInParent}

Sets the _cached_byte_size_dirty bit to true,
and propagates this to our listener iff this was a state change.

- - -

#### `tf.summary.TaggedRunMetadata.WhichOneof(oneof_name)` {#TaggedRunMetadata.WhichOneof}

Returns the name of the currently set field inside a oneof, or None.

- - -

#### `tf.summary.TaggedRunMetadata.__deepcopy__(memo=None)` {#TaggedRunMetadata.__deepcopy__}

- - -

#### `tf.summary.TaggedRunMetadata.__eq__(other)` {#TaggedRunMetadata.__eq__}

- - -

#### `tf.summary.TaggedRunMetadata.__getstate__()` {#TaggedRunMetadata.__getstate__}

@@ -6,3 +187,66 @@

Support the pickle protocol.

- - -

#### `tf.summary.TaggedRunMetadata.__hash__()` {#TaggedRunMetadata.__hash__}

- - -

#### `tf.summary.TaggedRunMetadata.__init__(**kwargs)` {#TaggedRunMetadata.__init__}

- - -

#### `tf.summary.TaggedRunMetadata.__ne__(other_msg)` {#TaggedRunMetadata.__ne__}

- - -

#### `tf.summary.TaggedRunMetadata.__repr__()` {#TaggedRunMetadata.__repr__}

- - -

#### `tf.summary.TaggedRunMetadata.__setstate__(state)` {#TaggedRunMetadata.__setstate__}

Support the pickle protocol.

- - -

#### `tf.summary.TaggedRunMetadata.__str__()` {#TaggedRunMetadata.__str__}

- - -

#### `tf.summary.TaggedRunMetadata.__unicode__()` {#TaggedRunMetadata.__unicode__}

- - -

#### `tf.summary.TaggedRunMetadata.run_metadata` {#TaggedRunMetadata.run_metadata}

Magic attribute generated for "run_metadata" proto field.

- - -

#### `tf.summary.TaggedRunMetadata.tag` {#TaggedRunMetadata.tag}

Magic attribute generated for "tag" proto field.

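A hedged usage sketch of the two fields (the field names come straight from the proto; the rest is illustrative): `run_metadata` holds a serialized `RunMetadata` proto and `tag` labels it, which is how per-step profiling data is keyed for TensorBoard.

    import tensorflow as tf

    run_md = tf.RunMetadata()  # normally populated by a session.run() call
    tagged = tf.summary.TaggedRunMetadata(
        tag='train_step_100',
        run_metadata=run_md.SerializeToString())
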
@@ -1,4 +1,185 @@

- - -

#### `tf.summary.SummaryDescription.ByteSize()` {#SummaryDescription.ByteSize}

- - -

#### `tf.summary.SummaryDescription.Clear()` {#SummaryDescription.Clear}

- - -

#### `tf.summary.SummaryDescription.ClearExtension(extension_handle)` {#SummaryDescription.ClearExtension}

- - -

#### `tf.summary.SummaryDescription.ClearField(field_name)` {#SummaryDescription.ClearField}

- - -

#### `tf.summary.SummaryDescription.CopyFrom(other_msg)` {#SummaryDescription.CopyFrom}

Copies the content of the specified message into the current message.

The method clears the current message and then merges the specified
message using MergeFrom.

##### Args:

* <b>`other_msg`</b>: Message to copy into the current one.

- - -

#### `tf.summary.SummaryDescription.DiscardUnknownFields()` {#SummaryDescription.DiscardUnknownFields}

- - -

#### `tf.summary.SummaryDescription.FindInitializationErrors()` {#SummaryDescription.FindInitializationErrors}

Finds required fields which are not initialized.

##### Returns:

  A list of strings. Each string is a path to an uninitialized field from
  the top-level message, e.g. "foo.bar[5].baz".

- - -

#### `tf.summary.SummaryDescription.FromString(s)` {#SummaryDescription.FromString}

- - -

#### `tf.summary.SummaryDescription.HasExtension(extension_handle)` {#SummaryDescription.HasExtension}

- - -

#### `tf.summary.SummaryDescription.HasField(field_name)` {#SummaryDescription.HasField}

- - -

#### `tf.summary.SummaryDescription.IsInitialized(errors=None)` {#SummaryDescription.IsInitialized}

Checks if all required fields of a message are set.

##### Args:

* <b>`errors`</b>: A list which, if provided, will be populated with the field
    paths of all missing required fields.

##### Returns:

  True iff the specified message has all required fields set.

- - -

#### `tf.summary.SummaryDescription.ListFields()` {#SummaryDescription.ListFields}

- - -

#### `tf.summary.SummaryDescription.MergeFrom(msg)` {#SummaryDescription.MergeFrom}

- - -

#### `tf.summary.SummaryDescription.MergeFromString(serialized)` {#SummaryDescription.MergeFromString}

- - -

#### `tf.summary.SummaryDescription.ParseFromString(serialized)` {#SummaryDescription.ParseFromString}

Parse serialized protocol buffer data into this message.

Like MergeFromString(), except we clear the object first and
do not return the value that MergeFromString returns.

- - -

#### `tf.summary.SummaryDescription.RegisterExtension(extension_handle)` {#SummaryDescription.RegisterExtension}

- - -

#### `tf.summary.SummaryDescription.SerializePartialToString()` {#SummaryDescription.SerializePartialToString}

- - -

#### `tf.summary.SummaryDescription.SerializeToString()` {#SummaryDescription.SerializeToString}

- - -

#### `tf.summary.SummaryDescription.SetInParent()` {#SummaryDescription.SetInParent}

Sets the _cached_byte_size_dirty bit to true,
and propagates this to our listener iff this was a state change.

- - -

#### `tf.summary.SummaryDescription.WhichOneof(oneof_name)` {#SummaryDescription.WhichOneof}

Returns the name of the currently set field inside a oneof, or None.

- - -

#### `tf.summary.SummaryDescription.__deepcopy__(memo=None)` {#SummaryDescription.__deepcopy__}

- - -

#### `tf.summary.SummaryDescription.__eq__(other)` {#SummaryDescription.__eq__}

- - -

#### `tf.summary.SummaryDescription.__getstate__()` {#SummaryDescription.__getstate__}

@@ -6,3 +187,59 @@

Support the pickle protocol.

- - -

#### `tf.summary.SummaryDescription.__hash__()` {#SummaryDescription.__hash__}

- - -

#### `tf.summary.SummaryDescription.__init__(**kwargs)` {#SummaryDescription.__init__}

- - -

#### `tf.summary.SummaryDescription.__ne__(other_msg)` {#SummaryDescription.__ne__}

- - -

#### `tf.summary.SummaryDescription.__repr__()` {#SummaryDescription.__repr__}

- - -

#### `tf.summary.SummaryDescription.__setstate__(state)` {#SummaryDescription.__setstate__}

Support the pickle protocol.

- - -

#### `tf.summary.SummaryDescription.__str__()` {#SummaryDescription.__str__}

- - -

#### `tf.summary.SummaryDescription.__unicode__()` {#SummaryDescription.__unicode__}

- - -

#### `tf.summary.SummaryDescription.type_hint` {#SummaryDescription.type_hint}

Magic attribute generated for "type_hint" proto field.

@@ -177,125 +177,6 @@ Checks that for all elements of farray1 and farray2

* <b>`err`</b>: a float value.

- - -

#### `tf.test.TestCase.assertBetween(value, minv, maxv, msg=None)` {#TestCase.assertBetween}

Asserts that value is between minv and maxv (inclusive).

- - -

#### `tf.test.TestCase.assertCommandFails(command, regexes, env=None, close_fds=True, msg=None)` {#TestCase.assertCommandFails}

Asserts a shell command fails and the error matches a regex in a list.

##### Args:

* <b>`command`</b>: List or string representing the command to run.
* <b>`regexes`</b>: the list of regular expression strings.
* <b>`env`</b>: Dictionary of environment variable settings.
* <b>`close_fds`</b>: Whether or not to close all open fd's in the child after
    forking.
* <b>`msg`</b>: Optional message to report on failure.

- - -

#### `tf.test.TestCase.assertCommandSucceeds(command, regexes=('',), env=None, close_fds=True, msg=None)` {#TestCase.assertCommandSucceeds}

Asserts that a shell command succeeds (i.e. exits with code 0).

##### Args:

* <b>`command`</b>: List or string representing the command to run.
* <b>`regexes`</b>: List of regular expression byte strings that match success.
* <b>`env`</b>: Dictionary of environment variable settings.
* <b>`close_fds`</b>: Whether or not to close all open fd's in the child after
    forking.
* <b>`msg`</b>: Optional message to report on failure.

- - -

#### `tf.test.TestCase.assertContainsExactSubsequence(container, subsequence, msg=None)` {#TestCase.assertContainsExactSubsequence}

Assert that "container" contains "subsequence" as an exact subsequence.

Asserts that "container" contains all the elements of "subsequence", in
order, and without other elements interspersed. For example, [1, 2, 3] is an
exact subsequence of [0, 0, 1, 2, 3, 0] but not of [0, 0, 1, 2, 0, 3, 0].

##### Args:

* <b>`container`</b>: the list we're testing for subsequence inclusion.
* <b>`subsequence`</b>: the list we hope will be an exact subsequence of container.
* <b>`msg`</b>: Optional message to report on failure.

- - -

#### `tf.test.TestCase.assertContainsInOrder(strings, target, msg=None)` {#TestCase.assertContainsInOrder}

Asserts that the strings provided are found in the target in order.

This may be useful for checking HTML output.

##### Args:

* <b>`strings`</b>: A list of strings, such as [ 'fox', 'dog' ]
* <b>`target`</b>: A target string in which to look for the strings, such as
    'The quick brown fox jumped over the lazy dog'.
* <b>`msg`</b>: Optional message to report on failure.

- - -

#### `tf.test.TestCase.assertContainsSubsequence(container, subsequence, msg=None)` {#TestCase.assertContainsSubsequence}

Assert that "container" contains "subsequence" as a subsequence.

Asserts that "container" contains all the elements of "subsequence", in
order, but possibly with other elements interspersed. For example, [1, 2, 3]
is a subsequence of [0, 0, 1, 2, 0, 3, 0] but not of [0, 0, 1, 3, 0, 2, 0].

##### Args:

* <b>`container`</b>: the list we're testing for subsequence inclusion.
* <b>`subsequence`</b>: the list we hope will be a subsequence of container.
* <b>`msg`</b>: Optional message to report on failure.

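A hedged illustration of the difference between the two subsequence checks (hypothetical test method, not from the commit):

    import tensorflow as tf

    class SubsequenceTest(tf.test.TestCase):

      def testSubsequenceFlavors(self):
        # Interspersed elements are fine for the plain subsequence check:
        self.assertContainsSubsequence([0, 0, 1, 2, 0, 3, 0], [1, 2, 3])
        # The exact variant additionally requires the elements to be adjacent:
        self.assertContainsExactSubsequence([0, 0, 1, 2, 3, 0], [1, 2, 3])
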
- - -

#### `tf.test.TestCase.assertContainsSubset(expected_subset, actual_set, msg=None)` {#TestCase.assertContainsSubset}

Checks whether actual iterable is a superset of expected iterable.

- - -

#### `tf.test.TestCase.assertCountEqual(*args, **kwargs)` {#TestCase.assertCountEqual}

An unordered sequence specific comparison.

Equivalent to assertItemsEqual(). This method is a compatibility layer
for Python 3k, since 2to3 does not convert assertItemsEqual() calls into
assertCountEqual() calls.

##### Args:

* <b>`expected_seq`</b>: A sequence containing elements we are expecting.
* <b>`actual_seq`</b>: The sequence that we are testing.
* <b>`msg`</b>: The message to be printed if the test fails.

- - -

#### `tf.test.TestCase.assertDeviceEqual(device1, device2)` {#TestCase.assertDeviceEqual}

@@ -318,48 +199,9 @@ Checks whether actual is a superset of expected.

- - -

#### `tf.test.TestCase.assertDictEqual(a, b, msg=None)` {#TestCase.assertDictEqual}

Raises AssertionError if a and b are not equal dictionaries.

##### Args:

#### `tf.test.TestCase.assertDictEqual(d1, d2, msg=None)` {#TestCase.assertDictEqual}

* <b>`a`</b>: A dict, the expected value.
* <b>`b`</b>: A dict, the actual value.
* <b>`msg`</b>: An optional str, the associated message.

##### Raises:

* <b>`AssertionError`</b>: if the dictionaries are not equal.

- - -

#### `tf.test.TestCase.assertEmpty(container, msg=None)` {#TestCase.assertEmpty}

Assert that an object has zero length.

##### Args:

* <b>`container`</b>: Anything that implements the collections.Sized interface.
* <b>`msg`</b>: Optional message to report on failure.

- - -

#### `tf.test.TestCase.assertEndsWith(actual, expected_end, msg=None)` {#TestCase.assertEndsWith}

Assert that actual.endswith(expected_end) is True.

##### Args:

* <b>`actual`</b>: str
* <b>`expected_end`</b>: str
* <b>`msg`</b>: Optional message to report on failure.

- - -

@@ -444,11 +286,10 @@ Included for symmetry with assertIsNone.

- - -

#### `tf.test.TestCase.assertItemsEqual(*args, **kwargs)` {#TestCase.assertItemsEqual}

#### `tf.test.TestCase.assertItemsEqual(expected_seq, actual_seq, msg=None)` {#TestCase.assertItemsEqual}

An unordered sequence specific comparison.

It asserts that actual_seq and expected_seq have the same element counts.
An unordered sequence specific comparison. It asserts that
actual_seq and expected_seq have the same element counts.
Equivalent to::

    self.assertEqual(Counter(iter(actual_seq)),

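The `Equivalent to::` snippet above is cut off by the hunk boundary; a hedged reconstruction of the documented semantics:

    from collections import Counter

    # assertItemsEqual(expected_seq, actual_seq) passes exactly when the
    # element counts agree, regardless of order:
    Counter(iter(actual_seq)) == Counter(iter(expected_seq))
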
@@ -461,30 +302,6 @@ Asserts that each element has the same count in both sequences.

- [0, 1, 1] and [1, 0, 1] compare equal.
- [0, 0, 1] and [0, 1] compare unequal.

##### Args:

* <b>`expected_seq`</b>: A sequence containing elements we are expecting.
* <b>`actual_seq`</b>: The sequence that we are testing.
* <b>`msg`</b>: The message to be printed if the test fails.

- - -

#### `tf.test.TestCase.assertJsonEqual(first, second, msg=None)` {#TestCase.assertJsonEqual}

Asserts that the JSON objects defined in two strings are equal.

A summary of the differences will be included in the failure message
using assertSameStructure.

##### Args:

* <b>`first`</b>: A string containing JSON to decode and compare to second.
* <b>`second`</b>: A string containing JSON to decode and compare to first.
* <b>`msg`</b>: Additional text to include in the failure message.

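A hedged usage sketch (hypothetical test method): the comparison is structural, so whitespace and key order do not matter.

    import tensorflow as tf

    class JsonTest(tf.test.TestCase):

      def testJsonEqual(self):
        self.assertJsonEqual('{"a": 1, "b": [2, 3]}',
                             '{"b": [2, 3], "a": 1}')
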
- - -

@@ -554,13 +371,6 @@ if not.

* <b>`msg`</b>: An optional string message to append to the failure message.

- - -

#### `tf.test.TestCase.assertNoCommonElements(expected_seq, actual_seq, msg=None)` {#TestCase.assertNoCommonElements}

Checks whether actual iterable and expected iterable are disjoint.

- - -

#### `tf.test.TestCase.assertNotAlmostEqual(first, second, places=None, msg=None, delta=None)` {#TestCase.assertNotAlmostEqual}

@@ -591,33 +401,6 @@ as significant digits (measured from the most significant digit).

Objects that are equal automatically fail.

- - -

#### `tf.test.TestCase.assertNotEmpty(container, msg=None)` {#TestCase.assertNotEmpty}

Assert that an object has non-zero length.

##### Args:

* <b>`container`</b>: Anything that implements the collections.Sized interface.
* <b>`msg`</b>: Optional message to report on failure.

- - -

#### `tf.test.TestCase.assertNotEndsWith(actual, unexpected_end, msg=None)` {#TestCase.assertNotEndsWith}

Assert that actual.endswith(unexpected_end) is False.

##### Args:

* <b>`actual`</b>: str
* <b>`unexpected_end`</b>: str
* <b>`msg`</b>: Optional message to report on failure.

- - -

#### `tf.test.TestCase.assertNotEqual(first, second, msg=None)` {#TestCase.assertNotEqual}

@@ -655,20 +438,6 @@ Included for symmetry with assertIsInstance.

Fail the test if the text matches the regular expression.

- - -

#### `tf.test.TestCase.assertNotStartsWith(actual, unexpected_start, msg=None)` {#TestCase.assertNotStartsWith}

Assert that actual.startswith(unexpected_start) is False.

##### Args:

* <b>`actual`</b>: str
* <b>`unexpected_start`</b>: str
* <b>`msg`</b>: Optional message to report on failure.

- - -

#### `tf.test.TestCase.assertProtoEquals(expected_message_maybe_ascii, message)` {#TestCase.assertProtoEquals}

@@ -743,38 +512,6 @@ Asserts that the message in a raised exception matches a regexp.

* <b>`kwargs`</b>: Extra kwargs.

- - -

#### `tf.test.TestCase.assertRaisesWithLiteralMatch(expected_exception, expected_exception_message, callable_obj=None, *args, **kwargs)` {#TestCase.assertRaisesWithLiteralMatch}

Asserts that the message in a raised exception equals the given string.

Unlike assertRaisesRegexp, this method takes a literal string, not
a regular expression.

    with self.assertRaisesWithLiteralMatch(ExType, 'message'):
      DoSomething()

##### Args:

* <b>`expected_exception`</b>: Exception class expected to be raised.
* <b>`expected_exception_message`</b>: String message expected in the raised
    exception. For a raised exception e, expected_exception_message must
    equal str(e).
* <b>`callable_obj`</b>: Function to be called, or None to return a context.
* <b>`args`</b>: Extra args.
* <b>`kwargs`</b>: Extra kwargs.

##### Returns:

  A context manager if callable_obj is None. Otherwise, None.

##### Raises:

  self.failureException if callable_obj does not raise a matching exception.

- - -

#### `tf.test.TestCase.assertRaisesWithPredicateMatch(exception_type, expected_err_re_or_predicate)` {#TestCase.assertRaisesWithPredicateMatch}

@@ -799,71 +536,6 @@ predicate search.

exception.

- - -

#### `tf.test.TestCase.assertRaisesWithRegexpMatch(expected_exception, expected_regexp, callable_obj=None, *args, **kwargs)` {#TestCase.assertRaisesWithRegexpMatch}

Asserts that the message in a raised exception matches the given regexp.

This is just a wrapper around assertRaisesRegexp. Please use
assertRaisesRegexp instead of assertRaisesWithRegexpMatch.

##### Args:

* <b>`expected_exception`</b>: Exception class expected to be raised.
* <b>`expected_regexp`</b>: Regexp (re pattern object or string) expected to be
    found in error message.
* <b>`callable_obj`</b>: Function to be called, or None to return a context.
* <b>`args`</b>: Extra args.
* <b>`kwargs`</b>: Extra keyword args.

##### Returns:

  A context manager if callable_obj is None. Otherwise, None.

##### Raises:

  self.failureException if callable_obj does not raise a matching exception.

- - -

#### `tf.test.TestCase.assertRegexMatch(actual_str, regexes, message=None)` {#TestCase.assertRegexMatch}

Asserts that at least one regex in regexes matches str.

If possible you should use assertRegexpMatches, which is a simpler
version of this method. assertRegexpMatches takes a single regular
expression (a string or re compiled object) instead of a list.

Notes:

1. This function uses substring matching, i.e. the matching
   succeeds if *any* substring of the error message matches *any*
   regex in the list. This is more convenient for the user than
   full-string matching.

2. If regexes is the empty list, the matching will always fail.

3. Use regexes=[''] for a regex that will always pass.

4. '.' matches any single character *except* the newline. To
   match any character, use '(.|\n)'.

5. '^' matches the beginning of each line, not just the beginning
   of the string. Similarly, '$' matches the end of each line.

6. An exception will be thrown if regexes contains an invalid
   regex.

Args:

  actual_str: The string we try to match with the items in regexes.
  regexes: The regular expressions we want to match against str.
      See "Notes" above for detailed notes on how this is interpreted.
  message: The message to be printed if the test fails.

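A hedged sketch of the list-of-regexes behavior (hypothetical test method):

    import tensorflow as tf

    class RegexMatchTest(tf.test.TestCase):

      def testAnyRegexMayMatch(self):
        # Passes because the second pattern matches a substring of the string.
        self.assertRegexMatch('error: disk full', ['timeout', r'disk \w+'])
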
- - -

#### `tf.test.TestCase.assertRegexpMatches(text, expected_regexp, msg=None)` {#TestCase.assertRegexpMatches}

@@ -871,79 +543,6 @@ Asserts that at least one regex in regexes matches str.

Fail the test unless the text matches the regular expression.

- - -

#### `tf.test.TestCase.assertSameElements(expected_seq, actual_seq, msg=None)` {#TestCase.assertSameElements}

Assert that two sequences have the same elements (in any order).

This method, unlike assertItemsEqual, doesn't care about any
duplicates in the expected and actual sequences.

    >> assertSameElements([1, 1, 1, 0, 0, 0], [0, 1])
    # Doesn't raise an AssertionError

If possible, you should use assertItemsEqual instead of
assertSameElements.

##### Args:

* <b>`expected_seq`</b>: A sequence containing elements we are expecting.
* <b>`actual_seq`</b>: The sequence that we are testing.
* <b>`msg`</b>: The message to be printed if the test fails.

- - -

#### `tf.test.TestCase.assertSameStructure(a, b, aname='a', bname='b', msg=None)` {#TestCase.assertSameStructure}

Asserts that two values contain the same structural content.

The two arguments should be data trees consisting of trees of dicts and
lists. They will be deeply compared by walking into the contents of dicts
and lists; other items will be compared using the == operator.
If the two structures differ in content, the failure message will indicate
the location within the structures where the first difference is found.
This may be helpful when comparing large structures.

##### Args:

* <b>`a`</b>: The first structure to compare.
* <b>`b`</b>: The second structure to compare.
* <b>`aname`</b>: Variable name to use for the first structure in assertion messages.
* <b>`bname`</b>: Variable name to use for the second structure.
* <b>`msg`</b>: Additional text to include in the failure message.

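A hedged usage sketch (hypothetical test method): equal structures pass regardless of dict key order, and a mismatch is reported at the first differing path, e.g. a['b'][1].

    import tensorflow as tf

    class StructureTest(tf.test.TestCase):

      def testSameStructure(self):
        self.assertSameStructure({'b': [2, 3], 'a': 1},
                                 {'a': 1, 'b': [2, 3]})
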
- - -

#### `tf.test.TestCase.assertSequenceAlmostEqual(expected_seq, actual_seq, places=None, msg=None, delta=None)` {#TestCase.assertSequenceAlmostEqual}

An approximate equality assertion for ordered sequences.

Fail if the two sequences are unequal as determined by their value
differences rounded to the given number of decimal places (default 7) and
comparing to zero, or by comparing that the difference between each value
in the two sequences is more than the given delta.

Note that decimal places (from zero) are usually not the same as significant
digits (measured from the most significant digit).

If the two sequences compare equal then they will automatically compare
almost equal.

##### Args:

* <b>`expected_seq`</b>: A sequence containing elements we are expecting.
* <b>`actual_seq`</b>: The sequence that we are testing.
* <b>`places`</b>: The number of decimal places to compare.
* <b>`msg`</b>: The message to be printed if the test fails.
* <b>`delta`</b>: The OK difference between compared values.

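A hedged sketch of the two tolerance modes (hypothetical test method; as with assertAlmostEqual, pass either places or delta, not both):

    import tensorflow as tf

    class AlmostEqualTest(tf.test.TestCase):

      def testSequenceAlmostEqual(self):
        # Differences round to zero at 5 decimal places:
        self.assertSequenceAlmostEqual([1.0, 2.0], [1.0000001, 2.0], places=5)
        # Or bound each element-wise difference by an explicit delta:
        self.assertSequenceAlmostEqual([1.0, 2.0], [1.05, 1.98], delta=0.1)
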
- - -

#### `tf.test.TestCase.assertSequenceEqual(seq1, seq2, msg=None, seq_type=None)` {#TestCase.assertSequenceEqual}

@@ -964,26 +563,6 @@ which can be indexed, has a length, and has an equality operator.

differences.

- - -

#### `tf.test.TestCase.assertSequenceStartsWith(prefix, whole, msg=None)` {#TestCase.assertSequenceStartsWith}

An equality assertion for the beginning of ordered sequences.

If prefix is an empty sequence, it will raise an error unless whole is also
an empty sequence.

If prefix is not a sequence, it will raise an error if the first element of
whole does not match.

##### Args:

* <b>`prefix`</b>: A sequence expected at the beginning of the whole parameter.
* <b>`whole`</b>: The sequence in which to look for prefix.
* <b>`msg`</b>: Optional message to report on failure.

- - -

#### `tf.test.TestCase.assertSetEqual(set1, set2, msg=None)` {#TestCase.assertSetEqual}

@@ -1035,51 +614,6 @@ Assert that actual.startswith(expected_start) is True.

* <b>`msg`</b>: Optional message to report on failure.

- - -

#### `tf.test.TestCase.assertTotallyOrdered(*groups, **kwargs)` {#TestCase.assertTotallyOrdered}

Asserts that total ordering has been implemented correctly.

For example, say you have a class A that compares only on its attribute x.
Comparators other than __lt__ are omitted for brevity.

    class A(object):
      def __init__(self, x, y):
        self.x = x
        self.y = y

      def __hash__(self):
        return hash(self.x)

      def __lt__(self, other):
        try:
          return self.x < other.x
        except AttributeError:
          return NotImplemented

assertTotallyOrdered will check that instances can be ordered correctly.
For example,

    self.assertTotallyOrdered(
        [None],  # None should come before everything else.
        [1],     # Integers sort earlier.
        [A(1, 'a')],
        [A(2, 'b')],             # 2 is after 1.
        [A(3, 'c'), A(3, 'd')],  # The second argument is irrelevant.
        [A(4, 'z')],
        ['foo'])                 # Strings sort last.

##### Args:

* <b>`*groups`</b>: A list of groups of elements. Each group of elements is a list
    of objects that are equal. The elements in each group must be less than
    the elements in the group after it. For example, these groups are
    totally ordered: [None], [1], [2, 2], [3].
* <b>`**kwargs`</b>: optional msg keyword argument can be passed.

- - -

#### `tf.test.TestCase.assertTrue(expr, msg=None)` {#TestCase.assertTrue}

@@ -1102,13 +636,6 @@ A tuple-specific equality assertion.

differences.

- - -

#### `tf.test.TestCase.assertUrlEqual(a, b, msg=None)` {#TestCase.assertUrlEqual}

Asserts that urls are equal, ignoring ordering of query params.

- - -

#### `tf.test.TestCase.assert_(expr, msg=None)` {#TestCase.assert_}

@@ -1170,9 +697,9 @@ tearDown.

- - -

#### `tf.test.TestCase.fail(msg=None, prefix=None)` {#TestCase.fail}

#### `tf.test.TestCase.fail(msg=None)` {#TestCase.fail}

Fail immediately with the given message, optionally prefixed.

Fail immediately, with the given message.

- - -

@@ -1224,13 +751,6 @@ Fail immediately with the given message, optionally prefixed.

- - -

#### `tf.test.TestCase.getRecordedProperties()` {#TestCase.getRecordedProperties}

Return any properties that the user has recorded.

- - -

#### `tf.test.TestCase.get_temp_dir()` {#TestCase.get_temp_dir}

@@ -1253,20 +773,6 @@ pollute each other's environment.

- - -

#### `tf.test.TestCase.recordProperty(property_name, property_value)` {#TestCase.recordProperty}

Record an arbitrary property for later use.

##### Args:

* <b>`property_name`</b>: str, name of property to record; must be a valid XML
    attribute name
* <b>`property_value`</b>: value of property; must be valid XML attribute value

- - -

#### `tf.test.TestCase.run(result=None)` {#TestCase.run}

@@ -1292,18 +798,11 @@ Hook method for setting up class fixture before running tests in the class.

#### `tf.test.TestCase.shortDescription()` {#TestCase.shortDescription}

Format both the test method name and the first line of its docstring.
Returns a one-line description of the test, or None if no
description has been provided.

If no docstring is given, only returns the method name.

This method overrides unittest.TestCase.shortDescription(), which
only returns the first line of the docstring, obscuring the name
of the test upon failure.

##### Returns:

* <b>`desc`</b>: A short description of a test method.
The default implementation of this method returns the first line of
the specified test method's docstring.

- - -

@@ -0,0 +1,4 @@

#### `tf.summary.SummaryDescription.RegisterExtension(extension_handle)` {#SummaryDescription.RegisterExtension}

@@ -0,0 +1,4 @@

#### `tf.summary.SummaryDescription.FromString(s)` {#SummaryDescription.FromString}

@@ -0,0 +1,4 @@

#### `tf.summary.TaggedRunMetadata.RegisterExtension(extension_handle)` {#TaggedRunMetadata.RegisterExtension}

@@ -0,0 +1,4 @@

#### `tf.summary.TaggedRunMetadata.FromString(s)` {#TaggedRunMetadata.FromString}

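These four hunks add single-method doc shards for the new `FromString` and `RegisterExtension` pages. A hedged sketch of `FromString`, which builds a fresh message directly from serialized bytes (reusing the hypothetical `wire` bytes from the earlier round-trip sketch):

    restored = tf.summary.TaggedRunMetadata.FromString(wire)
    assert restored.tag == 'step_100'
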
@ -500,6 +500,187 @@ metadata is stored in its NodeDef. This method retrieves the description.
|
||||
### `class tf.summary.SummaryDescription` {#SummaryDescription}
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.SummaryDescription.ByteSize()` {#SummaryDescription.ByteSize}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.SummaryDescription.Clear()` {#SummaryDescription.Clear}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.SummaryDescription.ClearExtension(extension_handle)` {#SummaryDescription.ClearExtension}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.SummaryDescription.ClearField(field_name)` {#SummaryDescription.ClearField}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.SummaryDescription.CopyFrom(other_msg)` {#SummaryDescription.CopyFrom}
|
||||
|
||||
Copies the content of the specified message into the current message.
|
||||
|
||||
The method clears the current message and then merges the specified
|
||||
message using MergeFrom.
|
||||
|
||||
##### Args:
|
||||
|
||||
|
||||
* <b>`other_msg`</b>: Message to copy into the current one.
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.SummaryDescription.DiscardUnknownFields()` {#SummaryDescription.DiscardUnknownFields}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.SummaryDescription.FindInitializationErrors()` {#SummaryDescription.FindInitializationErrors}
|
||||
|
||||
Finds required fields which are not initialized.
|
||||
|
||||
##### Returns:
|
||||
|
||||
A list of strings. Each string is a path to an uninitialized field from
|
||||
the top-level message, e.g. "foo.bar[5].baz".
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.SummaryDescription.FromString(s)` {#SummaryDescription.FromString}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.SummaryDescription.HasExtension(extension_handle)` {#SummaryDescription.HasExtension}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.SummaryDescription.HasField(field_name)` {#SummaryDescription.HasField}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.SummaryDescription.IsInitialized(errors=None)` {#SummaryDescription.IsInitialized}
|
||||
|
||||
Checks if all required fields of a message are set.
|
||||
|
||||
##### Args:
|
||||
|
||||
|
||||
* <b>`errors`</b>: A list which, if provided, will be populated with the field
|
||||
paths of all missing required fields.
|
||||
|
||||
##### Returns:
|
||||
|
||||
True iff the specified message has all required fields set.
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.SummaryDescription.ListFields()` {#SummaryDescription.ListFields}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.SummaryDescription.MergeFrom(msg)` {#SummaryDescription.MergeFrom}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.SummaryDescription.MergeFromString(serialized)` {#SummaryDescription.MergeFromString}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.SummaryDescription.ParseFromString(serialized)` {#SummaryDescription.ParseFromString}
|
||||
|
||||
Parse serialized protocol buffer data into this message.
|
||||
|
||||
Like MergeFromString(), except we clear the object first and
|
||||
do not return the value that MergeFromString returns.
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.SummaryDescription.RegisterExtension(extension_handle)` {#SummaryDescription.RegisterExtension}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.SummaryDescription.SerializePartialToString()` {#SummaryDescription.SerializePartialToString}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.SummaryDescription.SerializeToString()` {#SummaryDescription.SerializeToString}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.SummaryDescription.SetInParent()` {#SummaryDescription.SetInParent}
|
||||
|
||||
Sets the _cached_byte_size_dirty bit to true,
|
||||
and propagates this to our listener iff this was a state change.
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.SummaryDescription.WhichOneof(oneof_name)` {#SummaryDescription.WhichOneof}
|
||||
|
||||
Returns the name of the currently set field inside a oneof, or None.
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.SummaryDescription.__deepcopy__(memo=None)` {#SummaryDescription.__deepcopy__}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.SummaryDescription.__eq__(other)` {#SummaryDescription.__eq__}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.SummaryDescription.__getstate__()` {#SummaryDescription.__getstate__}
|
||||
@ -507,12 +688,249 @@ metadata is stored in its NodeDef. This method retrieves the description.
|
||||
Support the pickle protocol.
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.SummaryDescription.__hash__()` {#SummaryDescription.__hash__}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.SummaryDescription.__init__(**kwargs)` {#SummaryDescription.__init__}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.SummaryDescription.__ne__(other_msg)` {#SummaryDescription.__ne__}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.SummaryDescription.__repr__()` {#SummaryDescription.__repr__}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.SummaryDescription.__setstate__(state)` {#SummaryDescription.__setstate__}
|
||||
|
||||
Support the pickle protocol.
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.SummaryDescription.__str__()` {#SummaryDescription.__str__}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.SummaryDescription.__unicode__()` {#SummaryDescription.__unicode__}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.SummaryDescription.type_hint` {#SummaryDescription.type_hint}
|
||||
|
||||
Magic attribute generated for "type_hint" proto field.
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
### `class tf.summary.TaggedRunMetadata` {#TaggedRunMetadata}
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.TaggedRunMetadata.ByteSize()` {#TaggedRunMetadata.ByteSize}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.TaggedRunMetadata.Clear()` {#TaggedRunMetadata.Clear}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.TaggedRunMetadata.ClearExtension(extension_handle)` {#TaggedRunMetadata.ClearExtension}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.TaggedRunMetadata.ClearField(field_name)` {#TaggedRunMetadata.ClearField}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.TaggedRunMetadata.CopyFrom(other_msg)` {#TaggedRunMetadata.CopyFrom}
|
||||
|
||||
Copies the content of the specified message into the current message.
|
||||
|
||||
The method clears the current message and then merges the specified
|
||||
message using MergeFrom.
|
||||
|
||||
##### Args:
|
||||
|
||||
|
||||
* <b>`other_msg`</b>: Message to copy into the current one.
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.TaggedRunMetadata.DiscardUnknownFields()` {#TaggedRunMetadata.DiscardUnknownFields}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.TaggedRunMetadata.FindInitializationErrors()` {#TaggedRunMetadata.FindInitializationErrors}
|
||||
|
||||
Finds required fields which are not initialized.
|
||||
|
||||
##### Returns:
|
||||
|
||||
A list of strings. Each string is a path to an uninitialized field from
|
||||
the top-level message, e.g. "foo.bar[5].baz".
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.TaggedRunMetadata.FromString(s)` {#TaggedRunMetadata.FromString}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.TaggedRunMetadata.HasExtension(extension_handle)` {#TaggedRunMetadata.HasExtension}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.TaggedRunMetadata.HasField(field_name)` {#TaggedRunMetadata.HasField}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.TaggedRunMetadata.IsInitialized(errors=None)` {#TaggedRunMetadata.IsInitialized}
|
||||
|
||||
Checks if all required fields of a message are set.
|
||||
|
||||
##### Args:
|
||||
|
||||
|
||||
* <b>`errors`</b>: A list which, if provided, will be populated with the field
|
||||
paths of all missing required fields.
|
||||
|
||||
##### Returns:
|
||||
|
||||
True iff the specified message has all required fields set.
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.TaggedRunMetadata.ListFields()` {#TaggedRunMetadata.ListFields}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.TaggedRunMetadata.MergeFrom(msg)` {#TaggedRunMetadata.MergeFrom}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.TaggedRunMetadata.MergeFromString(serialized)` {#TaggedRunMetadata.MergeFromString}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.TaggedRunMetadata.ParseFromString(serialized)` {#TaggedRunMetadata.ParseFromString}
|
||||
|
||||
Parse serialized protocol buffer data into this message.
|
||||
|
||||
Like MergeFromString(), except we clear the object first and
|
||||
do not return the value that MergeFromString returns.
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.TaggedRunMetadata.RegisterExtension(extension_handle)` {#TaggedRunMetadata.RegisterExtension}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.TaggedRunMetadata.SerializePartialToString()` {#TaggedRunMetadata.SerializePartialToString}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.TaggedRunMetadata.SerializeToString()` {#TaggedRunMetadata.SerializeToString}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.TaggedRunMetadata.SetInParent()` {#TaggedRunMetadata.SetInParent}
|
||||
|
||||
Sets the _cached_byte_size_dirty bit to true,
|
||||
and propagates this to our listener iff this was a state change.
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.TaggedRunMetadata.WhichOneof(oneof_name)` {#TaggedRunMetadata.WhichOneof}
|
||||
|
||||
Returns the name of the currently set field inside a oneof, or None.
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.TaggedRunMetadata.__deepcopy__(memo=None)` {#TaggedRunMetadata.__deepcopy__}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.TaggedRunMetadata.__eq__(other)` {#TaggedRunMetadata.__eq__}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.TaggedRunMetadata.__getstate__()` {#TaggedRunMetadata.__getstate__}
|
||||
@ -520,4 +938,67 @@ Support the pickle protocol.
|
||||
Support the pickle protocol.
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.TaggedRunMetadata.__hash__()` {#TaggedRunMetadata.__hash__}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.TaggedRunMetadata.__init__(**kwargs)` {#TaggedRunMetadata.__init__}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.TaggedRunMetadata.__ne__(other_msg)` {#TaggedRunMetadata.__ne__}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.TaggedRunMetadata.__repr__()` {#TaggedRunMetadata.__repr__}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.TaggedRunMetadata.__setstate__(state)` {#TaggedRunMetadata.__setstate__}
|
||||
|
||||
Support the pickle protocol.
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.TaggedRunMetadata.__str__()` {#TaggedRunMetadata.__str__}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.TaggedRunMetadata.__unicode__()` {#TaggedRunMetadata.__unicode__}
|
||||
|
||||
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.TaggedRunMetadata.run_metadata` {#TaggedRunMetadata.run_metadata}
|
||||
|
||||
Magic attribute generated for "run_metadata" proto field.
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.summary.TaggedRunMetadata.tag` {#TaggedRunMetadata.tag}
|
||||
|
||||
Magic attribute generated for "tag" proto field.
|
||||
|
||||
|
||||
|
||||
|
@ -195,125 +195,6 @@ Checks that for all elements of farray1 and farray2
|
||||
* <b>`err`</b>: a float value.
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.test.TestCase.assertBetween(value, minv, maxv, msg=None)` {#TestCase.assertBetween}
|
||||
|
||||
Asserts that value is between minv and maxv (inclusive).
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.test.TestCase.assertCommandFails(command, regexes, env=None, close_fds=True, msg=None)` {#TestCase.assertCommandFails}
|
||||
|
||||
Asserts a shell command fails and the error matches a regex in a list.
|
||||
|
||||
##### Args:
|
||||
|
||||
|
||||
* <b>`command`</b>: List or string representing the command to run.
|
||||
* <b>`regexes`</b>: the list of regular expression strings.
|
||||
* <b>`env`</b>: Dictionary of environment variable settings.
|
||||
* <b>`close_fds`</b>: Whether or not to close all open fd's in the child after
|
||||
forking.
|
||||
* <b>`msg`</b>: Optional message to report on failure.
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.test.TestCase.assertCommandSucceeds(command, regexes=('',), env=None, close_fds=True, msg=None)` {#TestCase.assertCommandSucceeds}
|
||||
|
||||
Asserts that a shell command succeeds (i.e. exits with code 0).
|
||||
|
||||
##### Args:
|
||||
|
||||
|
||||
* <b>`command`</b>: List or string representing the command to run.
|
||||
* <b>`regexes`</b>: List of regular expression byte strings that match success.
|
||||
* <b>`env`</b>: Dictionary of environment variable settings.
|
||||
* <b>`close_fds`</b>: Whether or not to close all open fd's in the child after
|
||||
forking.
|
||||
* <b>`msg`</b>: Optional message to report on failure.
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.test.TestCase.assertContainsExactSubsequence(container, subsequence, msg=None)` {#TestCase.assertContainsExactSubsequence}
|
||||
|
||||
Assert that "container" contains "subsequence" as an exact subsequence.
|
||||
|
||||
Asserts that "container" contains all the elements of "subsequence", in
|
||||
order, and without other elements interspersed. For example, [1, 2, 3] is an
|
||||
exact subsequence of [0, 0, 1, 2, 3, 0] but not of [0, 0, 1, 2, 0, 3, 0].
|
||||
|
||||
##### Args:
|
||||
|
||||
|
||||
* <b>`container`</b>: the list we're testing for subsequence inclusion.
|
||||
* <b>`subsequence`</b>: the list we hope will be an exact subsequence of container.
|
||||
* <b>`msg`</b>: Optional message to report on failure.
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.test.TestCase.assertContainsInOrder(strings, target, msg=None)` {#TestCase.assertContainsInOrder}
|
||||
|
||||
Asserts that the strings provided are found in the target in order.
|
||||
|
||||
This may be useful for checking HTML output.
|
||||
|
||||
##### Args:
|
||||
|
||||
|
||||
* <b>`strings`</b>: A list of strings, such as [ 'fox', 'dog' ]
|
||||
* <b>`target`</b>: A target string in which to look for the strings, such as
|
||||
'The quick brown fox jumped over the lazy dog'.
|
||||
* <b>`msg`</b>: Optional message to report on failure.
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.test.TestCase.assertContainsSubsequence(container, subsequence, msg=None)` {#TestCase.assertContainsSubsequence}
|
||||
|
||||
Assert that "container" contains "subsequence" as a subsequence.
|
||||
|
||||
Asserts that "container" contains all the elements of "subsequence", in
|
||||
order, but possibly with other elements interspersed. For example, [1, 2, 3]
|
||||
is a subsequence of [0, 0, 1, 2, 0, 3, 0] but not of [0, 0, 1, 3, 0, 2, 0].
|
||||
|
||||
##### Args:
|
||||
|
||||
|
||||
* <b>`container`</b>: the list we're testing for subsequence inclusion.
|
||||
* <b>`subsequence`</b>: the list we hope will be a subsequence of container.
|
||||
* <b>`msg`</b>: Optional message to report on failure.
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.test.TestCase.assertContainsSubset(expected_subset, actual_set, msg=None)` {#TestCase.assertContainsSubset}
|
||||
|
||||
Checks whether actual iterable is a superset of expected iterable.
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.test.TestCase.assertCountEqual(*args, **kwargs)` {#TestCase.assertCountEqual}
|
||||
|
||||
An unordered sequence specific comparison.
|
||||
|
||||
Equivalent to assertItemsEqual(). This method is a compatibility layer
|
||||
for Python 3k, since 2to3 does not convert assertItemsEqual() calls into
|
||||
assertCountEqual() calls.
|
||||
|
||||
##### Args:
|
||||
|
||||
|
||||
* <b>`expected_seq`</b>: A sequence containing elements we are expecting.
|
||||
* <b>`actual_seq`</b>: The sequence that we are testing.
|
||||
* <b>`msg`</b>: The message to be printed if the test fails.
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.test.TestCase.assertDeviceEqual(device1, device2)` {#TestCase.assertDeviceEqual}
|
||||
@ -336,48 +217,9 @@ Checks whether actual is a superset of expected.
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.test.TestCase.assertDictEqual(a, b, msg=None)` {#TestCase.assertDictEqual}
|
||||
|
||||
Raises AssertionError if a and b are not equal dictionaries.
|
||||
|
||||
##### Args:
|
||||
#### `tf.test.TestCase.assertDictEqual(d1, d2, msg=None)` {#TestCase.assertDictEqual}
|
||||
|
||||
|
||||
* <b>`a`</b>: A dict, the expected value.
|
||||
* <b>`b`</b>: A dict, the actual value.
|
||||
* <b>`msg`</b>: An optional str, the associated message.
|
||||
|
||||
##### Raises:
|
||||
|
||||
|
||||
* <b>`AssertionError`</b>: if the dictionaries are not equal.
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.test.TestCase.assertEmpty(container, msg=None)` {#TestCase.assertEmpty}
|
||||
|
||||
Assert that an object has zero length.
|
||||
|
||||
##### Args:
|
||||
|
||||
|
||||
* <b>`container`</b>: Anything that implements the collections.Sized interface.
|
||||
* <b>`msg`</b>: Optional message to report on failure.
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.test.TestCase.assertEndsWith(actual, expected_end, msg=None)` {#TestCase.assertEndsWith}
|
||||
|
||||
Assert that actual.endswith(expected_end) is True.
|
||||
|
||||
##### Args:
|
||||
|
||||
|
||||
* <b>`actual`</b>: str
|
||||
* <b>`expected_end`</b>: str
|
||||
* <b>`msg`</b>: Optional message to report on failure.
|
||||
|
||||
|
||||
- - -
|
||||
@ -462,11 +304,10 @@ Included for symmetry with assertIsNone.
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.test.TestCase.assertItemsEqual(*args, **kwargs)` {#TestCase.assertItemsEqual}
|
||||
#### `tf.test.TestCase.assertItemsEqual(expected_seq, actual_seq, msg=None)` {#TestCase.assertItemsEqual}
|
||||
|
||||
An unordered sequence specific comparison.
|
||||
|
||||
It asserts that actual_seq and expected_seq have the same element counts.
|
||||
An unordered sequence specific comparison. It asserts that
|
||||
actual_seq and expected_seq have the same element counts.
|
||||
Equivalent to::
|
||||
|
||||
self.assertEqual(Counter(iter(actual_seq)),
|
||||
@ -479,30 +320,6 @@ Asserts that each element has the same count in both sequences.
|
||||
- [0, 1, 1] and [1, 0, 1] compare equal.
|
||||
- [0, 0, 1] and [0, 1] compare unequal.
|
||||
|
||||
##### Args:
|
||||
|
||||
|
||||
* <b>`expected_seq`</b>: A sequence containing elements we are expecting.
|
||||
* <b>`actual_seq`</b>: The sequence that we are testing.
|
||||
* <b>`msg`</b>: The message to be printed if the test fails.
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.test.TestCase.assertJsonEqual(first, second, msg=None)` {#TestCase.assertJsonEqual}
|
||||
|
||||
Asserts that the JSON objects defined in two strings are equal.
|
||||
|
||||
A summary of the differences will be included in the failure message
|
||||
using assertSameStructure.
|
||||
|
||||
##### Args:
|
||||
|
||||
|
||||
* <b>`first`</b>: A string contining JSON to decode and compare to second.
|
||||
* <b>`second`</b>: A string contining JSON to decode and compare to first.
|
||||
* <b>`msg`</b>: Additional text to include in the failure message.
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
@ -572,13 +389,6 @@ if not.
|
||||
* <b>`msg`</b>: An optional string message to append to the failure message.
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.test.TestCase.assertNoCommonElements(expected_seq, actual_seq, msg=None)` {#TestCase.assertNoCommonElements}
|
||||
|
||||
Checks whether actual iterable and expected iterable are disjoint.
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.test.TestCase.assertNotAlmostEqual(first, second, places=None, msg=None, delta=None)` {#TestCase.assertNotAlmostEqual}
|
||||
@ -609,33 +419,6 @@ as significant digits (measured from the most signficant digit).
|
||||
Objects that are equal automatically fail.
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.test.TestCase.assertNotEmpty(container, msg=None)` {#TestCase.assertNotEmpty}
|
||||
|
||||
Assert that an object has non-zero length.
|
||||
|
||||
##### Args:
|
||||
|
||||
|
||||
* <b>`container`</b>: Anything that implements the collections.Sized interface.
|
||||
* <b>`msg`</b>: Optional message to report on failure.
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.test.TestCase.assertNotEndsWith(actual, unexpected_end, msg=None)` {#TestCase.assertNotEndsWith}
|
||||
|
||||
Assert that actual.endswith(unexpected_end) is False.
|
||||
|
||||
##### Args:
|
||||
|
||||
|
||||
* <b>`actual`</b>: str
|
||||
* <b>`unexpected_end`</b>: str
|
||||
* <b>`msg`</b>: Optional message to report on failure.
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.test.TestCase.assertNotEqual(first, second, msg=None)` {#TestCase.assertNotEqual}
|
||||
@ -673,20 +456,6 @@ Included for symmetry with assertIsInstance.
|
||||
Fail the test if the text matches the regular expression.
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.test.TestCase.assertNotStartsWith(actual, unexpected_start, msg=None)` {#TestCase.assertNotStartsWith}
|
||||
|
||||
Assert that actual.startswith(unexpected_start) is False.
|
||||
|
||||
##### Args:
|
||||
|
||||
|
||||
* <b>`actual`</b>: str
|
||||
* <b>`unexpected_start`</b>: str
|
||||
* <b>`msg`</b>: Optional message to report on failure.
|
||||
|
||||
|
||||
- - -
|
||||
|
||||
#### `tf.test.TestCase.assertProtoEquals(expected_message_maybe_ascii, message)` {#TestCase.assertProtoEquals}
|
||||
@ -761,38 +530,6 @@ Asserts that the message in a raised exception matches a regexp.
|
||||
* <b>`kwargs`</b>: Extra kwargs.
|
||||
|
||||
|
||||
- - -

#### `tf.test.TestCase.assertRaisesWithLiteralMatch(expected_exception, expected_exception_message, callable_obj=None, *args, **kwargs)` {#TestCase.assertRaisesWithLiteralMatch}

Asserts that the message in a raised exception equals the given string.

Unlike assertRaisesRegexp, this method takes a literal string, not
a regular expression.

    with self.assertRaisesWithLiteralMatch(ExType, 'message'):
      DoSomething()

##### Args:


* <b>`expected_exception`</b>: Exception class expected to be raised.
* <b>`expected_exception_message`</b>: String message expected in the raised
    exception. For a raised exception e, expected_exception_message must
    equal str(e).
* <b>`callable_obj`</b>: Function to be called, or None to return a context.
* <b>`args`</b>: Extra args.
* <b>`kwargs`</b>: Extra kwargs.

##### Returns:

A context manager if callable_obj is None. Otherwise, None.

##### Raises:

self.failureException if callable_obj does not raise a matching exception.
- - -

#### `tf.test.TestCase.assertRaisesWithPredicateMatch(exception_type, expected_err_re_or_predicate)` {#TestCase.assertRaisesWithPredicateMatch}

@@ -817,71 +554,6 @@ predicate search.

exception.
- - -

#### `tf.test.TestCase.assertRaisesWithRegexpMatch(expected_exception, expected_regexp, callable_obj=None, *args, **kwargs)` {#TestCase.assertRaisesWithRegexpMatch}

Asserts that the message in a raised exception matches the given regexp.

This is just a wrapper around assertRaisesRegexp. Please use
assertRaisesRegexp instead of assertRaisesWithRegexpMatch.

##### Args:


* <b>`expected_exception`</b>: Exception class expected to be raised.
* <b>`expected_regexp`</b>: Regexp (re pattern object or string) expected to be
    found in error message.
* <b>`callable_obj`</b>: Function to be called, or None to return a context.
* <b>`args`</b>: Extra args.
* <b>`kwargs`</b>: Extra keyword args.

##### Returns:

A context manager if callable_obj is None. Otherwise, None.

##### Raises:

self.failureException if callable_obj does not raise a matching exception.
- - -

#### `tf.test.TestCase.assertRegexMatch(actual_str, regexes, message=None)` {#TestCase.assertRegexMatch}

Asserts that at least one regex in regexes matches str.

If possible you should use assertRegexpMatches, which is a simpler
version of this method. assertRegexpMatches takes a single regular
expression (a string or re compiled object) instead of a list.

Notes:
1. This function uses substring matching, i.e. the matching
   succeeds if *any* substring of the error message matches *any*
   regex in the list. This is more convenient for the user than
   full-string matching.

2. If regexes is the empty list, the matching will always fail.

3. Use regexes=[''] for a regex that will always pass.

4. '.' matches any single character *except* the newline. To
   match any character, use '(.|\n)'.

5. '^' matches the beginning of each line, not just the beginning
   of the string. Similarly, '$' matches the end of each line.

6. An exception will be thrown if regexes contains an invalid
   regex.

Args:
  actual_str: The string we try to match with the items in regexes.
  regexes: The regular expressions we want to match against str.
    See "Notes" above for detailed notes on how this is interpreted.
  message: The message to be printed if the test fails.
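
A minimal sketch of the list-of-regexes form (the strings are invented; note the substring semantics from the notes above):

```python
import tensorflow as tf

class RegexListTest(tf.test.TestCase):

  def testMessageMatchesAnyPattern(self):
    # Passes because the second regex matches a substring of actual_str.
    self.assertRegexMatch('error: step 42 failed',
                          ['timed out', r'step \d+ failed'])
```
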
- - -

#### `tf.test.TestCase.assertRegexpMatches(text, expected_regexp, msg=None)` {#TestCase.assertRegexpMatches}

@@ -889,79 +561,6 @@ Asserts that at least one regex in regexes matches str.

Fail the test unless the text matches the regular expression.
- - -

#### `tf.test.TestCase.assertSameElements(expected_seq, actual_seq, msg=None)` {#TestCase.assertSameElements}

Assert that two sequences have the same elements (in any order).

This method, unlike assertItemsEqual, doesn't care about any
duplicates in the expected and actual sequences.

    >> assertSameElements([1, 1, 1, 0, 0, 0], [0, 1])
    # Doesn't raise an AssertionError

If possible, you should use assertItemsEqual instead of
assertSameElements.

##### Args:


* <b>`expected_seq`</b>: A sequence containing elements we are expecting.
* <b>`actual_seq`</b>: The sequence that we are testing.
* <b>`msg`</b>: The message to be printed if the test fails.
- - -

#### `tf.test.TestCase.assertSameStructure(a, b, aname='a', bname='b', msg=None)` {#TestCase.assertSameStructure}

Asserts that two values contain the same structural content.

The two arguments should be data trees consisting of trees of dicts and
lists. They will be deeply compared by walking into the contents of dicts
and lists; other items will be compared using the == operator.
If the two structures differ in content, the failure message will indicate
the location within the structures where the first difference is found.
This may be helpful when comparing large structures.

##### Args:


* <b>`a`</b>: The first structure to compare.
* <b>`b`</b>: The second structure to compare.
* <b>`aname`</b>: Variable name to use for the first structure in assertion messages.
* <b>`bname`</b>: Variable name to use for the second structure.
* <b>`msg`</b>: Additional text to include in the failure message.
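
A minimal sketch (the nested structures are invented; on failure the message names the first differing location):

```python
import tensorflow as tf

class StructureTest(tf.test.TestCase):

  def testNestedDictsAndListsCompareDeeply(self):
    a = {'layers': [{'units': 128}, {'units': 10}]}
    b = {'layers': [{'units': 128}, {'units': 10}]}
    self.assertSameStructure(a, b)  # Walks dicts/lists; leaves compared with ==.
```
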
- - -

#### `tf.test.TestCase.assertSequenceAlmostEqual(expected_seq, actual_seq, places=None, msg=None, delta=None)` {#TestCase.assertSequenceAlmostEqual}

An approximate equality assertion for ordered sequences.

Fail if the two sequences are unequal as determined by their value
differences rounded to the given number of decimal places (default 7) and
comparing to zero, or by comparing that the difference between each value
in the two sequences is more than the given delta.

Note that decimal places (from zero) are usually not the same as significant
digits (measured from the most significant digit).

If the two sequences compare equal then they will automatically compare
almost equal.

##### Args:


* <b>`expected_seq`</b>: A sequence containing elements we are expecting.
* <b>`actual_seq`</b>: The sequence that we are testing.
* <b>`places`</b>: The number of decimal places to compare.
* <b>`msg`</b>: The message to be printed if the test fails.
* <b>`delta`</b>: The OK difference between compared values.
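
A minimal sketch of both tolerance modes (values invented):

```python
import tensorflow as tf

class AlmostEqualSeqTest(tf.test.TestCase):

  def testElementwiseTolerance(self):
    # Default: each elementwise difference rounds to zero at 7 places.
    self.assertSequenceAlmostEqual([0.1, 0.2], [0.10000001, 0.2])
    # Alternatively, bound each absolute difference with delta.
    self.assertSequenceAlmostEqual([1.0, 2.0], [1.04, 1.96], delta=0.05)
```
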
- - -

#### `tf.test.TestCase.assertSequenceEqual(seq1, seq2, msg=None, seq_type=None)` {#TestCase.assertSequenceEqual}

@@ -982,26 +581,6 @@ which can be indexed, has a length, and has an equality operator.

differences.
- - -

#### `tf.test.TestCase.assertSequenceStartsWith(prefix, whole, msg=None)` {#TestCase.assertSequenceStartsWith}

An equality assertion for the beginning of ordered sequences.

If prefix is an empty sequence, it will raise an error unless whole is also
an empty sequence.

If prefix is not a sequence, it will raise an error if the first element of
whole does not match.

##### Args:


* <b>`prefix`</b>: A sequence expected at the beginning of the whole parameter.
* <b>`whole`</b>: The sequence in which to look for prefix.
* <b>`msg`</b>: Optional message to report on failure.
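
A minimal sketch (sequences invented; the non-sequence case follows the docstring's description above):

```python
import tensorflow as tf

class PrefixTest(tf.test.TestCase):

  def testWholeBeginsWithPrefix(self):
    self.assertSequenceStartsWith([1, 2], [1, 2, 3])
    # Per the docstring, a non-sequence prefix is checked against
    # the first element of whole:
    self.assertSequenceStartsWith(1, [1, 2, 3])
```
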
- - -

#### `tf.test.TestCase.assertSetEqual(set1, set2, msg=None)` {#TestCase.assertSetEqual}

@@ -1053,51 +632,6 @@ Assert that actual.startswith(expected_start) is True.

* <b>`msg`</b>: Optional message to report on failure.
- - -

#### `tf.test.TestCase.assertTotallyOrdered(*groups, **kwargs)` {#TestCase.assertTotallyOrdered}

Asserts that total ordering has been implemented correctly.

For example, say you have a class A that compares only on its attribute x.
Comparators other than __lt__ are omitted for brevity.

    class A(object):
      def __init__(self, x, y):
        self.x = x
        self.y = y

      def __hash__(self):
        return hash(self.x)

      def __lt__(self, other):
        try:
          return self.x < other.x
        except AttributeError:
          return NotImplemented

assertTotallyOrdered will check that instances can be ordered correctly.
For example,

    self.assertTotallyOrdered(
      [None],  # None should come before everything else.
      [1],  # Integers sort earlier.
      [A(1, 'a')],
      [A(2, 'b')],  # 2 is after 1.
      [A(3, 'c'), A(3, 'd')],  # The second argument is irrelevant.
      [A(4, 'z')],
      ['foo'])  # Strings sort last.

##### Args:


* <b>`*groups`</b>: A list of groups of elements. Each group of elements is a list
    of objects that are equal. The elements in each group must be less than
    the elements in the group after it. For example, these groups are
    totally ordered: [None], [1], [2, 2], [3].
* <b>`**kwargs`</b>: optional msg keyword argument can be passed.
- - -

#### `tf.test.TestCase.assertTrue(expr, msg=None)` {#TestCase.assertTrue}

@@ -1120,13 +654,6 @@ A tuple-specific equality assertion.

differences.
- - -

#### `tf.test.TestCase.assertUrlEqual(a, b, msg=None)` {#TestCase.assertUrlEqual}

Asserts that urls are equal, ignoring ordering of query params.
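
A minimal sketch (URLs invented):

```python
import tensorflow as tf

class UrlTest(tf.test.TestCase):

  def testQueryParamOrderIsIgnored(self):
    self.assertUrlEqual('http://example.com/path?a=1&b=2',
                        'http://example.com/path?b=2&a=1')
```
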
- - -

#### `tf.test.TestCase.assert_(expr, msg=None)` {#TestCase.assert_}

@@ -1188,9 +715,9 @@ tearDown.

- - -

#### `tf.test.TestCase.fail(msg=None, prefix=None)` {#TestCase.fail}
#### `tf.test.TestCase.fail(msg=None)` {#TestCase.fail}

Fail immediately with the given message, optionally prefixed.
Fail immediately, with the given message.
- - -

@@ -1242,13 +769,6 @@ Fail immediately with the given message, optionally prefixed.

- - -

#### `tf.test.TestCase.getRecordedProperties()` {#TestCase.getRecordedProperties}

Return any properties that the user has recorded.

- - -

#### `tf.test.TestCase.get_temp_dir()` {#TestCase.get_temp_dir}

@@ -1271,20 +791,6 @@ pollute each other's environment.
- - -

#### `tf.test.TestCase.recordProperty(property_name, property_value)` {#TestCase.recordProperty}

Record an arbitrary property for later use.

##### Args:


* <b>`property_name`</b>: str, name of property to record; must be a valid XML
    attribute name
* <b>`property_value`</b>: value of property; must be valid XML attribute value
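
A minimal sketch combining this with getRecordedProperties above (property name invented; this assumes, as the method names suggest, that recorded properties read back as a name-to-value mapping):

```python
import tensorflow as tf

class PropsTest(tf.test.TestCase):

  def testRecordAndReadBack(self):
    self.recordProperty('data_version', '17')  # Must be XML-safe.
    self.assertEqual('17', self.getRecordedProperties()['data_version'])
```
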
- - -

#### `tf.test.TestCase.run(result=None)` {#TestCase.run}

@@ -1310,18 +816,11 @@ Hook method for setting up class fixture before running tests in the class.

#### `tf.test.TestCase.shortDescription()` {#TestCase.shortDescription}

Format both the test method name and the first line of its docstring.
Returns a one-line description of the test, or None if no
description has been provided.

If no docstring is given, only returns the method name.

This method overrides unittest.TestCase.shortDescription(), which
only returns the first line of the docstring, obscuring the name
of the test upon failure.

##### Returns:


* <b>`desc`</b>: A short description of a test method.
The default implementation of this method returns the first line of
the specified test method's docstring.

- - -
@@ -78,51 +78,51 @@ If the above commands do not work on your system, you can follow these instructions:

```bash
# Ubuntu/Linux 64-bit, CPU only, Python 2.7
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.0.0rc1-cp27-none-linux_x86_64.whl
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.0.0rc2-cp27-none-linux_x86_64.whl

# Ubuntu/Linux 64-bit, GPU enabled, Python 2.7
# Requires CUDA toolkit 8.0 and CuDNN v5. For other versions, see "Installing from sources" below.
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.0.0rc1-cp27-none-linux_x86_64.whl
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.0.0rc2-cp27-none-linux_x86_64.whl

# Mac OS X, CPU only, Python 2.7:
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.0.0rc1-py2-none-any.whl
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.0.0rc2-py2-none-any.whl

# Mac OS X, GPU enabled, Python 2.7:
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow_gpu-1.0.0rc1-py2-none-any.whl
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow_gpu-1.0.0rc2-py2-none-any.whl

# Ubuntu/Linux 64-bit, CPU only, Python 3.3
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.0.0rc1-cp33-cp33m-linux_x86_64.whl
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.0.0rc2-cp33-cp33m-linux_x86_64.whl

# Ubuntu/Linux 64-bit, GPU enabled, Python 3.3
# Requires CUDA toolkit 8.0 and CuDNN v5. For other versions, see "Installing from sources" below.
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.0.0rc1-cp33-cp33m-linux_x86_64.whl
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.0.0rc2-cp33-cp33m-linux_x86_64.whl

# Ubuntu/Linux 64-bit, CPU only, Python 3.4
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.0.0rc1-cp34-cp34m-linux_x86_64.whl
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.0.0rc2-cp34-cp34m-linux_x86_64.whl

# Ubuntu/Linux 64-bit, GPU enabled, Python 3.4
# Requires CUDA toolkit 8.0 and CuDNN v5. For other versions, see "Installing from sources" below.
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.0.0rc1-cp34-cp34m-linux_x86_64.whl
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.0.0rc2-cp34-cp34m-linux_x86_64.whl

# Ubuntu/Linux 64-bit, CPU only, Python 3.5
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.0.0rc1-cp35-cp35m-linux_x86_64.whl
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.0.0rc2-cp35-cp35m-linux_x86_64.whl

# Ubuntu/Linux 64-bit, GPU enabled, Python 3.5
# Requires CUDA toolkit 8.0 and CuDNN v5. For other versions, see "Installing from sources" below.
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.0.0rc1-cp35-cp35m-linux_x86_64.whl
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.0.0rc2-cp35-cp35m-linux_x86_64.whl

# Ubuntu/Linux 64-bit, CPU only, Python 3.6
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.0.0rc1-cp36-cp36m-linux_x86_64.whl
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.0.0rc2-cp36-cp36m-linux_x86_64.whl

# Ubuntu/Linux 64-bit, GPU enabled, Python 3.6
# Requires CUDA toolkit 8.0 and CuDNN v5. For other versions, see "Installing from sources" below.
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.0.0rc1-cp36-cp36m-linux_x86_64.whl
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.0.0rc2-cp36-cp36m-linux_x86_64.whl

# Mac OS X, CPU only, Python 3.4 or 3.5:
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.0.0rc1-py3-none-any.whl
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.0.0rc2-py3-none-any.whl

# Mac OS X, GPU enabled, Python 3.4 or 3.5:
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow_gpu-1.0.0rc1-py3-none-any.whl
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow_gpu-1.0.0rc2-py3-none-any.whl
```

Install TensorFlow:
@@ -164,14 +164,14 @@ Both distributions include pip. To install the CPU-only version of
TensorFlow, enter the following command at a command prompt:

```bat
C:\> pip install --upgrade https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-1.0.0rc1-cp35-cp35m-win_amd64.whl
C:\> pip install --upgrade https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-1.0.0rc2-cp35-cp35m-win_amd64.whl
```

To install the GPU version of TensorFlow, enter the following command
at a command prompt:

```bat
C:\> pip install --upgrade https://storage.googleapis.com/tensorflow/windows/gpu/tensorflow_gpu-1.0.0rc1-cp35-cp35m-win_amd64.whl
C:\> pip install --upgrade https://storage.googleapis.com/tensorflow/windows/gpu/tensorflow_gpu-1.0.0rc2-cp35-cp35m-win_amd64.whl
```

You can now [test your installation](#test-the-tensorflow-installation).
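
Whichever platform you installed on, a quick smoke test from Python confirms that the package imports and can run a trivial graph (this is the standard hello-world check, shown here for convenience):

```python
import tensorflow as tf

hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))  # Should print: Hello, TensorFlow!
```
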
@@ -226,51 +226,51 @@ Now, install TensorFlow just as you would for a regular Pip installation. First

```bash
# Ubuntu/Linux 64-bit, CPU only, Python 2.7
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.0.0rc1-cp27-none-linux_x86_64.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.0.0rc2-cp27-none-linux_x86_64.whl

# Ubuntu/Linux 64-bit, GPU enabled, Python 2.7
# Requires CUDA toolkit 8.0 and CuDNN v5. For other versions, see "Installing from sources" below.
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.0.0rc1-cp27-none-linux_x86_64.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.0.0rc2-cp27-none-linux_x86_64.whl

# Mac OS X, CPU only, Python 2.7:
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.0.0rc1-py2-none-any.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.0.0rc2-py2-none-any.whl

# Mac OS X, GPU enabled, Python 2.7:
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow_gpu-1.0.0rc1-py2-none-any.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow_gpu-1.0.0rc2-py2-none-any.whl

# Ubuntu/Linux 64-bit, CPU only, Python 3.3
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.0.0rc1-cp33-cp33m-linux_x86_64.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.0.0rc2-cp33-cp33m-linux_x86_64.whl

# Ubuntu/Linux 64-bit, GPU enabled, Python 3.3
# Requires CUDA toolkit 8.0 and CuDNN v5. For other versions, see "Installing from sources" below.
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.0.0rc1-cp33-cp33m-linux_x86_64.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.0.0rc2-cp33-cp33m-linux_x86_64.whl

# Ubuntu/Linux 64-bit, CPU only, Python 3.4
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.0.0rc1-cp34-cp34m-linux_x86_64.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.0.0rc2-cp34-cp34m-linux_x86_64.whl

# Ubuntu/Linux 64-bit, GPU enabled, Python 3.4
# Requires CUDA toolkit 8.0 and CuDNN v5. For other versions, see "Installing from sources" below.
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.0.0rc1-cp34-cp34m-linux_x86_64.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.0.0rc2-cp34-cp34m-linux_x86_64.whl

# Ubuntu/Linux 64-bit, CPU only, Python 3.5
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.0.0rc1-cp35-cp35m-linux_x86_64.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.0.0rc2-cp35-cp35m-linux_x86_64.whl

# Ubuntu/Linux 64-bit, GPU enabled, Python 3.5
# Requires CUDA toolkit 8.0 and CuDNN v5. For other versions, see "Installing from sources" below.
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.0.0rc1-cp35-cp35m-linux_x86_64.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.0.0rc2-cp35-cp35m-linux_x86_64.whl

# Ubuntu/Linux 64-bit, CPU only, Python 3.6
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.0.0rc1-cp36-cp36m-linux_x86_64.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.0.0rc2-cp36-cp36m-linux_x86_64.whl

# Ubuntu/Linux 64-bit, GPU enabled, Python 3.6
# Requires CUDA toolkit 8.0 and CuDNN v5. For other versions, see "Installing from sources" below.
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.0.0rc1-cp36-cp36m-linux_x86_64.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.0.0rc2-cp36-cp36m-linux_x86_64.whl

# Mac OS X, CPU only, Python 3.4 or 3.5:
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.0.0rc1-py3-none-any.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.0.0rc2-py3-none-any.whl

# Mac OS X, GPU enabled, Python 3.4 or 3.5:
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow_gpu-1.0.0rc1-py3-none-any.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow_gpu-1.0.0rc2-py3-none-any.whl
```

Finally install TensorFlow:
@@ -392,51 +392,51 @@ select the correct binary to install:

```bash
# Ubuntu/Linux 64-bit, CPU only, Python 2.7
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.0.0rc1-cp27-none-linux_x86_64.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.0.0rc2-cp27-none-linux_x86_64.whl

# Ubuntu/Linux 64-bit, GPU enabled, Python 2.7
# Requires CUDA toolkit 8.0 and CuDNN v5. For other versions, see "Installing from sources" below.
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.0.0rc1-cp27-none-linux_x86_64.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.0.0rc2-cp27-none-linux_x86_64.whl

# Mac OS X, CPU only, Python 2.7:
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.0.0rc1-py2-none-any.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.0.0rc2-py2-none-any.whl

# Mac OS X, GPU enabled, Python 2.7:
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow_gpu-1.0.0rc1-py2-none-any.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow_gpu-1.0.0rc2-py2-none-any.whl

# Ubuntu/Linux 64-bit, CPU only, Python 3.3
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.0.0rc1-cp33-cp33m-linux_x86_64.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.0.0rc2-cp33-cp33m-linux_x86_64.whl

# Ubuntu/Linux 64-bit, GPU enabled, Python 3.3
# Requires CUDA toolkit 8.0 and CuDNN v5. For other versions, see "Installing from sources" below.
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.0.0rc1-cp33-cp33m-linux_x86_64.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.0.0rc2-cp33-cp33m-linux_x86_64.whl

# Ubuntu/Linux 64-bit, CPU only, Python 3.4
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.0.0rc1-cp34-cp34m-linux_x86_64.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.0.0rc2-cp34-cp34m-linux_x86_64.whl

# Ubuntu/Linux 64-bit, GPU enabled, Python 3.4
# Requires CUDA toolkit 8.0 and CuDNN v5. For other versions, see "Installing from sources" below.
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.0.0rc1-cp34-cp34m-linux_x86_64.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.0.0rc2-cp34-cp34m-linux_x86_64.whl

# Ubuntu/Linux 64-bit, CPU only, Python 3.5
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.0.0rc1-cp35-cp35m-linux_x86_64.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.0.0rc2-cp35-cp35m-linux_x86_64.whl

# Ubuntu/Linux 64-bit, GPU enabled, Python 3.5
# Requires CUDA toolkit 8.0 and CuDNN v5. For other versions, see "Installing from sources" below.
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.0.0rc1-cp35-cp35m-linux_x86_64.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.0.0rc2-cp35-cp35m-linux_x86_64.whl

# Ubuntu/Linux 64-bit, CPU only, Python 3.6
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.0.0rc1-cp36-cp36m-linux_x86_64.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.0.0rc2-cp36-cp36m-linux_x86_64.whl

# Ubuntu/Linux 64-bit, GPU enabled, Python 3.6
# Requires CUDA toolkit 8.0 and CuDNN v5. For other versions, see "Installing from sources" below.
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.0.0rc1-cp36-cp36m-linux_x86_64.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.0.0rc2-cp36-cp36m-linux_x86_64.whl

# Mac OS X, CPU only, Python 3.4 or 3.5:
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.0.0rc1-py3-none-any.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.0.0rc2-py3-none-any.whl

# Mac OS X, GPU enabled, Python 3.4 or 3.5:
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow_gpu-1.0.0rc1-py3-none-any.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow_gpu-1.0.0rc2-py3-none-any.whl
```

Finally install TensorFlow:
@@ -504,7 +504,7 @@ code.
code.

We also have tags with `latest` replaced by a released version (e.g.,
`1.0.0-rc1-gpu`).
`1.0.0-rc2-gpu`).

With Docker the installation is as follows:

@@ -909,7 +909,7 @@ $ bazel build --config opt --config=sycl //tensorflow/tools/pip_package:build_pip_package
$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg

# The name of the .whl file will depend on your platform.
$ sudo pip install /tmp/tensorflow_pkg/tensorflow-1.0.0rc1-py2-none-any.whl
$ sudo pip install /tmp/tensorflow_pkg/tensorflow-1.0.0rc2-py2-none-any.whl
```

## Optimizing CPU performance

@@ -1254,6 +1254,12 @@ installed, such as:
$ pip install --upgrade protobuf
```

Or (if you have protobuf installed with Homebrew):

```bash
$ brew upgrade protobuf
```

### Mac OS X: Segmentation Fault when importing tensorflow

On Mac OS X, you might get the following error when importing tensorflow in python:
@@ -134,7 +134,7 @@ Example:
      output_collections=['MY_OPS'], name='add_t1t2')
  [2.3, 3.4]
  """
  with tf.op_scope([tensor_in, other_tensor_in], name, "my_op"):
  with tf.name_scope(name, "my_op", [tensor_in, other_tensor_in]):
    tensor_in = tf.convert_to_tensor(tensor_in)
    other_tensor_in = tf.convert_to_tensor(other_tensor_in)
    result = my_param * tensor_in + other_param * other_tensor_in
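
The hunk above reflects the 1.0 migration from `tf.op_scope` to `tf.name_scope`: the name arguments move to the front and the list of tensors moves to the end. A minimal sketch of the new calling convention (the op body is illustrative, not from the patch):

```python
import tensorflow as tf

def my_op(tensor_in, other_tensor_in, name=None):
  # New-style signature: tf.name_scope(name, default_name, values).
  with tf.name_scope(name, "my_op", [tensor_in, other_tensor_in]):
    tensor_in = tf.convert_to_tensor(tensor_in)
    other_tensor_in = tf.convert_to_tensor(other_tensor_in)
    return tf.add(tensor_in, other_tensor_in)
```
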
@@ -26,8 +26,8 @@ import (
	"os"
	"path/filepath"

	"github.com/tensorflow/tensorflow/tensorflow/go/op"
	tf "github.com/tensorflow/tensorflow/tensorflow/go"
	"github.com/tensorflow/tensorflow/tensorflow/go/op"
)

func Example() {

@@ -16,4 +16,3 @@
//go:generate go run ../genop/main.go -outfile wrappers.go

package op
@@ -2636,7 +2636,6 @@ cuda_py_tests(
        "training/proximal_gradient_descent_test.py",
        "training/queue_runner_test.py",
        "training/rmsprop_test.py",
        "training/saver_test.py",
        "training/slot_creator_test.py",
        "training/tensorboard_logging_test.py",
        "training/training_ops_test.py",

@@ -2678,6 +2677,41 @@ cuda_py_tests(
    ],
)

cuda_py_test(
    name = "saver_test",
    size = "medium",
    srcs = [
        "training/saver_test.py",
    ],
    additional_deps = [
        ":array_ops",
        ":client_testlib",
        ":control_flow_ops",
        ":data_flow_ops",
        ":data_flow_ops_gen",
        ":errors",
        ":gradients",
        ":math_ops",
        ":nn_grad",
        ":nn_ops",
        ":partitioned_variables",
        ":platform",
        ":platform_test",
        ":pywrap_tensorflow",
        ":random_ops",
        ":resource_variable_ops",
        ":sparse_ops",
        ":summary",
        ":training",
        ":util",
        ":variable_scope",
        ":variables",
        "//third_party/py/numpy",
        "@six_archive//:six",
        "//tensorflow/core:protos_all_py",
    ],
)

py_test(
    name = "saver_large_variable_test",
    size = "small",
@@ -456,8 +456,8 @@ def _make_specific_exception(node_def, op, message, error_code):

@contextlib.contextmanager
def raise_exception_on_not_ok_status():
  try:
  status = pywrap_tensorflow.TF_NewStatus()
  try:
    yield status
    if pywrap_tensorflow.TF_GetCode(status) != 0:
      raise _make_specific_exception(
@@ -770,6 +770,10 @@ class Defun(object):
  default graph. Because the addition of the function into the graph
  is deferred, the decorator can be used anywhere in the program.

  Definitions of functions are frozen in a graph as soon as the graph is used to
  create a session. Therefore, nodes using the function must be created in the
  graph before the corresponding session is created.

  Example, but also see the [How To on functions](link_needed).

  ```python
@@ -618,8 +618,8 @@ class OpDefLibrary(object):
        if input_arg.is_ref:
          if not all(x._is_ref_dtype for x in types):  # pylint: disable=protected-access
            raise TypeError(
                "Input '%s' of '%s' Op requires l-value input" %
                (input_name, op_type_name))
                ("'%s' Op requires that input '%s' be a mutable tensor " +
                 "(e.g.: a tf.Variable)") % (op_type_name, input_name))
          input_types.extend(types)
        else:
          input_types.extend(base_types)
@@ -1462,7 +1462,8 @@ class OpDefLibraryTest(test_util.TensorFlowTestCase):
    with self.assertRaises(TypeError) as cm:
      self._lib.apply_op("RefIn", a=2)
    self.assertEqual(str(cm.exception),
                     "Input 'a' of 'RefIn' Op requires l-value input")
                     "'RefIn' Op requires that input 'a' be a mutable tensor " +
                     "(e.g.: a tf.Variable)")

    input_a = self._lib.apply_op("RefOut", T=dtypes.int32, name="t")
    input_b = self._lib.apply_op("RefOut", T=dtypes.int32, name="u")
@@ -771,6 +771,12 @@ class PlaceholderWithDefaultTest(test.TestCase):
      self.assertAllEqual(
          [[3, 3], [3, 3]], a.eval(feed_dict={p: [[3, 3], [3, 3]]}))

  def testGradient(self):
    with self.test_session():
      x = array_ops.placeholder(dtypes_lib.float32, [5, 7])
      y = array_ops.placeholder_with_default(x, None)
      err = gradient_checker.compute_gradient_error(x, [5, 7], y, [5, 7])
      self.assertLess(err, 1e-3)

if __name__ == "__main__":
  test.main()
@@ -419,6 +419,12 @@ class VariablesTestCase(test.TestCase):

      self.assertAllClose(np.ones((5, 5), np.float32), var.eval())

  def testRepr(self):
    var = variables.Variable(np.zeros((5, 5), np.float32), name='noop')
    self.assertEqual(
        "<tf.Variable 'noop:0' shape=(5, 5) dtype=float32_ref>",
        repr(var))


class IsInitializedTest(test.TestCase):
@@ -146,9 +146,7 @@ class FileIO(object):

  def tell(self):
    """Returns the current position in the file."""
    if not self._read_check_passed:
      raise errors.PermissionDeniedError(None, None,
                                         "File isn't open for reading")
    self._preread_check()
    return self._read_buf.Tell()

  def __enter__(self):
@@ -354,6 +354,7 @@ class FileIoTest(test.TestCase):
    file_path = os.path.join(self._base_dir, "temp_file")
    with file_io.FileIO(file_path, mode="r+") as f:
      f.write("testing1\ntesting2\ntesting3\n\ntesting5")
      self.assertEqual(0, f.tell())
      self.assertEqual("testing1\n", f.readline())
      self.assertEqual(9, f.tell())
      self.assertEqual("testing2\n", f.readline())
@@ -382,6 +382,7 @@ def _CheckNumericsGrad(_, grad):
      grad, "Not a number (NaN) or infinity (Inf) values detected in gradient.")


@ops.RegisterGradient("PlaceholderWithDefault")
@ops.RegisterGradient("Identity")
def _IdGrad(_, grad):
  return grad
@@ -616,7 +616,7 @@ def strided_slice(input_,
  tf.strided_slice(input, [1, 0, 0], [2, 1, 3], [1, 1, 1]) ==> [[[3, 3, 3]]]
  tf.strided_slice(input, [1, 0, 0], [2, 2, 3], [1, 1, 1]) ==> [[[3, 3, 3],
                                                                 [4, 4, 4]]]
  tf.strided_slice(input, [1, 1, 0], [2, -1, 3], [1, -1, 1]) ==> [[[4, 4, 4],
  tf.strided_slice(input, [1, -1, 0], [2, -3, 3], [1, -1, 1]) ==> [[[4, 4, 4],
                                                                    [3, 3, 3]]]
  ```
@@ -196,8 +196,9 @@ class Variable(object):
          dtype=dtype,
          expected_shape=expected_shape)

  def __str__(self):
    return str(self._snapshot)
  def __repr__(self):
    return "<tf.Variable '%s' shape=%s dtype=%s>" % (
        self.name, self.get_shape(), self.dtype.name)

  def _init_from_args(self,
                      initial_value=None,
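
With `__str__` replaced by `__repr__` above, evaluating the variable object itself now shows its static metadata rather than its snapshot value. A quick illustration (the variable name and shape mirror the test added earlier in this change):

```python
import tensorflow as tf

v = tf.Variable(tf.zeros([5, 5]), name='noop')
print(repr(v))  # <tf.Variable 'noop:0' shape=(5, 5) dtype=float32_ref>
```
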
@@ -1027,7 +1027,7 @@ class SVStepCounterThread(coordinator.LooperThread):
    elapsed_time = current_time - self._last_time
    self._last_time = current_time
    # Reports the number of steps done per second
    steps_per_sec = added_steps / elapsed_time
    steps_per_sec = added_steps / elapsed_time if elapsed_time != 0. else float("inf")
    summary = Summary(value=[Summary.Value(tag=self._summary_tag,
                                           simple_value=steps_per_sec)])
    if self._sv.summary_writer:
@@ -267,6 +267,7 @@ class SyncReplicasOptimizerTest(test.TestCase):
    # Starts worker 1.
    thread_1.start()
    thread_1.join()
    thread_0.join()

    # The global step should now be 2 and the gradients should have been
    # applied again.
@@ -24,9 +24,10 @@ cc_library(
        "lib/gtl/*.h",
        "platform/**/*.h",
    ]),
    linkopts = [
        "-ldl",
    ],
    linkopts = select({
        "//tensorflow:freebsd": [],
        "//conditions:default": ["-ldl"],
    }),
    visibility = ["//visibility:public"],
    deps = [
        "//tensorflow/core:lib",

@@ -45,9 +46,10 @@ cc_library(
            exclude = ["cuda/cuda_platform_id.cc"],
        ),
    ),
    linkopts = [
        "-ldl",
    ],
    linkopts = select({
        "//tensorflow:freebsd": [],
        "//conditions:default": ["-ldl"],
    }),
    visibility = ["//visibility:public"],
    deps = [
        ":stream_executor",
@@ -922,7 +922,7 @@ class CudnnRnnParamsDescriptor : public CudnnDescriptorCommon<void> {
                                 const CudnnRnnDescriptor& rnn_desc);
  ~CudnnRnnParamsDescriptor() {
    cudnnStatus_t status = wrap::cudnnDestroyFilterDescriptor(parent_, handle_);
    CUDNN_RETURN_IF_FAIL(status, "Failed to destroy RNN filter desciptor");
    CUDNN_RETURN_IF_FAIL(status, "Failed to destroy RNN filter descriptor");
  }
  cudnnFilterDescriptor_t handle() const {
    if (!ok()) return nullptr;

@@ -1202,7 +1202,7 @@ class CudnnRnnSequenceTensorDescriptor
    // Only the first one needs to be destroyed. All others are the same.
    cudnnStatus_t status =
        wrap::cudnnDestroyTensorDescriptor(parent_, handles_[0]);
    CUDNN_RETURN_IF_FAIL(status, "Failed to destroy sequence tensor desciptor");
    CUDNN_RETURN_IF_FAIL(status, "Failed to destroy sequence tensor descriptor");
  }

  const cudnnTensorDescriptor_t* handles() const {
@@ -123,9 +123,13 @@ static mutex& GetRpathMutex() {
  port::Status s =
      port::Env::Default()->LoadLibrary(path_string.c_str(), dso_handle);
  if (!s.ok()) {
#if !defined(PLATFORM_WINDOWS)
    char* ld_library_path = getenv("LD_LIBRARY_PATH");
#endif
    LOG(INFO) << "Couldn't open CUDA library " << path
#if !defined(PLATFORM_WINDOWS)
              << ". LD_LIBRARY_PATH: " << getenv("LD_LIBRARY_PATH")
              << ". LD_LIBRARY_PATH: "
              << (ld_library_path != nullptr ? ld_library_path : "")
#endif
        ;
    return port::Status(port::error::FAILED_PRECONDITION,
@@ -41,7 +41,7 @@ bool RngSupport::CheckSeed(const uint8 *seed, uint64 seed_bytes) {
  return true;
}

#if defined(__APPLE__)
#if defined(__APPLE__) || defined(__FreeBSD__)
const int RngSupport::kMinSeedBytes;
const int RngSupport::kMaxSeedBytes;
#endif
@@ -16,7 +16,7 @@ Before running TensorBoard, make sure you have generated summary data in a log
directory by creating a summary writer:

``` python
# sess.graph_def is the graph definition; that enables the Graph Visualizer.
# sess.graph contains the graph definition; that enables the Graph Visualizer.

file_writer = tf.summary.FileWriter('/path/to/logs', sess.graph)
```
@@ -26,7 +26,7 @@ py_library(

py_test(
    name = "application_test",
    size = "small",
    size = "medium",
    srcs = ["application_test.py"],
    srcs_version = "PY2AND3",
    deps = [
@@ -65,7 +65,7 @@ filegroup(
        "@paper_toolbar//:paper_toolbar",
        "@paper_tooltip//:paper_tooltip",
        "@plottable//:plottable",
        "@polymer_archive//:polymer",
        "@polymer//:polymer",
        "@promise_polyfill//:promise_polyfill",
        "@three_js_orbitcontrols_js//file",
        "@three_js_three_min_js//file",
@@ -110,8 +110,9 @@ def tf_copts():
          "-Wno-sign-compare",
          "-fno-exceptions",] +
         if_cuda(["-DGOOGLE_CUDA=1"]) +
         if_mkl(["-DINTEL_MKL=1"]) +
         if_android_arm(["-mfpu=neon"]) +
         if_x86(["-msse4.1"]) +
         if_x86(["-msse3"]) +
         select({
             "//tensorflow:android": [
                 "-std=c++11",

@@ -481,7 +482,7 @@ def tf_cuda_library(deps=None, cuda_deps=None, copts=None, **kwargs):
          "//tensorflow/core:cuda",
          "@local_config_cuda//cuda:cuda_headers"
      ]),
      copts = copts + if_cuda(["-DGOOGLE_CUDA=1"]),
      copts = copts + if_cuda(["-DGOOGLE_CUDA=1"]) + if_mkl(["-DINTEL_MKL=1"]),
      **kwargs)

def tf_kernel_library(name, prefix=None, srcs=None, gpu_srcs=None, hdrs=None,
@@ -44,14 +44,13 @@ RUN cd ${ANDROID_DEV_HOME} && \
    echo y | android update sdk --no-ui -a --filter tools,platform-tools,android-${ANDROID_API_LEVEL},build-tools-${ANDROID_BUILD_TOOLS_VERSION}

# Install Android NDK.
ENV ANDROID_NDK_FILENAME android-ndk-r10e-linux-x86_64.bin
ENV ANDROID_NDK_URL http://dl.google.com/android/ndk/${ANDROID_NDK_FILENAME}
ENV ANDROID_NDK_FILENAME android-ndk-r12b-linux-x86_64.zip
ENV ANDROID_NDK_URL https://dl.google.com/android/repository/${ANDROID_NDK_FILENAME}
ENV ANDROID_NDK_HOME ${ANDROID_DEV_HOME}/ndk
ENV PATH ${PATH}:${ANDROID_NDK_HOME}
RUN cd ${ANDROID_DEV_HOME} && \
    wget -q ${ANDROID_NDK_URL} && \
    chmod +x ${ANDROID_NDK_FILENAME} && \
    ./${ANDROID_NDK_FILENAME} -o${ANDROID_DEV_HOME} && \
    unzip ${ANDROID_NDK_FILENAME} -d ${ANDROID_DEV_HOME} && \
    rm ${ANDROID_NDK_FILENAME} && \
    bash -c "ln -s ${ANDROID_DEV_HOME}/android-ndk-* ${ANDROID_NDK_HOME}"
@@ -11,7 +11,7 @@ It will print a list of errors it finds that it can't fix. You can also run
it on a directory tree:

```
tf_upgrade.py --intree coolcode -outtree coolcode-upgraded
tf_upgrade.py --intree coolcode --outtree coolcode-upgraded
```

In either case, it will also dump out a report e.g. which will detail changes

@@ -32,8 +32,8 @@ Renamed keyword argument from `squeeze_dims` to `axis`

## Caveats

- Don't update parts of your code manually before running this script. In
  particular, functions that have had reordered arguments like `tf.concat`,
  `tf.split` will cause the script to incorrectly add keyword arguments that
  particular, functions that have had reordered arguments like `tf.concat`
  or `tf.split` will cause the script to incorrectly add keyword arguments that
  mismap arguments.

- This script wouldn't actually reorder arguments. Instead, the script will add
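
For context on the `tf.concat` caveat above: its argument order changed in 1.0 so that the axis comes after the values. A minimal sketch of the two call styles (the tensor values are illustrative):

```python
import tensorflow as tf

t1 = tf.constant([[1, 2]])
t2 = tf.constant([[3, 4]])

# Pre-1.0 argument order (what the upgrade script expects to find):
# merged = tf.concat(0, [t1, t2])

# 1.0 argument order: values first, axis second.
merged = tf.concat([t1, t2], 0)
```
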
@@ -17,7 +17,7 @@
#
# To build the image, use ../build_server.sh

FROM ubuntu:14.04
FROM ubuntu:16.04

MAINTAINER Shanqing Cai <cais@google.com>

@@ -17,7 +17,7 @@
#
# To build the image, use ../build_server.sh --test

FROM ubuntu:14.04
FROM ubuntu:16.04

MAINTAINER Shanqing Cai <cais@google.com>
@@ -1,4 +1,4 @@
FROM ubuntu:14.04
FROM ubuntu:16.04

MAINTAINER Craig Citro <craigcitro@google.com>

@@ -1,4 +1,4 @@
FROM ubuntu:14.04
FROM ubuntu:16.04

MAINTAINER Craig Citro <craigcitro@google.com>

@@ -1,4 +1,4 @@
FROM nvidia/cuda:8.0-cudnn5-devel
FROM nvidia/cuda:8.0-cudnn5-devel-ubuntu16.04

MAINTAINER Craig Citro <craigcitro@google.com>

@@ -1,4 +1,4 @@
FROM nvidia/cuda:8.0-cudnn5-devel
FROM nvidia/cuda:8.0-cudnn5-devel-ubuntu16.04

MAINTAINER Craig Citro <craigcitro@google.com>
@@ -29,7 +29,7 @@ from setuptools.dist import Distribution
# This version string is semver compatible, but incompatible with pip.
# For pip, we will remove all '-' characters from this string, and use the
# result for pip.
_VERSION = '1.0.0-rc1'
_VERSION = '1.0.0-rc2'

REQUIRED_PACKAGES = [
    'numpy >= 1.11.0',
@@ -36,11 +36,17 @@ def check_version(bazel_version):
        native.bazel_version, bazel_version))
  pass

def _repos_are_siblings():
  return Label("@foo//bar").workspace_root.startswith("../")

# Temporary workaround to support including TensorFlow as a submodule until this
# use-case is supported in the next Bazel release.
def _temp_workaround_http_archive_impl(repo_ctx):
   repo_ctx.template("BUILD", repo_ctx.attr.build_file,
                     {"%ws%": repo_ctx.attr.repository}, False)
                     {
                         "%prefix%" : ".." if _repos_are_siblings() else "external",
                         "%ws%": repo_ctx.attr.repository
                     }, False)
   repo_ctx.download_and_extract(repo_ctx.attr.urls, "", repo_ctx.attr.sha256,
                                 "", repo_ctx.attr.strip_prefix)

@@ -214,11 +220,11 @@ def tf_workspace(path_prefix = "", tf_repo_name = ""):
  native.http_archive(
      name = "protobuf",
      urls = [
          "http://bazel-mirror.storage.googleapis.com/github.com/google/protobuf/archive/008b5a228b37c054f46ba478ccafa5e855cb16db.tar.gz",
          "https://github.com/google/protobuf/archive/008b5a228b37c054f46ba478ccafa5e855cb16db.tar.gz",
          "http://bazel-mirror.storage.googleapis.com/github.com/google/protobuf/archive/ef927cc428db7bf41d3a593a16a8f1a0fe6306c5.tar.gz",
          "https://github.com/google/protobuf/archive/ef927cc428db7bf41d3a593a16a8f1a0fe6306c5.tar.gz",
      ],
      sha256 = "2737ad055eb8a9bc63ed068e32c4ea280b62d8236578cb4d4120eb5543f759ab",
      strip_prefix = "protobuf-008b5a228b37c054f46ba478ccafa5e855cb16db",
      sha256 = "8813a4ab27f7c61565d0db17d69236b4ec0b1404371efc728f15079b85e457ca",
      strip_prefix = "protobuf-ef927cc428db7bf41d3a593a16a8f1a0fe6306c5",
  )

  native.new_http_archive(

@@ -270,7 +276,7 @@ def tf_workspace(path_prefix = "", tf_repo_name = ""):
      build_file = str(Label("//third_party:swig.BUILD")),
  )

  native.new_http_archive(
  temp_workaround_http_archive(
      name = "curl",
      sha256 = "ff3e80c1ca6a068428726cd7dd19037a47cc538ce58ef61c59587191039b2ca6",
      urls = [

@@ -401,9 +407,9 @@ def tf_workspace(path_prefix = "", tf_repo_name = ""):

  native.new_http_archive(
      name = "nccl_archive",
      url = "https://github.com/NVIDIA/nccl/archive/2a974f5ca2aa12b178046b2206b43f1fd69d9fae.tar.gz",
      sha256 = "d6aa1a3f20ae85358890d9a96f49c51a75baa1d3af3598501f29ff9ef8a3107d",
      strip_prefix = "nccl-2a974f5ca2aa12b178046b2206b43f1fd69d9fae",
      url = "https://github.com/nvidia/nccl/archive/024d1e267845f2ed06f3e2e42476d50f04a00ee6.tar.gz",
      sha256 = "6787f0eed88d52ee8e32956fa4947d92c139da469f1d8e311c307f27d641118e",
      strip_prefix = "nccl-024d1e267845f2ed06f3e2e42476d50f04a00ee6",
      build_file = str(Label("//third_party:nccl.BUILD")),
  )
third_party/curl.BUILD
@@ -232,7 +232,7 @@ cc_library(
    ],
    copts = select({
        ":windows": [
            "/Iexternal/curl/lib",
            "/I%prefix%/curl/lib",
            "/DHAVE_CONFIG_H",
            "/DCURL_DISABLE_FTP",
            "/DCURL_DISABLE_NTLM",

@@ -245,7 +245,7 @@ cc_library(
            "/D_USING_V110_SDK71_",
        ],
        "//conditions:default": [
            "-Iexternal/curl/lib",
            "-I%prefix%/curl/lib",
            "-D_GNU_SOURCE",
            "-DHAVE_CONFIG_H",
            "-DCURL_DISABLE_FTP",

@@ -387,12 +387,12 @@ cc_binary(
    ],
    copts = select({
        ":windows": [
            "/Iexternal/curl/lib",
            "/I%prefix%/curl/lib",
            "/DHAVE_CONFIG_H",
            "/DCURL_DISABLE_LIBCURL_OPTION",
        ],
        "//conditions:default": [
            "-Iexternal/curl/lib",
            "-I%prefix%/curl/lib",
            "-D_GNU_SOURCE",
            "-DHAVE_CONFIG_H",
            "-DCURL_DISABLE_LIBCURL_OPTION",
third_party/eigen3/BUILD
@@ -11,6 +11,10 @@ licenses([

exports_files(["LICENSE"])

# INTEL_MKL start
load("//tensorflow:tensorflow.bzl", "if_mkl")

# INTEL_MKL end
load("//tensorflow:tensorflow.bzl", "if_mkl")

cc_library(
third_party/gpus/cuda/BUILD.tpl
@@ -33,6 +33,12 @@ config_setting(
    visibility = ["//visibility:public"],
)

config_setting(
    name = "freebsd",
    values = {"cpu": "freebsd"},
    visibility = ["//visibility:public"],
)

cc_library(
    name = "cuda_headers",
    hdrs = glob([

@@ -49,8 +55,10 @@ cc_library(
    name = "cudart_static",
    srcs = ["lib/%{cudart_static_lib}"],
    includes = ["include/"],
    linkopts = [
        "-ldl",
    linkopts = select({
        ":freebsd": [],
        "//conditions:default": ["-ldl"],
    }) + [
        "-lpthread",
        %{cudart_static_linkopt}
    ],
third_party/gpus/cuda_configure.bzl
@@ -368,7 +368,7 @@ def _lib_name(lib, cpu_value, version="", static=False):

  Returns:
    The platform-specific name of the library.
  """
  if cpu_value == "Linux":
  if cpu_value in ("Linux", "FreeBSD"):
    if static:
      return "lib%s.a" % lib
    else:
third_party/mkl/build_defs.bzl
@@ -2,8 +2,10 @@

def if_mkl(if_true, if_false = []):
    """Shorthand for select()'ing on whether we're building with MKL.

    Returns a select statement which evaluates to if_true if we're building
    with MKL enabled. Otherwise, the select statement evaluates to if_false.

    """
    return select({
        "//third_party/mkl:using_mkl": if_true,
third_party/sycl/crosstool/computecpp.tpl
@@ -43,9 +43,9 @@ def main():
  bc_out = filename + '.sycl'

  # strip asan for the device
  computecpp_device_compiler_flags = [flag for flag in compiler_flags if not flag.startswith(('-fsanitize'))]
  computecpp_device_compiler_flags = ['-sycl-compress-name', '-DTENSORFLOW_USE_SYCL', '-Wno-unused-variable', '-I', COMPUTECPP_INCLUDE, '-isystem',
      COMPUTECPP_INCLUDE, '-std=c++11', '-sycl', '-emit-llvm', '-no-serial-memop', '-Xclang', '-cl-denorms-are-zero', '-Xclang', '-cl-fp32-correctly-rounded-divide-sqrt'] + computecpp_device_compiler_flags
      COMPUTECPP_INCLUDE, '-std=c++11', '-sycl', '-emit-llvm', '-no-serial-memop', '-Xclang', '-cl-denorms-are-zero', '-Xclang', '-cl-fp32-correctly-rounded-divide-sqrt']
  computecpp_device_compiler_flags += [flag for flag in compiler_flags if not flag.startswith(('-fsanitize'))]

  x = subprocess.call([COMPUTECPP_DRIVER] + computecpp_device_compiler_flags )
  if(x == 0):