From 3e932b668daf1534679811c58b83478873bb9573 Mon Sep 17 00:00:00 2001
From: Amit Patankar <amitpatankar@google.com>
Date: Thu, 1 Feb 2018 09:56:43 -0800
Subject: [PATCH 01/34] Adding Release notes for r1.6.

---
 RELEASE.md | 78 ++++++++++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 73 insertions(+), 5 deletions(-)

diff --git a/RELEASE.md b/RELEASE.md
index af6440acef5..728a8400021 100644
--- a/RELEASE.md
+++ b/RELEASE.md
@@ -1,11 +1,43 @@
-# Release 1.5.0
+# Release 1.6.0

 ## Breaking Changes
 * Prebuilt binaries are now built against CUDA 9.0 and cuDNN 7.
-* Our Linux binaries are built using ubuntu 16 containers, potentially
-  introducing glibc incompatibility issues with ubuntu 14.
-* Starting from 1.6 release, our prebuilt binaries will use AVX instructions.
-  This may break TF on older CPUs.
+* Prebuilt binaries will use AVX instructions. This may break TF on older CPUs.
+
+## Major Features And Improvements
+* New Optimizer internal API for non-slot variables. Descendants of AdamOptimizer that access _beta[12]_power will need to be updated.
+* `tf.estimator.{FinalExporter,LatestExporter}` now export stripped SavedModels. This improves forward compatibility of the SavedModel.
+* FFT support added to XLA CPU/GPU.
+
+## Bug Fixes and Other Changes
+* Documentation updates:
+  * Added a second version of Getting Started, which is aimed at ML newcomers.
+  * Clarified documentation on the `resize_images.align_corners` parameter.
+  * Additional documentation for TPUs.
+* Google Cloud Storage (GCS):
+  * Add client-side throttle.
+  * Add a `FlushCaches()` method to the FileSystem interface, with an implementation for GcsFileSystem.
+* Other:
+  * Add `tf.contrib.distributions.Kumaraswamy`.
+  * `RetryingFileSystem::FlushCaches()` calls the base FileSystem's `FlushCaches()`.
+  * Add `auto_correlation` to distributions.
+  * Add `tf.contrib.distributions.Autoregressive`.
+  * Add SeparableConv1D layer.
+  * Add convolutional Flipout layers.
+  * When both inputs of `tf.matmul` are bfloat16, it returns bfloat16, instead of float32.
+  * Added `tf.contrib.image.connected_components`.
+  * Add `tf.contrib.framework.CriticalSection` that allows atomic variable access.
+  * Output variance over tree predictions for classification tasks.
+  * For `pt` and `eval` commands, allow writing tensor values to the filesystem as numpy files.
+  * gRPC: Propagate truncated errors (instead of returning gRPC internal error).
+  * Augment parallel_interleave to support 2 kinds of prefetching.
+  * Improved XLA support for C64-related ops: log, pow, atan2, tanh.
+  * Add probabilistic convolutional layers.
+
+## API Changes
+* Introduced a `prepare_variance` boolean that defaults to `False` for backward compatibility.
+* Move `layers_dense_variational_impl.py` to `layers_dense_variational.py`.

 ## Known Bugs
 * Using XLA:GPU with CUDA 9 and CUDA 9.1 results in garbage results and/or
@@ -28,6 +60,42 @@
 TensorFlow will print a warning if you use XLA:GPU with a known-bad version of
 CUDA; see e00ba24c4038e7644da417ddc639169b6ea59122.
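A minimal sketch of the "disable XLA:GPU" workaround mentioned above (assuming
the standard TF 1.x session API; XLA:GPU is opt-in, so it stays off unless JIT
compilation is explicitly enabled):

```python
import tensorflow as tf

config = tf.ConfigProto()
# Leave the global JIT level OFF (the default) rather than ON_1/ON_2,
# so no computations are compiled with XLA:GPU.
config.graph_options.optimizer_options.global_jit_level = (
    tf.OptimizerOptions.OFF)

with tf.Session(config=config) as sess:
    pass  # run the model as usual
```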
+## Thanks to our Contributors + +This release contains contributions from many people at Google, as well as: + +4d55397500, Ag Ramesh, Aiden Scandella, Akimasa Kimura, Alex Rothberg, Allen Goodman, +amilioto, Andrei Costinescu, Andrei Nigmatulin, Anjum Sayed, Anthony Platanios, +Anush Elangovan, Armando Fandango, Ashish Kumar Ram, Ashwini Shukla, Ben, Bhavani Subramanian, +Brett Koonce, Carl Thomé, cclauss, Cesc, Changming Sun, Christoph Boeddeker, Clayne Robison, +Clemens Schulz, Clint (Woonhyuk Baek), codrut3, Cole Gerdemann, Colin Raffel, Daniel Trebbien, +Daniel Ylitalo, Daniel Zhang, Daniyar, Darjan Salaj, Dave Maclachlan, David Norman, Dong--Jian, +dongsamb, dssgsra, Edward H, eladweiss, elilienstein, Eric Lilienstein, error.d, Eunji Jeong, fanlu, +Florian Courtial, fo40225, Fred, Gregg Helt, Guozhong Zhuang, Hanchen Li, hsm207, hyunyoung2, +ImSheridan, Ishant Mrinal Haloi, Jacky Ko, Jay Young, Jean Flaherty, Jerome, JerrikEph, Jesse +Kinkead, jfaath, Jian Lin, jinghuangintel, Jiongyan Zhang, Joel Hestness, Joel Shor, Johnny Chan, +Julian Niedermeier, Julian Wolff, JxKing, K-W-W, Karl Lessard, Kasper Marstal, Keiji Ariyama, +Koan-Sin Tan, Loki Der Quaeler, Loo Rong Jie, Luke Schaefer, Lynn Jackson, ManHyuk, Matt Basta, +Matt Smith, Matthew Schulkind, Michael, michaelkhan3, Miguel Piedrafita, Mikalai Drabovich, +Mike Knapp, mjwen, mktozk, Mohamed Aly, Mohammad Ashraf Bhuiyan, Myungjoo Ham, Naman Bhalla, +Namrata-Ibm, Nathan Luehr, nathansilberman, Netzeband, Niranjan Hasabnis, Omar Aflak, Ozge +Yalcinkaya, Parth P Panchal, patrickzzy, Patryk Chrabaszcz, Paul Van Eck, Paweł Kapica, Peng Yu, +Philip Yang, Pierre Blondeau, Po-Hsien Chu, powderluv, Puyu Wang, Rajendra Arora, Rasmus, Renat +Idrisov, resec, Robin Richtsfeld, Ronald Eddy Jr, Sahil Singh, Sam Matzek, Sami Kama, sandipmgiri, +Santiago Castro, Sayed Hadi Hashemi, Scott Tseng, Sergii Khomenko, Shahid, Shengpeng Liu, Shreyash +Sharma, Shrinidhi Kl, Simone Cirillo, simsicon, Stanislav Levental, starsblinking, Stephen Lumenta, +Steven Hickson, Su Tang, Taehoon Lee, Takuya Wakisaka, Ted Chang, Ted Ying, Tijmen Verhulsdonck, +Timofey Kondrashov, vade, vaibhav, Valentin Khrulkov, vchigrin, Victor Costan, Viraj Navkal, +Vivek Rane, wagonhelm, Yan Facai (颜发才), Yanbo Liang, Yaroslav Bulatov, yegord, Yong Tang, +Yoni Tsafir, yordun, Yuan (Terry) Tang, Yuxin Wu, zhengdi, Zhengsheng Wei, 田传武 + +# Release 1.5.0 + +## Breaking Changes +* Prebuilt binaries are now built against CUDA 9.0 and cuDNN 7. +* Starting from 1.6 release, our prebuilt binaries will use AVX instructions. + This may break TF on older CPUs. + ## Major Features And Improvements * [Eager execution](https://github.com/tensorflow/tensorflow/tree/r1.5/tensorflow/contrib/eager) preview version is now available. From 68c52b926eb8cba2b43a37cde4a9658654427e70 Mon Sep 17 00:00:00 2001 From: Amit Patankar <amitpatankar@google.com> Date: Thu, 1 Feb 2018 10:27:29 -0800 Subject: [PATCH 02/34] Adding the known bugs to 1.5 as well. --- RELEASE.md | 21 +++++++++++++++++++++ 1 file changed, 21 insertions(+) diff --git a/RELEASE.md b/RELEASE.md index 728a8400021..0fad3b5d41c 100644 --- a/RELEASE.md +++ b/RELEASE.md @@ -214,6 +214,27 @@ Yoni Tsafir, yordun, Yuan (Terry) Tang, Yuxin Wu, zhengdi, Zhengsheng Wei, 田 * Minor refactor: move stats files from `stochastic` to `common` and remove `stochastic`. +## Known Bugs +* Using XLA:GPU with CUDA 9 and CUDA 9.1 results in garbage results and/or + `CUDA_ILLEGAL_ADDRESS` failures. 
+ + Google discovered in mid-December 2017 that the PTX-to-SASS compiler in CUDA 9 + and CUDA 9.1 sometimes does not properly compute the carry bit when + decomposing 64-bit address calculations with large offsets (e.g. `load [x + + large_constant]`) into 32-bit arithmetic in SASS. + + As a result, these versions of `ptxas` miscompile most XLA programs which use + more than 4GB of temp memory. This results in garbage results and/or + `CUDA_ERROR_ILLEGAL_ADDRESS` failures. + + A fix in CUDA 9.1.121 is expected in late February 2018. We do not expect a + fix for CUDA 9.0.x. Until the fix is available, the only workaround is to + [downgrade](https://developer.nvidia.com/cuda-toolkit-archive) to CUDA 8.0.x + or disable XLA:GPU. + + TensorFlow will print a warning if you use XLA:GPU with a known-bad version of + CUDA; see e00ba24c4038e7644da417ddc639169b6ea59122. + ## Thanks to our Contributors This release contains contributions from many people at Google, as well as: From c63fb0841bc020ddee2a4eba337eca1cc49c2eff Mon Sep 17 00:00:00 2001 From: Amit Patankar <amitpatankar@google.com> Date: Fri, 2 Feb 2018 11:14:18 -0800 Subject: [PATCH 03/34] Update version to 1.6.0-rc0. (#16701) * Updating the version to 1.6.0-rc0. * Removing the escape character. * Updating the cloud tpu version. --- .../eager/python/examples/mnist/mnist.py | 2 +- .../contrib/tpu/profiler/pip_package/setup.py | 2 +- tensorflow/core/public/version.h | 4 ++-- tensorflow/docs_src/install/install_c.md | 2 +- tensorflow/docs_src/install/install_go.md | 2 +- tensorflow/docs_src/install/install_java.md | 22 +++++++++---------- tensorflow/docs_src/install/install_linux.md | 22 +++++++++---------- tensorflow/docs_src/install/install_mac.md | 10 ++++----- .../docs_src/install/install_sources.md | 10 ++++++--- tensorflow/tools/docker/Dockerfile.devel | 2 +- .../tools/docker/Dockerfile.devel-cpu-mkl | 2 +- tensorflow/tools/docker/Dockerfile.devel-gpu | 2 +- tensorflow/tools/pip_package/setup.py | 2 +- 13 files changed, 44 insertions(+), 40 deletions(-) diff --git a/tensorflow/contrib/eager/python/examples/mnist/mnist.py b/tensorflow/contrib/eager/python/examples/mnist/mnist.py index ed7dbc89046..772f59562ba 100644 --- a/tensorflow/contrib/eager/python/examples/mnist/mnist.py +++ b/tensorflow/contrib/eager/python/examples/mnist/mnist.py @@ -39,7 +39,7 @@ class MNISTModel(tfe.Network): """MNIST Network. Network structure is equivalent to: - https://github.com/tensorflow/tensorflow/blob/r1.5/tensorflow/examples/tutorials/mnist/mnist_deep.py + https://github.com/tensorflow/tensorflow/blob/r1.6/tensorflow/examples/tutorials/mnist/mnist_deep.py and https://github.com/tensorflow/models/blob/master/tutorials/image/mnist/convolutional.py diff --git a/tensorflow/contrib/tpu/profiler/pip_package/setup.py b/tensorflow/contrib/tpu/profiler/pip_package/setup.py index 3dffebe6680..cb619847990 100644 --- a/tensorflow/contrib/tpu/profiler/pip_package/setup.py +++ b/tensorflow/contrib/tpu/profiler/pip_package/setup.py @@ -20,7 +20,7 @@ from __future__ import print_function from setuptools import setup -_VERSION = '1.5.0-rc1' +_VERSION = '1.6.0-rc0' CONSOLE_SCRIPTS = [ 'capture_tpu_profile=cloud_tpu_profiler.main:run_main', diff --git a/tensorflow/core/public/version.h b/tensorflow/core/public/version.h index b02f899b87d..50bfa912678 100644 --- a/tensorflow/core/public/version.h +++ b/tensorflow/core/public/version.h @@ -19,12 +19,12 @@ limitations under the License. // TensorFlow uses semantic versioning, see http://semver.org/. 
#define TF_MAJOR_VERSION 1 -#define TF_MINOR_VERSION 5 +#define TF_MINOR_VERSION 6 #define TF_PATCH_VERSION 0 // TF_VERSION_SUFFIX is non-empty for pre-releases (e.g. "-alpha", "-alpha.1", // "-beta", "-rc", "-rc.1") -#define TF_VERSION_SUFFIX "" +#define TF_VERSION_SUFFIX "-rc0" #define TF_STR_HELPER(x) #x #define TF_STR(x) TF_STR_HELPER(x) diff --git a/tensorflow/docs_src/install/install_c.md b/tensorflow/docs_src/install/install_c.md index 14add7c77e6..a783205b4a2 100644 --- a/tensorflow/docs_src/install/install_c.md +++ b/tensorflow/docs_src/install/install_c.md @@ -38,7 +38,7 @@ enable TensorFlow for C: OS="linux" # Change to "darwin" for macOS TARGET_DIRECTORY="/usr/local" curl -L \ - "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-${OS}-x86_64-1.5.0.tar.gz" | + "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-${OS}-x86_64-1.6.0-rc0.tar.gz" | sudo tar -C $TARGET_DIRECTORY -xz The `tar` command extracts the TensorFlow C library into the `lib` diff --git a/tensorflow/docs_src/install/install_go.md b/tensorflow/docs_src/install/install_go.md index d2af9d98434..5249e04615b 100644 --- a/tensorflow/docs_src/install/install_go.md +++ b/tensorflow/docs_src/install/install_go.md @@ -38,7 +38,7 @@ steps to install this library and enable TensorFlow for Go: TF_TYPE="cpu" # Change to "gpu" for GPU support TARGET_DIRECTORY='/usr/local' curl -L \ - "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-$(go env GOOS)-x86_64-1.5.0.tar.gz" | + "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-$(go env GOOS)-x86_64-1.6.0-rc0.tar.gz" | sudo tar -C $TARGET_DIRECTORY -xz The `tar` command extracts the TensorFlow C library into the `lib` diff --git a/tensorflow/docs_src/install/install_java.md b/tensorflow/docs_src/install/install_java.md index e5388c4b1e0..0c6c773e624 100644 --- a/tensorflow/docs_src/install/install_java.md +++ b/tensorflow/docs_src/install/install_java.md @@ -36,7 +36,7 @@ following to the project's `pom.xml` to use the TensorFlow Java APIs: <dependency> <groupId>org.tensorflow</groupId> <artifactId>tensorflow</artifactId> - <version>1.5.0</version> + <version>1.6.0-rc0</version> </dependency> ``` @@ -65,7 +65,7 @@ As an example, these steps will create a Maven project that uses TensorFlow: <dependency> <groupId>org.tensorflow</groupId> <artifactId>tensorflow</artifactId> - <version>1.5.0</version> + <version>1.6.0-rc0</version> </dependency> </dependencies> </project> @@ -123,12 +123,12 @@ instead: <dependency> <groupId>org.tensorflow</groupId> <artifactId>libtensorflow</artifactId> - <version>1.5.0</version> + <version>1.6.0-rc0</version> </dependency> <dependency> <groupId>org.tensorflow</groupId> <artifactId>libtensorflow_jni_gpu</artifactId> - <version>1.5.0</version> + <version>1.6.0-rc0</version> </dependency> ``` @@ -147,7 +147,7 @@ refer to the simpler instructions above instead. Take the following steps to install TensorFlow for Java on Linux or macOS: 1. Download - [libtensorflow.jar](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-1.5.0.jar), + [libtensorflow.jar](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-1.6.0-rc0.jar), which is the TensorFlow Java Archive (JAR). 2. 
Decide whether you will run TensorFlow for Java on CPU(s) only or with @@ -166,7 +166,7 @@ Take the following steps to install TensorFlow for Java on Linux or macOS: OS=$(uname -s | tr '[:upper:]' '[:lower:]') mkdir -p ./jni curl -L \ - "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-${TF_TYPE}-${OS}-x86_64-1.5.0.tar.gz" | + "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-${TF_TYPE}-${OS}-x86_64-1.6.0-rc0.tar.gz" | tar -xz -C ./jni ### Install on Windows @@ -174,10 +174,10 @@ Take the following steps to install TensorFlow for Java on Linux or macOS: Take the following steps to install TensorFlow for Java on Windows: 1. Download - [libtensorflow.jar](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-1.5.0.jar), + [libtensorflow.jar](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-1.6.0-rc0.jar), which is the TensorFlow Java Archive (JAR). 2. Download the following Java Native Interface (JNI) file appropriate for - [TensorFlow for Java on Windows](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-cpu-windows-x86_64-1.5.0.zip). + [TensorFlow for Java on Windows](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-cpu-windows-x86_64-1.6.0-rc0.zip). 3. Extract this .zip file. @@ -225,7 +225,7 @@ must be part of your `classpath`. For example, you can include the downloaded `.jar` in your `classpath` by using the `-cp` compilation flag as follows: -<pre><b>javac -cp libtensorflow-1.5.0.jar HelloTF.java</b></pre> +<pre><b>javac -cp libtensorflow-1.6.0-rc0.jar HelloTF.java</b></pre> ### Running @@ -239,11 +239,11 @@ two files are available to the JVM: For example, the following command line executes the `HelloTF` program on Linux and macOS X: -<pre><b>java -cp libtensorflow-1.5.0.jar:. -Djava.library.path=./jni HelloTF</b></pre> +<pre><b>java -cp libtensorflow-1.6.0-rc0.jar:. -Djava.library.path=./jni HelloTF</b></pre> And the following command line executes the `HelloTF` program on Windows: -<pre><b>java -cp libtensorflow-1.5.0.jar;. -Djava.library.path=jni HelloTF</b></pre> +<pre><b>java -cp libtensorflow-1.6.0-rc0.jar;. -Djava.library.path=jni HelloTF</b></pre> If the program prints <tt>Hello from <i>version</i></tt>, you've successfully installed TensorFlow for Java and are ready to use the API. If the program diff --git a/tensorflow/docs_src/install/install_linux.md b/tensorflow/docs_src/install/install_linux.md index cd8c14599fb..105b2251773 100644 --- a/tensorflow/docs_src/install/install_linux.md +++ b/tensorflow/docs_src/install/install_linux.md @@ -188,7 +188,7 @@ Take the following steps to install TensorFlow with Virtualenv: Virtualenv environment: <pre>(tensorflow)$ <b>pip3 install --upgrade \ - https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.5.0-cp34-cp34m-linux_x86_64.whl</b></pre> + https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.6.0rc0-cp34-cp34m-linux_x86_64.whl</b></pre> If you encounter installation problems, see [Common Installation Problems](#common_installation_problems). 
@@ -293,7 +293,7 @@ take the following steps: <pre> $ <b>sudo pip3 install --upgrade \ - https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.5.0-cp34-cp34m-linux_x86_64.whl</b> + https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.6.0rc0-cp34-cp34m-linux_x86_64.whl</b> </pre> If this step fails, see @@ -480,7 +480,7 @@ Take the following steps to install TensorFlow in an Anaconda environment: <pre> (tensorflow)$ <b>pip install --ignore-installed --upgrade \ - https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.5.0-cp34-cp34m-linux_x86_64.whl</b></pre> + https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.6.0rc0-cp34-cp34m-linux_x86_64.whl</b></pre> <a name="ValidateYourInstallation"></a> @@ -648,14 +648,14 @@ This section documents the relevant values for Linux installations. CPU only: <pre> -https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.5.0-cp27-none-linux_x86_64.whl +https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.6.0rc0-cp27-none-linux_x86_64.whl </pre> GPU support: <pre> -https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.5.0-cp27-none-linux_x86_64.whl +https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.6.0rc0-cp27-none-linux_x86_64.whl </pre> Note that GPU support requires the NVIDIA hardware and software described in @@ -667,14 +667,14 @@ Note that GPU support requires the NVIDIA hardware and software described in CPU only: <pre> -https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.5.0-cp34-cp34m-linux_x86_64.whl +https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.6.0rc0-cp34-cp34m-linux_x86_64.whl </pre> GPU support: <pre> -https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.5.0-cp34-cp34m-linux_x86_64.whl +https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.6.0rc0-cp34-cp34m-linux_x86_64.whl </pre> Note that GPU support requires the NVIDIA hardware and software described in @@ -686,14 +686,14 @@ Note that GPU support requires the NVIDIA hardware and software described in CPU only: <pre> -https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.5.0-cp35-cp35m-linux_x86_64.whl +https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.6.0rc0-cp35-cp35m-linux_x86_64.whl </pre> GPU support: <pre> -https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.5.0-cp35-cp35m-linux_x86_64.whl +https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.6.0rc0-cp35-cp35m-linux_x86_64.whl </pre> @@ -705,14 +705,14 @@ Note that GPU support requires the NVIDIA hardware and software described in CPU only: <pre> -https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.5.0-cp36-cp36m-linux_x86_64.whl +https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.6.0rc0-cp36-cp36m-linux_x86_64.whl </pre> GPU support: <pre> -https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.5.0-cp36-cp36m-linux_x86_64.whl +https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.6.0rc0-cp36-cp36m-linux_x86_64.whl </pre> diff --git a/tensorflow/docs_src/install/install_mac.md b/tensorflow/docs_src/install/install_mac.md index f49d3a2f085..a6ea548cfbd 100644 --- a/tensorflow/docs_src/install/install_mac.md +++ b/tensorflow/docs_src/install/install_mac.md @@ -115,7 +115,7 @@ Take the following steps to install TensorFlow with Virtualenv: TensorFlow in the active Virtualenv is as follows: <pre> $ <b>pip3 install --upgrade \ - 
https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.5.0-py3-none-any.whl</b></pre> + https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.6.0rc0-py3-none-any.whl</b></pre> If you encounter installation problems, see [Common Installation Problems](#common-installation-problems). @@ -238,7 +238,7 @@ take the following steps: issue the following command: <pre> $ <b>sudo pip3 install --upgrade \ - https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.5.0-py3-none-any.whl</b> </pre> + https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.6.0rc0-py3-none-any.whl</b> </pre> If the preceding command fails, see [installation problems](#common-installation-problems). @@ -347,7 +347,7 @@ Take the following steps to install TensorFlow in an Anaconda environment: TensorFlow for Python 2.7: <pre> (<i>targetDirectory</i>)$ <b>pip install --ignore-installed --upgrade \ - https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.5.0-py2-none-any.whl</b></pre> + https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.6.0rc0-py2-none-any.whl</b></pre> <a name="ValidateYourInstallation"></a> @@ -520,7 +520,7 @@ This section documents the relevant values for Mac OS installations. <pre> -https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.5.0-py2-none-any.whl +https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.6.0rc0-py2-none-any.whl </pre> @@ -528,5 +528,5 @@ https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.5.0-py2-none-any. <pre> -https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.5.0-py3-none-any.whl +https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.6.0rc0-py3-none-any.whl </pre> diff --git a/tensorflow/docs_src/install/install_sources.md b/tensorflow/docs_src/install/install_sources.md index bc7d2080dc6..36dffd85dce 100644 --- a/tensorflow/docs_src/install/install_sources.md +++ b/tensorflow/docs_src/install/install_sources.md @@ -359,10 +359,10 @@ Invoke `pip install` to install that pip package. The filename of the `.whl` file depends on your platform. For example, the following command will install the pip package -for TensorFlow 1.5.0 on Linux: +for TensorFlow 1.6.0rc0 on Linux: <pre> -$ <b>sudo pip install /tmp/tensorflow_pkg/tensorflow-1.5.0-py2-none-any.whl</b> +$ <b>sudo pip install /tmp/tensorflow_pkg/tensorflow-1.6.0rc0-py2-none-any.whl</b> </pre> ## Validate your installation @@ -460,7 +460,8 @@ Stack Overflow and specify the `tensorflow` tag. **Linux** <table> <tr><th>Version:</th><th>CPU/GPU:</th><th>Python Version:</th><th>Compiler:</th><th>Build Tools:</th><th>cuDNN:</th><th>CUDA:</th></tr> - +<tr><td>tensorflow-1.6.0rc0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>GCC 4.8</td><td>Bazel 0.9.0</td><td>N/A</td><td>N/A</td></tr> +<tr><td>tensorflow_gpu-1.6.0rc0</td><td>GPU</td><td>2.7, 3.3-3.6</td><td>GCC 4.8</td><td>Bazel 0.9.0</td><td>7</td><td>9</td></tr> <tr><td>tensorflow-1.5.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>GCC 4.8</td><td>Bazel 0.8.0</td><td>N/A</td><td>N/A</td></tr> <tr><td>tensorflow_gpu-1.5.0</td><td>GPU</td><td>2.7, 3.3-3.6</td><td>GCC 4.8</td><td>Bazel 0.8.0</td><td>7</td><td>9</td></tr> <tr><td>tensorflow-1.4.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>GCC 4.8</td><td>Bazel 0.5.4</td><td>N/A</td><td>N/A</td></tr> @@ -478,6 +479,7 @@ Stack Overflow and specify the `tensorflow` tag. 
**Mac** <table> <tr><th>Version:</th><th>CPU/GPU:</th><th>Python Version:</th><th>Compiler:</th><th>Build Tools:</th><th>cuDNN:</th><th>CUDA:</th></tr> +<tr><td>tensorflow-1.6.0rc0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>Clang from xcode</td><td>Bazel 0.8.1</td><td>N/A</td><td>N/A</td></tr> <tr><td>tensorflow-1.5.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>Clang from xcode</td><td>Bazel 0.8.1</td><td>N/A</td><td>N/A</td></tr> <tr><td>tensorflow-1.4.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>Clang from xcode</td><td>Bazel 0.5.4</td><td>N/A</td><td>N/A</td></tr> <tr><td>tensorflow-1.3.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>Clang from xcode</td><td>Bazel 0.4.5</td><td>N/A</td><td>N/A</td></tr> @@ -491,6 +493,8 @@ Stack Overflow and specify the `tensorflow` tag. **Windows** <table> <tr><th>Version:</th><th>CPU/GPU:</th><th>Python Version:</th><th>Compiler:</th><th>Build Tools:</th><th>cuDNN:</th><th>CUDA:</th></tr> +<tr><td>tensorflow-1.6.0rc0</td><td>CPU</td><td>3.5-3.6</td><td>MSVC 2015 update 3</td><td>Cmake v3.6.3</td><td>N/A</td><td>N/A</td></tr> +<tr><td>tensorflow_gpu-1.6.0rc0</td><td>GPU</td><td>3.5-3.6</td><td>MSVC 2015 update 3</td><td>Cmake v3.6.3</td><td>7</td><td>9</td></tr> <tr><td>tensorflow-1.5.0</td><td>CPU</td><td>3.5-3.6</td><td>MSVC 2015 update 3</td><td>Cmake v3.6.3</td><td>N/A</td><td>N/A</td></tr> <tr><td>tensorflow_gpu-1.5.0</td><td>GPU</td><td>3.5-3.6</td><td>MSVC 2015 update 3</td><td>Cmake v3.6.3</td><td>7</td><td>9</td></tr> <tr><td>tensorflow-1.4.0</td><td>CPU</td><td>3.5-3.6</td><td>MSVC 2015 update 3</td><td>Cmake v3.6.3</td><td>N/A</td><td>N/A</td></tr> diff --git a/tensorflow/tools/docker/Dockerfile.devel b/tensorflow/tools/docker/Dockerfile.devel index 5dc4a053fd2..d16761c3675 100644 --- a/tensorflow/tools/docker/Dockerfile.devel +++ b/tensorflow/tools/docker/Dockerfile.devel @@ -70,7 +70,7 @@ RUN mkdir /bazel && \ # Download and build TensorFlow. WORKDIR /tensorflow -RUN git clone --branch=r1.5 --depth=1 https://github.com/tensorflow/tensorflow.git . +RUN git clone --branch=r1.6 --depth=1 https://github.com/tensorflow/tensorflow.git . # TODO(craigcitro): Don't install the pip package, since it makes it # more difficult to experiment with local changes. Instead, just add diff --git a/tensorflow/tools/docker/Dockerfile.devel-cpu-mkl b/tensorflow/tools/docker/Dockerfile.devel-cpu-mkl index 96b260ad3ae..3690e7dfe57 100644 --- a/tensorflow/tools/docker/Dockerfile.devel-cpu-mkl +++ b/tensorflow/tools/docker/Dockerfile.devel-cpu-mkl @@ -3,7 +3,7 @@ FROM tensorflow/tensorflow:latest-devel LABEL maintainer="Clayne Robison<clayne.b.robison@intel.com>" # These arguments are parameterized. Use --build-args to override. -ARG TF_BRANCH=r1.5 +ARG TF_BRANCH=r1.6 ARG WHL_DIR=/whl RUN apt-get update && apt-get install -y --no-install-recommends \ diff --git a/tensorflow/tools/docker/Dockerfile.devel-gpu b/tensorflow/tools/docker/Dockerfile.devel-gpu index 07ffd3839a3..4ef37881bc9 100644 --- a/tensorflow/tools/docker/Dockerfile.devel-gpu +++ b/tensorflow/tools/docker/Dockerfile.devel-gpu @@ -79,7 +79,7 @@ RUN mkdir /bazel && \ # Download and build TensorFlow. WORKDIR /tensorflow -RUN git clone --branch=r1.5 --depth=1 https://github.com/tensorflow/tensorflow.git . +RUN git clone --branch=r1.6 --depth=1 https://github.com/tensorflow/tensorflow.git . # Configure the build for our CUDA configuration. 
ENV CI_BUILD_PYTHON python diff --git a/tensorflow/tools/pip_package/setup.py b/tensorflow/tools/pip_package/setup.py index d7fab2b93ac..20027869990 100644 --- a/tensorflow/tools/pip_package/setup.py +++ b/tensorflow/tools/pip_package/setup.py @@ -29,7 +29,7 @@ from setuptools.dist import Distribution # This version string is semver compatible, but incompatible with pip. # For pip, we will remove all '-' characters from this string, and use the # result for pip. -_VERSION = '1.5.0' +_VERSION = '1.6.0-rc0' REQUIRED_PACKAGES = [ 'absl-py >= 0.1.6', From a0e31f28df22030b4ce33db49e999f56622aa693 Mon Sep 17 00:00:00 2001 From: Yifei Feng <fengyifei2026@gmail.com> Date: Fri, 2 Feb 2018 11:02:34 -0800 Subject: [PATCH 04/34] Remove all_files rule from com_google_absl.BUILD --- third_party/com_google_absl.BUILD | 12 ------------ 1 file changed, 12 deletions(-) diff --git a/third_party/com_google_absl.BUILD b/third_party/com_google_absl.BUILD index 0c8d327c1f3..8fca145f751 100644 --- a/third_party/com_google_absl.BUILD +++ b/third_party/com_google_absl.BUILD @@ -3,15 +3,3 @@ package(default_visibility = ["//visibility:public"]) licenses(["notice"]) # Apache exports_files(["LICENSE"]) - -filegroup( - name = "all_files", - srcs = glob( - ["**/*"], - exclude = [ - "**/METADATA", - "**/OWNERS", - ], - ), - visibility = ["//tensorflow:__subpackages__"], -) From 0fd0f2ded169c383a564ffbe9564b6c3d5a684f3 Mon Sep 17 00:00:00 2001 From: Yifei Feng <fengyifei2026@gmail.com> Date: Fri, 2 Feb 2018 12:42:04 -0800 Subject: [PATCH 05/34] Fix sanity --- tensorflow/tools/ci_build/ci_sanity.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tensorflow/tools/ci_build/ci_sanity.sh b/tensorflow/tools/ci_build/ci_sanity.sh index b3a8ff2ac72..fd5d0058441 100755 --- a/tensorflow/tools/ci_build/ci_sanity.sh +++ b/tensorflow/tools/ci_build/ci_sanity.sh @@ -351,7 +351,7 @@ do_external_licenses_check(){ # Whitelist echo ${EXTRA_LICENSE_FILE} - grep -e "@bazel_tools//src/" -e "@bazel_tools//tools/" -e "@com_google_absl//" -e "//external" -e "@local" -v ${EXTRA_LICENSES_FILE} > temp.txt + grep -e "@bazel_tools//src" -e "@bazel_tools//tools/" -e "@com_google_absl//" -e "//external" -e "@local" -v ${EXTRA_LICENSES_FILE} > temp.txt mv temp.txt ${EXTRA_LICENSES_FILE} From 31bfdac6fa5cdf25ebcbf297f54bcaa042f1ec14 Mon Sep 17 00:00:00 2001 From: Amit Patankar <amitpatankar@google.com> Date: Fri, 2 Feb 2018 13:00:44 -0800 Subject: [PATCH 06/34] Revert "Update external protobuf codebase version for Windows cmake build" This reverts commit 07bec47ba5db4c2f2e33ecb49f23253a371bfbbe. --- tensorflow/contrib/cmake/external/protobuf.cmake | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tensorflow/contrib/cmake/external/protobuf.cmake b/tensorflow/contrib/cmake/external/protobuf.cmake index fd05fa6d472..aedb793d2ae 100644 --- a/tensorflow/contrib/cmake/external/protobuf.cmake +++ b/tensorflow/contrib/cmake/external/protobuf.cmake @@ -16,7 +16,7 @@ include (ExternalProject) set(PROTOBUF_INCLUDE_DIRS ${CMAKE_CURRENT_BINARY_DIR}/protobuf/src/protobuf/src) set(PROTOBUF_URL https://github.com/google/protobuf.git) -set(PROTOBUF_TAG 396336eb961b75f03b25824fe86cf6490fb75e3a) +set(PROTOBUF_TAG b04e5cba356212e4e8c66c61bbe0c3a20537c5b9) if(WIN32) set(protobuf_STATIC_LIBRARIES From e8341fe47417fece104f24fc7067c105c507aa95 Mon Sep 17 00:00:00 2001 From: Amit Patankar <amitpatankar@google.com> Date: Fri, 2 Feb 2018 13:03:49 -0800 Subject: [PATCH 07/34] Fixing the typo. 
--- tensorflow/core/BUILD | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tensorflow/core/BUILD b/tensorflow/core/BUILD index a8a8c34846a..dba9f6f0e0c 100644 --- a/tensorflow/core/BUILD +++ b/tensorflow/core/BUILD @@ -1909,7 +1909,7 @@ cc_library( tf_cuda_library( name = "cuda_device_functions", hdrs = ["util/cuda_device_functions.h"], - cuda_deps = ["//third_party_gpus/cuda:cuda_headers"], + cuda_deps = ["//third_party/gpus/cuda:cuda_headers"], visibility = ["//visibility:public"], deps = [":framework_lite"], ) From b300c52026ba152e62fc6d7e7e8c2cc515677212 Mon Sep 17 00:00:00 2001 From: Amit Patankar <amitpatankar@google.com> Date: Fri, 2 Feb 2018 13:46:48 -0800 Subject: [PATCH 08/34] Fixing the path post transition to GitHub. --- tensorflow/core/BUILD | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tensorflow/core/BUILD b/tensorflow/core/BUILD index dba9f6f0e0c..5b38c10032e 100644 --- a/tensorflow/core/BUILD +++ b/tensorflow/core/BUILD @@ -1909,7 +1909,7 @@ cc_library( tf_cuda_library( name = "cuda_device_functions", hdrs = ["util/cuda_device_functions.h"], - cuda_deps = ["//third_party/gpus/cuda:cuda_headers"], + cuda_deps = ["@local_config_cuda//cuda:cuda_headers"], visibility = ["//visibility:public"], deps = [":framework_lite"], ) From f42bf6706d12369c7686bae1ecac8f6957daa78a Mon Sep 17 00:00:00 2001 From: Amit Patankar <amitpatankar@google.com> Date: Fri, 2 Feb 2018 13:52:30 -0800 Subject: [PATCH 09/34] Revert "Update deprecated API use" This reverts commit ccedcbe14c798fb3b227030cf85b4fe89406f0d8. --- tensorflow/core/distributed_runtime/tensor_coding.cc | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/tensorflow/core/distributed_runtime/tensor_coding.cc b/tensorflow/core/distributed_runtime/tensor_coding.cc index 34a4013547b..fe2d1a12934 100644 --- a/tensorflow/core/distributed_runtime/tensor_coding.cc +++ b/tensorflow/core/distributed_runtime/tensor_coding.cc @@ -81,7 +81,7 @@ void TensorResponse::InitPartial(const RecvTensorResponse& response) { Status TensorResponse::ParseFrom(Source* source) { if (!on_host_) { protobuf::io::CodedInputStream input(source->contents()); - input.SetTotalBytesLimit(INT_MAX); // Unlimited + input.SetTotalBytesLimit(INT_MAX, INT_MAX); // Unlimited // Pre-parse into local storage, then delegate to device. if (!meta_.ParseFromCodedStream(&input) || !input.ConsumedEntireMessage()) { @@ -217,7 +217,7 @@ bool TensorResponse::ParseTensorSubmessage( bool TensorResponse::ParseFast(Source* source) { protobuf::io::CodedInputStream input(source->contents()); - input.SetTotalBytesLimit(INT_MAX); // Unlimited + input.SetTotalBytesLimit(INT_MAX, INT_MAX); // Unlimited while (true) { auto p = input.ReadTagWithCutoff(127); int tag = GetTagFieldNumber(p.first); From 782ccd93198f0187e4cb52aad489b49c6960d074 Mon Sep 17 00:00:00 2001 From: Andrew Selle <aselle@andyselle.com> Date: Fri, 2 Feb 2018 13:57:11 -0800 Subject: [PATCH 10/34] Add k8 to detection for when to use neon_tensor_utils. 
---
 tensorflow/contrib/lite/kernels/internal/BUILD | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/tensorflow/contrib/lite/kernels/internal/BUILD b/tensorflow/contrib/lite/kernels/internal/BUILD
index 38b032c6de7..2c29b2abb80 100644
--- a/tensorflow/contrib/lite/kernels/internal/BUILD
+++ b/tensorflow/contrib/lite/kernels/internal/BUILD
@@ -330,6 +330,9 @@ cc_library(
         ":x86": [
             ":neon_tensor_utils",
         ],
+        ":k8": [
+            ":neon_tensor_utils",
+        ],
         ":darwin": [
             ":neon_tensor_utils",
         ],

From 075482c55bd8fa87eeee0f430ddf38d081cf16ec Mon Sep 17 00:00:00 2001
From: Amit Patankar <amitpatankar@google.com>
Date: Fri, 2 Feb 2018 14:08:30 -0800
Subject: [PATCH 11/34] Update BUILD

---
 tensorflow/core/BUILD | 1 -
 1 file changed, 1 deletion(-)

diff --git a/tensorflow/core/BUILD b/tensorflow/core/BUILD
index 5b38c10032e..45edbe96522 100644
--- a/tensorflow/core/BUILD
+++ b/tensorflow/core/BUILD
@@ -1909,7 +1909,6 @@ cc_library(
 tf_cuda_library(
     name = "cuda_device_functions",
     hdrs = ["util/cuda_device_functions.h"],
-    cuda_deps = ["@local_config_cuda//cuda:cuda_headers"],
     visibility = ["//visibility:public"],
     deps = [":framework_lite"],
 )

From a8efd478702bbd4503f8afa9559b86a54d8544eb Mon Sep 17 00:00:00 2001
From: Jianwei Xie <xiejw@google.com>
Date: Fri, 2 Feb 2018 15:00:17 -0800
Subject: [PATCH 12/34] Adds batch inference support on TPU with TPUEstimator.predict.

PiperOrigin-RevId: 184339842
---
 .../contrib/tpu/python/tpu/tpu_estimator.py   | 629 +++++++++++++++---
 1 file changed, 542 insertions(+), 87 deletions(-)

diff --git a/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py b/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py
index c7008533f3a..b5082fc8235 100644
--- a/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py
+++ b/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py
@@ -47,6 +47,7 @@ from tensorflow.python.estimator import model_fn as model_fn_lib
 from tensorflow.python.estimator import util
 from tensorflow.python.framework import constant_op
 from tensorflow.python.framework import dtypes
+from tensorflow.python.framework import errors
 from tensorflow.python.framework import ops
 from tensorflow.python.ops import array_ops
 from tensorflow.python.ops import control_flow_ops
@@ -170,10 +171,12 @@ class _TPUContext(object):
   ```
   """

-  def __init__(self, config, train_batch_size, eval_batch_size, use_tpu):
+  def __init__(self, config, train_batch_size, eval_batch_size,
+               predict_batch_size, use_tpu):
     self._config = config
     self._train_batch_size = train_batch_size
     self._eval_batch_size = eval_batch_size
+    self._predict_batch_size = predict_batch_size
     self._use_tpu = use_tpu
     self._num_shards_or_none = self._config.tpu_config.num_shards
     self._mode = None
@@ -218,31 +221,66 @@ class _TPUContext(object):
     return (self._mode == model_fn_lib.ModeKeys.TRAIN and
             not self._config.tpu_config.per_host_input_for_training)

-  def is_running_on_cpu(self):
-    """Determines whether the input_fn and model_fn should be invoked on CPU."""
+  def is_running_on_cpu(self, is_export_mode=False):
+    """Determines whether the input_fn and model_fn should be invoked on CPU.
+
+    Args:
+      is_export_mode: Indicates whether the current mode is for exporting the
+        model, when mode == PREDICT. Only with this bool can we tell whether
+        the user is calling Estimator.predict or Estimator.export_savedmodel,
+        which run on TPU and CPU respectively. The parent class Estimator
+        does not distinguish these two.
+
+    Returns:
+      bool, whether the current input_fn or model_fn should be running on CPU.
+
+    Raises:
+      ValueError: any configuration is invalid.
+    """
     mode = self._assert_mode()
-    return (not self._use_tpu) or mode == model_fn_lib.ModeKeys.PREDICT
+
+    if not self._use_tpu:
+      return True
+
+    if mode != model_fn_lib.ModeKeys.PREDICT:
+      return False
+
+    # There are actually 2 use cases when running with mode.PREDICT: prediction
+    # and saving the model. We run actual predictions on the TPU, but
+    # model export is run on the CPU.
+    if is_export_mode:
+      return True
+
+    if self._predict_batch_size is None:
+      raise ValueError(
+          'predict_batch_size in TPUEstimator constructor should not be '
+          '`None` if .predict is running on TPU.')
+    if self.num_hosts > 1:
+      raise ValueError(
+          'TPUEstimator.predict should be running on a single host.')
+
+    return False

   @property
   def global_batch_size(self):
     mode = self._assert_mode()
-    return (self._train_batch_size
-            if mode == model_fn_lib.ModeKeys.TRAIN else self._eval_batch_size)
+    if mode == model_fn_lib.ModeKeys.TRAIN:
+      return self._train_batch_size
+    elif mode == model_fn_lib.ModeKeys.EVAL:
+      return self._eval_batch_size
+    elif mode == model_fn_lib.ModeKeys.PREDICT:
+      return self._predict_batch_size
+    else:
+      return None

   @property
   def batch_size_for_input_fn(self):
     """Returns the shard batch size for `input_fn`."""
-    mode = self._assert_mode()
-    if self.is_running_on_cpu():
-      if mode == model_fn_lib.ModeKeys.TRAIN:
-        return self._train_batch_size
-      if mode == model_fn_lib.ModeKeys.EVAL:
-        return self._eval_batch_size
-      return None
+    global_batch_size = self.global_batch_size
+
+    if self.is_running_on_cpu():
+      return global_batch_size

-    global_batch_size = (
-        self._train_batch_size
-        if mode == model_fn_lib.ModeKeys.TRAIN else self._eval_batch_size)
     # On TPU
     if self.is_input_sharded_per_core():
       return global_batch_size // self.num_cores
@@ -252,19 +290,13 @@ class _TPUContext(object):
   @property
   def batch_size_for_model_fn(self):
     """Returns the shard batch size for `model_fn`."""
-    mode = self._assert_mode()
+    global_batch_size = self.global_batch_size
+
     if self.is_running_on_cpu():
-      if mode == model_fn_lib.ModeKeys.TRAIN:
-        return self._train_batch_size
-      if mode == model_fn_lib.ModeKeys.EVAL:
-        return self._eval_batch_size
-      return None
+      return global_batch_size

     # On TPU. always sharded per core.
-    if mode == model_fn_lib.ModeKeys.TRAIN:
-      return self._train_batch_size // self.num_cores
-    else:
-      return self._eval_batch_size // self.num_cores
+    return global_batch_size // self.num_cores

   @property
   def master_job(self):
@@ -506,6 +538,22 @@ class _OpQueueContext(object):
     self._thread.join()


+class _OpSignalOnceQueueContext(_OpQueueContext):
+  """Manages the work queue and thread for an infeed/outfeed thread.
+
+  This subclass only signals once.
+  """
+
+  def __init__(self, name, target, args):
+    super(_OpSignalOnceQueueContext, self).__init__(name, target, args)
+    self._has_signaled = False
+
+  def send_next_batch_signal(self, iterations):
+    if not self._has_signaled:
+      self._queue.put(iterations)
+      self._has_signaled = True
+
+
 class TPUInfeedOutfeedSessionHook(session_run_hook.SessionRunHook):
   """A Session hook setting up the TPU initialization, infeed, and outfeed.
@@ -633,13 +681,16 @@ class TPUInfeedOutfeedSessionHook(session_run_hook.SessionRunHook):
       except Exception as e:  # pylint: disable=broad-except
         self._log_error(session, e)

+  def _create_infeed_controller(self, name, target, args):
+    return _OpQueueContext(name=name, target=target, args=args)
+
   def after_create_session(self, session, coord):
     logging.info('Init TPU system')
     session.run(self._init_ops,
                 options=config_pb2.RunOptions(timeout_in_ms=5 * 60 * 1000))

     logging.info('Start infeed thread controller')
-    self._infeed_controller = _OpQueueContext(
+    self._infeed_controller = self._create_infeed_controller(
         name='InfeedController', target=self._run_infeed, args=(session,))

     logging.info('Start outfeed thread controller')
@@ -677,6 +728,16 @@ class TPUInfeedOutfeedSessionHook(session_run_hook.SessionRunHook):
     session.run(self._finalize_ops)


+class TPUInfeedOutfeedSessionHookForPrediction(TPUInfeedOutfeedSessionHook):
+
+  def __init__(self, ctx, enqueue_ops, dequeue_ops):
+    super(TPUInfeedOutfeedSessionHookForPrediction, self).__init__(
+        ctx, enqueue_ops, dequeue_ops, run_infeed_loop_on_coordinator=False)
+
+  def _create_infeed_controller(self, name, target, args):
+    return _OpSignalOnceQueueContext(name=name, target=target, args=args)
+
+
 class _TPUStopAtStepHook(session_run_hook.SessionRunHook):
   """Hook that requests stop at a specified step.

@@ -764,6 +825,47 @@ class _SetEvalIterationsHook(session_run_hook.SessionRunHook):
     self._iterations_per_loop_var.load(self._num_steps, session=session)


+class _StoppingPredictHook(session_run_hook.SessionRunHook):
+  """Hook that requests stop according to the stopping signal in prediction."""
+
+  def __init__(self, scalar_stopping_signal):
+    self._scalar_stopping_signal = scalar_stopping_signal
+
+  def begin(self):
+    self._iterations_per_loop_var = _create_or_get_iterations_per_loop()
+
+  def after_create_session(self, session, coord):
+    # This is not necessary as we do not run infeed enqueue and outfeed dequeue
+    # in side threads for the prediction model. But it lets the
+    # TPUInfeedOutfeedSessionHook print a nice message.
+    self._iterations_per_loop_var.load(1, session=session)
+
+  def before_run(self, run_context):
+    return session_run_hook.SessionRunArgs(self._scalar_stopping_signal)
+
+  def after_run(self, run_context, run_values):
+    _ = run_context
+    scalar_stopping_signal = run_values.results
+    if _StopSignals.should_stop(scalar_stopping_signal):
+      # NOTE(xiejw): In prediction, stopping signals are inserted for each
+      # batch. And we append one more batch to signal the system that it
+      # should stop. The data flow might look like
+      #
+      #  batch   0: images, labels, stop = 0  (user provided)
+      #  batch   1: images, labels, stop = 0  (user provided)
+      #  ...
+      #  batch  99: images, labels, stop = 0  (user provided)
+      #  batch 100: images, labels, stop = 1  (TPUEstimator appended)
+      #
+      # where the final batch (id = 100) is appended by TPUEstimator, so we
+      # should drop it before returning the predictions to the user.
+      # To achieve that, we throw the OutOfRangeError in after_run.
Once
+      # MonitoredSession sees this error in SessionRunHook.after_run, the
+      # "current" prediction, i.e., the batch with id=100, will be discarded
+      # immediately.
+      raise errors.OutOfRangeError(None, None, 'Stopped by stopping signal.')
+
+
 def generate_per_core_enqueue_ops_fn_for_host(ctx, input_fn,
                                               inputs_structure_recorder):
   """Generates infeed enqueue ops for per-core input_fn on a single host."""
@@ -815,6 +917,14 @@ def generate_per_host_enqueue_ops_fn_for_host(
     inputs = _Inputs.from_input_fn(input_fn())

     is_dataset = inputs.is_dataset
+    if ctx.mode == model_fn_lib.ModeKeys.PREDICT:
+      if not is_dataset:
+        raise TypeError(
+            'For mode PREDICT, `input_fn` must return `Dataset` instead of '
+            '`features` and `labels`.')
+      inputs = _InputsWithStoppingSignals(
+          dataset=inputs.dataset, batch_size=ctx.batch_size_for_input_fn)
+
     if is_dataset:
       hooks.append(inputs.dataset_initializer_hook())

@@ -825,11 +935,13 @@
       # dataset, it is initialized and the features and labels extracted via
       # `dataset.iterator.get_next()`
       features, labels = inputs.features_and_labels()
+      signals = inputs.signals()

-      inputs_structure_recorder.validate_and_record_structure(features, labels)
+      inputs_structure_recorder.validate_and_record_structure(
+          features, labels, signals)
       unsharded_tensor_list = (
           inputs_structure_recorder.flatten_features_and_labels(
-              features, labels))
+              features, labels, signals))

       infeed_queue = tpu_feed.InfeedQueue(
           tuple_types=[t.dtype for t in unsharded_tensor_list],
@@ -841,7 +953,13 @@
       per_host_enqueue_ops = (
           infeed_queue.split_inputs_and_generate_enqueue_ops(
               unsharded_tensor_list, placement_function=lambda x: device))
-      return per_host_enqueue_ops
+      if signals is None:
+        return per_host_enqueue_ops
+      else:
+        return {
+            'ops': per_host_enqueue_ops,
+            'signals': signals,
+        }

   return enqueue_ops_fn, captured_infeed_queue, hooks, is_dataset

@@ -883,6 +1001,7 @@ class _InputPipeline(object):
       self._feature_names = []
       self._label_names = []
       self._has_labels = False
+      self._signals_helper = None

       # Internal state.
       self._initialized = False
@@ -890,7 +1009,7 @@
     def has_labels(self):
       return self._has_labels

-    def validate_and_record_structure(self, features, labels):
+    def validate_and_record_structure(self, features, labels, signals=None):
      """Validates and records the structure of `features` and `labels`."""

      def _extract_key_names(tensor_or_dict):
@@ -903,6 +1022,10 @@
      feature_names = _extract_key_names(features)
      label_names = _extract_key_names(labels)

+      if signals is not None and self._signals_helper is None:
+        # Record signals helper.
+        self._signals_helper = _SignalsHelper(signals)
+
      if self._initialized:
        # Verify the structure is same. The following should never happen.
assert feature_names == self._feature_names, 'feature keys mismatched' @@ -915,7 +1038,7 @@ class _InputPipeline(object): self._label_names = label_names self._has_labels = has_labels - def flatten_features_and_labels(self, features, labels): + def flatten_features_and_labels(self, features, labels, signals=None): """Flattens the `features` and `labels` to a single tensor list.""" flattened_inputs = [] if self._feature_names: @@ -931,6 +1054,9 @@ class _InputPipeline(object): flattened_inputs.extend([labels[name] for name in self._label_names]) else: flattened_inputs.append(labels) + + if signals is not None: + flattened_inputs.extend(_SignalsHelper.as_tensor_list(signals)) return flattened_inputs def unflatten_features_and_labels(self, flattened_inputs): @@ -956,7 +1082,11 @@ class _InputPipeline(object): else: expected_num_labels = 0 - expected_num_tensors = expected_num_features + expected_num_labels + expected_num_signals = ( + self._signals_helper.num_signals if self._signals_helper else 0) + + expected_num_tensors = ( + expected_num_features + expected_num_labels + expected_num_signals) if expected_num_tensors != len(flattened_inputs): raise ValueError( @@ -973,13 +1103,20 @@ class _InputPipeline(object): if expected_num_labels == 0: unflattened_label = None elif self._label_names: - unflattened_label = dict( - zip(self._label_names, flattened_inputs[expected_num_features:])) + label_list = flattened_inputs[ + expected_num_features:expected_num_features + expected_num_labels] + unflattened_label = dict(zip(self._label_names, label_list)) else: # Single tensor case. unflattened_label = flattened_inputs[expected_num_features] - return _Inputs(unflattened_features, unflattened_label) + signals = None + if expected_num_signals != 0: + tensor_list_for_signals = flattened_inputs[ + expected_num_features + expected_num_labels:] + signals = self._signals_helper.unflatten(tensor_list_for_signals) + + return _Inputs(unflattened_features, unflattened_label, signals=signals) def __init__(self, input_fn, batch_axis, ctx): """Constructor. @@ -1078,9 +1215,12 @@ class _InputPipeline(object): # slow compared to the previous case. if is_dataset: run_infeed_loop_on_coordinator = False + wrap_fn = ( + _wrap_computation_in_while_loop + if self._ctx.mode != model_fn_lib.ModeKeys.PREDICT else + _wrap_computation_in_while_loop_with_stopping_signals) enqueue_ops.append( - _wrap_computation_in_while_loop( - device=host_device, op_fn=enqueue_ops_fn)) + wrap_fn(device=host_device, op_fn=enqueue_ops_fn)) else: enqueue_ops.append(enqueue_ops_fn()) infeed_queues.append(captured_infeed_queue.get()) @@ -1145,7 +1285,8 @@ class _ModelFnWrapper(object): infeed dequeue channel. Returns: - A Fn representing the train step for TPU. + A tuple of train_fn, host_calls, and captured scaffold_fn. The train_fn + representing the train step for TPU. """ host_call = _OutfeedHostCall(self._ctx) @@ -1198,8 +1339,8 @@ class _ModelFnWrapper(object): infeed dequeue channel. Returns: - A tuple of eval_fn and eval_metrics. The eval_fn representing the eval - step for TPU. and eval_metrics is an `_OutfeedHostCall` instance. + A tuple of eval_fn, host_calls, and captured scaffold_fn. The eval_fn + representing the eval step for TPU. 
""" host_calls = _OutfeedHostCall(self._ctx) captured_scaffold_fn = _CapturedObject() @@ -1228,7 +1369,55 @@ class _ModelFnWrapper(object): return eval_step, host_calls, captured_scaffold_fn - def _call_model_fn(self, features, labels): + def convert_to_single_tpu_predict_step(self, dequeue_fn): + """Converts user provided model_fn` as a single predict step on TPU. + + Args: + dequeue_fn: The function to retrieve inputs, features and labels, from TPU + infeed dequeue channel. + + Returns: + A tuple of predict_fn, host_calls, and captured scaffold_fn. The + predict_fn representing the predict step for TPU. + """ + host_calls = _OutfeedHostCall(self._ctx) + captured_scaffold_fn = _CapturedObject() + + def predict_step(unused_scalar_stopping_signal): + """Evaluation step function for use inside a while loop.""" + inputs = dequeue_fn() + features, labels = inputs.features_and_labels() + stopping_signals = inputs.signals() + + assert stopping_signals is not None, ( + 'Internal Error: `signals` is missing.') + + tpu_estimator_spec = self._call_model_fn( + features, labels, is_export_mode=False) + if not isinstance(tpu_estimator_spec, TPUEstimatorSpec): + raise RuntimeError( + 'estimator_spec used by TPU prediction must have type' + '`TPUEstimatorSpec`. Got {}'.format(type(tpu_estimator_spec))) + + captured_scaffold_fn.capture(tpu_estimator_spec.scaffold_fn) + to_record = {} + identity_fn = lambda **kwargs: kwargs + # TODO(xiejw): Adds validation for prediction dictionrary. + # TODO(xiejw): Adds support for single tensor as predictions. + if not isinstance(tpu_estimator_spec.predictions, dict): + raise TypeError('TPUEstimatorSpec.predictions must be dict of Tensors.') + to_record['predictions'] = [identity_fn, tpu_estimator_spec.predictions] + to_record['signals'] = [identity_fn, stopping_signals] + if tpu_estimator_spec.host_call is not None: + to_record['host_call'] = tpu_estimator_spec.host_call + host_calls.record(to_record) + + with ops.control_dependencies(host_calls.create_enqueue_op()): + return _StopSignals.as_scalar_stopping_signal(stopping_signals) + + return predict_step, host_calls, captured_scaffold_fn + + def _call_model_fn(self, features, labels, is_export_mode=True): """Calls the model_fn with required parameters.""" model_fn_args = util.fn_args(self._model_fn) kwargs = {} @@ -1259,7 +1448,7 @@ class _ModelFnWrapper(object): params[_BATCH_SIZE_KEY] = batch_size_for_model_fn estimator_spec = self._model_fn(features=features, **kwargs) - if (self._ctx.is_running_on_cpu() and + if (self._ctx.is_running_on_cpu(is_export_mode) and isinstance(estimator_spec, TPUEstimatorSpec)): # The estimator_spec will be passed to `Estimator` directly, which expects # type `EstimatorSpec`. @@ -1614,6 +1803,7 @@ class TPUEstimator(estimator_lib.Estimator): use_tpu=True, train_batch_size=None, eval_batch_size=None, + predict_batch_size=None, batch_axis=None): """Constructs an `TPUEstimator` instance. @@ -1639,7 +1829,9 @@ class TPUEstimator(estimator_lib.Estimator): size, as params['batch_size'], when calling `input_fn` and `model_fn`. Cannot be `None` if `use_tpu` is `True`. Must be divisible by `config.tpu_config.num_shards`. - eval_batch_size: An int representing the global training batch size. + eval_batch_size: An int representing evaluation batch size. + Must be divisible by `config.tpu_config.num_shards`. + predict_batch_size: An int representing the prediction batch size. Must be divisible by `config.tpu_config.num_shards`. 
batch_axis: A python tuple of int values describing how each tensor
        produced by the Estimator `input_fn` should be split across the TPU
@@ -1689,6 +1881,16 @@
             'eval batch size {} must be divisible by number of shards {}'
             .format(eval_batch_size, config.tpu_config.num_shards))

+    if predict_batch_size is not None:
+      if not isinstance(predict_batch_size, int):
+        raise ValueError('`predict_batch_size` must be an int')
+      if predict_batch_size < 1:
+        raise ValueError('`predict_batch_size` must be positive')
+      if predict_batch_size % config.tpu_config.num_shards != 0:
+        raise ValueError(
+            'predict batch size {} must be divisible by number of shards {}'
+            .format(predict_batch_size, config.tpu_config.num_shards))
+
     # Verifies the model_fn signature according to Estimator framework.
     estimator_lib._verify_model_fn_args(model_fn, params)  # pylint: disable=protected-access
     # We cannot store config and params in this constructor as parent
@@ -1708,7 +1910,7 @@

     # All properties passed to _TPUContext are immutable.
     self._ctx = _TPUContext(self._config, train_batch_size, eval_batch_size,
-                            use_tpu)
+                            predict_batch_size, use_tpu)

   def _create_global_step(self, graph):
     """Creates a global step suitable for TPUs.
@@ -1804,7 +2006,9 @@
     if batch_size_for_input_fn is not None:
       kwargs['params'][_BATCH_SIZE_KEY] = batch_size_for_input_fn

-    if ctx.is_running_on_cpu():
+    # For export_savedmodel, input_fn is never passed to Estimator. So,
+    # `is_export_mode` must be False.
+    if ctx.is_running_on_cpu(is_export_mode=False):
       with ops.device('/device:CPU:0'):
         return input_fn(**kwargs)

@@ -1831,8 +2035,13 @@
     with self._ctx.with_mode(mode) as ctx:
       model_fn_wrapper = _ModelFnWrapper(model_fn, config, params, ctx)

-      # TODO(jhseu): Move to PREDICT to TPU.
-      if ctx.is_running_on_cpu():
+      # For export_savedmodel, input_fn is never passed to Estimator. So,
+      # if features is callable, it means it is the input_fn passed by
+      # TPUEstimator._call_input_fn. Then we know that if mode == PREDICT,
+      # it is the .predict API, not the export_savedmodel API.
+      is_export_mode = not callable(features)
+
+      if ctx.is_running_on_cpu(is_export_mode=is_export_mode):
         logging.info('Running %s on CPU', mode)
         return model_fn_wrapper.call_without_tpu(features, labels)

@@ -1881,53 +2090,114 @@
             train_op=control_flow_ops.group(*update_ops),
             scaffold=scaffold)

-      # Now eval.
-      total_loss, host_calls, scaffold = _eval_on_tpu_system(
-          ctx, model_fn_wrapper, dequeue_fn)
-      iterations_per_loop_var = _create_or_get_iterations_per_loop()
-      mean_loss = math_ops.div(total_loss,
-                               math_ops.cast(
-                                   iterations_per_loop_var,
-                                   dtype=total_loss.dtype))
+      if mode == model_fn_lib.ModeKeys.EVAL:
+        total_loss, host_calls, scaffold = _eval_on_tpu_system(
+            ctx, model_fn_wrapper, dequeue_fn)
+        iterations_per_loop_var = _create_or_get_iterations_per_loop()
+        mean_loss = math_ops.div(total_loss,
+                                 math_ops.cast(
+                                     iterations_per_loop_var,
+                                     dtype=total_loss.dtype))

-      # Creates a dummy metric update_op for all metrics. Estimator expects
-      # all metrics in eval_metric_ops have update_op and calls them one by
-      # one. The real metric update_ops are invoked in a separated thread. So,
-      # here give Estimator the dummy op for all metrics.
-      with ops.control_dependencies([mean_loss]):
-        # After TPU evaluation computation is done (the mean_loss tensor),
-        # reads all variables back from TPU and updates the eval step counter
-        # properly
+        # Creates a dummy metric update_op for all metrics. Estimator expects
+        # all metrics in eval_metric_ops to have an update_op and calls them
+        # one by one. The real metric update_ops are invoked in a separate
+        # thread. So, here we give Estimator the dummy op for all metrics.
+        with ops.control_dependencies([mean_loss]):
+          # After TPU evaluation computation is done (the mean_loss tensor),
+          # reads all variables back from TPU and updates the eval step
+          # counter properly
+          internal_ops_to_run = _sync_variables_ops()
+          internal_ops_to_run.append(
+              _increase_eval_step_op(iterations_per_loop_var))
+          with ops.control_dependencies(internal_ops_to_run):
+            dummy_update_op = control_flow_ops.no_op()
+
+        host_call_ret = host_calls.create_tpu_hostcall()
+        eval_metric_ops = {}
+        eval_update_ops = []
+        for k, v in host_call_ret['eval_metrics'].items():
+          eval_metric_ops[k] = (v[0], dummy_update_op)
+          eval_update_ops.append(v[1])
+
+        if 'host_call' not in host_call_ret:
+          host_ops = []
+        else:
+          host_ops = host_call_ret['host_call']
+        hooks = [
+            TPUInfeedOutfeedSessionHook(
+                ctx,
+                enqueue_ops,
+                eval_update_ops + host_ops,
+                run_infeed_loop_on_coordinator=(
+                    run_infeed_loop_on_coordinator)),
+        ] + input_hooks
+
+        return model_fn_lib.EstimatorSpec(
+            mode,
+            loss=mean_loss,
+            evaluation_hooks=hooks,
+            eval_metric_ops=eval_metric_ops,
+            scaffold=scaffold)
+
+      # Predict
+      assert mode == model_fn_lib.ModeKeys.PREDICT
+
+      dummy_predict_op, host_calls, scaffold = _predict_on_tpu_system(
+          ctx, model_fn_wrapper, dequeue_fn)
+      with ops.control_dependencies([dummy_predict_op]):
         internal_ops_to_run = _sync_variables_ops()
-        internal_ops_to_run.append(
-            _increase_eval_step_op(iterations_per_loop_var))
         with ops.control_dependencies(internal_ops_to_run):
-          dummy_update_op = control_flow_ops.no_op()
+          dummy_predict_op = control_flow_ops.no_op()
+
+      # In train and evaluation, the main TPU program is passed to monitored
+      # training session to run. Infeed enqueue and outfeed dequeue are
+      # executed in side threads. This is not the configuration for
+      # prediction mode.
+      #
+      # For prediction, the Estimator executes the EstimatorSpec.predictions
+      # directly and yields the element (via a generator) to the call site. So,
+      # the outfeed based prediction must be passed to MonitoredSession
+      # directly. Other parts of the TPU execution are organized as follows.
+      #
+      # 1. All outfeed based Tensors must be grouped with predictions Tensors
+      #    to form a single invocation. This avoids the issue that we might
+      #    trigger multiple outfeeds incorrectly. To achieve this, `host_call`
+      #    is placed in control_dependencies of `stopping_signals`, and
+      #    `stopping_signals` is passed into _StoppingPredictHook, which sets
+      #    the `stopping_signals` as SessionRunArgs. MonitoredSession merges
+      #    all SessionRunArgs with the fetch in session.run together.
+      #
+      # 2. The TPU program (dummy_predict_op) and enqueue_ops (infeed Enqueue)
+      #    are grouped together. They will be launched once and only once in
+      #    side threads and they quit naturally according to the SAME stopping
+      #    condition.
+        enqueue_ops.append(dummy_predict_op)
 
       host_call_ret = host_calls.create_tpu_hostcall()
-      eval_metric_ops = {}
-      eval_update_ops = []
-      for k, v in host_call_ret['eval_metrics'].items():
-        eval_metric_ops[k] = (v[0], dummy_update_op)
-        eval_update_ops.append(v[1])
-
       if 'host_call' not in host_call_ret:
         host_ops = []
       else:
         host_ops = host_call_ret['host_call']
+
+      predictions = host_call_ret['predictions']
+      stopping_signals = host_call_ret['signals']
+
+      with ops.control_dependencies(host_ops):
+        host_ops = []  # Empty; we do not need it anymore.
+        scalar_stopping_signal = _StopSignals.as_scalar_stopping_signal(
+            stopping_signals)
+
       hooks = [
-          TPUInfeedOutfeedSessionHook(
-              ctx,
-              enqueue_ops,
-              eval_update_ops + host_ops,
-              run_infeed_loop_on_coordinator=run_infeed_loop_on_coordinator),
+          _StoppingPredictHook(scalar_stopping_signal),
+          TPUInfeedOutfeedSessionHookForPrediction(ctx, enqueue_ops,
+                                                   host_ops),
       ] + input_hooks
 
       return model_fn_lib.EstimatorSpec(
           mode,
-          loss=mean_loss,
-          evaluation_hooks=hooks,
-          eval_metric_ops=eval_metric_ops,
+          prediction_hooks=hooks,
+          predictions=predictions,
           scaffold=scaffold)
 
     return _model_fn
@@ -1981,6 +2251,34 @@ def _train_on_tpu_system(ctx, model_fn_wrapper, dequeue_fn):
   return loss, host_call, scaffold
 
 
+def _predict_on_tpu_system(ctx, model_fn_wrapper, dequeue_fn):
+  """Executes `model_fn_wrapper` multiple times on all TPU shards."""
+  num_cores = ctx.num_cores
+
+  single_tpu_predict_step, host_calls, captured_scaffold_fn = (
+      model_fn_wrapper.convert_to_single_tpu_predict_step(dequeue_fn))
+
+  def multi_tpu_predict_steps_on_single_shard():
+
+    def cond(scalar_stopping_signal):
+      return math_ops.logical_not(
+          _StopSignals.should_stop(scalar_stopping_signal))
+
+    inputs = [_StopSignals.NON_STOPPING_SIGNAL]
+    outputs = training_loop.while_loop(
+        cond, single_tpu_predict_step, inputs=inputs, name=b'loop')
+    return outputs
+
+  (dummy_predict_op,) = tpu.shard(
+      multi_tpu_predict_steps_on_single_shard,
+      inputs=[],
+      num_shards=num_cores,
+      outputs_from_all_shards=False)
+
+  scaffold = _get_scaffold(captured_scaffold_fn)
+  return dummy_predict_op, host_calls, scaffold
+
+
 def _wrap_computation_in_while_loop(device, op_fn):
   """Wraps the ops generated by `op_fn` in tf.while_loop."""
 
@@ -1999,6 +2297,29 @@ def _wrap_computation_in_while_loop(device, op_fn):
       parallel_iterations=1)
 
 
+def _wrap_computation_in_while_loop_with_stopping_signals(device, op_fn):
+  """Wraps the ops generated by `op_fn` in a while loop with stop signals."""
+
+  def cond(scalar_stopping_signal):
+    return math_ops.logical_not(
+        _StopSignals.should_stop(scalar_stopping_signal))
+
+  def computation(unused_scalar_stopping_signal):
+    return_value = op_fn()
+    execute_ops = return_value['ops']
+    signals = return_value['signals']
+    with ops.control_dependencies(execute_ops):
+      return _StopSignals.as_scalar_stopping_signal(signals)
+
+  # By setting parallel_iterations=1, the parallel execution in while_loop is
+  # basically turned off.
+  with ops.device(device):
+    return control_flow_ops.while_loop(
+        cond,
+        computation, [_StopSignals.NON_STOPPING_SIGNAL],
+        parallel_iterations=1)
+
+
 def _validate_tpu_training_graph():
   """Validate graph before running distributed training.
 
@@ -2091,21 +2412,22 @@ class _CapturingContext(control_flow_ops.ControlFlowContext):
     self._g._set_control_flow_context(self._old)  # pylint: disable=protected-access
 
 
-# TODO(xiejw): Extend this to support internal signal.
 class _Inputs(object):
   """A data structure representing the input_fn returned values.
  This also supports the returned value from input_fn as a `Dataset`.
  """
 
-  def __init__(self, features=None, labels=None, dataset=None):
-    if dataset is not None and (features is not None or labels is not None):
+  def __init__(self, features=None, labels=None, dataset=None, signals=None):
+    if dataset is not None and (features is not None or labels is not None or
+                                signals is not None):
       raise RuntimeError('Internal Error: Either (features and labels) or '
                          'dataset should be provided, not both. Please file '
                          'a bug.')
 
     self._features = features
     self._labels = labels
+    self._signals = signals
     self._dataset = dataset
     self._iterator = None
 
@@ -2117,11 +2439,16 @@ class _Inputs(object):
       dataset = return_values
       return _Inputs(dataset=dataset)
 
+    features, labels = _Inputs._parse_inputs(return_values)
+    return _Inputs(features, labels)
+
+  @staticmethod
+  def _parse_inputs(return_values):
     if isinstance(return_values, tuple):
       features, labels = return_values
     else:
       features, labels = return_values, None
-    return _Inputs(features, labels)
+    return features, labels
 
   @property
   def is_dataset(self):
@@ -2142,7 +2469,135 @@ class _Inputs(object):
   def features_and_labels(self):
     """Gets `features` and `labels`."""
     if self.is_dataset:
-      return (_Inputs.from_input_fn(
-          self._iterator.get_next()).features_and_labels())
+      return _Inputs._parse_inputs(self._iterator.get_next())
 
     return (self._features, self._labels)
 
+  def signals(self):
+    return self._signals
+
+  @property
+  def dataset(self):
+    return self._dataset
+
+
+# TODO(xiejw): Extend this to support final partial batch.
+class _InputsWithStoppingSignals(_Inputs):
+  """Inputs with `_StopSignals` inserted into the dataset."""
+
+  def __init__(self, dataset, batch_size):
+
+    assert dataset is not None
+
+    user_provided_dataset = dataset.map(
+        _InputsWithStoppingSignals.insert_stopping_signal(
+            stop=False, batch_size=batch_size))
+    final_batch_dataset = dataset.take(1).map(
+        _InputsWithStoppingSignals.insert_stopping_signal(
+            stop=True, batch_size=batch_size))
+    dataset = user_provided_dataset.concatenate(final_batch_dataset).prefetch(2)
+
+    super(_InputsWithStoppingSignals, self).__init__(dataset=dataset)
+    self._current_inputs = None
+
+  def features_and_labels(self):
+    if self._current_inputs is not None:
+      raise RuntimeError(
+          'Internal Error: The previous inputs have not been properly '
+          'consumed. First call features_and_labels, then call signals.')
+
+    inputs_with_signals = self._iterator.get_next()
+    features = inputs_with_signals['features']
+    labels = inputs_with_signals.get('labels')
+
+    self._current_inputs = inputs_with_signals
+    return features, labels
+
+  def signals(self):
+    """Returns the `Signals` from `_Inputs`."""
+    if self._current_inputs is None:
+      raise RuntimeError(
+          'Internal Error: The current inputs have not been properly '
+          'generated. First call features_and_labels, then call signals.')
+    signals = self._current_inputs['signals']
+    self._current_inputs = None
+    return signals
+
+  @staticmethod
+  def insert_stopping_signal(stop, batch_size):
+    """Inserts stopping_signal into dataset via _map_fn.
+
+    Here we change the data structure in the dataset, such that the return
+    value is now a dictionary, with `features`, `labels`, and `signals` as
+    three distinct keys. This provides a better structure, which eases the
+    process of decomposing the inputs (see `features_and_labels`).
+
+    Args:
+      stop: bool, state of current stopping signals.
+      batch_size: int, batch size.
+ + Returns: + A map_fn passed to dataset.map API. + """ + + def _map_fn(*args): + features, labels = _Inputs._parse_inputs(args) + new_input_dict = {} + new_input_dict['features'] = features + if labels is not None: + new_input_dict['labels'] = labels + new_input_dict['signals'] = _StopSignals( + stop=stop, batch_size=batch_size).as_dict() + return new_input_dict + + return _map_fn + + +class _StopSignals(object): + """Signals class holding all logic to handle TPU stopping condition.""" + + NON_STOPPING_SIGNAL = 0.0 + STOPPING_SIGNAL = 1.0 + + def __init__(self, stop, batch_size): + self._stop = stop + self._batch_size = batch_size + + def as_dict(self): + shape = [self._batch_size, 1] + dtype = dtypes.float32 + + if self._stop: + stopping = array_ops.ones(shape=shape, dtype=dtype) + else: + stopping = array_ops.zeros(shape=shape, dtype=dtype) + + return {'stopping': stopping} + + @staticmethod + def as_scalar_stopping_signal(signals): + return array_ops.identity(signals['stopping'][0][0]) + + @staticmethod + def should_stop(scalar_stopping_signal): + return scalar_stopping_signal >= _StopSignals.STOPPING_SIGNAL + + +class _SignalsHelper(object): + """A general helper class to handle common signals manipulation.""" + + def __init__(self, signals): + self._signal_keys = [] + for key in sorted(signals.iterkeys()): + self._signal_keys.append(key) + + @property + def num_signals(self): + return len(self._signal_keys) + + def unflatten(self, tensor_list): + return dict(zip(self._signal_keys, tensor_list)) + + @staticmethod + def as_tensor_list(signals): + return [signals[key] for key in sorted(signals.iterkeys())] From 7785a8ebf417d0e867a08cabf2d42bb9b29dcb98 Mon Sep 17 00:00:00 2001 From: Jonathan Hseu <jhseu@google.com> Date: Fri, 2 Feb 2018 16:35:10 -0800 Subject: [PATCH 13/34] Update global_step by default if the user specifies a host_call. PiperOrigin-RevId: 184352399 --- .../contrib/tpu/python/tpu/tpu_estimator.py | 50 +++++++++++++++++-- 1 file changed, 45 insertions(+), 5 deletions(-) diff --git a/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py b/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py index b5082fc8235..56793f11d9c 100644 --- a/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py +++ b/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py @@ -63,6 +63,7 @@ from tensorflow.python.training import evaluation from tensorflow.python.training import session_run_hook from tensorflow.python.training import training from tensorflow.python.training import training_util +from tensorflow.python.util import tf_inspect _INITIAL_LOSS = 1e7 _ZERO_LOSS = 0. 
@@ -484,7 +485,7 @@ class TPUEstimatorSpec( if self.eval_metrics is not None: host_calls['eval_metrics'] = self.eval_metrics if self.host_call is not None: - host_calls['host_call'] = self.host_call + host_calls['host_call'] = wrap_hostcall_with_global_step(self.host_call) host_call_ret = _OutfeedHostCall.create_cpu_hostcall(host_calls) eval_metric_ops = None if self.eval_metrics is not None: @@ -1306,7 +1307,9 @@ class _ModelFnWrapper(object): if isinstance(estimator_spec, TPUEstimatorSpec): captured_scaffold_fn.capture(estimator_spec.scaffold_fn) if estimator_spec.host_call is not None: - host_call.record({'host_call': estimator_spec.host_call}) + host_call.record({ + 'host_call': wrap_hostcall_with_global_step( + estimator_spec.host_call)}) host_call_outfeed_ops = host_call.create_enqueue_op() else: captured_scaffold_fn.capture(None) @@ -1361,6 +1364,8 @@ class _ModelFnWrapper(object): to_record = {} to_record['eval_metrics'] = tpu_estimator_spec.eval_metrics if tpu_estimator_spec.host_call is not None: + # We assume that evaluate won't update global step, so we don't wrap + # this host_call. to_record['host_call'] = tpu_estimator_spec.host_call host_calls.record(to_record) @@ -1503,11 +1508,14 @@ class _OutfeedHostCall(object): raise ValueError('{}[1] should be tuple or list, or dict.'.format(name)) if isinstance(host_call[1], (tuple, list)): + fullargspec = tf_inspect.getfullargspec(host_call[0]) fn_args = util.fn_args(host_call[0]) - if len(host_call[1]) != len(fn_args): + # wrapped_hostcall_with_global_step uses varargs, so we allow that. + if fullargspec.varargs is None and len(host_call[1]) != len(fn_args): raise RuntimeError( - 'In TPUEstimatorSpec.{}, length of tensors does not ' - 'match method args of metric_fn.'.format(name)) + 'In TPUEstimatorSpec.{}, length of tensors {} does not match ' + 'method args of the function, which takes {}.'.format( + name, len(host_call[1]), len(fn_args))) @staticmethod def create_cpu_hostcall(host_calls): @@ -1649,6 +1657,38 @@ class _OutfeedHostCall(object): return ret +def wrap_hostcall_with_global_step(hostcall): + """Wrap the hostcall so that we update the global step upon every call.""" + if hostcall is None: + return None + host_fn, tensors = hostcall + + def global_step_host_fn(_global_step, *args, **kwargs): # pylint: disable=invalid-name + # Note that we don't have any ordering here, so the graph may see a + # global_step that's off by 1. + state_ops.assign( + training.get_global_step(), + math_ops.cast(_global_step[0], dtypes.int64)) + return host_fn(*args, **kwargs) + # Give the global step tensor a batch dimension. Reshape is not supported for + # int64, so we cast it to int32. + # TODO(jhseu): Remove the cast once int64 is supported. 
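+  # (Illustrative note, an assumption for exposition rather than part of the
+  # original change: a user host_call of the form (fn, [t1, t2]) is rewritten
+  # by this function into (global_step_host_fn, [global_step_tensor, t1, t2]),
+  # so every outfeed carries the current step while fn itself still receives
+  # only (t1, t2).)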
+ global_step_tensor = array_ops.reshape( + math_ops.cast(training.get_global_step(), dtypes.int32), [1]) + if isinstance(tensors, dict): + outfeed_tensors = {'_global_step': global_step_tensor} + outfeed_tensors.update(tensors) + return global_step_host_fn, outfeed_tensors + else: + fn_args = util.fn_args(host_fn) + if len(tensors) != len(fn_args): + raise RuntimeError( + 'In TPUEstimatorSpec.host_call, length of tensors {} does not match ' + 'method args of the function, which takes {}.'.format( + len(tensors), len(fn_args))) + return global_step_host_fn, [global_step_tensor] + list(tensors) + + class _OutfeedHostCallHook(session_run_hook.SessionRunHook): """Hook to run host calls when use_tpu=False.""" From 359ab22b7972802e19fe7949ebd945a59998f549 Mon Sep 17 00:00:00 2001 From: Amit Patankar <amitpatankar@google.com> Date: Sat, 3 Feb 2018 09:05:25 -0800 Subject: [PATCH 14/34] skyewm: Fix Python3 crazy SessionTest.testReentryWithCApi failure. --- tensorflow/python/client/session.py | 18 ++++++++++++++++-- 1 file changed, 16 insertions(+), 2 deletions(-) diff --git a/tensorflow/python/client/session.py b/tensorflow/python/client/session.py index 6befeb846d8..f3c4fecdc0f 100644 --- a/tensorflow/python/client/session.py +++ b/tensorflow/python/client/session.py @@ -1539,8 +1539,22 @@ class Session(BaseSession): def __exit__(self, exec_type, exec_value, exec_tb): if exec_type is errors.OpError: logging.error('Session closing due to OpError: %s', (exec_value,)) - self._default_session_context_manager.__exit__(exec_type, exec_value, - exec_tb) + try: + self._default_session_context_manager.__exit__(exec_type, exec_value, + exec_tb) + except RuntimeError as error: + if error == exec_value: + # NOTE(skyewm): for some reason, in Python3, + # _default_session_context_manager.__exit__ will re-raise the "not + # re-entrant" exception raised in __enter__ above (note that if we're + # here, we're in the outer session context manager, since __exit__ is + # not called when __enter__ raises an exception). We still want to + # continue cleaning up this context manager before the exception is + # further propagated, so we ignore it here (note that it'll continue + # being propagated after this method completes). + pass + else: + raise self._default_graph_context_manager.__exit__(exec_type, exec_value, exec_tb) self._default_session_context_manager = None From b23908b41ebad5dbe86255c2f196641e42490b2f Mon Sep 17 00:00:00 2001 From: Amit Patankar <amitpatankar@google.com> Date: Sat, 3 Feb 2018 09:17:55 -0800 Subject: [PATCH 15/34] Disabling keras io_utils_test on Windows. 
--- tensorflow/contrib/cmake/tf_tests.cmake | 2 ++ 1 file changed, 2 insertions(+) diff --git a/tensorflow/contrib/cmake/tf_tests.cmake b/tensorflow/contrib/cmake/tf_tests.cmake index 2e79eadf7f5..73edd616ea4 100644 --- a/tensorflow/contrib/cmake/tf_tests.cmake +++ b/tensorflow/contrib/cmake/tf_tests.cmake @@ -310,6 +310,8 @@ if (tensorflow_BUILD_PYTHON_TESTS) "${tensorflow_source_dir}/tensorflow/python/kernel_tests/control_flow_util_test.py" # Flaky replicate_model_fn_test "${tensorflow_source_dir}/tensorflow/contrib/estimator/python/estimator/replicate_model_fn_test.py" # b/71901810 + # Broken io_utils_test + "${tensorflow_source_dir}/tensorflow/python/keras/_impl/keras/utils/io_utils_test.py" # b/72894325 ) endif() list(REMOVE_ITEM tf_test_src_py ${tf_test_src_py_exclude}) From 6842913e9b56baa88c11f188e29e13466be9ed86 Mon Sep 17 00:00:00 2001 From: Jonathan Hseu <jhseu@google.com> Date: Sat, 3 Feb 2018 09:46:13 -0800 Subject: [PATCH 16/34] Delete device_functions.h include. --- tensorflow/core/util/cuda_device_functions.h | 1 - 1 file changed, 1 deletion(-) diff --git a/tensorflow/core/util/cuda_device_functions.h b/tensorflow/core/util/cuda_device_functions.h index f787687f662..d67663c6692 100644 --- a/tensorflow/core/util/cuda_device_functions.h +++ b/tensorflow/core/util/cuda_device_functions.h @@ -29,7 +29,6 @@ limitations under the License. #include <algorithm> #include <complex> #include "cuda/include/cuda.h" -#include "cuda/include/device_functions.h" #include "tensorflow/core/platform/types.h" #if CUDA_VERSION >= 7050 From 7946a0b1d9998cc54cb952d538668883d3fd8181 Mon Sep 17 00:00:00 2001 From: Jonathan Hseu <jhseu@google.com> Date: Sat, 3 Feb 2018 22:02:53 -0800 Subject: [PATCH 17/34] Fix the Windows GPU build #2 --- tensorflow/core/util/cuda_device_functions.h | 4 ---- 1 file changed, 4 deletions(-) diff --git a/tensorflow/core/util/cuda_device_functions.h b/tensorflow/core/util/cuda_device_functions.h index d67663c6692..525bef16a02 100644 --- a/tensorflow/core/util/cuda_device_functions.h +++ b/tensorflow/core/util/cuda_device_functions.h @@ -31,10 +31,6 @@ limitations under the License. 
#include "cuda/include/cuda.h" #include "tensorflow/core/platform/types.h" -#if CUDA_VERSION >= 7050 -#include "cuda/include/cuda_fp16.h" -#endif // CUDA_VERSION >= 7050 - namespace tensorflow { namespace detail { From 232fec7e52e52053a0e04c8919e7ece5c654d2de Mon Sep 17 00:00:00 2001 From: Jonathan Hseu <jhseu@google.com> Date: Sun, 4 Feb 2018 09:28:59 -0800 Subject: [PATCH 18/34] Fix for_canonicalization_test --- tensorflow/contrib/py2tf/converters/BUILD | 1 + 1 file changed, 1 insertion(+) diff --git a/tensorflow/contrib/py2tf/converters/BUILD b/tensorflow/contrib/py2tf/converters/BUILD index 3853c60f997..fc6781d50e8 100644 --- a/tensorflow/contrib/py2tf/converters/BUILD +++ b/tensorflow/contrib/py2tf/converters/BUILD @@ -131,6 +131,7 @@ py_test( py_test( name = "for_canonicalization_test", srcs = ["for_canonicalization_test.py"], + srcs_version = "PY2AND3", deps = [ ":test_lib", "//tensorflow/contrib/py2tf/pyct", From 573c6f40a90ace2bc921738937fea32fdf724f7b Mon Sep 17 00:00:00 2001 From: Jonathan Hseu <jhseu@google.com> Date: Mon, 5 Feb 2018 13:36:22 -0800 Subject: [PATCH 19/34] Bump the required numpy version in r1.6 --- tensorflow/tools/pip_package/setup.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tensorflow/tools/pip_package/setup.py b/tensorflow/tools/pip_package/setup.py index 20027869990..fe2c22f2f59 100644 --- a/tensorflow/tools/pip_package/setup.py +++ b/tensorflow/tools/pip_package/setup.py @@ -36,7 +36,7 @@ REQUIRED_PACKAGES = [ 'astor >= 0.6.0', 'gast >= 0.2.0', 'grpcio >= 1.8.6', - 'numpy >= 1.12.1', + 'numpy >= 1.13.3', 'six >= 1.10.0', 'protobuf >= 3.4.0', 'tensorflow-tensorboard >= 1.5.0, < 1.6.0', From 83f06ec185ee87fc57220f2f63245d81aa3c9311 Mon Sep 17 00:00:00 2001 From: Amit Patankar <amitpatankar@google.com> Date: Wed, 7 Feb 2018 20:15:10 -0800 Subject: [PATCH 20/34] Adding the Visual Studio specification for cmake for the 1.6 branch. (#16852) --- tensorflow/tools/ci_build/windows/gpu/cmake/run_build.bat | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tensorflow/tools/ci_build/windows/gpu/cmake/run_build.bat b/tensorflow/tools/ci_build/windows/gpu/cmake/run_build.bat index b87e4a9bec4..e5451461888 100644 --- a/tensorflow/tools/ci_build/windows/gpu/cmake/run_build.bat +++ b/tensorflow/tools/ci_build/windows/gpu/cmake/run_build.bat @@ -37,7 +37,7 @@ SET CMAKE_DIR=%REPO_ROOT%\tensorflow\contrib\cmake SET MSBUILD_EXE="C:\Program Files (x86)\MSBuild\14.0\Bin\msbuild.exe" :: Run cmake to create Visual Studio Project files. -%CMAKE_EXE% %CMAKE_DIR% -A x64 -DSWIG_EXECUTABLE=%SWIG_EXE% -DPYTHON_EXECUTABLE=%PY_EXE% -DCMAKE_BUILD_TYPE=Release -DPYTHON_LIBRARIES=%PY_LIB% -Dtensorflow_BUILD_PYTHON_TESTS=%BUILD_PYTHON_TESTS% -Dtensorflow_BUILD_CC_TESTS=%BUILD_CC_TESTS% -Dtensorflow_ENABLE_GPU=ON -DCUDNN_HOME=%CUDNN_HOME% -Dtensorflow_TF_NIGHTLY=%TF_NIGHTLY% -Dtensorflow_DISABLE_EIGEN_FORCEINLINE=%DISABLE_FORCEINLINE% -Dtensorflow_WIN_CPU_SIMD_OPTIONS=/arch:AVX +%CMAKE_EXE% %CMAKE_DIR% -A x64 -DSWIG_EXECUTABLE=%SWIG_EXE% -DPYTHON_EXECUTABLE=%PY_EXE% -DCMAKE_BUILD_TYPE=Release -DPYTHON_LIBRARIES=%PY_LIB% -Dtensorflow_BUILD_PYTHON_TESTS=%BUILD_PYTHON_TESTS% -Dtensorflow_BUILD_CC_TESTS=%BUILD_CC_TESTS% -Dtensorflow_ENABLE_GPU=ON -DCUDNN_HOME=%CUDNN_HOME% -Dtensorflow_TF_NIGHTLY=%TF_NIGHTLY% -Dtensorflow_DISABLE_EIGEN_FORCEINLINE=%DISABLE_FORCEINLINE% -Dtensorflow_WIN_CPU_SIMD_OPTIONS=/arch:AVX -G "Visual Studio 14 2015" :: Run msbuild in the resulting VS project files to build a pip package. 
%MSBUILD_EXE% /p:Configuration=Release /maxcpucount:32 tf_python_build_pip_package.vcxproj From 963941e7d639b3211a5792b1086128db554a740a Mon Sep 17 00:00:00 2001 From: Nick Felt <nfelt@users.noreply.github.com> Date: Fri, 9 Feb 2018 13:15:31 -0800 Subject: [PATCH 21/34] Update tensorboard dependency to 1.6.0+ and new name (#16815) * Update tensorboard dependency to 1.6.0+ and new name * Mention tensorboard package name change in RELEASE.md --- RELEASE.md | 4 ++++ tensorflow/tools/pip_package/setup.py | 2 +- 2 files changed, 5 insertions(+), 1 deletion(-) diff --git a/RELEASE.md b/RELEASE.md index 0fad3b5d41c..de4a34bb040 100644 --- a/RELEASE.md +++ b/RELEASE.md @@ -59,6 +59,10 @@ newcomers. TensorFlow will print a warning if you use XLA:GPU with a known-bad version of CUDA; see e00ba24c4038e7644da417ddc639169b6ea59122. +* The `tensorboard` command or module may appear to be missing after certain + upgrade flows. This is due to pip package conflicts as a result of changing + the TensorBoard package name. See the [TensorBoard 1.6.0 release notes]( + https://github.com/tensorflow/tensorboard/releases/tag/1.6.0) for a fix. ## Thanks to our Contributors diff --git a/tensorflow/tools/pip_package/setup.py b/tensorflow/tools/pip_package/setup.py index fe2c22f2f59..0d4fa465ad2 100644 --- a/tensorflow/tools/pip_package/setup.py +++ b/tensorflow/tools/pip_package/setup.py @@ -39,7 +39,7 @@ REQUIRED_PACKAGES = [ 'numpy >= 1.13.3', 'six >= 1.10.0', 'protobuf >= 3.4.0', - 'tensorflow-tensorboard >= 1.5.0, < 1.6.0', + 'tensorboard >= 1.6.0, < 1.7.0', 'termcolor >= 1.1.0', ] From 61df29fa97cf82f3d1ef129a70bb5fa3ed99fe3a Mon Sep 17 00:00:00 2001 From: Gunhan Gulsoy <gunan@google.com> Date: Sat, 10 Feb 2018 09:50:40 -0800 Subject: [PATCH 22/34] Update version string to 1.6.0-rc1 --- .../contrib/tpu/profiler/pip_package/setup.py | 2 +- tensorflow/core/public/version.h | 2 +- tensorflow/docs_src/install/install_c.md | 2 +- tensorflow/docs_src/install/install_go.md | 2 +- tensorflow/docs_src/install/install_java.md | 22 +++++++++---------- tensorflow/docs_src/install/install_linux.md | 22 +++++++++---------- tensorflow/docs_src/install/install_mac.md | 10 ++++----- .../docs_src/install/install_sources.md | 14 ++++++------ tensorflow/tools/pip_package/setup.py | 2 +- 9 files changed, 39 insertions(+), 39 deletions(-) diff --git a/tensorflow/contrib/tpu/profiler/pip_package/setup.py b/tensorflow/contrib/tpu/profiler/pip_package/setup.py index cb619847990..52984cd6fd7 100644 --- a/tensorflow/contrib/tpu/profiler/pip_package/setup.py +++ b/tensorflow/contrib/tpu/profiler/pip_package/setup.py @@ -20,7 +20,7 @@ from __future__ import print_function from setuptools import setup -_VERSION = '1.6.0-rc0' +_VERSION = '1.6.0-rc1' CONSOLE_SCRIPTS = [ 'capture_tpu_profile=cloud_tpu_profiler.main:run_main', diff --git a/tensorflow/core/public/version.h b/tensorflow/core/public/version.h index 50bfa912678..7405e01e144 100644 --- a/tensorflow/core/public/version.h +++ b/tensorflow/core/public/version.h @@ -24,7 +24,7 @@ limitations under the License. // TF_VERSION_SUFFIX is non-empty for pre-releases (e.g. 
"-alpha", "-alpha.1", // "-beta", "-rc", "-rc.1") -#define TF_VERSION_SUFFIX "-rc0" +#define TF_VERSION_SUFFIX "-rc1" #define TF_STR_HELPER(x) #x #define TF_STR(x) TF_STR_HELPER(x) diff --git a/tensorflow/docs_src/install/install_c.md b/tensorflow/docs_src/install/install_c.md index a783205b4a2..f3620cf6873 100644 --- a/tensorflow/docs_src/install/install_c.md +++ b/tensorflow/docs_src/install/install_c.md @@ -38,7 +38,7 @@ enable TensorFlow for C: OS="linux" # Change to "darwin" for macOS TARGET_DIRECTORY="/usr/local" curl -L \ - "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-${OS}-x86_64-1.6.0-rc0.tar.gz" | + "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-${OS}-x86_64-1.6.0-rc1.tar.gz" | sudo tar -C $TARGET_DIRECTORY -xz The `tar` command extracts the TensorFlow C library into the `lib` diff --git a/tensorflow/docs_src/install/install_go.md b/tensorflow/docs_src/install/install_go.md index 5249e04615b..4bf4bacaecb 100644 --- a/tensorflow/docs_src/install/install_go.md +++ b/tensorflow/docs_src/install/install_go.md @@ -38,7 +38,7 @@ steps to install this library and enable TensorFlow for Go: TF_TYPE="cpu" # Change to "gpu" for GPU support TARGET_DIRECTORY='/usr/local' curl -L \ - "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-$(go env GOOS)-x86_64-1.6.0-rc0.tar.gz" | + "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-$(go env GOOS)-x86_64-1.6.0-rc1.tar.gz" | sudo tar -C $TARGET_DIRECTORY -xz The `tar` command extracts the TensorFlow C library into the `lib` diff --git a/tensorflow/docs_src/install/install_java.md b/tensorflow/docs_src/install/install_java.md index 0c6c773e624..1905f9729e8 100644 --- a/tensorflow/docs_src/install/install_java.md +++ b/tensorflow/docs_src/install/install_java.md @@ -36,7 +36,7 @@ following to the project's `pom.xml` to use the TensorFlow Java APIs: <dependency> <groupId>org.tensorflow</groupId> <artifactId>tensorflow</artifactId> - <version>1.6.0-rc0</version> + <version>1.6.0-rc1</version> </dependency> ``` @@ -65,7 +65,7 @@ As an example, these steps will create a Maven project that uses TensorFlow: <dependency> <groupId>org.tensorflow</groupId> <artifactId>tensorflow</artifactId> - <version>1.6.0-rc0</version> + <version>1.6.0-rc1</version> </dependency> </dependencies> </project> @@ -123,12 +123,12 @@ instead: <dependency> <groupId>org.tensorflow</groupId> <artifactId>libtensorflow</artifactId> - <version>1.6.0-rc0</version> + <version>1.6.0-rc1</version> </dependency> <dependency> <groupId>org.tensorflow</groupId> <artifactId>libtensorflow_jni_gpu</artifactId> - <version>1.6.0-rc0</version> + <version>1.6.0-rc1</version> </dependency> ``` @@ -147,7 +147,7 @@ refer to the simpler instructions above instead. Take the following steps to install TensorFlow for Java on Linux or macOS: 1. Download - [libtensorflow.jar](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-1.6.0-rc0.jar), + [libtensorflow.jar](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-1.6.0-rc1.jar), which is the TensorFlow Java Archive (JAR). 2. 
Decide whether you will run TensorFlow for Java on CPU(s) only or with @@ -166,7 +166,7 @@ Take the following steps to install TensorFlow for Java on Linux or macOS: OS=$(uname -s | tr '[:upper:]' '[:lower:]') mkdir -p ./jni curl -L \ - "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-${TF_TYPE}-${OS}-x86_64-1.6.0-rc0.tar.gz" | + "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-${TF_TYPE}-${OS}-x86_64-1.6.0-rc1.tar.gz" | tar -xz -C ./jni ### Install on Windows @@ -174,10 +174,10 @@ Take the following steps to install TensorFlow for Java on Linux or macOS: Take the following steps to install TensorFlow for Java on Windows: 1. Download - [libtensorflow.jar](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-1.6.0-rc0.jar), + [libtensorflow.jar](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-1.6.0-rc1.jar), which is the TensorFlow Java Archive (JAR). 2. Download the following Java Native Interface (JNI) file appropriate for - [TensorFlow for Java on Windows](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-cpu-windows-x86_64-1.6.0-rc0.zip). + [TensorFlow for Java on Windows](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-cpu-windows-x86_64-1.6.0-rc1.zip). 3. Extract this .zip file. @@ -225,7 +225,7 @@ must be part of your `classpath`. For example, you can include the downloaded `.jar` in your `classpath` by using the `-cp` compilation flag as follows: -<pre><b>javac -cp libtensorflow-1.6.0-rc0.jar HelloTF.java</b></pre> +<pre><b>javac -cp libtensorflow-1.6.0-rc1.jar HelloTF.java</b></pre> ### Running @@ -239,11 +239,11 @@ two files are available to the JVM: For example, the following command line executes the `HelloTF` program on Linux and macOS X: -<pre><b>java -cp libtensorflow-1.6.0-rc0.jar:. -Djava.library.path=./jni HelloTF</b></pre> +<pre><b>java -cp libtensorflow-1.6.0-rc1.jar:. -Djava.library.path=./jni HelloTF</b></pre> And the following command line executes the `HelloTF` program on Windows: -<pre><b>java -cp libtensorflow-1.6.0-rc0.jar;. -Djava.library.path=jni HelloTF</b></pre> +<pre><b>java -cp libtensorflow-1.6.0-rc1.jar;. -Djava.library.path=jni HelloTF</b></pre> If the program prints <tt>Hello from <i>version</i></tt>, you've successfully installed TensorFlow for Java and are ready to use the API. If the program diff --git a/tensorflow/docs_src/install/install_linux.md b/tensorflow/docs_src/install/install_linux.md index 105b2251773..62bd45650ad 100644 --- a/tensorflow/docs_src/install/install_linux.md +++ b/tensorflow/docs_src/install/install_linux.md @@ -188,7 +188,7 @@ Take the following steps to install TensorFlow with Virtualenv: Virtualenv environment: <pre>(tensorflow)$ <b>pip3 install --upgrade \ - https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.6.0rc0-cp34-cp34m-linux_x86_64.whl</b></pre> + https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.6.0rc1-cp34-cp34m-linux_x86_64.whl</b></pre> If you encounter installation problems, see [Common Installation Problems](#common_installation_problems). 
@@ -293,7 +293,7 @@ take the following steps: <pre> $ <b>sudo pip3 install --upgrade \ - https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.6.0rc0-cp34-cp34m-linux_x86_64.whl</b> + https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.6.0rc1-cp34-cp34m-linux_x86_64.whl</b> </pre> If this step fails, see @@ -480,7 +480,7 @@ Take the following steps to install TensorFlow in an Anaconda environment: <pre> (tensorflow)$ <b>pip install --ignore-installed --upgrade \ - https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.6.0rc0-cp34-cp34m-linux_x86_64.whl</b></pre> + https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.6.0rc1-cp34-cp34m-linux_x86_64.whl</b></pre> <a name="ValidateYourInstallation"></a> @@ -648,14 +648,14 @@ This section documents the relevant values for Linux installations. CPU only: <pre> -https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.6.0rc0-cp27-none-linux_x86_64.whl +https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.6.0rc1-cp27-none-linux_x86_64.whl </pre> GPU support: <pre> -https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.6.0rc0-cp27-none-linux_x86_64.whl +https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.6.0rc1-cp27-none-linux_x86_64.whl </pre> Note that GPU support requires the NVIDIA hardware and software described in @@ -667,14 +667,14 @@ Note that GPU support requires the NVIDIA hardware and software described in CPU only: <pre> -https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.6.0rc0-cp34-cp34m-linux_x86_64.whl +https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.6.0rc1-cp34-cp34m-linux_x86_64.whl </pre> GPU support: <pre> -https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.6.0rc0-cp34-cp34m-linux_x86_64.whl +https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.6.0rc1-cp34-cp34m-linux_x86_64.whl </pre> Note that GPU support requires the NVIDIA hardware and software described in @@ -686,14 +686,14 @@ Note that GPU support requires the NVIDIA hardware and software described in CPU only: <pre> -https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.6.0rc0-cp35-cp35m-linux_x86_64.whl +https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.6.0rc1-cp35-cp35m-linux_x86_64.whl </pre> GPU support: <pre> -https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.6.0rc0-cp35-cp35m-linux_x86_64.whl +https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.6.0rc1-cp35-cp35m-linux_x86_64.whl </pre> @@ -705,14 +705,14 @@ Note that GPU support requires the NVIDIA hardware and software described in CPU only: <pre> -https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.6.0rc0-cp36-cp36m-linux_x86_64.whl +https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.6.0rc1-cp36-cp36m-linux_x86_64.whl </pre> GPU support: <pre> -https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.6.0rc0-cp36-cp36m-linux_x86_64.whl +https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.6.0rc1-cp36-cp36m-linux_x86_64.whl </pre> diff --git a/tensorflow/docs_src/install/install_mac.md b/tensorflow/docs_src/install/install_mac.md index a6ea548cfbd..e3832a7a2a8 100644 --- a/tensorflow/docs_src/install/install_mac.md +++ b/tensorflow/docs_src/install/install_mac.md @@ -115,7 +115,7 @@ Take the following steps to install TensorFlow with Virtualenv: TensorFlow in the active Virtualenv is as follows: <pre> $ <b>pip3 install --upgrade \ - 
https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.6.0rc0-py3-none-any.whl</b></pre> + https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.6.0rc1-py3-none-any.whl</b></pre> If you encounter installation problems, see [Common Installation Problems](#common-installation-problems). @@ -238,7 +238,7 @@ take the following steps: issue the following command: <pre> $ <b>sudo pip3 install --upgrade \ - https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.6.0rc0-py3-none-any.whl</b> </pre> + https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.6.0rc1-py3-none-any.whl</b> </pre> If the preceding command fails, see [installation problems](#common-installation-problems). @@ -347,7 +347,7 @@ Take the following steps to install TensorFlow in an Anaconda environment: TensorFlow for Python 2.7: <pre> (<i>targetDirectory</i>)$ <b>pip install --ignore-installed --upgrade \ - https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.6.0rc0-py2-none-any.whl</b></pre> + https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.6.0rc1-py2-none-any.whl</b></pre> <a name="ValidateYourInstallation"></a> @@ -520,7 +520,7 @@ This section documents the relevant values for Mac OS installations. <pre> -https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.6.0rc0-py2-none-any.whl +https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.6.0rc1-py2-none-any.whl </pre> @@ -528,5 +528,5 @@ https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.6.0rc0-py2-none-a <pre> -https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.6.0rc0-py3-none-any.whl +https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.6.0rc1-py3-none-any.whl </pre> diff --git a/tensorflow/docs_src/install/install_sources.md b/tensorflow/docs_src/install/install_sources.md index 36dffd85dce..051da692d35 100644 --- a/tensorflow/docs_src/install/install_sources.md +++ b/tensorflow/docs_src/install/install_sources.md @@ -359,10 +359,10 @@ Invoke `pip install` to install that pip package. The filename of the `.whl` file depends on your platform. For example, the following command will install the pip package -for TensorFlow 1.6.0rc0 on Linux: +for TensorFlow 1.6.0rc1 on Linux: <pre> -$ <b>sudo pip install /tmp/tensorflow_pkg/tensorflow-1.6.0rc0-py2-none-any.whl</b> +$ <b>sudo pip install /tmp/tensorflow_pkg/tensorflow-1.6.0rc1-py2-none-any.whl</b> </pre> ## Validate your installation @@ -460,8 +460,8 @@ Stack Overflow and specify the `tensorflow` tag. 
**Linux** <table> <tr><th>Version:</th><th>CPU/GPU:</th><th>Python Version:</th><th>Compiler:</th><th>Build Tools:</th><th>cuDNN:</th><th>CUDA:</th></tr> -<tr><td>tensorflow-1.6.0rc0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>GCC 4.8</td><td>Bazel 0.9.0</td><td>N/A</td><td>N/A</td></tr> -<tr><td>tensorflow_gpu-1.6.0rc0</td><td>GPU</td><td>2.7, 3.3-3.6</td><td>GCC 4.8</td><td>Bazel 0.9.0</td><td>7</td><td>9</td></tr> +<tr><td>tensorflow-1.6.0rc1</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>GCC 4.8</td><td>Bazel 0.9.0</td><td>N/A</td><td>N/A</td></tr> +<tr><td>tensorflow_gpu-1.6.0rc1</td><td>GPU</td><td>2.7, 3.3-3.6</td><td>GCC 4.8</td><td>Bazel 0.9.0</td><td>7</td><td>9</td></tr> <tr><td>tensorflow-1.5.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>GCC 4.8</td><td>Bazel 0.8.0</td><td>N/A</td><td>N/A</td></tr> <tr><td>tensorflow_gpu-1.5.0</td><td>GPU</td><td>2.7, 3.3-3.6</td><td>GCC 4.8</td><td>Bazel 0.8.0</td><td>7</td><td>9</td></tr> <tr><td>tensorflow-1.4.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>GCC 4.8</td><td>Bazel 0.5.4</td><td>N/A</td><td>N/A</td></tr> @@ -479,7 +479,7 @@ Stack Overflow and specify the `tensorflow` tag. **Mac** <table> <tr><th>Version:</th><th>CPU/GPU:</th><th>Python Version:</th><th>Compiler:</th><th>Build Tools:</th><th>cuDNN:</th><th>CUDA:</th></tr> -<tr><td>tensorflow-1.6.0rc0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>Clang from xcode</td><td>Bazel 0.8.1</td><td>N/A</td><td>N/A</td></tr> +<tr><td>tensorflow-1.6.0rc1</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>Clang from xcode</td><td>Bazel 0.8.1</td><td>N/A</td><td>N/A</td></tr> <tr><td>tensorflow-1.5.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>Clang from xcode</td><td>Bazel 0.8.1</td><td>N/A</td><td>N/A</td></tr> <tr><td>tensorflow-1.4.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>Clang from xcode</td><td>Bazel 0.5.4</td><td>N/A</td><td>N/A</td></tr> <tr><td>tensorflow-1.3.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>Clang from xcode</td><td>Bazel 0.4.5</td><td>N/A</td><td>N/A</td></tr> @@ -493,8 +493,8 @@ Stack Overflow and specify the `tensorflow` tag. **Windows** <table> <tr><th>Version:</th><th>CPU/GPU:</th><th>Python Version:</th><th>Compiler:</th><th>Build Tools:</th><th>cuDNN:</th><th>CUDA:</th></tr> -<tr><td>tensorflow-1.6.0rc0</td><td>CPU</td><td>3.5-3.6</td><td>MSVC 2015 update 3</td><td>Cmake v3.6.3</td><td>N/A</td><td>N/A</td></tr> -<tr><td>tensorflow_gpu-1.6.0rc0</td><td>GPU</td><td>3.5-3.6</td><td>MSVC 2015 update 3</td><td>Cmake v3.6.3</td><td>7</td><td>9</td></tr> +<tr><td>tensorflow-1.6.0rc1</td><td>CPU</td><td>3.5-3.6</td><td>MSVC 2015 update 3</td><td>Cmake v3.6.3</td><td>N/A</td><td>N/A</td></tr> +<tr><td>tensorflow_gpu-1.6.0rc1</td><td>GPU</td><td>3.5-3.6</td><td>MSVC 2015 update 3</td><td>Cmake v3.6.3</td><td>7</td><td>9</td></tr> <tr><td>tensorflow-1.5.0</td><td>CPU</td><td>3.5-3.6</td><td>MSVC 2015 update 3</td><td>Cmake v3.6.3</td><td>N/A</td><td>N/A</td></tr> <tr><td>tensorflow_gpu-1.5.0</td><td>GPU</td><td>3.5-3.6</td><td>MSVC 2015 update 3</td><td>Cmake v3.6.3</td><td>7</td><td>9</td></tr> <tr><td>tensorflow-1.4.0</td><td>CPU</td><td>3.5-3.6</td><td>MSVC 2015 update 3</td><td>Cmake v3.6.3</td><td>N/A</td><td>N/A</td></tr> diff --git a/tensorflow/tools/pip_package/setup.py b/tensorflow/tools/pip_package/setup.py index 0d4fa465ad2..a835275daed 100644 --- a/tensorflow/tools/pip_package/setup.py +++ b/tensorflow/tools/pip_package/setup.py @@ -29,7 +29,7 @@ from setuptools.dist import Distribution # This version string is semver compatible, but incompatible with pip. 
# For pip, we will remove all '-' characters from this string, and use the
# result for pip.
-_VERSION = '1.6.0-rc0'
+_VERSION = '1.6.0-rc1'

REQUIRED_PACKAGES = [
    'absl-py >= 0.1.6',
From 1eca578242eda2db93cdb2509413996e9294751e Mon Sep 17 00:00:00 2001
From: Derek Murray <mrry@google.com>
Date: Mon, 5 Feb 2018 13:43:52 -0800
Subject: [PATCH 23/34] [tf.data] Fix use-after-free bug when closing down an
 input pipeline.

This fix affects the distributed runtime; DirectSession use is unaffected.

Before this change, an iterator that used a background prefetching thread
might attempt to use a captured FunctionLibraryRuntime from a subgraph that
had been deregistered (and hence its FunctionLibraryRuntime would have been
deleted). This change introduces a mechanism for "cloning" the necessary
parts of the FunctionLibraryRuntime so that it can be owned by the
IteratorResource.

PiperOrigin-RevId: 184579490
---
 tensorflow/core/common_runtime/function.cc    | 19 ++++++++++++
 .../core/common_runtime/graph_optimizer.h     |  2 ++
 .../process_function_library_runtime.cc       | 12 ++++++++
 .../process_function_library_runtime.h        |  6 ++++
 tensorflow/core/framework/function.h          |  5 ++++
 tensorflow/core/kernels/data/iterator_ops.cc  |  7 +++--
 .../kernel_tests/iterator_ops_cluster_test.py | 30 +++++++++++++++++++
 7 files changed, 79 insertions(+), 2 deletions(-)

diff --git a/tensorflow/core/common_runtime/function.cc b/tensorflow/core/common_runtime/function.cc
index 150fb85c707..248ff9051b7 100644
--- a/tensorflow/core/common_runtime/function.cc
+++ b/tensorflow/core/common_runtime/function.cc
@@ -183,6 +183,10 @@ class FunctionLibraryRuntimeImpl : public FunctionLibraryRuntime {
 
   string DebugString(Handle h) override;
 
+  Status Clone(std::unique_ptr<FunctionLibraryDefinition>* out_lib_def,
+               std::unique_ptr<ProcessFunctionLibraryRuntime>* out_pflr,
+               FunctionLibraryRuntime** out_flr) override;
+
  private:
   typedef FunctionLibraryRuntimeImpl ME;
 
@@ -895,6 +899,21 @@ string FunctionLibraryRuntimeImpl::DebugString(Handle handle) {
   }
 }
 
+Status FunctionLibraryRuntimeImpl::Clone(
+    std::unique_ptr<FunctionLibraryDefinition>* out_lib_def,
+    std::unique_ptr<ProcessFunctionLibraryRuntime>* out_pflr,
+    FunctionLibraryRuntime** out_flr) {
+  TF_RETURN_IF_ERROR(
+      parent_->Clone(env_, graph_def_version_, optimizer_.options(),
+                     custom_kernel_creator_, out_lib_def, out_pflr));
+  *out_flr = (*out_pflr)->GetFLR(device_->name());
+  if (*out_flr != nullptr) {
+    return Status::OK();
+  } else {
+    return errors::Internal("Cloning FunctionLibraryRuntime failed.");
+  }
+}
+
 namespace {
 
 struct CustomCreatorSingleton {
diff --git a/tensorflow/core/common_runtime/graph_optimizer.h b/tensorflow/core/common_runtime/graph_optimizer.h
index 8477cea126f..80246281cde 100644
--- a/tensorflow/core/common_runtime/graph_optimizer.h
+++ b/tensorflow/core/common_runtime/graph_optimizer.h
@@ -52,6 +52,8 @@ class GraphOptimizer {
           shape_map,
       const std::function<bool(const Node*)>& cse_consider_fn = nullptr);
 
+  const OptimizerOptions& options() { return opts_; }
+
  private:
   OptimizerOptions opts_;
 
diff --git a/tensorflow/core/common_runtime/process_function_library_runtime.cc b/tensorflow/core/common_runtime/process_function_library_runtime.cc
index dd4bf6a3457..41e1ce8c15c 100644
--- a/tensorflow/core/common_runtime/process_function_library_runtime.cc
+++ b/tensorflow/core/common_runtime/process_function_library_runtime.cc
@@ -350,4 +350,16 @@ void ProcessFunctionLibraryRuntime::Run(
   done(errors::Internal("Could not find device"));
 }
 
+Status
ProcessFunctionLibraryRuntime::Clone( + Env* env, int graph_def_version, const OptimizerOptions& optimizer_options, + CustomKernelCreator custom_kernel_creator, + std::unique_ptr<FunctionLibraryDefinition>* out_lib_def, + std::unique_ptr<ProcessFunctionLibraryRuntime>* out_pflr) { + out_lib_def->reset(new FunctionLibraryDefinition(*lib_def_)); + out_pflr->reset(new ProcessFunctionLibraryRuntime( + device_mgr_, env, graph_def_version, out_lib_def->get(), + optimizer_options, std::move(custom_kernel_creator), parent_)); + return Status::OK(); +} + } // namespace tensorflow diff --git a/tensorflow/core/common_runtime/process_function_library_runtime.h b/tensorflow/core/common_runtime/process_function_library_runtime.h index 9c9c92f1ea0..4296f9449f4 100644 --- a/tensorflow/core/common_runtime/process_function_library_runtime.h +++ b/tensorflow/core/common_runtime/process_function_library_runtime.h @@ -145,6 +145,12 @@ class ProcessFunctionLibraryRuntime { // Removes handle from the state owned by this object. Status RemoveHandle(FunctionLibraryRuntime::Handle handle); + Status Clone(Env* env, int graph_def_version, + const OptimizerOptions& optimizer_options, + CustomKernelCreator custom_kernel_creator, + std::unique_ptr<FunctionLibraryDefinition>* out_lib_def, + std::unique_ptr<ProcessFunctionLibraryRuntime>* out_pflr); + friend class FunctionLibraryRuntimeImpl; mutable mutex mu_; diff --git a/tensorflow/core/framework/function.h b/tensorflow/core/framework/function.h index b933ee0b0e4..7498cde6372 100644 --- a/tensorflow/core/framework/function.h +++ b/tensorflow/core/framework/function.h @@ -35,6 +35,7 @@ namespace tensorflow { class CancellationManager; class GraphDef; class OpKernel; +class ProcessFunctionLibraryRuntime; class ResourceMgr; class Rendezvous; class ScopedStepContainer; @@ -534,6 +535,10 @@ class FunctionLibraryRuntime { virtual int graph_def_version() = 0; typedef uint64 LocalHandle; + + virtual Status Clone(std::unique_ptr<FunctionLibraryDefinition>* out_lib_def, + std::unique_ptr<ProcessFunctionLibraryRuntime>* out_pflr, + FunctionLibraryRuntime** out_flr) = 0; }; // Returns a canonicalized string for the instantiation of the diff --git a/tensorflow/core/kernels/data/iterator_ops.cc b/tensorflow/core/kernels/data/iterator_ops.cc index dd5f4a4554b..8a420ac26dd 100644 --- a/tensorflow/core/kernels/data/iterator_ops.cc +++ b/tensorflow/core/kernels/data/iterator_ops.cc @@ -459,7 +459,7 @@ class IteratorHandleOp : public OpKernel { { mutex_lock l(mu_); if (resource_ == nullptr) { - FunctionLibraryRuntime* lib = context->function_library(); + FunctionLibraryRuntime* lib; std::unique_ptr<DeviceMgr> device_mgr(nullptr); std::unique_ptr<FunctionLibraryDefinition> flib_def(nullptr); std::unique_ptr<ProcessFunctionLibraryRuntime> pflr(nullptr); @@ -469,6 +469,9 @@ class IteratorHandleOp : public OpKernel { // is sufficient demand, but it will require a significant refactoring. if (!name_.empty()) { lib = CreatePrivateFLR(context, &device_mgr, &flib_def, &pflr); + } else { + OP_REQUIRES_OK(context, context->function_library()->Clone( + &flib_def, &pflr, &lib)); } ResourceMgr* mgr = context->resource_manager(); @@ -538,7 +541,7 @@ class IteratorHandleOp : public OpKernel { // Wrap the existing device in order to see any captured resources // in its resource manager. The existing device will outlive the // IteratorResource, because we are storing the IteratorResource - // in that device's resourc manager. + // in that device's resource manager. 
    Device* wrapped_device = RenamedDevice::NewRenamedDevice(
        ctx->device()->name(), down_cast<Device*>(ctx->device()),
        false /* owns_underlying */, false /* isolate_session_state */);
diff --git a/tensorflow/python/data/kernel_tests/iterator_ops_cluster_test.py b/tensorflow/python/data/kernel_tests/iterator_ops_cluster_test.py
index 2c65c49ebdc..25c91b42dc6 100644
--- a/tensorflow/python/data/kernel_tests/iterator_ops_cluster_test.py
+++ b/tensorflow/python/data/kernel_tests/iterator_ops_cluster_test.py
@@ -17,6 +17,8 @@ from __future__ import absolute_import
 from __future__ import division
 from __future__ import print_function
 
+import numpy as np
+
 from tensorflow.core.protobuf import config_pb2
 from tensorflow.python.client import session
 from tensorflow.python.data.ops import dataset_ops
@@ -30,6 +32,7 @@ from tensorflow.python.framework import test_util
 from tensorflow.python.ops import array_ops
 from tensorflow.python.ops import functional_ops
 from tensorflow.python.ops import lookup_ops
+from tensorflow.python.ops import math_ops
 from tensorflow.python.ops import string_ops
 from tensorflow.python.platform import test
 
@@ -140,6 +143,33 @@ class IteratorClusterTest(test.TestCase):
     with self.assertRaises(errors.OutOfRangeError):
       sess.run(get_next)
 
+  def testImplicitDisposeParallelMapDataset(self):
+    # Tests whether a parallel map dataset will be cleaned up correctly when
+    # the pipeline does not run it until exhaustion.
+    # The pipeline is TensorSliceDataset -> MapDataset(_map_fn) ->
+    # RepeatDataset(None) -> PrefetchDataset(10000).
+    worker, _ = test_util.create_local_cluster(1, 1)
+
+    components = (np.arange(1000),
+                  np.array([[1, 2, 3]]) * np.arange(1000)[:, np.newaxis],
+                  np.array(37.0) * np.arange(1000))
+
+    def _map_fn(x, y, z):
+      return math_ops.square(x), math_ops.square(y), math_ops.square(z)
+
+    dataset = (
+        dataset_ops.Dataset.from_tensor_slices(components).map(_map_fn)
+        .repeat(None).prefetch(10000))
+
+    iterator = dataset.make_initializable_iterator()
+    init_op = iterator.initializer
+    get_next = iterator.get_next()
+
+    with session.Session(worker[0].target) as sess:
+      sess.run(init_op)
+      for _ in range(3):
+        sess.run(get_next)
+
 if __name__ == "__main__":
   test.main()
From d4ad85b765f5945bf35a256f0dd2e9a5278de3e5 Mon Sep 17 00:00:00 2001
From: "A. Unique TensorFlower" <gardener@tensorflow.org>
Date: Tue, 6 Feb 2018 09:40:53 -0800
Subject: [PATCH 24/34] Fix bug in and speed up
 ConstantFolding::CreateNodeDef():

* Fix bug trying to store more than kint32max values in a repeated proto
  field.
* Speed up populating compressed format.

Example: tensorflow/python/kernel_tests/large_concat_op_test with
size = 2**29+6 goes from ~30 seconds to ~15 seconds. The fraction of time
spent in ConstantFolding::CreateNodeDef() goes down from about 35% to about
12%.

PiperOrigin-RevId: 184693749
---
 .../grappler/optimizers/constant_folding.cc   | 34 +++++++++++--------
 1 file changed, 20 insertions(+), 14 deletions(-)

diff --git a/tensorflow/core/grappler/optimizers/constant_folding.cc b/tensorflow/core/grappler/optimizers/constant_folding.cc
index 0aeff6222c2..2caefdf4bde 100644
--- a/tensorflow/core/grappler/optimizers/constant_folding.cc
+++ b/tensorflow/core/grappler/optimizers/constant_folding.cc
@@ -808,20 +808,26 @@ NodeDef ConstantFolding::CreateNodeDef(const string& name,
   // Use the packed representation whenever possible to avoid generating large
   // graphdefs. Moreover, avoid repeating the last values if they're equal.
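  // (Illustrative example, not part of the original change: under this
  // scheme a tensor whose values are [5, 5, 5, 5, 5] stores just [5], and
  // [1, 2, 2, 2] stores [1, 2]; readers repeat the last stored value to
  // fill in the remaining elements.)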
if (tensor->NumElements() > 4) { -#define POPULATE_TENSOR_PROTO(tensor, t, TYPE, NAME) \ - optimized = true; \ - TYPE last = tensor->flat<TYPE>()(0); \ - int last_index = 0; \ - for (int i = 0; i < tensor->NumElements(); ++i) { \ - TYPE cur = tensor->flat<TYPE>()(i); \ - t->add_##NAME##_val(cur); \ - if (cur != last) { \ - last = cur; \ - last_index = i; \ - } \ - } \ - /* Remove all identical trailing values to save memory. */ \ - t->mutable_##NAME##_val()->Truncate(last_index + 1); +#define POPULATE_TENSOR_PROTO(tensor, t, TYPE, NAME) \ + const TYPE* val_ptr = tensor->flat<TYPE>().data(); \ + TYPE last = *val_ptr; \ + int64 last_index = 0; \ + for (int64 i = 0; i < tensor->NumElements(); ++i) { \ + TYPE cur = *val_ptr++; \ + if (cur != last) { \ + last = cur; \ + last_index = i; \ + } \ + } \ + if (last_index < kint32max) { \ + optimized = true; \ + t->mutable_##NAME##_val()->Reserve(last_index + 1); \ + t->mutable_##NAME##_val()->AddNAlreadyReserved(last_index + 1); \ + val_ptr = tensor->flat<TYPE>().data(); \ + for (int64 i = 0; i <= last_index; ++i) { \ + t->set_##NAME##_val(i, *val_ptr++); \ + } \ + } if (tensor->dtype() == DT_FLOAT) { POPULATE_TENSOR_PROTO(tensor, t, float, float) From c829c71afb14cd41079231b3c3f33fad5e119679 Mon Sep 17 00:00:00 2001 From: Derek Murray <mrry@google.com> Date: Tue, 6 Feb 2018 17:37:02 -0800 Subject: [PATCH 25/34] [tf.data] Fix a memory leak when an iterator is reinitialized many times in a session. Previously, we would instantiate a new function handle for each function in a dataset each time an iterator on that dataset was initialized. These would only be deleted at session closure, which could lead to an apparent leak of memory over the lifetime of session. PiperOrigin-RevId: 184768730 --- .../core/kernels/data/captured_function.cc | 6 +++++- tensorflow/core/kernels/data/iterator_ops.cc | 19 ++++++++++++------- 2 files changed, 17 insertions(+), 8 deletions(-) diff --git a/tensorflow/core/kernels/data/captured_function.cc b/tensorflow/core/kernels/data/captured_function.cc index f3e4f1cd3fd..f248f7897ff 100644 --- a/tensorflow/core/kernels/data/captured_function.cc +++ b/tensorflow/core/kernels/data/captured_function.cc @@ -32,7 +32,11 @@ Status CapturedFunction::Create( return Status::OK(); } -CapturedFunction::~CapturedFunction() {} +CapturedFunction::~CapturedFunction() { + if (lib_ != nullptr) { + lib_->ReleaseHandle(f_handle_).IgnoreError(); + } +} namespace { class CallFrameBase : public CallFrameInterface { diff --git a/tensorflow/core/kernels/data/iterator_ops.cc b/tensorflow/core/kernels/data/iterator_ops.cc index 8a420ac26dd..fc3e291afb8 100644 --- a/tensorflow/core/kernels/data/iterator_ops.cc +++ b/tensorflow/core/kernels/data/iterator_ops.cc @@ -725,18 +725,23 @@ class OneShotIteratorOp : public AsyncOpKernel { Status TryInit(OpKernelContext* ctx, IteratorResource** iterator, ContainerInfo* cinfo) { TF_RETURN_IF_ERROR(cinfo->Init(ctx->resource_manager(), def())); - FunctionLibraryRuntime* lib = ctx->function_library(); + + FunctionLibraryRuntime* lib; + std::unique_ptr<FunctionLibraryDefinition> flib_def(nullptr); + std::unique_ptr<ProcessFunctionLibraryRuntime> pflr(nullptr); + TF_RETURN_IF_ERROR(ctx->function_library()->Clone(&flib_def, &pflr, &lib)); // Create an IteratorResource that will hold the iterator for this op. 
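    // (Exposition only, not part of the original change: the Clone() call
    // above gives the IteratorResource its own function runtime, so its
    // function handles are released together with the resource (see
    // ~CapturedFunction) instead of accumulating for the session lifetime.)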
TF_RETURN_IF_ERROR( ctx->resource_manager()->LookupOrCreate<IteratorResource>( cinfo->container(), cinfo->name(), iterator, - [lib, this](IteratorResource** ret) EXCLUSIVE_LOCKS_REQUIRED(mu_) { - *ret = new IteratorResource(output_dtypes_, output_shapes_, - graph_def_version_, nullptr, nullptr, - nullptr, lib); - return Status::OK(); - })); + [lib, this, &flib_def, &pflr](IteratorResource** ret) + EXCLUSIVE_LOCKS_REQUIRED(mu_) { + *ret = new IteratorResource( + output_dtypes_, output_shapes_, graph_def_version_, + nullptr, std::move(flib_def), std::move(pflr), lib); + return Status::OK(); + })); core::ScopedUnref unref_iterator(*iterator); From 0851b150f3fed192db0f51e0a794d2a667b1bc66 Mon Sep 17 00:00:00 2001 From: Jonathan Hseu <jhseu@google.com> Date: Tue, 6 Feb 2018 20:40:00 -0800 Subject: [PATCH 26/34] TPUEstimator: Revert the global_step change and require the user to explicitly pass it. PiperOrigin-RevId: 184784330 --- .../contrib/tpu/python/tpu/tpu_estimator.py | 52 ++++--------------- 1 file changed, 11 insertions(+), 41 deletions(-) diff --git a/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py b/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py index 56793f11d9c..2aec7ce707f 100644 --- a/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py +++ b/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py @@ -485,7 +485,7 @@ class TPUEstimatorSpec( if self.eval_metrics is not None: host_calls['eval_metrics'] = self.eval_metrics if self.host_call is not None: - host_calls['host_call'] = wrap_hostcall_with_global_step(self.host_call) + host_calls['host_call'] = self.host_call host_call_ret = _OutfeedHostCall.create_cpu_hostcall(host_calls) eval_metric_ops = None if self.eval_metrics is not None: @@ -1303,19 +1303,21 @@ class _ModelFnWrapper(object): self._call_model_fn(features, labels)) loss, train_op = estimator_spec.loss, estimator_spec.train_op - host_call_outfeed_ops = [] if isinstance(estimator_spec, TPUEstimatorSpec): captured_scaffold_fn.capture(estimator_spec.scaffold_fn) - if estimator_spec.host_call is not None: - host_call.record({ - 'host_call': wrap_hostcall_with_global_step( - estimator_spec.host_call)}) - host_call_outfeed_ops = host_call.create_enqueue_op() else: captured_scaffold_fn.capture(None) - with ops.control_dependencies([train_op] + host_call_outfeed_ops): - return array_ops.identity(loss) + # We must run train_op to update the variables prior to running the + # outfeed. + with ops.control_dependencies([train_op]): + host_call_outfeed_ops = [] + if (isinstance(estimator_spec, TPUEstimatorSpec) and + estimator_spec.host_call is not None): + host_call.record({'host_call': estimator_spec.host_call}) + host_call_outfeed_ops = host_call.create_enqueue_op() + with ops.control_dependencies(host_call_outfeed_ops): + return array_ops.identity(loss) return train_step, host_call, captured_scaffold_fn @@ -1657,38 +1659,6 @@ class _OutfeedHostCall(object): return ret -def wrap_hostcall_with_global_step(hostcall): - """Wrap the hostcall so that we update the global step upon every call.""" - if hostcall is None: - return None - host_fn, tensors = hostcall - - def global_step_host_fn(_global_step, *args, **kwargs): # pylint: disable=invalid-name - # Note that we don't have any ordering here, so the graph may see a - # global_step that's off by 1. - state_ops.assign( - training.get_global_step(), - math_ops.cast(_global_step[0], dtypes.int64)) - return host_fn(*args, **kwargs) - # Give the global step tensor a batch dimension. 
Reshape is not supported for - # int64, so we cast it to int32. - # TODO(jhseu): Remove the cast once int64 is supported. - global_step_tensor = array_ops.reshape( - math_ops.cast(training.get_global_step(), dtypes.int32), [1]) - if isinstance(tensors, dict): - outfeed_tensors = {'_global_step': global_step_tensor} - outfeed_tensors.update(tensors) - return global_step_host_fn, outfeed_tensors - else: - fn_args = util.fn_args(host_fn) - if len(tensors) != len(fn_args): - raise RuntimeError( - 'In TPUEstimatorSpec.host_call, length of tensors {} does not match ' - 'method args of the function, which takes {}.'.format( - len(tensors), len(fn_args))) - return global_step_host_fn, [global_step_tensor] + list(tensors) - - class _OutfeedHostCallHook(session_run_hook.SessionRunHook): """Hook to run host calls when use_tpu=False.""" From 7c799e11b7b765a8d43e409b32d24dfbb5614bf8 Mon Sep 17 00:00:00 2001 From: Mark Daoust <markdaoust@google.com> Date: Wed, 7 Feb 2018 20:50:09 -0800 Subject: [PATCH 27/34] Move TPU doc to "using_tpu". Add short titles to some docs. PiperOrigin-RevId: 184941101 --- tensorflow/docs_src/install/leftnav_files | 14 +++++++------- .../docs_src/programmers_guide/debugger.md | 2 +- .../docs_src/programmers_guide/leftnav_files | 9 ++++++--- .../using_tpu.md} | 0 tensorflow/docs_src/tutorials/leftnav_files | 16 ++++++++-------- 5 files changed, 22 insertions(+), 19 deletions(-) rename tensorflow/docs_src/{api_guides/python/TPUEstimator.md => programmers_guide/using_tpu.md} (100%) diff --git a/tensorflow/docs_src/install/leftnav_files b/tensorflow/docs_src/install/leftnav_files index 0e8b5ae7a17..e523e06f67a 100644 --- a/tensorflow/docs_src/install/leftnav_files +++ b/tensorflow/docs_src/install/leftnav_files @@ -1,16 +1,16 @@ index.md ### Python -install_linux.md -install_mac.md -install_windows.md -install_sources.md +install_linux.md: Ubuntu +install_mac.md: MacOS +install_windows.md: Windows +install_sources.md: From source >>> migration.md ### Other Languages -install_java.md -install_go.md -install_c.md +install_java.md: Java +install_go.md: Go +install_c.md: C diff --git a/tensorflow/docs_src/programmers_guide/debugger.md b/tensorflow/docs_src/programmers_guide/debugger.md index 9eaee270282..dbc45170878 100644 --- a/tensorflow/docs_src/programmers_guide/debugger.md +++ b/tensorflow/docs_src/programmers_guide/debugger.md @@ -1,4 +1,4 @@ -# Debugging TensorFlow Programs +# TensorFlow Debugger <!-- [comment]: TODO(barryr): Links to and from sections on "Graphs" & "Monitoring Learning". 
--> diff --git a/tensorflow/docs_src/programmers_guide/leftnav_files b/tensorflow/docs_src/programmers_guide/leftnav_files index 38de3ccc3e4..3fe4cb2ddae 100644 --- a/tensorflow/docs_src/programmers_guide/leftnav_files +++ b/tensorflow/docs_src/programmers_guide/leftnav_files @@ -10,7 +10,10 @@ tensors.md variables.md graphs.md saved_model.md + +### Accelerators using_gpu.md +using_tpu.md ### ML Concepts embedding.md @@ -19,9 +22,9 @@ embedding.md debugger.md ### TensorBoard -summaries_and_tensorboard.md -graph_viz.md -tensorboard_histograms.md +summaries_and_tensorboard.md: Visualizing Learning +graph_viz.md: Graphs +tensorboard_histograms.md: Histograms ### Misc version_compat.md diff --git a/tensorflow/docs_src/api_guides/python/TPUEstimator.md b/tensorflow/docs_src/programmers_guide/using_tpu.md similarity index 100% rename from tensorflow/docs_src/api_guides/python/TPUEstimator.md rename to tensorflow/docs_src/programmers_guide/using_tpu.md diff --git a/tensorflow/docs_src/tutorials/leftnav_files b/tensorflow/docs_src/tutorials/leftnav_files index 41ffdc86010..888052428f9 100644 --- a/tensorflow/docs_src/tutorials/leftnav_files +++ b/tensorflow/docs_src/tutorials/leftnav_files @@ -1,22 +1,22 @@ index.md ### Images -layers.md -image_recognition.md -image_retraining.md +layers.md: MNIST +image_recognition.md: Image Recognition +image_retraining.md: Image Retraining deep_cnn.md ### Sequences recurrent.md -seq2seq.md -recurrent_quickdraw.md +seq2seq.md: Neural Machine Translation +recurrent_quickdraw.md: Drawing Classification audio_recognition.md ### Data Representation -wide.md -wide_and_deep.md +wide.md: Linear Models +wide_and_deep.md: Wide & Deep Learning word2vec.md -kernel_methods.md +kernel_methods.md: Kernel Methods ### Non-ML mandelbrot.md From 7f982862c68febf8c4e9553a5b848ecd570cb808 Mon Sep 17 00:00:00 2001 From: cclauss <cclauss@bluewin.ch> Date: Thu, 8 Feb 2018 01:28:06 +0100 Subject: [PATCH 28/34] Fix undefined name: import as_str_any for line 35 (#16668) flake8 testing of https://github.com/tensorflow/tensorflow on Python 2.7.14 $ __flake8 . --count --select=E901,E999,F821,F822,F823 --show-source --statistics__ ``` ./tensorflow/python/util/compat_internal.py:33:12: F821 undefined name 'as_str_any' path = as_str_any(path.__fspath__()) ^ ``` --- tensorflow/python/util/compat_internal.py | 2 ++ 1 file changed, 2 insertions(+) diff --git a/tensorflow/python/util/compat_internal.py b/tensorflow/python/util/compat_internal.py index a299b2fc3c3..9e60e689d20 100644 --- a/tensorflow/python/util/compat_internal.py +++ b/tensorflow/python/util/compat_internal.py @@ -19,6 +19,8 @@ from __future__ import absolute_import from __future__ import division from __future__ import print_function +from tensorflow.python.util.compat import as_str_any + def path_to_str(path): """Returns the file system path representation of a `PathLike` object, else as it is. From 43ecf848478940904e1a2df10df6bfe72163a38d Mon Sep 17 00:00:00 2001 From: Gunhan Gulsoy <gunan@google.com> Date: Mon, 12 Feb 2018 13:00:11 -0800 Subject: [PATCH 29/34] Revert "Add checkpoint file prefix check (#14341)" This reverts commit 2a16133061ba3f8fa60c0338cd629f2211f9b17d. 
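A plausible reading of this revert, given that the test below is pinned back to `SaverDef.V1`: with the V2 checkpoint format, `Saver.save()` returns a path *prefix* (the actual files carry `.index` and `.data-*` suffixes), whereas a non-sharded V1 checkpoint writes a real file at that exact path, so the removed `os.path.isfile()` guard could reject legitimate V1 restores. A minimal TF 1.x sketch of the distinction, with illustrative paths that are not part of the patch:

```python
import os

import tensorflow as tf

v = tf.Variable(1.0, name="v")
init = tf.global_variables_initializer()

# V1-format saver: the checkpoint data lands at the save path itself.
saver_v1 = tf.train.Saver(write_version=tf.train.SaverDef.V1)
# V2-format saver (the default): the save path is only a prefix.
saver_v2 = tf.train.Saver(write_version=tf.train.SaverDef.V2)

with tf.Session() as sess:
    sess.run(init)
    p1 = saver_v1.save(sess, "/tmp/model_v1")
    p2 = saver_v2.save(sess, "/tmp/model_v2")
    print(os.path.isfile(p1))  # True: V1 writes a real file at the prefix.
    print(os.path.isfile(p2))  # False: V2 writes model_v2.index / .data-*.
    # A blanket isfile() check in restore() therefore rejects valid V1
    # checkpoints, which is what the lines removed below did.
    saver_v1.restore(sess, p1)
```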
--- .../contrib/slim/python/slim/evaluation_test.py | 2 +- tensorflow/python/training/saver.py | 11 +++-------- 2 files changed, 4 insertions(+), 9 deletions(-) diff --git a/tensorflow/contrib/slim/python/slim/evaluation_test.py b/tensorflow/contrib/slim/python/slim/evaluation_test.py index f5a9299d263..870f504d103 100644 --- a/tensorflow/contrib/slim/python/slim/evaluation_test.py +++ b/tensorflow/contrib/slim/python/slim/evaluation_test.py @@ -236,7 +236,7 @@ class SingleEvaluationTest(test.TestCase): def _prepareCheckpoint(self, checkpoint_path): init_op = control_flow_ops.group(variables.global_variables_initializer(), variables.local_variables_initializer()) - saver = saver_lib.Saver() + saver = saver_lib.Saver(write_version=saver_pb2.SaverDef.V1) with self.test_session() as sess: sess.run(init_op) saver.save(sess, checkpoint_path) diff --git a/tensorflow/python/training/saver.py b/tensorflow/python/training/saver.py index 764f8400122..3888e9bba42 100644 --- a/tensorflow/python/training/saver.py +++ b/tensorflow/python/training/saver.py @@ -1597,9 +1597,9 @@ class Saver(object): [Stripping Default-Valued Attributes](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md#stripping-default-valued-attributes). Returns: - A string: path prefix used for the checkpoint files. If checkpoint - format is V1 and the saver is sharded, this string ends with: - '-?????-of-nnnnn' where 'nnnnn' is the number of shards created. + A string: path prefix used for the checkpoint files. If the saver is + sharded, this string ends with: '-?????-of-nnnnn' where 'nnnnn' + is the number of shards created. If the saver is empty, returns None. Raises: @@ -1749,11 +1749,6 @@ class Saver(object): return if save_path is None: raise ValueError("Can't load save_path when it is None.") - if (os.path.isfile(save_path) and - self._write_version != saver_pb2.SaverDef.V1): - raise ValueError("The specified path: %s is a file." - " Please specify only the path prefix" - " to the checkpoint files." % save_path) logging.info("Restoring parameters from %s", save_path) if context.in_graph_mode(): sess.run(self.saver_def.restore_op_name, From 9d8994ad0f8ba11911ab08d6d755abec40cfb84f Mon Sep 17 00:00:00 2001 From: Gunhan Gulsoy <gunan@google.com> Date: Fri, 9 Feb 2018 14:37:24 -0800 Subject: [PATCH 30/34] Remove header dependence on cuda_config.h to fix opensource custom op support. Fixes #14454 Fixes #12860 PiperOrigin-RevId: 185194924 --- tensorflow/core/common_runtime/gpu/gpu_device.cc | 4 ++++ tensorflow/stream_executor/dso_loader.cc | 4 ++++ tensorflow/stream_executor/dso_loader.h | 4 ---- 3 files changed, 8 insertions(+), 4 deletions(-) diff --git a/tensorflow/core/common_runtime/gpu/gpu_device.cc b/tensorflow/core/common_runtime/gpu/gpu_device.cc index 80a5bdbfff4..dc92da77380 100644 --- a/tensorflow/core/common_runtime/gpu/gpu_device.cc +++ b/tensorflow/core/common_runtime/gpu/gpu_device.cc @@ -65,6 +65,10 @@ limitations under the License. #include "tensorflow/core/util/env_var.h" #include "tensorflow/core/util/stream_executor_util.h" +#if !defined(PLATFORM_GOOGLE) +#include "cuda/cuda_config.h" +#endif + namespace tensorflow { // Eigen Ops directly allocate memory only for temporary buffers used diff --git a/tensorflow/stream_executor/dso_loader.cc b/tensorflow/stream_executor/dso_loader.cc index d71938634d6..d9fe301b8fb 100644 --- a/tensorflow/stream_executor/dso_loader.cc +++ b/tensorflow/stream_executor/dso_loader.cc @@ -33,6 +33,10 @@ limitations under the License. 
#include "tensorflow/stream_executor/platform/logging.h" #include "tensorflow/stream_executor/platform/port.h" +#if !defined(PLATFORM_GOOGLE) +#include "cuda/cuda_config.h" +#endif + namespace perftools { namespace gputools { namespace internal { diff --git a/tensorflow/stream_executor/dso_loader.h b/tensorflow/stream_executor/dso_loader.h index 9495f7253a1..354c7b50b82 100644 --- a/tensorflow/stream_executor/dso_loader.h +++ b/tensorflow/stream_executor/dso_loader.h @@ -28,10 +28,6 @@ limitations under the License. #include "tensorflow/stream_executor/platform.h" #include "tensorflow/stream_executor/platform/mutex.h" -#if !defined(PLATFORM_GOOGLE) -#include "cuda/cuda_config.h" -#endif - namespace perftools { namespace gputools { namespace internal { From b6c2273f0a7735b9850080eefd26b6056b4d5f42 Mon Sep 17 00:00:00 2001 From: Andrew Harp <andrewharp@users.noreply.github.com> Date: Wed, 7 Feb 2018 17:19:11 -0500 Subject: [PATCH 31/34] Bump JetPack default to 3.2 in Android build script (#16842) --- tensorflow/contrib/makefile/build_all_android.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tensorflow/contrib/makefile/build_all_android.sh b/tensorflow/contrib/makefile/build_all_android.sh index f67c5161861..fc88f59e094 100755 --- a/tensorflow/contrib/makefile/build_all_android.sh +++ b/tensorflow/contrib/makefile/build_all_android.sh @@ -52,7 +52,7 @@ shift $((OPTIND - 1)) if [ "$ARCH" == "tegra" ]; then if [[ -z "${JETPACK}" ]]; then - export JETPACK="$HOME/JetPack_Android_3.0" + export JETPACK="$HOME/JetPack_Android_3.2" fi if [ ! -d ${JETPACK} ]; then echo "Can't find Jetpack at ${JETPACK}" From 22ff574ee9d4272995ddcc6aaac70479b7e6e17c Mon Sep 17 00:00:00 2001 From: Andrew Harp <andrewharp@users.noreply.github.com> Date: Tue, 13 Feb 2018 18:20:07 -0500 Subject: [PATCH 32/34] Add instructions for building CUDA-enabled Android TensorFlow (#16961) * Add instructions for building CUDA-enabled Android TensorFlow * Update README.md * Update README.md * Update README.md * Update README.md * Update README.md * Update README.md * Update README.md --- tensorflow/contrib/android/README.md | 5 ++ tensorflow/contrib/makefile/README.md | 99 +++++++++++++++++++++++++++ 2 files changed, 104 insertions(+) diff --git a/tensorflow/contrib/android/README.md b/tensorflow/contrib/android/README.md index b8d73bf24ce..db37bcf73d1 100644 --- a/tensorflow/contrib/android/README.md +++ b/tensorflow/contrib/android/README.md @@ -81,6 +81,11 @@ For documentation on building a self-contained AAR file with cmake, see [tensorflow/contrib/android/cmake](cmake). +### Makefile + +For documentation on building native TF libraries with make, including a CUDA-enabled variant for devices like the Nvidia Shield TV, see [tensorflow/contrib/makefile/README.md](../makefile/README.md) + + ## AssetManagerFileSystem This directory also contains a TensorFlow filesystem supporting the Android diff --git a/tensorflow/contrib/makefile/README.md b/tensorflow/contrib/makefile/README.md index 0613de2cabe..9758ee1c474 100644 --- a/tensorflow/contrib/makefile/README.md +++ b/tensorflow/contrib/makefile/README.md @@ -130,6 +130,105 @@ adb shell '/data/local/tmp/benchmark \ For more details, see the [benchmark documentation](../../tools/benchmark). 
+## CUDA support for Tegra devices running Android (Nvidia Shield TV, etc.)
+
+With the release of TF 1.6 and JetPack for Android 3.2 (currently pending), you can now build a version of TensorFlow for compatible devices that receives the full benefits of GPU acceleration, by following the instructions below.
+
+#### Environment setup:
+
+First, download and install JetPack for Android version 3.2 or greater from [Nvidia](https://developers.nvidia.com). Note that as of the TF 1.6 release the JetPack for Android 3.2 release is still pending, and regular JetPack for L4T will not work.
+
+```bash
+git clone https://github.com/tensorflow/tensorflow.git
+cd tensorflow
+JETPACK=$HOME/JetPack_Android_3.2
+TEGRA_LIBS="$JETPACK/cuDNN/aarch64/cuda/lib64/libcudnn.so $JETPACK/cuda-9.0/extras/CUPTI/lib64/libcupti.so $JETPACK/cuda/targets/aarch64-linux-androideabi/lib64/libcufft.so"
+```
+
+#### Building all CUDA-enabled native binaries:
+This will build CUDA-enabled versions of libtensorflow_inference.so and the benchmark binary. (libtensorflow_demo.so will also be built incidentally, but it does not support CUDA.)
+
+```bash
+NDK_ROOT=$JETPACK/android-ndk-r13b
+CC_PREFIX=ccache tensorflow/contrib/makefile/build_all_android.sh -s tensorflow/contrib/makefile/sub_makefiles/android/Makefile.in -t "libtensorflow_inference.so libtensorflow_demo.so all" -a tegra
+```
+(Add -T on subsequent builds to skip protobuf downloading/building.)
+
+
+#### Testing the CUDA-enabled benchmark via adb:
+Build binaries first as above, then run:
+
+```bash
+adb shell mkdir -p /data/local/tmp/lib64
+adb push $TEGRA_LIBS /data/local/tmp/lib64
+adb push tensorflow/contrib/makefile/gen/bin/android_arm64-v8a/benchmark /data/local/tmp
+wget https://ci.tensorflow.org/view/Nightly/job/nightly-android/lastSuccessfulBuild/artifact/out/tensorflow_demo.apk
+unzip tensorflow_demo.apk -d /tmp/tensorflow_demo
+adb push /tmp/tensorflow_demo/assets/*.pb /data/local/tmp
+adb shell "LD_LIBRARY_PATH=/data/local/tmp/lib64 /data/local/tmp/benchmark --graph=/data/local/tmp/tensorflow_inception_graph.pb"
+```
+
+#### Building the CUDA-enabled TensorFlow AAR with Bazel:
+Build the native binaries first as above.
Then, build the AAR and package the native libraries by executing the following:
+```bash
+mkdir -p /tmp/tf/jni/arm64-v8a
+cp tensorflow/contrib/makefile/gen/lib/android_tegra/libtensorflow_*.so /tmp/tf/jni/arm64-v8a/
+cp $TEGRA_LIBS /tmp/tf/jni/arm64-v8a
+bazel build //tensorflow/contrib/android:android_tensorflow_inference_java.aar
+cp bazel-bin/tensorflow/contrib/android/android_tensorflow_inference_java.aar /tmp/tf/tensorflow.aar
+cd /tmp/tf
+chmod +w tensorflow.aar
+zip -ur tensorflow.aar $(find jni -name *.so)
+```
+
+#### Building the CUDA-enabled TensorFlow Android demo with Bazel:
+Build binaries first as above, then edit tensorflow/examples/android/BUILD and replace:
+```
+    srcs = [
+        ":libtensorflow_demo.so",
+        "//tensorflow/contrib/android:libtensorflow_inference.so",
+    ],
+```
+with:
+```
+srcs = glob(["libs/arm64-v8a/*.so"]),
+```
+
+Then run:
+```bash
+# Create dir for native libs
+mkdir -p tensorflow/examples/android/libs/arm64-v8a
+
+# Copy JetPack libs
+cp $TEGRA_LIBS tensorflow/examples/android/libs/arm64-v8a
+
+# Copy native TensorFlow libraries
+cp tensorflow/contrib/makefile/gen/lib/android_arm64-v8a/libtensorflow_*.so tensorflow/examples/android/libs/arm64-v8a/
+
+# Build APK
+bazel build -c opt --fat_apk_cpu=arm64-v8a tensorflow/examples/android:tensorflow_demo
+
+# Install
+adb install -r -f bazel-bin/tensorflow/examples/android/tensorflow_demo.apk
+```
+
+#### Building the CUDA-enabled Android demo with Gradle/Android Studio:
+
+Add tensorflow/examples/android as an Android project in Android Studio as normal.
+
+Edit build.gradle and:
+* set nativeBuildSystem = 'makefile'
+* set cpuType = 'arm64-v8a'
+* in "buildNativeMake", replace cpuType with 'tegra' (optional speedups like -T and ccache also work)
+* set the NDK_ROOT environment variable to $JETPACK/android-ndk-r13b
+
+Click "Build APK" to build.
+
+Install:
+```bash
+adb install -r -f tensorflow/examples/android/gradleBuild/outputs/apk/debug/android-debug.apk
+```
+
 ## iOS
 
 _Note: To use this library in an iOS application, see related instructions in

From ecec1d8c8e557656e7e5c4034604ca6f20d2d3c2 Mon Sep 17 00:00:00 2001
From: Andrew Harp <andrew.harp@gmail.com>
Date: Tue, 13 Feb 2018 18:49:44 -0500
Subject: [PATCH 33/34] Update RELEASE.md

---
 RELEASE.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/RELEASE.md b/RELEASE.md
index de4a34bb040..1a037ce595a 100644
--- a/RELEASE.md
+++ b/RELEASE.md
@@ -8,6 +8,7 @@
 * New Optimizer internal API for non-slot variables. Descendants of AdamOptimizer that access _beta[12]_power will need to be updated.
 * `tf.estimator.{FinalExporter,LatestExporter}` now export stripped SavedModels. This improves forward compatibility of the SavedModel.
 * FFT support added to XLA CPU/GPU.
+* Android TF can now be built with CUDA acceleration on compatible Tegra devices (see [contrib/makefile/README.md](contrib/makefile/README.md) for more information) ## Bug Fixes and Other Changes * Documentation updates: From 62b3b9a0d40aacb2c890ed56b831dd2568304f89 Mon Sep 17 00:00:00 2001 From: Mark Daoust <markdaoust@google.com> Date: Thu, 15 Feb 2018 13:31:49 -0500 Subject: [PATCH 34/34] Docs fix r1.6 (#17038) * add missing blank line PiperOrigin-RevId: 185554969 * fix cuDNN64 dll name --- tensorflow/docs_src/get_started/get_started_for_beginners.md | 4 ++++ tensorflow/docs_src/get_started/premade_estimators.md | 1 + tensorflow/docs_src/install/install_windows.md | 2 +- 3 files changed, 6 insertions(+), 1 deletion(-) diff --git a/tensorflow/docs_src/get_started/get_started_for_beginners.md b/tensorflow/docs_src/get_started/get_started_for_beginners.md index ea1c2fb3f47..446bae4b896 100644 --- a/tensorflow/docs_src/get_started/get_started_for_beginners.md +++ b/tensorflow/docs_src/get_started/get_started_for_beginners.md @@ -36,6 +36,7 @@ the following three: alt="Petal geometry compared for three iris species: Iris setosa, Iris virginica, and Iris versicolor" src="../images/iris_three_species.jpg"> </div> + **From left to right, [*Iris setosa*](https://commons.wikimedia.org/w/index.php?curid=170298) (by [Radomil](https://commons.wikimedia.org/wiki/User:Radomil), CC BY-SA 3.0), @@ -188,6 +189,7 @@ provides a programming stack consisting of multiple API layers: <div style="margin:auto; margin-bottom:10px; margin-top:20px;"> <img style="width:100%" src="../images/tensorflow_programming_environment.png"> </div> + **The TensorFlow Programming Environment.** <p> </p> @@ -380,6 +382,7 @@ fully connected neural network consisting of three hidden layers: <div style="margin:auto; margin-bottom:10px; margin-top:20px;"> <img style="width:100%" src="../images/simple_dnn.svg"> </div> + **A neural network with three hidden layers.** <p> </p> @@ -568,6 +571,7 @@ of 0.5. The following suggests a more effective model: <tr> <td>5.5</td> <td>2.5</td> <td>4.0</td> <td>1.3</td> <td>1</td> <td style="background-color:green">1</td></tr> </table> + **A model that is 80% accurate.** <p> </p> diff --git a/tensorflow/docs_src/get_started/premade_estimators.md b/tensorflow/docs_src/get_started/premade_estimators.md index 4f01f997c33..6bffd2e0655 100644 --- a/tensorflow/docs_src/get_started/premade_estimators.md +++ b/tensorflow/docs_src/get_started/premade_estimators.md @@ -98,6 +98,7 @@ classifies Iris flowers into three different species based on the size of their alt="Petal geometry compared for three iris species: Iris setosa, Iris virginica, and Iris versicolor" src="../images/iris_three_species.jpg"> </div> + **From left to right, [*Iris setosa*](https://commons.wikimedia.org/w/index.php?curid=170298) (by [Radomil](https://commons.wikimedia.org/wiki/User:Radomil), CC BY-SA 3.0), diff --git a/tensorflow/docs_src/install/install_windows.md b/tensorflow/docs_src/install/install_windows.md index 86a111c2ec1..87e1a715aa0 100644 --- a/tensorflow/docs_src/install/install_windows.md +++ b/tensorflow/docs_src/install/install_windows.md @@ -47,7 +47,7 @@ installed on your system: If you have a different version of one of the preceding packages, please change to the specified versions. In particular, the cuDNN version -must match exactly: TensorFlow will not load if it cannot find `cuDNN64_6.dll`. +must match exactly: TensorFlow will not load if it cannot find `cuDNN64_7.dll`. 
To use a different version of cuDNN, you must build from source. ## Determine how to install TensorFlow
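To round out the Windows note above: a quick sanity check, not part of any patch in this series, that a GPU build of TF 1.6 actually loads and sees a device. On Windows the `import` itself will fail if `cuDNN64_7.dll` cannot be found on `%PATH%`. `device_lib` is an internal module, so treat this as a debugging aid rather than stable API:

```python
import tensorflow as tf
from tensorflow.python.client import device_lib

print(tf.VERSION)  # Expect "1.6.0" for this release.

# Importing tensorflow already exercises the CUDA/cuDNN DLL loading path on
# Windows; listing local devices additionally confirms the GPU is visible.
devices = [d.name for d in device_lib.list_local_devices()]
print(devices)  # A healthy GPU build lists '/device:GPU:0' next to the CPU.
```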