Merging 1.3.0rc0 back to master.
This commit is contained in:
parent 49961e588d
commit c2baef7a6c
15  README.md
@@ -34,13 +34,14 @@ and discussion, and please direct specific questions to [Stack Overflow](https:/
People who are a little more adventurous can also try our nightly binaries:
* Linux CPU-only: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=cpu-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.2.1-cp27-none-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=cpu-slave)) / [Python 3.4](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=cpu-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.2.1-cp34-cp34m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=cpu-slave/)) / [Python 3.5](https://ci.tensorflow.org/view/Nightly/job/nightly-python35-linux-cpu/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.2.1-cp35-cp35m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-python35-linux-cpu/))
* Linux GPU: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-linux/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.2.1-cp27-none-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-linux/)) / [Python 3.4](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-linux/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.2.1-cp34-cp34m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-linux/)) / [Python 3.5](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3.5,label=gpu-linux/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.2.1-cp35-cp35m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3.5,label=gpu-linux/))
* Mac CPU-only: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=mac-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.2.1-py2-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=mac-slave/)) / [Python 3](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=mac-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.2.1-py3-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=mac-slave/))
* Mac GPU: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-mac/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.2.1-py2-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-mac/)) / [Python 3](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-mac/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.2.1-py3-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-mac/))
* Windows CPU-only: [Python 3.5 64-bit](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows,PY=35/lastSuccessfulBuild/artifact/cmake_build/tf_python/dist/tensorflow-1.2.1-cp35-cp35m-win_amd64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows,PY=35/)) / [Python 3.6 64-bit](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows,PY=36/lastSuccessfulBuild/artifact/cmake_build/tf_python/dist/tensorflow-1.2.1-cp36-cp36m-win_amd64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows,PY=36/))
* Windows GPU: [Python 3.5 64-bit](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows-gpu,PY=35/lastSuccessfulBuild/artifact/cmake_build/tf_python/dist/tensorflow_gpu-1.2.1-cp35-cp35m-win_amd64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows-gpu,PY=35/)) / [Python 3.6 64-bit](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows-gpu,PY=36/lastSuccessfulBuild/artifact/cmake_build/tf_python/dist/tensorflow_gpu-1.2.1-cp36-cp36m-win_amd64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows-gpu,PY=36/))
* Android: [demo APK](https://ci.tensorflow.org/view/Nightly/job/nightly-android/lastSuccessfulBuild/artifact/out/tensorflow_demo.apk), [native libs](https://ci.tensorflow.org/view/Nightly/job/nightly-android/lastSuccessfulBuild/artifact/out/native/)
* Linux CPU-only: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=cpu-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.3.0rc0-cp27-none-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=cpu-slave)) / [Python 3.4](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=cpu-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.3.0rc0-cp34-cp34m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=cpu-slave/)) / [Python 3.5](https://ci.tensorflow.org/view/Nightly/job/nightly-python35-linux-cpu/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.3.0rc0-cp35-cp35m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-python35-linux-cpu/))
* Linux GPU: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-linux/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.3.0rc0-cp27-none-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-linux/)) / [Python 3.4](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-linux/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.3.0rc0-cp34-cp34m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-linux/)) / [Python 3.5](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3.5,label=gpu-linux/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.3.0rc0-cp35-cp35m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3.5,label=gpu-linux/))
* Mac CPU-only: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=mac-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.3.0rc0-py2-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=mac-slave/)) / [Python 3](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=mac-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.3.0rc0-py3-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=mac-slave/))
* Mac GPU: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-mac/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.3.0rc0-py2-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-mac/)) / [Python 3](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-mac/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.3.0rc0-py3-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-mac/))
* Windows CPU-only: [Python 3.5 64-bit](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows,PY=35/lastSuccessfulBuild/artifact/cmake_build/tf_python/dist/tensorflow-1.3.0rc0-cp35-cp35m-win_amd64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows,PY=35/)) / [Python 3.6 64-bit](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows,PY=36/lastSuccessfulBuild/artifact/cmake_build/tf_python/dist/tensorflow-1.3.0rc0-cp36-cp36m-win_amd64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows,PY=36/))
* Windows GPU: [Python 3.5 64-bit](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows-gpu,PY=35/lastSuccessfulBuild/artifact/cmake_build/tf_python/dist/tensorflow_gpu-1.3.0rc0-cp35-cp35m-win_amd64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows-gpu,PY=35/)) / [Python 3.6 64-bit](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows-gpu,PY=36/lastSuccessfulBuild/artifact/cmake_build/tf_python/dist/tensorflow_gpu-1.3.0rc0-cp36-cp36m-win_amd64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows-gpu,PY=36/))
* Android: [demo APK](https://ci.tensorflow.org/view/Nightly/job/nightly-android/lastSuccessfulBuild/artifact/out/tensorflow_demo.apk), [native libs](http://ci.tensorflow.org/view/Nightly/job/nightly-android/lastSuccessfulBuild/artifact/out/native/)
([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-android/))
#### *Try your first TensorFlow program*
103  RELEASE.md
@@ -1,3 +1,106 @@
# Release 1.3.0

## Major Features and Improvements

* Added canned estimators to the TensorFlow library. The added estimators are: `DNNClassifier`, `DNNRegressor`, `LinearClassifier`, `LinearRegressor`, `DNNLinearCombinedClassifier`, and `DNNLinearCombinedRegressor`.
* All our prebuilt binaries have been built with cuDNN 6.
* Added a file cache to the GCS filesystem with configurable max staleness for file contents. This permits caching of file contents across close/open boundaries.
* Added an `axis` parameter to `tf.gather`.
* Added a `constant_values` keyword argument to `tf.pad`.
* Added the `Dataset.interleave` transformation.
* Added `ConcatenateDataset` to concatenate two datasets.
* Added MobileNet support to the TensorFlow for Poets training script.
* Added a block cache to the GCS filesystem with configurable block size and count.
* Added the `SinhArcSinh` bijector.
* Added the `Dataset.list_files` API.
* Introduced new operations and Python bindings for the Cloud TPU.
* Added a TensorFlow-iOS CocoaPod for symmetry with tensorflow-android.
* Introduced base implementations of ClusterResolvers.
* Unified the memory representations of `TensorShape` and `PartialTensorShape`. As a consequence, tensors now have a maximum of 254 dimensions, not 255.
* Changed references to LIBXSMM to use version 1.8.1.
* TensorFlow Debugger (tfdbg): display summaries of numeric tensor values with the `-s` flag to the `print_tensor` (`pt`) command.
* Initial release of the statistical distribution library `tf.distributions`.
* GPU kernels and speed improvements for unary `tf.where` and `tf.nn.top_k`.
* Added monotonic attention wrappers to `tf.contrib.seq2seq`.
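The semantics of a few of the additions above can be sketched in plain Python, with lists standing in for tensors and datasets. These helpers are illustrative stand-ins, not the TensorFlow APIs; in particular, the real `Dataset.interleave` also takes a mapping function and a `block_length`, which this simplified round-robin sketch omits.

```python
from itertools import chain, zip_longest

def gather_axis1(params, indices):
    """Like tf.gather(params, indices, axis=1) on a 2-D list of rows."""
    return [[row[i] for i in indices] for row in params]

def pad_1d(values, before, after, constant_values=0):
    """Like tf.pad(values, [[before, after]], constant_values=...) in 1-D."""
    return [constant_values] * before + list(values) + [constant_values] * after

def interleave(datasets, cycle_length=2):
    """Simplified round-robin merge, in the spirit of Dataset.interleave."""
    iters = [iter(d) for d in datasets[:cycle_length]]
    sentinel = object()
    merged = chain.from_iterable(zip_longest(*iters, fillvalue=sentinel))
    return [x for x in merged if x is not sentinel]

print(gather_axis1([[1, 2, 3], [4, 5, 6]], [2, 0]))  # [[3, 1], [6, 4]]
print(pad_1d([1, 2], 1, 2, constant_values=9))       # [9, 1, 2, 9, 9]
print(interleave([[1, 2], [3, 4]]))                  # [1, 3, 2, 4]
```

Note how the `axis` argument selects columns rather than rows, and how `constant_values` replaces the previous implicit zero fill.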

## Breaking Changes to the API

* `tf.RewriterConfig` was removed from the Python API after being available in the 1.2 release candidates (it was never in an actual release). Graph rewriting is still available, just not as `tf.RewriterConfig`; instead, add an explicit import.
* Breaking change to `tf.contrib.data.Dataset` APIs that expect a nested structure: lists are now converted to `tf.Tensor` implicitly, so you may need to change uses of lists to tuples in existing code. In addition, dicts are now supported as a nested structure.
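The list-to-tensor change above amounts to a new traversal rule: tuples and dicts are still treated as structure, while a list is now a single leaf. The helper below is a hypothetical pure-Python sketch of that rule, not TensorFlow code (which would convert the list leaf to one `tf.Tensor`).

```python
# Hypothetical sketch of the nested-structure rule described above:
# tuples and dicts are traversed as structure, lists become one leaf.

def flatten_structure(value):
    """Return the leaves of a nested structure of tuples and dicts."""
    if isinstance(value, tuple):
        return [leaf for v in value for leaf in flatten_structure(v)]
    if isinstance(value, dict):
        return [leaf for k in sorted(value) for leaf in flatten_structure(value[k])]
    return [value]  # a list is a leaf now, not structure

print(flatten_structure(((1, 2), {"a": 3})))  # [1, 2, 3]
print(flatten_structure([1, 2]))              # [[1, 2]] -- one leaf, not two
```

This is why code that used lists to describe structure must switch to tuples after this release.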

## Changes to contrib APIs

* Added `tf.contrib.nn.rank_sampled_softmax_loss`, a sampled-softmax variant that can improve rank loss.
* `tf.contrib.metrics.{streaming_covariance, streaming_pearson_correlation}` were modified to return NaN when they have seen one or fewer units of weight.
* Added time series models to contrib; see `contrib/timeseries/README.md` for details.
* Added a `FULLY_CONNECTED` op to `tensorflow/contrib/lite/schema.fbs`.
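The NaN behavior noted above reflects that sample covariance divides by the accumulated weight minus one, which is undefined at or below a single unit of weight. A minimal unweighted sketch, illustrative only and not the streaming-metric implementation:

```python
def sample_covariance(xs, ys):
    """Plain sample covariance; NaN when fewer than two observations."""
    n = len(xs)
    if n <= 1:  # mirrors the "one or fewer units of weight" rule above
        return float("nan")
    mx = sum(xs) / n
    my = sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)

print(sample_covariance([1.0], [2.0]))            # nan
print(sample_covariance([1.0, 2.0], [1.0, 3.0]))  # 1.0
```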

## Bug Fixes and Other Changes

* Fixed a 'strides' and 'begin' dtype mismatch when slicing using an int64 Tensor index in Python.
* Improved convolution padding documentation.
* Added a tag constant, `gpu`, to present graphs with GPU support.
* `saved_model.utils` now supports SparseTensors transparently.
* A more efficient implementation of non-max suppression.
* Added support for the shrinkage-type L2 to FtrlOptimizer in addition to the online L2 it already supports.
* Fixed negative variance in moments calculation.
* Expanded UniqueOp benchmark tests to cover more collision cases.
* Improved stability of the GCS filesystem on Mac.
* Added time estimation to HloCostAnalysis.
* Fixed a bug in Estimator whereby the params passed to the constructor were not a deep copy of the user-provided ones. This bug inadvertently allowed users to mutate the params after the creation of the Estimator, leading to potentially undefined behavior.
* Added a None check for save_path in `saver.restore`.
* Register devices under their legacy names in device_mgr to ease the transition to clusterspec-propagated configurations.
* Added VectorExponential to distributions.
* Added a bitwise module with `bitwise_and`, `bitwise_or`, `bitwise_xor`, and `invert` functions.
* Added fixed-grid ODE integration routines.
* Allow passing bounds to ScipyOptimizerInterface.
* Correctness fixes for the `fft_length` parameter to `tf.spectral.rfft` and `tf.spectral.irfft`.
* Exported model signatures using the 'predict' method will no longer have their input and output keys silently ignored and rewritten to 'inputs' and 'outputs'. If a model was exported with different names before 1.2 and is now served with tensorflow/serving, it will accept requests using 'inputs' and 'outputs'. Starting at 1.2, such a model will accept the keys specified during export; therefore, inference requests using 'inputs' and 'outputs' may start to fail. To fix this, either update any inference clients to send requests with the actual input and output keys used by the trainer code, or conversely, update the trainer code to name the input and output Tensors 'inputs' and 'outputs', respectively. Signatures using the 'classify' and 'regress' methods are not affected by this change; they will continue to standardize their input and output keys as before.
* Added in-memory caching to the Dataset API.
* Set the default `end_of_sequence` variable in dataset iterators to false.
* [Performance] Increased the performance of `tf.layers.conv2d` by 2x when setting `use_bias=True`, by using `nn.bias_add`.
* Updated iOS examples to use CocoaPods and moved them to `tensorflow/examples/ios`.
* Added a `family=` attribute to `tf.summary` ops to allow controlling the tab name used in TensorBoard for organizing summaries.
* When GPU is configured, do not require `--config=cuda`; instead, automatically build for GPU if this is requested in the configure script.
* Fixed incorrect sampling of small probabilities in the CPU/GPU multinomial.
* Added a `list_devices()` API on sessions to list devices within a cluster. Additionally, this change augments the ListDevices master API to support specifying a session.
* Allow uses of over-parameterized separable convolution.
* TensorForest multi-regression bug fix.
* The framework now supports armv7, and cocoapods.org now displays the correct page.
* Added a script to create an iOS framework for CocoaPods.
* Android releases of TensorFlow are now pushed to jcenter for easier integration into apps. See https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/android/README.md for more details.
* Fixed a bug that prevented tfdbg from functioning with multi-GPU setups.
* Fixed a bug that prevented tfdbg from working with `tf.Session.make_callable`.
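One of the fixes above, the negative variance in the moments calculation, is a classic floating-point pitfall: computing variance as E[x²] − E[x]² can dip slightly below zero for low-variance data due to cancellation. The sketch below shows one common remedy, clamping at zero; this is an illustration of the failure mode, not necessarily the exact fix TensorFlow adopted.

```python
def moments(xs):
    """Mean and variance via the one-pass E[x^2] - E[x]^2 shortcut.

    The shortcut can return a tiny negative variance due to floating-point
    cancellation; clamping to 0.0 is one common remedy (illustrative only).
    """
    n = len(xs)
    mean = sum(xs) / n
    raw_var = sum(x * x for x in xs) / n - mean * mean
    return mean, max(raw_var, 0.0)

# A constant sequence has exactly zero variance; without the clamp,
# cancellation could yield a value like -1e-17 here.
mean, var = moments([0.1] * 1000)
print(var >= 0.0)  # True
```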

## Thanks to our Contributors

This release contains contributions from many people at Google, as well as:

4F2E4A2E, Adriano Carmezim, Adrià Arrufat, Alan Yee, Alex Lattas, Alex Rothberg,
Alexandr Baranezky, Ali Siddiqui, Andreas Solleder, Andrei Costinescu, Andrew Hundt,
Androbin, Andy Kernahan, Anish Shah, Anthony Platanios, Arvinds-Ds, b1rd, Baptiste
Arnaud, Ben Mabey, Benedikt Linse, Beomsu Kim, Bo Wang, Boyuan Deng, Brett Koonce,
Bruno Rosa, Carl Thomé, Changming Sun, Chase Roberts, Chirag Bhatia, Chris Antaki,
Chris Hoyean Song, Chris Tava, Christos Nikolaou, Croath Liu, cxx, Czxck001, Daniel
Ylitalo, Danny Goodman, Darren Garvey, David Brailovsky, David Norman, DavidNorman,
davidpham87, ddurham2, Dhruv, DimanNe, Drew Hintz, Dustin Tran, Earthson Lu, ethiraj,
Fabian Winnen, Fei Sun, Freedom" Koan-Sin Tan, Fritz Obermeyer, Gao, Xiang, Gautam,
Guenther Schmuelling, Gyu-Ho Lee, Hauke Brammer, horance, Humanity123, J Alammar,
Jayeol Chun, Jeroen BéDorf, Jianfei Wang, jiefangxuanyan, Jing Jun Yin, Joan Puigcerver,
Joel Hestness, Johannes Mayer, John Lawson, Johnson145, Jon Malmaud, Jonathan Alvarez-Gutierrez,
Juang, Yi-Lin, Julian Viereck, Kaarthik Sivashanmugam, Karl Lessard, karl@kubx.ca, Kevin
Carbone, Kevin Van Der Burgt, Kongsea, ksellesk, lanhin, Lef Ioannidis, Liangliang He,
Louis Tiao, Luke Iwanski, LáSzló Csomor, magixsno, Mahmoud Abuzaina, Marcel Hlopko, Mark
Neumann, Maxwell Paul Brickner, mdfaijul, MichaëL Defferrard, Michał JastrzęBski, Michele
Colombo, Mike Brodie, Mosnoi Ion, mouradmourafiq, myPrecious, Nayana Thorat,
Neeraj Kashyap, Nelson Liu, Niranjan Hasabnis, Olivier Moindrot, orome, Pankaj Gupta, Paul
Van Eck, peeyush18, Peng Yu, Pierre, preciousdp11, qjivy, Raingo, raoqiyu, ribx, Richard S.
Imaoka, Rishabh Patel, Robert Walecki, Rockford Wei, Ryan Kung, Sahil Dua, Sandip Giri, Sayed
Hadi Hashemi, sgt101, Shitian Ni, Shuolongbj, Siim PõDer, Simon Perkins, sj6077, SOLARIS,
Spotlight0xff, Steffen Eberbach, Stephen Fox, superryanguo, Sven Mayer, Tapan Prakash,
Tiago Morais Morgado, Till Hoffmann, Tj Rana, Vadim Markovtsev, vhasanov, Wei Wu,
windead, Yan (Asta) Li, Yan Chen, Yann Henon, Yi Wang, Yong Tang, yorkie, Yuan (Terry)
Tang, Yuxin Wu, zhengjiajin, zhongzyd, 黄璞

We are also grateful to all who filed issues or helped resolve them, asked and
answered questions, and were part of inspiring discussions.

# Release 1.2.1

## Bug Fixes and Other Changes
@@ -129,15 +129,3 @@ list(REMOVE_ITEM tf_core_ops_srcs ${tf_core_ops_exclude_srcs})

add_library(tf_core_ops OBJECT ${tf_core_ops_srcs})

add_dependencies(tf_core_ops tf_core_cpu)

########################################################
# tf_debug_ops library
########################################################

file(GLOB tf_debug_ops_srcs
    "${tensorflow_source_dir}/tensorflow/core/ops/debug_ops.cc"
)

add_library(tf_debug_ops OBJECT ${tf_debug_ops_srcs})

add_dependencies(tf_debug_ops tf_core_framework)
@@ -696,8 +696,6 @@ GENERATE_PYTHON_OP_LIB("contrib_bigquery_reader_ops"
    DESTINATION ${CMAKE_CURRENT_BINARY_DIR}/tf_python/tensorflow/contrib/cloud/python/ops/gen_bigquery_reader_ops.py)
GENERATE_PYTHON_OP_LIB("stateless_random_ops"
    DESTINATION ${CMAKE_CURRENT_BINARY_DIR}/tf_python/tensorflow/contrib/stateless/gen_stateless_random_ops.py)
GENERATE_PYTHON_OP_LIB("debug_ops"
    DESTINATION ${CMAKE_CURRENT_BINARY_DIR}/tf_python/tensorflow/python/debug/ops/gen_debug_ops.py)

add_custom_target(tf_python_ops SOURCES ${tf_python_ops_generated_files} ${PYTHON_PROTO_GENFILES})
add_dependencies(tf_python_ops tf_python_op_gen_main)
@@ -14,9 +14,7 @@ limitations under the License.
==============================================================================*/
// This file registers all TensorFlow Debugger (tfdbg) ops.

#include "tensorflow/core/framework/common_shape_fns.h"
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/shape_inference.h"

namespace tensorflow {

@@ -90,7 +88,6 @@ REGISTER_OP("DebugIdentity")
    .Attr("debug_urls: list(string) = []")
    .Attr("gated_grpc: bool = false")
    .SetAllowsUninitializedInput()
    .SetShapeFn(shape_inference::UnchangedShape)
    .Doc(R"doc(
Debug Identity Op.
@@ -19,12 +19,12 @@ limitations under the License.
// TensorFlow uses semantic versioning, see http://semver.org/.

#define TF_MAJOR_VERSION 1
#define TF_MINOR_VERSION 2
#define TF_PATCH_VERSION 1
#define TF_MINOR_VERSION 3
#define TF_PATCH_VERSION 0

// TF_VERSION_SUFFIX is non-empty for pre-releases (e.g. "-alpha", "-alpha.1",
// "-beta", "-rc", "-rc.1")
#define TF_VERSION_SUFFIX ""
#define TF_VERSION_SUFFIX "-rc0"

#define TF_STR_HELPER(x) #x
#define TF_STR(x) TF_STR_HELPER(x)
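The stringification macros above are how the header assembles a human-readable version string from the numeric components (the assembled `TF_VERSION_STRING` macro itself is outside this hunk, so its exact definition is an assumption here). A Python mirror of the post-merge values, as a sanity check:

```python
# Python mirror of the C macros above; the assembled-string macro is not
# shown in this hunk, so treating it as major.minor.patch + suffix is an
# assumption based on the comments in the header.
TF_MAJOR_VERSION = 1
TF_MINOR_VERSION = 3
TF_PATCH_VERSION = 0
TF_VERSION_SUFFIX = "-rc0"  # empty string for final releases

version = f"{TF_MAJOR_VERSION}.{TF_MINOR_VERSION}.{TF_PATCH_VERSION}{TF_VERSION_SUFFIX}"
print(version)  # 1.3.0-rc0
```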
@@ -35,7 +35,7 @@ enable TensorFlow for C:
OS="linux" # Change to "darwin" for Mac OS
TARGET_DIRECTORY="/usr/local"
curl -L \
  "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-${OS}-x86_64-1.2.1.tar.gz" |
  "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-${OS}-x86_64-1.3.0-rc0.tar.gz" |
  sudo tar -C $TARGET_DIRECTORY -xz

The `tar` command extracts the TensorFlow C library into the `lib`
@@ -35,7 +35,7 @@ steps to install this library and enable TensorFlow for Go:
TF_TYPE="cpu" # Change to "gpu" for GPU support
TARGET_DIRECTORY='/usr/local'
curl -L \
  "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-$(go env GOOS)-x86_64-1.2.1.tar.gz" |
  "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-$(go env GOOS)-x86_64-1.3.0-rc0.tar.gz" |
  sudo tar -C $TARGET_DIRECTORY -xz

The `tar` command extracts the TensorFlow C library into the `lib`
@@ -34,7 +34,7 @@ following to the project's `pom.xml` to use the TensorFlow Java APIs:
<dependency>
  <groupId>org.tensorflow</groupId>
  <artifactId>tensorflow</artifactId>
  <version>1.2.1</version>
  <version>1.3.0-rc0</version>
</dependency>
```

@@ -63,7 +63,7 @@ As an example, these steps will create a Maven project that uses TensorFlow:
<dependency>
  <groupId>org.tensorflow</groupId>
  <artifactId>tensorflow</artifactId>
  <version>1.2.1</version>
  <version>1.3.0-rc0</version>
</dependency>
</dependencies>
</project>
@@ -122,7 +122,7 @@ refer to the simpler instructions above instead.
Take the following steps to install TensorFlow for Java on Linux or Mac OS:

1. Download
   [libtensorflow.jar](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-1.2.1.jar),
   [libtensorflow.jar](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-1.3.0-rc0.jar),
   which is the TensorFlow Java Archive (JAR).

2. Decide whether you will run TensorFlow for Java on CPU(s) only or with
@@ -141,7 +141,7 @@ Take the following steps to install TensorFlow for Java on Linux or Mac OS:
OS=$(uname -s | tr '[:upper:]' '[:lower:]')
mkdir -p ./jni
curl -L \
  "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-${TF_TYPE}-${OS}-x86_64-1.2.1.tar.gz" |
  "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-${TF_TYPE}-${OS}-x86_64-1.3.0-rc0.tar.gz" |
  tar -xz -C ./jni

### Install on Windows
@@ -149,10 +149,10 @@ Take the following steps to install TensorFlow for Java on Linux or Mac OS:
Take the following steps to install TensorFlow for Java on Windows:

1. Download
   [libtensorflow.jar](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-1.2.1.jar),
   [libtensorflow.jar](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-1.3.0-rc0.jar),
   which is the TensorFlow Java Archive (JAR).
2. Download the following Java Native Interface (JNI) file appropriate for
   [TensorFlow for Java on Windows](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-cpu-windows-x86_64-1.2.1.zip).
   [TensorFlow for Java on Windows](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-cpu-windows-x86_64-1.3.0-rc0.zip).
3. Extract this .zip file.
@@ -200,7 +200,7 @@ must be part of your `classpath`. For example, you can include the
downloaded `.jar` in your `classpath` by using the `-cp` compilation flag
as follows:

<pre><b>javac -cp libtensorflow-1.2.1.jar HelloTF.java</b></pre>
<pre><b>javac -cp libtensorflow-1.3.0-rc0.jar HelloTF.java</b></pre>

### Running
@@ -214,11 +214,11 @@ two files are available to the JVM:
For example, the following command line executes the `HelloTF` program on Linux
and Mac OS X:

<pre><b>java -cp libtensorflow-1.2.1.jar:. -Djava.library.path=./jni HelloTF</b></pre>
<pre><b>java -cp libtensorflow-1.3.0-rc0.jar:. -Djava.library.path=./jni HelloTF</b></pre>

And the following command line executes the `HelloTF` program on Windows:

<pre><b>java -cp libtensorflow-1.2.1.jar;. -Djava.library.path=jni HelloTF</b></pre>
<pre><b>java -cp libtensorflow-1.3.0-rc0.jar;. -Djava.library.path=jni HelloTF</b></pre>

If the program prints <tt>Hello from <i>version</i></tt>, you've successfully
installed TensorFlow for Java and are ready to use the API. If the program
@@ -172,7 +172,7 @@ Take the following steps to install TensorFlow with Virtualenv:
virtualenv environment:

<pre>(tensorflow)$ <b>pip3 install --upgrade \
https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.2.1-cp34-cp34m-linux_x86_64.whl</b></pre>
https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.3.0rc0-cp34-cp34m-linux_x86_64.whl</b></pre>

If you encounter installation problems, see
[Common Installation Problems](#common_installation_problems).
@@ -277,7 +277,7 @@ take the following steps:

<pre>
$ <b>sudo pip3 install --upgrade \
https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.2.1-cp34-cp34m-linux_x86_64.whl</b>
https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.3.0rc0-cp34-cp34m-linux_x86_64.whl</b>
</pre>

If this step fails, see
@@ -464,7 +464,7 @@ Take the following steps to install TensorFlow in an Anaconda environment:

<pre>
(tensorflow)$ <b>pip install --ignore-installed --upgrade \
https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.2.1-cp34-cp34m-linux_x86_64.whl</b></pre>
https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.3.0rc0-cp34-cp34m-linux_x86_64.whl</b></pre>

<a name="ValidateYourInstallation"></a>
@ -632,14 +632,14 @@ This section documents the relevant values for Linux installations.
|
||||
CPU only:
|
||||
|
||||
<pre>
|
||||
https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.2.1-cp27-none-linux_x86_64.whl
|
||||
https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.3.0rc0-cp27-none-linux_x86_64.whl
|
||||
</pre>
|
||||
|
||||
|
||||
GPU support:
|
||||
|
||||
<pre>
|
||||
https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.2.1-cp27-none-linux_x86_64.whl
|
||||
https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.3.0rc0-cp27-none-linux_x86_64.whl
|
||||
</pre>
|
||||
|
||||
Note that GPU support requires the NVIDIA hardware and software described in
|
||||
@ -651,14 +651,14 @@ Note that GPU support requires the NVIDIA hardware and software described in
CPU only:

<pre>
https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.2.1-cp34-cp34m-linux_x86_64.whl
https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.3.0rc0-cp34-cp34m-linux_x86_64.whl
</pre>


GPU support:

<pre>
https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.2.1-cp34-cp34m-linux_x86_64.whl
https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.3.0rc0-cp34-cp34m-linux_x86_64.whl
</pre>

Note that GPU support requires the NVIDIA hardware and software described in
@@ -670,14 +670,14 @@ Note that GPU support requires the NVIDIA hardware and software described in
CPU only:

<pre>
https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.2.1-cp35-cp35m-linux_x86_64.whl
https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.3.0rc0-cp35-cp35m-linux_x86_64.whl
</pre>


GPU support:

<pre>
https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.2.1-cp35-cp35m-linux_x86_64.whl
https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.3.0rc0-cp35-cp35m-linux_x86_64.whl
</pre>

@@ -689,14 +689,14 @@ Note that GPU support requires the NVIDIA hardware and software described in
CPU only:

<pre>
https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.2.1-cp36-cp36m-linux_x86_64.whl
https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.3.0rc0-cp36-cp36m-linux_x86_64.whl
</pre>


GPU support:

<pre>
https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.2.1-cp36-cp36m-linux_x86_64.whl
https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.3.0rc0-cp36-cp36m-linux_x86_64.whl
</pre>
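The nightly URLs above all follow one pattern, varying only the release version, the CPython tag, and the cpu/gpu directory. As a hedged sketch (the variable names are ours, not from the docs), assembling one of the listed URLs looks like:

```shell
# Build the CPU-only Linux wheel URL for a given TensorFlow version and
# Python ABI tag (pattern taken from the listings above).
VERSION=1.3.0rc0
PYTAG=cp35-cp35m
URL="https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-${VERSION}-${PYTAG}-linux_x86_64.whl"
echo "$URL"
```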
@@ -109,7 +109,7 @@ Take the following steps to install TensorFlow with Virtualenv:
TensorFlow in the active Virtualenv is as follows:

<pre> $ <b>pip3 install --upgrade \
https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.2.1-py2-none-any.whl</b></pre>
https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.3.0rc0-py2-none-any.whl</b></pre>

If you encounter installation problems, see
[Common Installation Problems](#common-installation-problems).
@@ -230,7 +230,7 @@ take the following steps:
issue the following command:

<pre> $ <b>sudo pip3 install --upgrade \
https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.2.1-py2-none-any.whl</b> </pre>
https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.3.0rc0-py2-none-any.whl</b> </pre>

If the preceding command fails, see
[installation problems](#common-installation-problems).
@@ -339,7 +339,7 @@ Take the following steps to install TensorFlow in an Anaconda environment:
TensorFlow for Python 2.7:

<pre> (tensorflow)$ <b>pip install --ignore-installed --upgrade \
https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.2.1-py2-none-any.whl</b></pre>
https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.3.0rc0-py2-none-any.whl</b></pre>

<a name="ValidateYourInstallation"></a>
@@ -512,7 +512,7 @@ This section documents the relevant values for Mac OS installations.


<pre>
https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.2.1-py2-none-any.whl
https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.3.0rc0-py2-none-any.whl
</pre>


@@ -520,7 +520,7 @@ https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.2.1-py2-none-any.


<pre>
https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.2.1-py3-none-any.whl
https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.3.0rc0-py3-none-any.whl
</pre>
@@ -342,10 +342,10 @@ Invoke `pip install` to install that pip package.
The filename of the `.whl` file depends on your platform.
For example, the following command will install the pip package
for TensorFlow 1.2.1 on Linux:
for TensorFlow 1.3.0rc0 on Linux:

<pre>
$ <b>sudo pip install /tmp/tensorflow_pkg/tensorflow-1.2.1-py2-none-any.whl</b>
$ <b>sudo pip install /tmp/tensorflow_pkg/tensorflow-1.3.0rc0-py2-none-any.whl</b>
</pre>
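As the passage notes, the `.whl` filename depends on your platform; the fields follow the standard wheel naming convention (`name-version-pythontag-abitag-platformtag.whl`). A minimal sketch of pulling those fields apart — the helper name is ours, and it assumes a five-field filename with no optional build tag:

```python
def parse_wheel_name(filename):
    """Split a simple (five-field) wheel filename into its components."""
    stem = filename[:-len(".whl")]
    name, version, python_tag, abi_tag, platform_tag = stem.split("-")
    return {"name": name, "version": version, "python": python_tag,
            "abi": abi_tag, "platform": platform_tag}

fields = parse_wheel_name("tensorflow-1.3.0rc0-py2-none-any.whl")
print(fields["version"])  # → 1.3.0rc0
```

This makes it easy to see why the command above only works for a matching interpreter: the `python`/`abi` fields must agree with the pip that installs the wheel.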
## Validate your installation

@@ -115,12 +115,12 @@ Take the following steps to install TensorFlow in an Anaconda environment:
environment. To install the CPU-only version of TensorFlow, enter the
following command:

<pre>(tensorflow)C:\> <b>pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-1.2.1-cp35-cp35m-win_amd64.whl</b> </pre>
<pre>(tensorflow)C:\> <b>pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-1.3.0rc0-cp35-cp35m-win_amd64.whl</b> </pre>

To install the GPU version of TensorFlow, enter the following command
(on a single line):

<pre>(tensorflow)C:\> <b>pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/windows/gpu/tensorflow_gpu-1.2.1-cp35-cp35m-win_amd64.whl</b> </pre>
<pre>(tensorflow)C:\> <b>pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/windows/gpu/tensorflow_gpu-1.3.0rc0-cp35-cp35m-win_amd64.whl</b> </pre>

## Validate your installation
@@ -23,7 +23,6 @@ exports_files(["LICENSE"])
load("//tensorflow:tensorflow.bzl", "cuda_py_test")
load("//tensorflow:tensorflow.bzl", "py_test")
load("//tensorflow:tensorflow.bzl", "if_not_windows")
load("//tensorflow:tensorflow.bzl", "tf_gen_op_wrapper_py")

py_library(
    name = "debug_py",
@@ -32,7 +31,6 @@ py_library(
    visibility = ["//visibility:public"],
    deps = [
        ":debug_data",
        ":debug_gradients",
        ":debug_utils",
        ":hooks",
        ":local_cli_wrapper",
@@ -62,24 +60,6 @@ py_library(
    ],
)

tf_gen_op_wrapper_py(
    name = "debug_ops",
    deps = ["//tensorflow/core:debug_ops_op_lib"],
)

py_library(
    name = "debug_gradients",
    srcs = ["lib/debug_gradients.py"],
    srcs_version = "PY2AND3",
    deps = [
        ":debug_data",
        ":debug_ops",
        "//tensorflow/python:framework",
        "//tensorflow/python:platform",
        "@six_archive//:six",
    ],
)

py_library(
    name = "debug_utils",
    srcs = ["lib/debug_utils.py"],
@@ -401,26 +381,6 @@ py_test(
    ],
)

cuda_py_test(
    name = "debug_gradients_test",
    size = "small",
    srcs = [
        "lib/debug_gradients_test.py",
    ],
    additional_deps = [
        ":debug_data",
        ":debug_gradients",
        ":debug_utils",
        "//tensorflow/python:client",
        "//tensorflow/python:framework_test_lib",
        "//tensorflow/python:gradients",
        "//tensorflow/python:math_ops",
        "//tensorflow/python:platform_test",
        "//tensorflow/python:training",
        "//tensorflow/python:variables",
    ],
)

py_test(
    name = "debug_utils_test",
    size = "small",
@@ -31,9 +31,6 @@ See the @{$python/tfdbg} guide.
@@LocalCLIDebugHook
@@LocalCLIDebugWrapperSession
@@WatchOptions

@@GradientsDebugger
@@clear_gradient_debuggers
"""

from __future__ import absolute_import
@@ -47,8 +44,6 @@ from tensorflow.python.debug.lib.debug_data import has_inf_or_nan
from tensorflow.python.debug.lib.debug_data import load_tensor_from_event
from tensorflow.python.debug.lib.debug_data import load_tensor_from_event_file

from tensorflow.python.debug.lib.debug_gradients import GradientsDebugger

from tensorflow.python.debug.lib.debug_utils import add_debug_tensor_watch
from tensorflow.python.debug.lib.debug_utils import watch_graph
from tensorflow.python.debug.lib.debug_utils import watch_graph_with_blacklists
@@ -1,417 +0,0 @@
# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""TensorFlow Debugger: Tools for debugging gradients."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import re
import uuid

import six

from tensorflow.python.debug.lib import debug_data
from tensorflow.python.debug.ops import gen_debug_ops
from tensorflow.python.framework import ops
from tensorflow.python.ops import variables

_GRADIENT_DEBUG_TAG = "gradient_debug_"

_gradient_debuggers = {}


def _tensor_to_grad_debug_op_name(tensor, grad_debugger_uuid):
  op_name, slot = debug_data.parse_node_or_tensor_name(tensor.name)
  return "%s_%d/%s%s" % (op_name, slot, _GRADIENT_DEBUG_TAG, grad_debugger_uuid)


def _parse_grad_debug_op_name(op_name):
  """Parse the name of a debug gradient op.

  Args:
    op_name: the name of the debug gradient op.

  Returns:
    1) The UUID of the GradientsDebugger that created the debug gradient op.
    2) Name of the original tensor whose gradient is debugged by the debug
       gradient op.
  """
  name_items = op_name.split("/")
  assert len(name_items) > 1
  assert name_items[-1].startswith(_GRADIENT_DEBUG_TAG)

  grad_debugger_uuid = name_items[-1][len(_GRADIENT_DEBUG_TAG):]
  if "_" in grad_debugger_uuid:
    grad_debugger_uuid = grad_debugger_uuid[:grad_debugger_uuid.index("_")]
  orig_tensor_slot = int(name_items[-2][name_items[-2].rfind("_") + 1:])
  orig_base_op_name = name_items[-2][:name_items[-2].rfind("_")]
  orig_tensor_name = ("/".join(name_items[:-2] + [orig_base_op_name]) +
                      ":%d" % orig_tensor_slot)

  return grad_debugger_uuid, orig_tensor_name
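The two helpers above are inverses: `_tensor_to_grad_debug_op_name` encodes the watched tensor's op name, output slot, and debugger UUID into one op name, and `_parse_grad_debug_op_name` decodes it. A standalone sketch of the decoding (the example op name, tensor name, and UUID `deadbeef` are made up for illustration):

```python
GRADIENT_DEBUG_TAG = "gradient_debug_"

def parse_grad_debug_op_name(op_name):
    # Mirrors _parse_grad_debug_op_name: the last path item carries the tag
    # plus the debugger UUID; the second-to-last encodes "<op>_<slot>".
    name_items = op_name.split("/")
    grad_debugger_uuid = name_items[-1][len(GRADIENT_DEBUG_TAG):]
    if "_" in grad_debugger_uuid:
        grad_debugger_uuid = grad_debugger_uuid[:grad_debugger_uuid.index("_")]
    slot = int(name_items[-2][name_items[-2].rfind("_") + 1:])
    base = name_items[-2][:name_items[-2].rfind("_")]
    tensor_name = "/".join(name_items[:-2] + [base]) + ":%d" % slot
    return grad_debugger_uuid, tensor_name

# An op name as _tensor_to_grad_debug_op_name would build it for tensor
# "dense/MatMul:0" and debugger UUID "deadbeef":
print(parse_grad_debug_op_name("dense/MatMul_0/gradient_debug_deadbeef"))
# → ('deadbeef', 'dense/MatMul:0')
```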
class GradientsDebugger(object):
  """Gradients Debugger.

  Allows retrieval of gradient tensors created by TensorFlow's automatic
  differentiation algorithm, i.e., @{tf.gradients} and optimizer classes that
  use it.
  """
  # TODO(cais): Add examples code in the doc string?

  def __init__(self, y_tensor=None):
    """Constructor of GradientsDebugger.

    Args:
      y_tensor: optional: the `tf.Tensor` to be differentiated, i.e., the
        tensor on the numerator of the differentiation.
    """

    self._uuid = uuid.uuid4().hex
    _gradient_debuggers[self._uuid] = self

    # A dict mapping x-tensor names to gradient tensor. x-tensor refers to the
    # independent tf.Tensor, i.e., the tensor on the denominator of the
    # differentiation.
    self._gradient_tensors = {}
    self._y_tensor = y_tensor

    self._graph = None
    if y_tensor:
      self._graph = y_tensor.graph

    self._is_active_context = False

  @property
  def y_tensor(self):
    return self._y_tensor

  @property
  def graph(self):
    return self._graph

  def __enter__(self):
    self._is_active_context = True

  def __exit__(self, unused_type, unused_value, unused_traceback):
    self._is_active_context = False

  def identify_gradient(self, input_tensor):
    """Create a debug identity tensor that registers and forwards gradients.

    The side effect of this method is that when gradient tensor(s) are created
    with respect to any paths that include the `input_tensor`, the gradient
    tensor(s) with respect to `input_tensor` will be registered with this
    `GradientsDebugger` instance and can later be retrieved with the methods
    `gradient_tensor` and `gradient_tensors`.

    Example:

    ```python
    x = tf.Variable(1.0)
    y = tf.add(x, x)

    grad_debugger = tf_debug.GradientsDebugger()
    debug_y = grad_debugger.identify_gradient(y)
    z = tf.square(debug_y)

    # Create a train op under the grad_debugger context.
    with grad_debugger:
      train_op = tf.train.GradientDescentOptimizer(0.1).minimize(z)

    # Now we can reflect through grad_debugger to get the gradient tensor
    # with respect to y.
    y_grad = grad_debugger.gradient_tensor(y)
    ```

    Args:
      input_tensor: the input `tf.Tensor` object whose related gradient
        tensors are to be registered with this `GradientsDebugger` instance
        when they are created, e.g., during @{tf.gradients} calls or the
        construction of optimization (training) ops that use @{tf.gradients}.

    Returns:
      A forwarded identity of `input_tensor`, as a `tf.Tensor`.

    Raises:
      ValueError: If an op with a name that duplicates the gradient-debugging
        op's already exists in the graph (highly unlikely).
    """
    # TODO(cais): Allow overriding gradient.
    # TODO(cais): Implement value_stack.
    grad_debug_op_name = _tensor_to_grad_debug_op_name(input_tensor, self._uuid)
    debug_identity = gen_debug_ops.debug_identity(
        input_tensor,
        tensor_name=input_tensor.name,
        debug_urls=[],
        name=grad_debug_op_name)
    if debug_identity.op.name != grad_debug_op_name:
      raise ValueError(
          "The graph already contains an op named %s" % grad_debug_op_name)
    return debug_identity
  def watch_gradients_by_tensors(self, graph, tensors):
    """Watch gradient tensors by x-tensor(s).

    The side effect of this method is that when gradient tensor(s) are created
    with respect to any paths that include the `x_tensor`s, the gradient
    tensor(s) with respect to the tensor will be registered with this
    `GradientsDebugger` instance and can later be retrieved with the methods
    `gradient_tensor` and `gradient_tensors`.

    Unlike the method `identify_gradient`, this method is used to retrieve
    gradient tensors after the construction of the forward subgraph has
    completed (but before the construction of the backward subgraph).

    This method is the same as `watch_gradients_by_tensor_names` except that
    the tensors are specified by the Python `tf.Tensor` or `tf.Variable`
    objects, instead of by name patterns.

    Example:

    ```python
    x = tf.Variable(1.0)
    y = tf.add(x, x, name="y")
    z = tf.square(y)

    # Create a train op under the grad_debugger context.
    grad_debugger = tf_debug.GradientsDebugger()
    with grad_debugger.watch_gradients_by_tensors(tf.get_default_graph(), y):
      train_op = tf.train.GradientDescentOptimizer(0.1).minimize(z)

    # Now we can reflect through grad_debugger to get the gradient tensor
    # with respect to y.
    y_grad = grad_debugger.gradient_tensor(y)
    # or
    y_grad = grad_debugger.gradient_tensor("y:0")
    ```

    Args:
      graph: the `tf.Graph` to watch the gradients on.
      tensors: a `tf.Tensor` or `tf.Variable` object, or a list of such
        objects.

    Returns:
      The GradientsDebugger instance itself.
    """

    if not isinstance(tensors, list):
      tensors = [tensors]

    tensor_name_regex = []
    for tensor in tensors:
      tensor_name_regex.append(re.escape(tensor.name) + "$")
    tensor_name_regex = "(" + "|".join(tensor_name_regex) + ")"
    return self.watch_gradients_by_tensor_names(graph, tensor_name_regex)
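The method above reduces the tensor-object case to the name-pattern case by escaping each tensor name, anchoring it with `$`, and joining the alternatives into one regex. A standalone sketch with made-up tensor names:

```python
import re

# Hypothetical tensor names; mirrors how watch_gradients_by_tensors builds
# one anchored regex out of several exact names.
names = ["y:0", "w/read:0"]
pattern = "(" + "|".join(re.escape(n) + "$" for n in names) + ")"

# re.match anchors at the start, and the trailing "$" anchors at the end,
# so only exact names match.
print(bool(re.match(pattern, "y:0")))   # exact name matches
print(bool(re.match(pattern, "y:01")))  # longer name does not
```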
  def watch_gradients_by_tensor_names(self, graph, tensor_name_regex):
    """Watch gradient tensors by name(s) of the x-tensor(s).

    The side effect of this method is that when gradient tensor(s) are created
    with respect to the x-tensors, the gradient tensor(s) will be registered
    with this `GradientsDebugger` instance and can later be retrieved.

    Unlike the `identify_gradient` method, this method is used after the
    construction of the forward graph has completed. Unlike the
    `watch_gradients_by_tensors` method, this method does not use handles to
    the tensors of interest; it uses their names.

    This method is the same as `watch_gradients_by_tensors` except that the
    x-tensors are specified by name patterns, instead of by `tf.Tensor` or
    `tf.Variable` objects.

    Example:

    ```python
    x = tf.Variable(1.0, name="x")
    y = tf.add(x, x, name="y")
    z = tf.square(y)

    # Create a train op under the grad_debugger context.
    grad_debugger = tf_debug.GradientsDebugger()
    with grad_debugger.watch_gradients_by_tensor_names(tf.get_default_graph(),
                                                       r"(x|y):0$"):
      train_op = tf.train.GradientDescentOptimizer(0.1).minimize(z)

    # Now we can reflect through grad_debugger to get the gradient tensor
    # with respect to x and y.
    x_grad = grad_debugger.gradient_tensor("x:0")
    y_grad = grad_debugger.gradient_tensor("y:0")
    ```

    Args:
      graph: the `tf.Graph` to watch the gradients on.
      tensor_name_regex: the regular-expression pattern of the name(s) of the
        x-tensor(s) to watch. x-tensor refers to the tensors on the
        denominator of the differentiation.

    Returns:
      The GradientsDebugger instance itself.
    """
    tensor_name_pattern = re.compile(tensor_name_regex)

    # pylint: disable=protected-access
    with graph.as_default():
      for op in graph.get_operations():
        for output in op.outputs:
          if tensor_name_pattern.match(output.name):
            debug_op = self.identify_gradient(output)

            for consumer in output.consumers():
              if consumer == debug_op.op:
                continue

              # Locate the slot index of the original input.
              input_slots = []
              for i, consumer_input in enumerate(consumer._inputs):
                if consumer_input == output:
                  input_slots.append(i)

              for slot in input_slots:
                consumer._inputs[slot] = debug_op
                debug_op._consumers.append(consumer)

            del output._consumers[:]
            output._consumers.append(debug_op.op)
    # pylint: enable=protected-access

    return self
  def _check_same_graph(self, tensor):
    if self._graph is None:
      self._graph = tensor.graph
    elif self._graph != tensor.graph:
      raise ValueError(
          "The graph of the value (%s) is not the same as the graph %s" %
          (tensor.graph, self._graph))

  def register_gradient_tensor(self,
                               x_tensor_name,
                               gradient_tensor):
    """Register the gradient tensor for an x-tensor.

    Args:
      x_tensor_name: (`str`) the name of the independent `tf.Tensor`, i.e.,
        the tensor on the denominator of the differentiation.
      gradient_tensor: the gradient `tf.Tensor`.
    """
    if len(_gradient_debuggers) == 1 or self._is_active_context:
      self._check_same_graph(gradient_tensor)
      self._gradient_tensors[x_tensor_name] = gradient_tensor

  def gradient_tensor(self, x_tensor):
    """Get the gradient tensor of an x-tensor.

    Args:
      x_tensor: (`tf.Tensor`, `tf.Variable` or `str`) The x-tensor object or
        its name. x-tensor refers to the independent `tf.Tensor`, i.e., the
        tensor on the denominator of the differentiation.

    Returns:
      If found, the gradient tensor.

    Raises:
      TypeError: If `x_tensor` is not a `tf.Tensor`, `tf.Variable` or `str`.
      LookupError: If the `x_tensor` has not been registered with a gradient
        tensor.
    """
    x_tensor_name = self._get_tensor_name(x_tensor)
    if x_tensor_name not in self._gradient_tensors:
      raise LookupError(
          "This GradientsDebugger has not received any gradient tensor for "
          "x-tensor %s" % x_tensor_name)
    return self._gradient_tensors[x_tensor_name]

  def gradient_tensors(self):
    """Get the gradient tensors that this object is aware of.

    Returns:
      A dict mapping x-tensor names to gradient tensor objects. x-tensor
      refers to the tensors on the denominator of the differentiation.
    """
    return self._gradient_tensors

  def _get_tensor_name(self, tensor):
    if isinstance(tensor, (ops.Tensor, variables.Variable)):
      return tensor.name
    elif isinstance(tensor, six.string_types):
      return tensor
    else:
      raise TypeError(
          "x_tensor must be a str or tf.Tensor or tf.Variable, "
          "but instead has type %s" % type(tensor))


def clear_gradient_debuggers():
  """Clear all globally registered gradient debuggers."""
  _gradient_debuggers.clear()
@ops.RegisterGradient("DebugIdentity")
def _identify_gradient_grad(op, dy):
  """Gradient function for the DebugIdentity op."""
  # TODO(cais): Allow overriding gradient.
  grad_debugger_uuid, orig_tensor_name = _parse_grad_debug_op_name(op.name)
  grad_debugger = _gradient_debuggers[grad_debugger_uuid]
  grad_debugger.register_gradient_tensor(orig_tensor_name, dy)
  return dy


def gradient_values_from_dump(grad_debugger, x_tensor, dump):
  """Find gradient values from a `DebugDumpDir` object.

  Args:
    grad_debugger: the `tf_debug.GradientsDebugger` instance to be used.
    x_tensor: (`tf.Tensor`, `tf.Variable` or `str`) The x-tensor object or
      its name. x-tensor refers to the independent `tf.Tensor`, i.e., the
      tensor on the denominator of the differentiation.
    dump: A `tfdbg.DebugDumpDir` object.

  Returns:
    If this `GradientsDebugger` instance has the gradient tensor of `x_tensor`
    registered: a list of `numpy.ndarray` representing the value of the
    gradient tensor from `dump`. The list could be empty, if the gradient
    tensor is not executed in the `tf.Session.run()` call that generated
    the `dump`. The list could also contain multiple values of the gradient
    tensor, e.g., if the gradient tensor is computed repeatedly in a
    `tf.while_loop` during the run that generated the `dump`.

  Raises:
    LookupError: If this `GradientsDebugger` instance does not have the
      gradient tensor of `x_tensor` registered.
    ValueError: If this `GradientsDebugger` has a `tf.Graph` object that
      does not match the `tf.Graph` object of the `dump`.
    TypeError: If `x_tensor` is not a `tf.Tensor`, `tf.Variable` or `str`.
  """
  # TODO(cais): Use this method in LocalCLIDebugWrapperSession to present the
  # gradient tensors to the TFDBG CLI.

  # If possible, verify that the Python graph of the dump and that of this
  # GradientsDebugger match.
  if (dump.python_graph and grad_debugger.graph and
      dump.python_graph != grad_debugger.graph):
    raise ValueError(
        "This GradientsDebugger instance has a graph (%s) that differs from "
        "the graph of the DebugDumpDir object (%s)." %
        (grad_debugger.graph, dump.python_graph))

  gradient_tensor = grad_debugger.gradient_tensor(x_tensor)
  node_name, output_slot = debug_data.parse_node_or_tensor_name(
      gradient_tensor.name)

  try:
    return dump.get_tensors(node_name, output_slot, "DebugIdentity")
  except debug_data.WatchKeyDoesNotExistInDebugDumpDirError:
    return []
@@ -1,378 +0,0 @@
# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Unit tests for debug_gradients module."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import shutil
import tempfile

from tensorflow.core.protobuf import config_pb2
from tensorflow.python.client import session
from tensorflow.python.debug.lib import debug_data
from tensorflow.python.debug.lib import debug_gradients
from tensorflow.python.debug.lib import debug_utils
from tensorflow.python.framework import ops
from tensorflow.python.framework import test_util
from tensorflow.python.ops import gradients_impl
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import variables
from tensorflow.python.platform import googletest
from tensorflow.python.training import gradient_descent


class IdentifyGradientTest(test_util.TensorFlowTestCase):

  def setUp(self):
    self.sess = session.Session()
    with self.sess:
      self.u = variables.Variable(2.0, name="u")
      self.v = variables.Variable(3.0, name="v")
      self.w = math_ops.multiply(self.u.value(), self.v.value(), name="w")

  def tearDown(self):
    ops.reset_default_graph()
    debug_gradients.clear_gradient_debuggers()
  def testIdentifyGradientGivesCorrectTensorObjectWithoutContextManager(self):
    grad_debugger = debug_gradients.GradientsDebugger()
    id_grad_w = grad_debugger.identify_gradient(self.w)
    y = math_ops.add(id_grad_w, -1.0, name="y")

    grads = gradients_impl.gradients(y, [self.u, self.v])
    self.assertEqual(2, len(grads))
    u_grad = grads[0]
    v_grad = grads[1]

    self.sess.run(variables.global_variables_initializer())
    self.assertAllClose(5.0, self.sess.run(y))
    self.assertAllClose(3.0, self.sess.run(u_grad))
    self.assertAllClose(2.0, self.sess.run(v_grad))

    # Fetch the gradient tensor with the x-tensor object.
    w_grad = grad_debugger.gradient_tensor(self.w)
    self.assertIsInstance(w_grad, ops.Tensor)
    self.assertAllClose(1.0, self.sess.run(w_grad))

    # Fetch the gradient tensor with the x-tensor's name.
    w_grad = grad_debugger.gradient_tensor(self.w.name)
    self.assertIsInstance(w_grad, ops.Tensor)
    self.assertAllClose(1.0, self.sess.run(w_grad))

    # Fetch the gradient tensor with the x-tensor name.
    w_grad = grad_debugger.gradient_tensor(self.w.name)
    self.assertIsInstance(w_grad, ops.Tensor)
    self.assertAllClose(1.0, self.sess.run(w_grad))

  def testIdentifyGradientGivesCorrectTensorObjectWithTfGradients(self):
    grad_debugger = debug_gradients.GradientsDebugger()
    id_grad_w = grad_debugger.identify_gradient(self.w)
    y = math_ops.add(id_grad_w, -1.0, name="y")

    with grad_debugger:
      grads = gradients_impl.gradients(y, [self.u, self.v])
    self.assertEqual(2, len(grads))
    u_grad = grads[0]
    v_grad = grads[1]

    self.sess.run(variables.global_variables_initializer())
    self.assertAllClose(5.0, self.sess.run(y))
    self.assertAllClose(3.0, self.sess.run(u_grad))
    self.assertAllClose(2.0, self.sess.run(v_grad))

    # Fetch the gradient tensor with the x-tensor object.
    w_grad = grad_debugger.gradient_tensor(self.w)
    self.assertIsInstance(w_grad, ops.Tensor)
    self.assertAllClose(1.0, self.sess.run(w_grad))

    # Fetch the gradient tensor with the x-tensor's name.
    w_grad = grad_debugger.gradient_tensor(self.w.name)
    self.assertIsInstance(w_grad, ops.Tensor)
    self.assertAllClose(1.0, self.sess.run(w_grad))

    # Fetch the gradient tensor with the x-tensor name.
    w_grad = grad_debugger.gradient_tensor(self.w.name)
    self.assertIsInstance(w_grad, ops.Tensor)
    self.assertAllClose(1.0, self.sess.run(w_grad))

  def testCallingIdentifyGradientTwiceWithTheSameGradientsDebuggerErrors(self):
    grad_debugger = debug_gradients.GradientsDebugger()
    grad_debugger.identify_gradient(self.w)
    with self.assertRaisesRegexp(
        ValueError, "The graph already contains an op named .*"):
      grad_debugger.identify_gradient(self.w)
  def testIdentifyGradientWorksOnMultipleLosses(self):
    grad_debugger_1 = debug_gradients.GradientsDebugger()
    grad_debugger_2 = debug_gradients.GradientsDebugger()

    y = math_ops.add(self.w, -1.0, name="y")
    debug_y = grad_debugger_1.identify_gradient(y)
    z1 = math_ops.square(debug_y, name="z1")

    debug_y = grad_debugger_2.identify_gradient(y)
    z2 = math_ops.sqrt(debug_y, name="z2")

    with grad_debugger_1:
      gradient_descent.GradientDescentOptimizer(0.1).minimize(z1)
    with grad_debugger_2:
      gradient_descent.GradientDescentOptimizer(0.1).minimize(z2)

    dz1_dy = grad_debugger_1.gradient_tensor(y)
    dz2_dy = grad_debugger_2.gradient_tensor(y)
    self.assertIsInstance(dz1_dy, ops.Tensor)
    self.assertIsInstance(dz2_dy, ops.Tensor)
    self.assertIsNot(dz1_dy, dz2_dy)

    self.sess.run(variables.global_variables_initializer())
    self.assertAllClose(5.0 ** 2, self.sess.run(z1))
    self.assertAllClose(5.0 ** 0.5, self.sess.run(z2))
    self.assertAllClose(2.0 * 5.0, self.sess.run(dz1_dy))
    self.assertAllClose(0.5 * (5.0 ** -0.5), self.sess.run(dz2_dy))

  def testIdentifyGradientRaisesLookupErrorForUnknownXTensor(self):
    grad_debugger_1 = debug_gradients.GradientsDebugger()
    grad_debugger_2 = debug_gradients.GradientsDebugger()
    id_grad_w = grad_debugger_1.identify_gradient(self.w)
    y = math_ops.add(id_grad_w, -1.0, name="y")

    # There are >1 gradient debuggers registered, and grad_debugger is not used
    # as a context manager here, so the gradient w.r.t. self.w will not be
    # registered.
    gradients_impl.gradients(y, [self.u, self.v])

    with self.assertRaisesRegexp(
        LookupError,
        r"This GradientsDebugger has not received any gradient tensor for "):
      grad_debugger_1.gradient_tensor(self.w)
    with self.assertRaisesRegexp(
        LookupError,
        r"This GradientsDebugger has not received any gradient tensor for "):
      grad_debugger_2.gradient_tensor(self.w)
def testIdentifyGradientRaisesTypeErrorForNonTensorOrTensorNameInput(self):
|
||||
grad_debugger = debug_gradients.GradientsDebugger()
|
||||
with self.assertRaisesRegexp(
|
||||
TypeError,
|
||||
r"x_tensor must be a str or tf\.Tensor or tf\.Variable, but instead "
|
||||
r"has type .*Operation.*"):
|
||||
grad_debugger.gradient_tensor(variables.global_variables_initializer())
|
||||
|
||||
  def testIdentifyGradientTensorWorksWithGradientDescentOptimizer(self):
    grad_debugger = debug_gradients.GradientsDebugger()
    id_grad_w = grad_debugger.identify_gradient(self.w)
    y = math_ops.add(id_grad_w, -1.0, name="y")

    with grad_debugger:
      gradient_descent.GradientDescentOptimizer(0.1).minimize(y)

    self.sess.run(variables.global_variables_initializer())

    # Fetch the gradient tensor with the x-tensor object.
    w_grad = grad_debugger.gradient_tensor(self.w)
    self.assertIsInstance(w_grad, ops.Tensor)
    self.assertAllClose(1.0, self.sess.run(w_grad))

  def testWatchGradientsByXTensorNamesWorks(self):
    y = math_ops.add(self.w, -1.0, name="y")

    # The construction of the forward graph has completed.
    # But we can still get the gradient tensors by using
    # watch_gradients_by_tensor_names().
    grad_debugger = debug_gradients.GradientsDebugger()
    with grad_debugger.watch_gradients_by_tensor_names(self.sess.graph, "w:0$"):
      grads = gradients_impl.gradients(y, [self.u, self.v])
    self.assertEqual(2, len(grads))
    u_grad = grads[0]
    v_grad = grads[1]

    self.sess.run(variables.global_variables_initializer())
    self.assertAllClose(5.0, self.sess.run(y))
    self.assertAllClose(3.0, self.sess.run(u_grad))
    self.assertAllClose(2.0, self.sess.run(v_grad))

    w_grad = grad_debugger.gradient_tensor(self.w)
    self.assertIsInstance(w_grad, ops.Tensor)
    self.assertAllClose(1.0, self.sess.run(w_grad))

    w_grad = grad_debugger.gradient_tensor("w:0")
    self.assertIsInstance(w_grad, ops.Tensor)
    self.assertAllClose(1.0, self.sess.run(w_grad))

  def testWatchGradientsByXTensorNamesWorksWithoutContextManager(self):
    y = math_ops.add(self.w, -1.0, name="y")

    # The construction of the forward graph has completed.
    # But we can still get the gradient tensors by using
    # watch_gradients_by_tensor_names().
    grad_debugger = debug_gradients.GradientsDebugger()
    grad_debugger.watch_gradients_by_tensor_names(self.sess.graph, "w:0$")
    grads = gradients_impl.gradients(y, [self.u, self.v])
    self.assertEqual(2, len(grads))
    u_grad = grads[0]
    v_grad = grads[1]

    self.sess.run(variables.global_variables_initializer())
    self.assertAllClose(5.0, self.sess.run(y))
    self.assertAllClose(3.0, self.sess.run(u_grad))
    self.assertAllClose(2.0, self.sess.run(v_grad))

    w_grad = grad_debugger.gradient_tensor(self.w)
    self.assertIsInstance(w_grad, ops.Tensor)
    self.assertAllClose(1.0, self.sess.run(w_grad))

    w_grad = grad_debugger.gradient_tensor("w:0")
    self.assertIsInstance(w_grad, ops.Tensor)
    self.assertAllClose(1.0, self.sess.run(w_grad))

  def testWatchGradientsWorksOnRefTensor(self):
    y = math_ops.add(self.w, -1.0, name="y")

    grad_debugger = debug_gradients.GradientsDebugger()
    with grad_debugger.watch_gradients_by_tensor_names(self.sess.graph, "u:0$"):
      grads = gradients_impl.gradients(y, [self.u, self.v])
    self.assertEqual(2, len(grads))
    u_grad = grads[0]
    v_grad = grads[1]

    self.assertIs(u_grad, grad_debugger.gradient_tensor("u:0"))

    self.sess.run(variables.global_variables_initializer())
    self.assertAllClose(3.0, self.sess.run(u_grad))
    self.assertAllClose(2.0, self.sess.run(v_grad))
    self.assertAllClose(
        3.0, self.sess.run(grad_debugger.gradient_tensor("u:0")))

  def testWatchGradientsWorksOnMultipleTensors(self):
    y = math_ops.add(self.w, -1.0, name="y")

    grad_debugger = debug_gradients.GradientsDebugger()
    with grad_debugger.watch_gradients_by_tensor_names(self.sess.graph,
                                                       "(u|w):0$"):
      grads = gradients_impl.gradients(y, [self.u, self.v])
    self.assertEqual(2, len(grads))
    u_grad = grads[0]

    self.assertEqual(2, len(grad_debugger.gradient_tensors()))
    self.assertIs(u_grad, grad_debugger.gradient_tensor("u:0"))
    self.assertIsInstance(grad_debugger.gradient_tensor("w:0"), ops.Tensor)

    self.sess.run(variables.global_variables_initializer())
    self.assertAllClose(
        1.0, self.sess.run(grad_debugger.gradient_tensor("w:0")))
    self.assertAllClose(
        3.0, self.sess.run(grad_debugger.gradient_tensor("u:0")))

  def testWatchGradientsByXTensorsWorks(self):
    y = math_ops.add(self.w, -1.0, name="foo/y")
    z = math_ops.square(y, name="foo/z")

    # The construction of the forward graph has completed.
    # But we can still get the gradient tensors by using
    # watch_gradients_by_tensors().
    grad_debugger = debug_gradients.GradientsDebugger()
    with grad_debugger.watch_gradients_by_tensors(
        self.sess.graph, [self.w, self.u, y]):
      gradient_descent.GradientDescentOptimizer(0.1).minimize(z)

    self.assertEqual(3, len(grad_debugger.gradient_tensors()))
    u_grad = grad_debugger.gradient_tensor(self.u)
    w_grad = grad_debugger.gradient_tensor(self.w)
    y_grad = grad_debugger.gradient_tensor(y)

    self.sess.run(variables.global_variables_initializer())
    self.assertAllClose(10.0, self.sess.run(y_grad))
    self.assertAllClose(10.0, self.sess.run(w_grad))
    self.assertAllClose(30.0, self.sess.run(u_grad))

  def testWatchGradientsByTensorCanWorkOnMultipleLosses(self):
    y = math_ops.add(self.w, -1.0, name="y")
    z1 = math_ops.square(y, name="z1")
    z2 = math_ops.sqrt(y, name="z2")

    grad_debugger_1 = debug_gradients.GradientsDebugger()
    with grad_debugger_1.watch_gradients_by_tensors(self.sess.graph, y):
      gradient_descent.GradientDescentOptimizer(0.1).minimize(z1)

    grad_debugger_2 = debug_gradients.GradientsDebugger()
    with grad_debugger_2.watch_gradients_by_tensors(self.sess.graph, y):
      gradient_descent.GradientDescentOptimizer(0.1).minimize(z2)

    dz1_dy = grad_debugger_1.gradient_tensor(y)
    dz2_dy = grad_debugger_2.gradient_tensor(y)
    self.assertIsInstance(dz1_dy, ops.Tensor)
    self.assertIsInstance(dz2_dy, ops.Tensor)
    self.assertIsNot(dz1_dy, dz2_dy)

    self.sess.run(variables.global_variables_initializer())
    self.assertAllClose(5.0 ** 2, self.sess.run(z1))
    self.assertAllClose(5.0 ** 0.5, self.sess.run(z2))
    self.assertAllClose(2.0 * 5.0, self.sess.run(dz1_dy))
    self.assertAllClose(0.5 * (5.0 ** -0.5), self.sess.run(dz2_dy))

  def testGradientsValuesFromDumpWorks(self):
    y = math_ops.add(self.w, -1.0, name="y")
    z = math_ops.square(y, name="z")

    grad_debugger = debug_gradients.GradientsDebugger()
    with grad_debugger.watch_gradients_by_tensors(
        self.sess.graph, [self.w, self.u, y]):
      train_op = gradient_descent.GradientDescentOptimizer(0.1).minimize(z)

    self.sess.run(variables.global_variables_initializer())

    run_options = config_pb2.RunOptions(output_partition_graphs=True)
    dump_dir = tempfile.mkdtemp()
    debug_url = "file://" + dump_dir
    debug_utils.watch_graph(
        run_options,
        self.sess.graph,
        debug_urls=debug_url)
    run_metadata = config_pb2.RunMetadata()
    self.sess.run(train_op, options=run_options, run_metadata=run_metadata)

    dump = debug_data.DebugDumpDir(
        dump_dir, partition_graphs=run_metadata.partition_graphs)
    dump.set_python_graph(self.sess.graph)

    y_grad_values = debug_gradients.gradient_values_from_dump(
        grad_debugger, y, dump)
    self.assertEqual(1, len(y_grad_values))
    self.assertAllClose(10.0, y_grad_values[0])

    w_grad_values = debug_gradients.gradient_values_from_dump(
        grad_debugger, self.w, dump)
    self.assertEqual(1, len(w_grad_values))
    self.assertAllClose(10.0, w_grad_values[0])

    u_grad_values = debug_gradients.gradient_values_from_dump(
        grad_debugger, self.u, dump)
    self.assertEqual(1, len(u_grad_values))
    self.assertAllClose(30.0, u_grad_values[0])

    with self.assertRaisesRegexp(
        LookupError,
        r"This GradientsDebugger has not received any gradient tensor for "
        r"x-tensor v:0"):
      debug_gradients.gradient_values_from_dump(grad_debugger, self.v, dump)

    # Cleanup.
    shutil.rmtree(dump_dir)


if __name__ == "__main__":
  googletest.main()
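The gradient values asserted in the tests above follow from the chain rule. As a quick standalone check (plain Python, no TensorFlow), and assuming, as the assertions imply, that the setUp outside this excerpt defines `u = 2.0`, `v = 3.0`, and `w = u * v`:

```python
# Sanity check of the hand-computed gradients asserted in the tests.
# Assumed from the assertions (setUp is not in this excerpt):
#   u = 2.0, v = 3.0, w = u * v = 6.0, y = w - 1.0 = 5.0
u, v = 2.0, 3.0
w = u * v
y = w - 1.0

# z1 = y**2       =>  dz1/dy = 2*y            = 10.0
# z2 = sqrt(y)    =>  dz2/dy = 0.5 * y**-0.5
dz1_dy = 2.0 * y
dz2_dy = 0.5 * y ** -0.5
assert dz1_dy == 10.0
assert abs(dz2_dy - 0.5 * 5.0 ** -0.5) < 1e-12

# For z = y**2:  dz/dw = dz/dy * dy/dw = 10.0 * 1.0 = 10.0
#                dz/du = dz/dw * dw/du = 10.0 * v   = 30.0
# matching the 10.0 and 30.0 asserted in testGradientsValuesFromDumpWorks.
dz_dw = dz1_dy * 1.0
dz_du = dz_dw * v
assert dz_dw == 10.0
assert dz_du == 30.0
```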
@@ -72,7 +72,7 @@ RUN mkdir /bazel && \

 RUN git clone https://github.com/tensorflow/tensorflow.git && \
     cd tensorflow && \
-    git checkout r1.2
+    git checkout r1.3
 WORKDIR /tensorflow

 # TODO(craigcitro): Don't install the pip package, since it makes it
@@ -73,7 +73,7 @@ RUN mkdir /bazel && \

 RUN git clone https://github.com/tensorflow/tensorflow.git && \
     cd tensorflow && \
-    git checkout r1.2
+    git checkout r1.3
 WORKDIR /tensorflow

 # Configure the build for our CUDA configuration.
@@ -29,7 +29,7 @@ from setuptools.dist import Distribution
 # This version string is semver compatible, but incompatible with pip.
 # For pip, we will remove all '-' characters from this string, and use the
 # result for pip.
-_VERSION = '1.2.1'
+_VERSION = '1.3.0-rc0'

 REQUIRED_PACKAGES = [
     'numpy >= 1.11.0',
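The setup.py comment above notes that the version string handed to pip has all `'-'` characters removed. A quick illustration of that transformation (plain Python):

```python
# The semver-style version string from setup.py.
_VERSION = '1.3.0-rc0'

# For pip, every '-' is stripped, as the setup.py comment describes.
pip_version = _VERSION.replace('-', '')
assert pip_version == '1.3.0rc0'
```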