Merge changes from github.

Change: 135698415
This commit is contained in:
A. Unique TensorFlower 2016-10-10 10:26:22 -08:00 committed by TensorFlower Gardener
parent d1518c2653
commit edaf3b342d
147 changed files with 2070 additions and 1643 deletions


@@ -33,10 +33,10 @@ and discussion.**
People who are a little more adventurous can also try our nightly binaries:
* Linux CPU-only: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=cpu-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-0.10.0-cp27-none-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=cpu-slave)) / [Python 3.4](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=cpu-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-0.10.0-cp34-cp34m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=cpu-slave/)) / [Python 3.5](https://ci.tensorflow.org/view/Nightly/job/nightly-python35-linux-cpu/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-0.10.0-cp35-cp35m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-python35-linux-cpu/))
* Linux GPU: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-linux/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-0.10.0-cp27-none-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-linux/)) / [Python 3.4](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-linux/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-0.10.0-cp34-cp34m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-linux/)) / [Python 3.5](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3.5,label=gpu-linux/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-0.10.0-cp35-cp35m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3.5,label=gpu-linux/))
* Mac CPU-only: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=mac1-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-0.10.0-py2-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=mac1-slave/)) / [Python 3](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=mac1-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-0.10.0-py3-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=mac1-slave/))
* Mac GPU: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-mac/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-0.10.0-py2-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-mac/)) / [Python 3](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-mac/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-0.10.0-py3-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-mac/))
* Linux CPU-only: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=cpu-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-0.11.0rc0-cp27-none-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=cpu-slave)) / [Python 3.4](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=cpu-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-0.11.0rc0-cp34-cp34m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=cpu-slave/)) / [Python 3.5](https://ci.tensorflow.org/view/Nightly/job/nightly-python35-linux-cpu/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-0.11.0rc0-cp35-cp35m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-python35-linux-cpu/))
* Linux GPU: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-linux/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-0.11.0rc0-cp27-none-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-linux/)) / [Python 3.4](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-linux/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-0.11.0rc0-cp34-cp34m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-linux/)) / [Python 3.5](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3.5,label=gpu-linux/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-0.11.0rc0-cp35-cp35m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3.5,label=gpu-linux/))
* Mac CPU-only: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=mac-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-0.11.0rc0-py2-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=mac-slave/)) / [Python 3](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=mac-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-0.11.0rc0-py3-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=mac-slave/))
* Mac GPU: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-mac/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-0.11.0rc0-py2-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-mac/)) / [Python 3](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-mac/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-0.11.0rc0-py3-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-mac/))
* [Android](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-android/TF_BUILD_CONTAINER_TYPE=ANDROID,TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=NO_PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=android-slave/lastSuccessfulBuild/artifact/bazel-out/local_linux/bin/tensorflow/examples/android/tensorflow_demo.apk) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-android/TF_BUILD_CONTAINER_TYPE=ANDROID,TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=NO_PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=android-slave/))
#### *Try your first TensorFlow program*


@@ -1,7 +1,42 @@
# Changes since last release
# Release 0.11.0
## Breaking Changes to the API
## Major Features and Improvements
* cuDNN 5 support.
* HDFS Support.
* Adds Fused LSTM support via cuDNN 5 in `tensorflow/contrib/cudnn_rnn`.
* Improved support for NumPy style basic slicing including non-1 strides,
ellipses, newaxis, and negative indices. For example, complicated expressions
like `foo[1, 2:4, tf.newaxis, ..., :-3:-1, :]` are now supported. In addition
we have preliminary (non-broadcasting) support for sliced assignment to
variables. In particular one can write `var[1:3].assign([1,11,111])`.
* Deprecated `tf.op_scope` and `tf.variable_op_scope` in favor of a unified `tf.name_scope` and `tf.variable_scope`. The new argument order of `tf.variable_scope` is incompatible with previous versions.
* Introducing `core/util/tensor_bundle` module: a module to efficiently
serialize/deserialize tensors to disk. Will be used in TF's new checkpoint
format.
* Added tf.svd for computing the singular value decomposition (SVD) of dense
matrices or batches of matrices (CPU only).
* Added gradients for eigenvalues and eigenvectors computed using
`self_adjoint_eig` or `self_adjoint_eigvals`.
* Eliminated `batch_*` methods for most linear algebra and FFT ops and promoted
the non-batch version of the ops to handle batches of matrices.
* Tracing/timeline support for distributed runtime (no GPU profiler yet).
* C API gives access to inferred shapes with `TF_GraphGetTensorNumDims` and
`TF_GraphGetTensorShape`.
* Shape functions for core ops have moved to C++ via
`REGISTER_OP(...).SetShapeFn(...)`. Python shape inference RegisterShape calls
use the C++ shape functions with `common_shapes.call_cpp_shape_fn`. A future
release will remove `RegisterShape` from python.
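The NumPy-style basic-slicing bullet above reuses Python's extended-slice semantics; a minimal plain-Python sketch of the stride forms involved (no TensorFlow needed — `tf.newaxis` and `...` require real tensors, but strides already work on ordinary lists):

```python
# Plain-Python illustration of the extended-slice forms that the new
# Tensor indexing mirrors (NumPy basic slicing).
a = list(range(10))    # [0, 1, 2, ..., 9]

print(a[2:4])          # contiguous slice -> [2, 3]
print(a[::2])          # non-1 stride -> [0, 2, 4, 6, 8]
print(a[:-3:-1])       # negative stride, walking backward -> [9, 8]
```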
## Bug Fixes and Other Changes
* Documentation now includes operator overloads on Tensor and Variable.
* `tensorflow.__git_version__` now allows users to identify the version of the
code that TensorFlow was compiled with. We also have
`tensorflow.__git_compiler__` which identifies the compiler used to compile
TensorFlow's core.
* Improved multi-threaded performance of `batch_matmul`.
* LSTMCell, BasicLSTMCell, and MultiRNNCell constructors now default to
`state_is_tuple=True`. For a quick fix while transitioning to the new
default, simply pass the argument `state_is_tuple=False`.
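The `state_is_tuple` change above swaps one concatenated state vector for a `(c, h)` pair. A plain-Python sketch of the difference (the helper name is hypothetical, not the TF API):

```python
# Sketch: with the old default the LSTM state was one concatenated
# vector [c; h]; with state_is_tuple=True it is a (c, h) pair.
def split_lstm_state(concat_state, num_units):
    """Hypothetical helper: recover (c, h) from a concatenated state."""
    c = concat_state[:num_units]
    h = concat_state[num_units:]
    return (c, h)

print(split_lstm_state([0.1, 0.2, 0.3, 0.4], 2))  # ([0.1, 0.2], [0.3, 0.4])
```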
@@ -10,20 +45,45 @@
* Int32 elements of list(type) arguments are no longer placed in host memory by
default. If necessary, a list(type) argument to a kernel can be placed in host
memory using a HostMemory annotation.
* uniform_unit_scaling_initializer() no longer takes a full_shape arg, instead
relying on the partition info passed to the initializer function when it's
called.
* The NodeDef protocol message is now defined in its own file node_def.proto
instead of graph.proto.
* ops.NoGradient was renamed ops.NotDifferentiable. ops.NoGradient will
* `uniform_unit_scaling_initializer()` no longer takes a `full_shape` arg,
instead relying on the partition info passed to the initializer function when
it's called.
* The NodeDef protocol message is now defined in its own file `node_def.proto`
instead of `graph.proto`.
* `ops.NoGradient` was renamed `ops.NotDifferentiable`. `ops.NoGradient` will
be removed soon.
* dot.h / DotGraph was removed (it was an early analysis tool prior
* `dot.h` / DotGraph was removed (it was an early analysis tool prior
to TensorBoard, no longer that useful). It remains in history
should someone find the code useful.
* re2 / regexp.h was removed from being a public interface of TF.
Should users need regular expressions, they should depend on the RE2
library directly rather than via TensorFlow.
## Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
Abid K, @afshinrahimi, @AidanGG, Ajay Rao, Aki Sukegawa, Alex Rothberg,
Alexander Rosenberg Johansen, Andrew Gibiansky, Andrew Thomas, @Appleholic,
Bastiaan Quast, Ben Dilday, Bofu Chen, Brandon Amos, Bryon Gloden, Cissp®,
@chanis, Chenyang Liu, Corey Wharton, Daeyun Shin, Daniel Julius Lasiman, Daniel
Waterworth, Danijar Hafner, Darren Garvey, Denis Gorbachev, @DjangoPeng,
Egor-Krivov, Elia Palme, Eric Platon, Fabrizio Milo, Gaetan Semet,
Georg Nebehay, Gu Wang, Gustav Larsson, @haosdent, Harold Cooper, Hw-Zz,
@ichuang, Igor Babuschkin, Igor Macedo Quintanilha, Ilya Edrenkin, @ironhead,
Jakub Kolodziejczyk, Jennifer Guo, Jihun Choi, Jonas Rauber, Josh Bleecher
Snyder, @jpangburn, Jules Gagnon-Marchand, Karen Brems, @kborer, Kirill Bobyrev,
Laurent Mazare, Longqi Yang, Malith Yapa, Maniteja Nandana, Martin Englund,
Matthias Winkelmann, @mecab, Mu-Ik Jeon, Nand Dalal, Niels Ole Salscheider,
Nikhil Mishra, Park Jiin, Pieter De Rijk, @raix852, Ritwik Gupta, Sahil Sharma,
@Sangheum, @SergejsRk, Shinichiro Hamaji, Simon Denel, @Steve, @suiyuan2009,
Tiago Jorge, Tijmen Tieleman, @tvn, @tyfkda, Wang Yang, Wei-Ting Kuo, Wenjian
Huang, Yan Chen, @YenChenLin, Yuan (Terry) Tang, Yuncheng Li, Yunfeng Wang, Zack
Polizzi, @zhongzyd, Ziming Dong, @perhapszzy
We are also grateful to all who filed issues or helped resolve them, asked and
answered questions, and were part of inspiring discussions.
# Release 0.10.0
## Major Features and Improvements
@@ -36,7 +96,7 @@
* Full version of TF-Slim available as `tf.contrib.slim`
* Added k-Means clustering and WALS matrix factorization
## Big Fixes and Other Changes
## Bug Fixes and Other Changes
* Allow gradient computation for scalar values.
* Performance improvements for gRPC
@@ -58,8 +118,8 @@ This release contains contributions from many people at Google, as well as:
Alex Rothberg, Andrew Royer, Austin Marshall, @BlackCoal, Bob Adolf, Brian Diesel, Charles-Emmanuel Dias, @chemelnucfin, Chris Lesniewski, Daeyun Shin, Daniel Rodriguez, Danijar Hafner, Darcy Liu, Kristinn R. Thórisson, Daniel Castro, Dmitry Savintsev, Kashif Rasul, Dylan Paiton, Emmanuel T. Odeke, Ernest Grzybowski, Gavin Sherry, Gideon Dresdner, Gregory King, Harold Cooper, @heinzbeinz, Henry Saputra, Huarong Huo, Huazuo Gao, Igor Babuschkin, Igor Macedo Quintanilha, Ivan Ukhov, James Fysh, Jan Wilken Dörrie, Jihun Choi, Johnny Lim, Jonathan Raiman, Justin Francis, @lilac, Li Yi, Marc Khoury, Marco Marchesi, Max Melnick, Micael Carvalho, @mikowals, Mostafa Gazar, Nico Galoppo, Nishant Agrawal, Petr Janda, Yuncheng Li, @raix852, Robert Rose, @Robin-des-Bois, Rohit Girdhar, Sam Abrahams, satok16, Sergey Kishchenko, Sharkd Tu, @shotat, Siddharth Agrawal, Simon Denel, @sono-bfio, SunYeop Lee, Thijs Vogels, @tobegit3hub, @Undo1, Wang Yang, Wenjian Huang, Yaroslav Bulatov, Yuan Tang, Yunfeng Wang, Ziming Dong
We are also grateful to all who filed issues or helped resolve them, asked and
answered questions, and were part of inspiring discussions.
# Release 0.9.0
@@ -77,7 +137,7 @@ answered questions, and were part of inspiring discussions.
`tf.nn.rnn`, and the classes in `tf.nn.rnn_cell`).
* TensorBoard now has an Audio Dashboard, with associated audio summaries.
## Big Fixes and Other Changes
## Bug Fixes and Other Changes
* Turned on CuDNN Autotune.
* Added support for using third-party Python optimization algorithms (contrib.opt).
@@ -93,8 +153,8 @@ answered questions, and were part of inspiring discussions.
* Performance improvements
* Many bugfixes
* Many documentation fixes
* TensorBoard fixes: graphs with only one data point, Nan values,
reload button and auto-reload, tooltips in scalar charts, run
filtering, stable colors
* Tensorboard graph visualizer now supports run metadata. Clicking on nodes
while viewing a stats for a particular run will show runtime statistics, such
@@ -106,8 +166,8 @@ This release contains contributions from many people at Google, as well as:
Aaron Schumacher, Aidan Dang, Akihiko ITOH, Aki Sukegawa, Arbit Chen, Aziz Alto, Danijar Hafner, Erik Erwitt, Fabrizio Milo, Felix Maximilian Möller, Henry Saputra, Sung Kim, Igor Babuschkin, Jan Zikes, Jeremy Barnes, Jesper Steen Møller, Johannes Mayer, Justin Harris, Kashif Rasul, Kevin Robinson, Loo Rong Jie, Lucas Moura, Łukasz Bieniasz-Krzywiec, Mario Cho, Maxim Grechkin, Michael Heilman, Mostafa Rahmani, Mourad Mourafiq, @ninotoshi, Orion Reblitz-Richardson, Yuncheng Li, @raoqiyu, Robert DiPietro, Sam Abrahams, Sebastian Raschka, Siddharth Agrawal, @snakecharmer1024, Stephen Roller, Sung Kim, SunYeop Lee, Thijs Vogels, Till Hoffmann, Victor Melo, Ville Kallioniemi, Waleed Abdulla, Wenjian Huang, Yaroslav Bulatov, Yeison Rodriguez, Yuan Tang, Yuxin Wu, @zhongzyd, Ziming Dong, Zohar Jackson
We are also grateful to all who filed issues or helped resolve them, asked and
answered questions, and were part of inspiring discussions.
# Release 0.8.0
@@ -124,11 +184,11 @@ answered questions, and were part of inspiring discussions.
* Add an extension mechanism for adding network file system support
* TensorBoard displays metadata stats (running time, memory usage and device used) and tensor shapes
## Big Fixes and Other Changes
## Bug Fixes and Other Changes
* Utility for inspecting checkpoints
* Basic tracing and timeline support
* Allow building against cuDNN 5 (not incl. RNN/LSTM support)
* Added instructions and binaries for ProtoBuf library with fast serialization and without 64MB limit
* Added special functions
* `bool`-strictness: Tensors have to be explicitly compared to `None`
@@ -148,8 +208,8 @@ This release contains contributions from many people at Google, as well as:
Abhinav Upadhyay, Aggelos Avgerinos, Alan Wu, Alexander G. de G. Matthews, Aleksandr Yahnev, @amchercashin, Andy Kitchen, Aurelien Geron, Awni Hannun, @BanditCat, Bas Veeling, Cameron Chen, @cg31, Cheng-Lung Sung, Christopher Bonnett, Dan Becker, Dan Van Boxel, Daniel Golden, Danijar Hafner, Danny Goodman, Dave Decker, David Dao, David Kretch, Dongjoon Hyun, Dustin Dorroh, @e-lin, Eurico Doirado, Erik Erwitt, Fabrizio Milo, @gaohuazuo, Iblis Lin, Igor Babuschkin, Isaac Hodes, Isaac Turner, Iván Vallés, J Yegerlehner, Jack Zhang, James Wexler, Jan Zikes, Jay Young, Jeff Hodges, @jmtatsch, Johnny Lim, Jonas Meinertz Hansen, Kanit Wongsuphasawat, Kashif Rasul, Ken Shirriff, Kenneth Mitchner, Kenta Yonekura, Konrad Magnusson, Konstantin Lopuhin, @lahwran, @lekaha, @liyongsea, Lucas Adams, @makseq, Mandeep Singh, @manipopopo, Mark Amery, Memo Akten, Michael Heilman, Michael Peteuil, Nathan Daly, Nicolas Fauchereau, @ninotoshi, Olav Nymoen, @panmari, @papelita1234, Pedro Lopes, Pranav Sailesh Mani, RJ Ryan, Rob Culliton, Robert DiPietro, @ronrest, Sam Abrahams, Sarath Shekkizhar, Scott Graham, Sebastian Raschka, Sung Kim, Surya Bhupatiraju, Syed Ahmed, Till Hoffmann, @timsl, @urimend, @vesnica, Vlad Frolov, Vlad Zagorodniy, Wei-Ting Kuo, Wenjian Huang, William Dmitri Breaden Madden, Wladimir Schmidt, Yuan Tang, Yuwen Yan, Yuxin Wu, Yuya Kusakabe, @zhongzyd, @znah.
We are also grateful to all who filed issues or helped resolve them, asked and
answered questions, and were part of inspiring discussions.
# Release 0.7.1
@@ -175,12 +235,12 @@ answered questions, and were part of inspiring discussions.
* Allow using any installed Cuda >= 7.0 and cuDNN >= R2, and add support
for cuDNN R4
* Added a `contrib/` directory for unsupported or experimental features,
including higher level `layers` module
* Added an easy way to add and dynamically load user-defined ops
* Built out a good suite of tests, things should break less!
* Added `MetaGraphDef` which makes it easier to save graphs with metadata
* Added assignments for "Deep Learning with TensorFlow" udacity course
## Bug Fixes and Other Changes
@@ -270,8 +330,8 @@ Vlad Zavidovych, Yangqing Jia, Yi-Lin Juang, Yuxin Wu, Zachary Lipton,
Zero Chen, Alan Wu, @brchiu, @emmjaykay, @jalammar, @Mandar-Shinde,
@nsipplswezey, @ninotoshi, @panmari, @prolearner and @rizzomichaelg.
We are also grateful to all who filed issues or helped resolve them, asked and
answered questions, and were part of inspiring discussions.
# Release 0.6.0

configure

@@ -1,5 +1,8 @@
#!/usr/bin/env bash
set -e
set -o pipefail
# Find out the absolute path to where ./configure resides
pushd `dirname $0` #> /dev/null
SOURCE_BASE_DIR=`pwd -P`
@@ -14,7 +17,7 @@ function bazel_clean_and_fetch() {
while true; do
fromuser=""
if [ -z "$PYTHON_BIN_PATH" ]; then
default_python_bin_path=$(which python)
default_python_bin_path=$(which python || which python3 || true)
read -p "Please specify the location of python. [Default is $default_python_bin_path]: " PYTHON_BIN_PATH
fromuser="1"
if [ -z "$PYTHON_BIN_PATH" ]; then
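The `$(which python || which python3 || true)` change above makes the default resilient under `set -e`: take the first interpreter found, or fall back to empty instead of aborting. A Python analog of that fallback chain (the real logic is bash; this is just a sketch):

```python
# Python sketch of `which python || which python3 || true`:
# first hit wins, and a missing interpreter yields "" rather than an error.
import shutil

default_bin = shutil.which("python") or shutil.which("python3") or ""
print("default:", default_bin or "<none found>")
```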
@@ -47,7 +50,6 @@ while [ "$TF_NEED_GCP" == "" ]; do
done
if [ "$TF_NEED_GCP" == "1" ]; then
## Verify that libcurl header files are available.
# Only check Linux, since on MacOS the header files are installed with XCode.
if [[ $(uname -a) =~ Linux ]] && [[ ! -f "/usr/include/curl/curl.h" ]]; then
@@ -96,7 +98,7 @@ fi
echo "$SWIG_PATH" > tensorflow/tools/swig/swig_path
# Invoke python_config and set up symlinks to python includes
(./util/python/python_config.sh --setup "$PYTHON_BIN_PATH";) || exit -1
./util/python/python_config.sh --setup "$PYTHON_BIN_PATH"
# Run the gen_git_source to create links where bazel can track dependencies for
# git hash propagation
@@ -127,7 +129,7 @@ fi
while true; do
fromuser=""
if [ -z "$GCC_HOST_COMPILER_PATH" ]; then
default_gcc_host_compiler_path=$(which gcc)
default_gcc_host_compiler_path=$(which gcc || true)
read -p "Please specify which gcc should be used by nvcc as the host compiler. [Default is $default_gcc_host_compiler_path]: " GCC_HOST_COMPILER_PATH
fromuser="1"
if [ -z "$GCC_HOST_COMPILER_PATH" ]; then
@@ -214,18 +216,36 @@ while true; do
if [[ -z "$TF_CUDNN_VERSION" ]]; then
TF_CUDNN_EXT=""
cudnn_lib_path=""
cudnn_alt_lib_path=""
if [ "$OSNAME" == "Linux" ]; then
cudnn_lib_path="${CUDNN_INSTALL_PATH}/lib64/libcudnn.so"
cudnn_alt_lib_path="${CUDNN_INSTALL_PATH}/libcudnn.so"
elif [ "$OSNAME" == "Darwin" ]; then
cudnn_lib_path="${CUDNN_INSTALL_PATH}/lib/libcudnn.dylib"
cudnn_alt_lib_path="${CUDNN_INSTALL_PATH}/libcudnn.dylib"
fi
# Resolve to the SONAME of the symlink. Use readlink without -f
# to resolve exactly once to the SONAME. E.g, libcudnn.so ->
# libcudnn.so.4
REALVAL=`readlink ${CUDNN_INSTALL_PATH}/lib64/libcudnn.so`
# libcudnn.so.4.
# If the path is not a symlink, readlink will exit with an error code, so
# in that case, we return the path itself.
if [ -f "$cudnn_lib_path" ]; then
REALVAL=`readlink ${cudnn_lib_path} || echo "${cudnn_lib_path}"`
else
REALVAL=`readlink ${cudnn_alt_lib_path} || echo "${cudnn_alt_lib_path}"`
fi
# Extract the version of the SONAME, if it was indeed symlinked to
# the SONAME version of the file.
if [[ "$REALVAL" =~ .so[.]+([0-9]*) ]];
then
if [[ "$REALVAL" =~ .so[.]+([0-9]*) ]]; then
TF_CUDNN_EXT="."${BASH_REMATCH[1]}
TF_CUDNN_VERSION=${BASH_REMATCH[1]}
echo "libcudnn.so resolves to libcudnn${TF_CUDNN_EXT}"
elif [[ "$REALVAL" =~ ([0-9]*).dylib ]]; then
TF_CUDNN_EXT=${BASH_REMATCH[1]}".dylib"
TF_CUDNN_VERSION=${BASH_REMATCH[1]}
echo "libcudnn.dylib resolves to libcudnn${TF_CUDNN_EXT}"
fi
else
TF_CUDNN_EXT=".$TF_CUDNN_VERSION"
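The SONAME-resolution logic above (one-level `readlink`, then extract the version suffix) can be exercised standalone. A Python sketch of the same steps, using a throwaway temp directory rather than a real cuDNN install; the regex is a tightened equivalent of the bash `.so[.]+([0-9]*)` pattern:

```python
# Python sketch of the configure script's SONAME resolution (the real
# logic is bash; the file here is a dummy, not an actual cuDNN library).
import os
import re
import tempfile

d = tempfile.mkdtemp()
open(os.path.join(d, "libcudnn.so.5"), "w").close()
os.symlink("libcudnn.so.5", os.path.join(d, "libcudnn.so"))

# os.readlink resolves exactly one level, like `readlink` without -f,
# so the symlink target is the SONAME itself.
realval = os.readlink(os.path.join(d, "libcudnn.so"))
m = re.search(r"\.so\.([0-9]+)", realval)
print("libcudnn.so resolves to libcudnn.so." + m.group(1))
```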


@@ -1544,7 +1544,7 @@ TF_Operation* TF_GraphOperationByName(TF_Graph* graph, const char* oper_name) {
TF_Operation* TF_GraphNextOperation(TF_Graph* graph, size_t* pos) {
if (*pos == 0) {
// Advance past the first sentinal nodes in every graph (the source & sink).
// Advance past the first sentinel nodes in every graph (the source & sink).
*pos += 2;
} else {
// Advance to the next node.


@@ -37,7 +37,7 @@ typedef Status (*GradFunc)(const Scope& scope, const Operation& op,
class GradOpRegistry {
public:
// Registers 'func' as the the gradient function for 'op'.
// Returns true if registration was succesful, check fails otherwise.
// Returns true if registration was successful, check fails otherwise.
bool Register(const string& op, GradFunc func);
// Sets 'func' to the gradient function for 'op' and returns Status OK if


@@ -47,7 +47,7 @@ Status ComputeTheoreticalJacobianTranspose(
auto dy_data_flat = dy_data.flat<T>();
dy_data_flat.setZero();
// Compute the theoretical Jacobian one row at a time by backproping '1.0'
// Compute the theoretical Jacobian one row at a time by back propagating '1.0'
// for each element of 'dy', while holding all other elements of 'dy' at zero.
ClientSession session(scope);
std::vector<Tensor> dxout;
@@ -133,7 +133,7 @@ Status ComputeGradientError(const Scope& scope, const ops::Output& x,
TF_RETURN_IF_ERROR(ComputeTheoreticalJacobianTranspose<T>(
scope, x, x_shape, x_data, y, y_shape, &jacobian_t));
// Inititalize numeric Jacobian to zeros.
// Initialize numeric Jacobian to zeros.
Tensor jacobian_n(x.type(), {x_size, y_size});
auto jacobian_n_flat = jacobian_n.flat<T>();
jacobian_n_flat.setZero();


@@ -95,7 +95,7 @@ class Input {
// constants such as simple primitive constants and nested initializer lists
// representing a multi-dimensional array. Initializer constructors are all
// templates, so the aforementioned kinds of C++ constants can be used to
// construct an Initializer. Intializer stores the value it got constructed
// construct an Initializer. Initializer stores the value it got constructed
// with in a Tensor object.
struct Initializer {
// Construct from a scalar value of an arithmetic type or a type that can be
@@ -156,7 +156,7 @@ class Input {
}
// Construct a multi-dimensional tensor from a nested initializer list. Note
// that C++ syntax allows nesting of arbitrarily typed intializer lists, so
// that C++ syntax allows nesting of arbitrarily typed initializer lists, so
// such invalid initializers cannot be disallowed at compile time. This
// function performs checks to make sure that the nested initializer list is
// indeed a valid multi-dimensional tensor.


@@ -15,10 +15,13 @@ cmake_policy(SET CMP0022 NEW)
# Options
option(tensorflow_VERBOSE "Enable for verbose output" OFF)
option(tensorflow_BUILD_TESTS "Build tests" ON)
option(tensorflow_ENABLE_SSL_SUPPORT "Enable boringssl support" OFF)
option(tensorflow_ENABLE_GRPC_SUPPORT "Enable gRPC support" ON)
option(tensorflow_BUILD_CC_EXAMPLE "Build the C++ tutorial example" ON)
option(tensorflow_BUILD_PYTHON_BINDINGS "Build the Python bindings" ON)
option(tensorflow_BUILD_ALL_KERNELS "Build all OpKernels" ON)
option(tensorflow_BUILD_CONTRIB_KERNELS "Build OpKernels from tensorflow/contrib/..." ON)
#Threads: defines CMAKE_THREAD_LIBS_INIT and adds -pthread compile option for
# targets that link ${CMAKE_THREAD_LIBS_INIT}.
@@ -42,6 +45,15 @@ set (DOWNLOAD_LOCATION "${CMAKE_CURRENT_BINARY_DIR}/downloads"
mark_as_advanced(DOWNLOAD_LOCATION)
set(CMAKE_POSITION_INDEPENDENT_CODE ON)
add_definitions(-DEIGEN_AVOID_STL_ARRAY)
if(WIN32)
add_definitions(-DNOMINMAX -D_WIN32_WINNT=0x0A00 -DLANG_CXX11 -DCOMPILER_MSVC -D__VERSION__=\"MSVC\")
set(CMAKE_CXX_FLAGS ${CMAKE_CXX_FLAGS} /MP)
endif()
if ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "GNU")
set(CMAKE_CXX_FLAGS ${CMAKE_CXX_FLAGS} "-fno-exceptions -std=c++11")
endif()
# External dependencies
include(gif)
@@ -49,35 +61,76 @@ include(png)
include(jpeg)
include(eigen)
include(jsoncpp)
if(tensorflow_ENABLE_SSL_SUPPORT)
include(boringssl)
endif()
include(farmhash)
include(highwayhash)
include(protobuf)
include(grpc)
find_package(ZLIB REQUIRED)
set(tensorflow_EXTERNAL_LIBRARIES
${gif_STATIC_LIBRARIES}
${png_STATIC_LIBRARIES}
${jpeg_STATIC_LIBRARIES}
${jsoncpp_STATIC_LIBRARIES}
${farmhash_STATIC_LIBRARIES}
${highwayhash_STATIC_LIBRARIES}
${protobuf_STATIC_LIBRARIES}
${ZLIB_LIBRARIES}
)
set(tensorflow_EXTERNAL_DEPENDENCIES
gif_copy_headers_to_destination png_copy_headers_to_destination jpeg_copy_headers_to_destination jsoncpp farmhash_copy_headers_to_destination highwayhash_copy_headers_to_destination protobuf eigen)
include_directories(
# Source and generated code.
${tensorflow_source_dir}
${CMAKE_CURRENT_BINARY_DIR}
# External dependencies.
${gif_INCLUDE_DIR}
${png_INCLUDE_DIR}
${jpeg_INCLUDE_DIR}
${eigen_INCLUDE_DIRS}
${jsoncpp_INCLUDE_DIR}
${farmhash_INCLUDE_DIR}
${highwayhash_INCLUDE_DIR}
${PROTOBUF_INCLUDE_DIRS}
${ZLIB_INCLUDE_DIRS}
)
if(tensorflow_ENABLE_SSL_SUPPORT)
include(boringssl)
list(APPEND tensorflow_EXTERNAL_LIBRARIES ${boringssl_STATIC_LIBRARIES})
list(APPEND tensorflow_EXTERNAL_DEPENDENCIES boringssl)
include_directories(${boringssl_INCLUDE_DIR})
endif()
if(tensorflow_ENABLE_GRPC_SUPPORT)
include(grpc)
list(APPEND tensorflow_EXTERNAL_LIBRARIES ${grpc_STATIC_LIBRARIES})
list(APPEND tensorflow_EXTERNAL_DEPENDENCIES grpc)
include_directories(${GRPC_INCLUDE_DIRS})
endif()
if(WIN32)
list(APPEND tensorflow_EXTERNAL_LIBRARIES wsock32 ws2_32 shlwapi)
endif()
if(UNIX)
list(APPEND tensorflow_EXTERNAL_LIBRARIES ${CMAKE_THREAD_LIBS_INIT} ${CMAKE_DL_LIBS})
endif()
# Let's get to work!
include(tf_core_framework.cmake)
include(tf_tools.cmake)
# NOTE: Disabled until issue #3996 is fixed.
# include(tf_stream_executor.cmake)
include(tf_core_cpu.cmake)
include(tf_models.cmake)
include(tf_core_ops.cmake)
include(tf_core_direct_session.cmake)
include(tf_core_distributed_runtime.cmake)
if(tensorflow_ENABLE_GRPC_SUPPORT)
include(tf_core_distributed_runtime.cmake)
endif()
include(tf_core_kernels.cmake)
include(tf_cc_ops.cmake)
include(tf_tools.cmake)
if(tensorflow_BUILD_CC_EXAMPLE)
include(tf_tutorials.cmake)
endif()
if(tensorflow_BUILD_PYTHON_BINDINGS)
include(tf_python.cmake)
endif()
if (tensorflow_BUILD_TESTS)
include(tests.cmake)
endif (tensorflow_BUILD_TESTS)
include(install.cmake)


@@ -1,283 +1,218 @@
This directory contains *CMake* files that can be used to build TensorFlow
core library.
TensorFlow CMake build
======================
This directory contains CMake files for building TensorFlow on Microsoft
Windows. [CMake](https://cmake.org) is a cross-platform tool that can
generate build scripts for multiple build systems, including Microsoft
Visual Studio.
**N.B.** We provide Linux build instructions primarily for the purpose of
testing the build. We recommend using the standard Bazel-based build on
Linux.
Current Status
--------------
The CMake build is not yet ready for general use!
The CMake files in this directory can build the core TensorFlow runtime, an
example C++ binary, and a PIP package containing the runtime and Python
bindings. Currently, only CPU builds are supported, but we are working on
providing a GPU build as well.
We are actively working on CMake support. Please help us improve it;
pull requests are welcome!
Note: Windows support is in an **alpha** state, and we welcome your feedback.
### Pre-requisites
Linux CMake + Docker (very simple)
----------------------------------
* CMake version 3.1 or later
```bash
git clone --recursive https://github.com/tensorflow/tensorflow.git
cd tensorflow
tensorflow/tools/ci_build/ci_build.sh CPU tensorflow/tools/ci_build/builds/cmake.sh
```
* [Git](http://git-scm.com)
That's it. Dependencies included. Otherwise read the rest of this readme...
* [SWIG](http://www.swig.org/download.html)
* Additional pre-requisites for Microsoft Windows:
- Visual Studio 2015
- Python 3.5
- NumPy 1.11.0 or later
Prerequisites
=============
* Additional pre-requisites for Linux:
- Python 2.7 or later
- [Docker](https://www.docker.com/) (for automated testing)
- NumPy 1.11.0 or later
You need to have [CMake](http://www.cmake.org) and [Git](http://git-scm.com)
installed on your computer before proceeding.
### Known-good configurations
Most of the instructions will be given for the *Command Prompt*, but the same
actions can be performed using the appropriate GUI tools.
* Microsoft Windows 10
- Microsoft Visual Studio Enterprise 2015 with Visual C++ 2015
- [Anaconda 4.1.1 (Python 3.5 64-bit)](https://www.continuum.io/downloads)
- [Git for Windows version 2.9.2.windows.1](https://git-scm.com/download/win)
- [swigwin-3.0.10](http://www.swig.org/download.html)
* Ubuntu 14.04
- Makefile generator
- Docker 1.9.1 (for automated testing)
Environment Setup
=================
### Current known limitations
Open the appropriate *Command Prompt* from the *Start* menu.
* CPU support only
For example *VS2013 x64 Native Tools Command Prompt*:
- We are in the process of porting the GPU code in
`tensorflow/stream_executor` to build with CMake and work on non-POSIX
platforms.
C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin\amd64>
* Additional limitations for the Windows build:
Change to your working directory:
- The Python package supports **Python 3.5 only**, because that is the only
version for which standard Python binaries exist and those binaries are
compatible with the TensorFlow runtime. (On Windows, the standard Python
binaries for versions earlier than 3.5 were compiled with older compilers
that do not have all of the features (e.g. C++11 support) needed to compile
TensorFlow. We welcome patches for making TensorFlow work with Python 2.7
on Windows, but have not yet committed to supporting that configuration.)
C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin\amd64>cd C:\Path\to
C:\Path\to>
- The following Python APIs are not currently implemented:
* Loading custom op libraries via `tf.load_op_library()`.
* Path manipulation functions (such as `tf.gfile.ListDirectory()`) are not
functional.
Where *C:\Path\to* is the path to your real working directory.
- The `tf.contrib` libraries are not currently included in the PIP package.
Create a folder where TensorFlow headers/libraries/binaries will be installed
after they are built:
- The following operations are not currently implemented:
* `DepthwiseConv2dNative`
* `Digamma`
* `Erf`
* `Erfc`
* `Igamma`
* `Igammac`
* `ImmutableConst`
* `Lgamma`
* `Polygamma`
* `SparseMatmul`
* `Zeta`
C:\Path\to>mkdir install
- Google Cloud Storage support is not currently implemented. The GCS library
currently depends on `libcurl` and `boringssl`, and the Windows version
could use standard Windows APIs for making HTTP requests and cryptography
(for OAuth). Contributions are welcome for this feature.
If the *cmake* command is not available from the *Command Prompt*, add it to
the system *PATH* variable:
We are actively working on improving CMake and Windows support, and addressing
these limitations. We would appreciate pull requests that implement missing
ops or APIs.
C:\Path\to>set PATH=%PATH%;C:\Program Files (x86)\CMake\bin
If the *git* command is not available from the *Command Prompt*, add it to
the system *PATH* variable:
C:\Path\to>set PATH=%PATH%;C:\Program Files\Git\cmd
Good. Now you are ready to continue.
Getting Sources
===============
You can get the latest stable source packages from the
[releases](https://github.com/tensorflow/tensorflow/releases) page.
Or you can type:
C:\Path\to> git clone --recursive -b [release_tag] https://github.com/tensorflow/tensorflow.git
Where *[release_tag]* is a git tag like *v0.6.0* or a branch name like *master*
if you want to get the latest code.
Go to the project folder:
C:\Path\to>cd tensorflow
C:\Path\to\tensorflow>
Now go to *tensorflow\contrib\cmake* folder in TensorFlow's contrib sources:
C:\Path\to\tensorflow>cd tensorflow\contrib\cmake
C:\Path\to\tensorflow\tensorflow\contrib\cmake>
Good. Now you are ready to configure *CMake*.
CMake Configuration
===================
*CMake* supports a lot of different
[generators](http://www.cmake.org/cmake/help/latest/manual/cmake-generators.7.html)
for various native build systems. We are only interested in
[Makefile](http://www.cmake.org/cmake/help/latest/manual/cmake-generators.7.html#makefile-generators)
and
[Visual Studio](http://www.cmake.org/cmake/help/latest/manual/cmake-generators.7.html#visual-studio-generators)
generators.
We will use shadow building to separate the temporary files from the TensorFlow
source code.
Create a temporary *build* folder and change your working directory to it:
C:\Path\to\tensorflow\tensorflow\contrib\cmake>mkdir build & cd build
C:\Path\to\tensorflow\tensorflow\contrib\cmake\build>
The *Makefile* generator can build the project in only one configuration, so
you need to create a separate build folder for each configuration.
To start using a *Release* configuration:
[...]\contrib\cmake\build>mkdir release & cd release
[...]\contrib\cmake\build\release>cmake -G "NMake Makefiles" ^
-DCMAKE_BUILD_TYPE=Release ^
-DCMAKE_INSTALL_PREFIX=../../../../../../install ^
../..
This will generate an *nmake* *Makefile* in the current directory.
To use *Debug* configuration:
[...]\contrib\cmake\build>mkdir debug & cd debug
[...]\contrib\cmake\build\debug>cmake -G "NMake Makefiles" ^
-DCMAKE_BUILD_TYPE=Debug ^
-DCMAKE_INSTALL_PREFIX=../../../../../../install ^
../..
This will generate an *nmake* *Makefile* in the current directory.
To create *Visual Studio* solution file:
[...]\contrib\cmake\build>mkdir solution & cd solution
[...]\contrib\cmake\build\solution>cmake -G "Visual Studio 12 2013 Win64" ^
-DCMAKE_INSTALL_PREFIX=../../../../../../install ^
../..
This will generate the *Visual Studio* solution file *tensorflow.sln* in the
current directory.
If the *gmock* directory does not exist, or you do not want to build the
TensorFlow unit tests, add the *cmake* command argument
`-Dtensorflow_BUILD_TESTS=OFF` to disable testing.
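For example, a *Release* configuration with testing disabled might be generated
like this (paths illustrative, not from the original document):

```
[...]\contrib\cmake\build\release>cmake -G "NMake Makefiles" ^
-DCMAKE_BUILD_TYPE=Release ^
-DCMAKE_INSTALL_PREFIX=../../../../../../install ^
-Dtensorflow_BUILD_TESTS=OFF ^
../..
```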
Compiling
=========
To compile TensorFlow:
[...]\contrib\cmake\build\release>nmake
or
[...]\contrib\cmake\build\debug>nmake
And wait for the compilation to finish.
If you prefer to use the IDE:
* Open the generated tensorflow.sln file in Microsoft Visual Studio.
* Choose "Debug" or "Release" configuration as desired.
* From the Build menu, choose "Build Solution".
And wait for the compilation to finish.
Testing
=======
To run the unit tests:
[...]\contrib\cmake\build\release>nmake check
or
[...]\contrib\cmake\build\debug>nmake check
You can also build the *check* project from the Visual Studio solution.
Yes, it may sound strange, but it works.
You should see an output similar to:
Running main() from gmock_main.cc
[==========] Running 1546 tests from 165 test cases.
...
[==========] 1546 tests from 165 test cases ran. (2529 ms total)
[ PASSED ] 1546 tests.
To run specific tests:
C:\Path\to\tensorflow>tensorflow\contrib\cmake\build\release\tests.exe ^
--gtest_filter=AnyTest*
Running main() from gmock_main.cc
Note: Google Test filter = AnyTest*
[==========] Running 3 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 3 tests from AnyTest
[ RUN ] AnyTest.TestPackAndUnpack
[ OK ] AnyTest.TestPackAndUnpack (0 ms)
[ RUN ] AnyTest.TestPackAndUnpackAny
[ OK ] AnyTest.TestPackAndUnpackAny (0 ms)
[ RUN ] AnyTest.TestIs
[ OK ] AnyTest.TestIs (0 ms)
[----------] 3 tests from AnyTest (1 ms total)
[----------] Global test environment tear-down
[==========] 3 tests from 1 test case ran. (2 ms total)
[ PASSED ] 3 tests.
Note that the tests must be run from the source folder.
If all tests pass, you can safely continue.
Installing
==========
To install TensorFlow to the specified *install* folder:
[...]\contrib\cmake\build\release>nmake install
or
[...]\contrib\cmake\build\debug>nmake install
You can also build the *INSTALL* project from the Visual Studio solution.
This sounds less strange, and it also works.
This will create the following folders under the *install* location:
* bin - contains TensorFlow binaries;
* include - contains C++ headers and TensorFlow *.proto files;
* lib - contains linking libraries and *CMake* configuration files for the
*tensorflow* package.
Now, if needed, you can:
* Copy the contents of the include directory to wherever you want to put
headers.
* Copy binaries to wherever you put build tools (probably somewhere in your
PATH).
* Copy the linking libraries libtensorflow[d].lib to wherever you put
libraries.
To avoid conflicts between the MSVC debug and release runtime libraries, when
compiling a debug build of your application you may need to link against the
debug build, libtensorflowd.lib (note the "d" suffix). Similarly, release
builds should link against the release libtensorflow.lib library.
DLLs vs. static linking
=======================
Static linking is now the default for the TensorFlow libraries. Due to
issues with Win32's use of a separate heap for each DLL, as well as binary
compatibility issues between different versions of MSVC's STL library, it is
recommended that you use static linkage only. However, it is possible to
build libtensorflow as DLLs if you really want. To do this, do the following:
* Add an additional flag `-Dtensorflow_BUILD_SHARED_LIBS=ON` when invoking
cmake
* Follow the same steps as described in the above section.
* When compiling your project, make sure to `#define TENSORFLOW_USE_DLLS`.
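As an illustration (not from the original document), a hypothetical consumer
source file would define the macro before including any TensorFlow header;
the header path shown is illustrative:

```cpp
// Define before ANY TensorFlow header is included so that the DLL
// import/export annotations are selected; alternatively, pass
// /DTENSORFLOW_USE_DLLS on the MSVC command line for every translation unit.
#define TENSORFLOW_USE_DLLS
#include "tensorflow/core/public/session.h"
```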
When distributing your software to end users, we strongly recommend that you
do NOT install libtensorflow.dll to any shared location.
Instead, keep these libraries next to your binaries, in your application's
own install directory. C++ makes it very difficult to maintain binary
compatibility between releases, so it is likely that future versions of these
libraries will *not* be usable as drop-in replacements.
If your project is itself a DLL intended for use by third-party software, we
recommend that you do NOT expose TensorFlow objects in your library's
public interface, and that you statically link them into your library.
Notes on Compiler Warnings
Step-by-step Windows build
==========================
The following warnings have been disabled while building the tensorflow
libraries and binaries. You may have to disable some of them in your own
project as well, or live with them.
1. Install the pre-requisites detailed above, and set up your environment.
* [TODO]
* The following commands assume that you are using the Windows Command
Prompt (`cmd.exe`). You will need to set up your environment to use the
appropriate toolchain, i.e. the 64-bit tools. (Some of the binary targets
we will build are too large for the 32-bit tools, and they will fail with
out-of-memory errors.) The typical command to set up your
environment is:
```
D:\temp> "C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\bin\amd64\vcvarsall.bat"
```
* We assume that `cmake` and `git` are installed and in your `%PATH%`. If
for example `cmake` is not in your path and it is installed in
`C:\Program Files (x86)\CMake\bin\cmake.exe`, you can add this directory
to your `%PATH%` as follows:
```
D:\temp> set PATH=%PATH%;C:\Program Files (x86)\CMake\bin
```
2. Clone the TensorFlow repository and create a working directory for your
build:
```
D:\temp> git clone https://github.com/tensorflow/tensorflow.git
D:\temp> cd tensorflow\tensorflow\contrib\cmake
D:\temp\tensorflow\tensorflow\contrib\cmake> mkdir build
D:\temp\tensorflow\tensorflow\contrib\cmake> cd build
D:\temp\tensorflow\tensorflow\contrib\cmake\build>
```
3. Invoke CMake to create Visual Studio solution and project files.
**N.B.** This assumes that `cmake.exe` is in your `%PATH%` environment
variable. The other paths are for illustrative purposes only, and may
be different on your platform. The `^` character is a line continuation
and must be the last character on each line.
```
D:\...\build> cmake .. -A x64 -DCMAKE_BUILD_TYPE=Release ^
More? -DSWIG_EXECUTABLE=C:/tools/swigwin-3.0.10/swig.exe ^
More? -DPYTHON_EXECUTABLE=C:/Users/%USERNAME%/AppData/Local/Continuum/Anaconda3/python.exe ^
More? -DPYTHON_LIBRARIES=C:/Users/%USERNAME%/AppData/Local/Continuum/Anaconda3/libs/python35.lib
```
Note that the `-DCMAKE_BUILD_TYPE=Release` flag must match the build
configuration that you choose when invoking `msbuild`. The known-good
values are `Release` and `RelWithDebInfo`. The `Debug` build type is
not currently supported, because it relies on a `Debug` library for
Python (`python35d.lib`) that is not distributed by default.
There are various options that can be specified when generating the
solution and project files:
* `-DCMAKE_BUILD_TYPE=(Release|RelWithDebInfo)`: Note that the
`CMAKE_BUILD_TYPE` option must match the build configuration that you
choose when invoking MSBuild in step 4. The known-good values are
`Release` and `RelWithDebInfo`. The `Debug` build type is not currently
supported, because it relies on a `Debug` library for Python
(`python35d.lib`) that is not distributed by default.
* `-Dtensorflow_BUILD_ALL_KERNELS=(ON|OFF)`. Defaults to `ON`. You can
build a small subset of the kernels for a faster build by setting this
option to `OFF`.
* `-Dtensorflow_BUILD_CC_EXAMPLE=(ON|OFF)`. Defaults to `ON`. Generate
project files for a simple C++
[example training program](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/cc/tutorials/example_trainer.cc).
* `-Dtensorflow_BUILD_PYTHON_BINDINGS=(ON|OFF)`. Defaults to `ON`. Generate
project files for building a PIP package containing the TensorFlow runtime
and its Python bindings.
* `-Dtensorflow_ENABLE_GRPC_SUPPORT=(ON|OFF)`. Defaults to `ON`. Include
gRPC support and the distributed client and server code in the TensorFlow
runtime.
* `-Dtensorflow_ENABLE_SSL_SUPPORT=(ON|OFF)`. Defaults to `OFF`. Include
SSL support (for making secure HTTP requests) in the TensorFlow runtime.
This support is incomplete, and will be used for Google Cloud Storage
support.
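As an illustration (not from the original document), a faster, minimal
configuration might combine several of these options in one invocation:

```
D:\...\build> cmake .. -A x64 -DCMAKE_BUILD_TYPE=Release ^
More? -Dtensorflow_BUILD_ALL_KERNELS=OFF ^
More? -Dtensorflow_BUILD_CC_EXAMPLE=OFF ^
More? -Dtensorflow_ENABLE_GRPC_SUPPORT=OFF
```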
4. Invoke MSBuild to build TensorFlow.
To build the C++ example program, which will be created as a `.exe`
executable in the subdirectory `.\Release`:
```
D:\...\build> MSBuild /p:Configuration=Release tf_tutorials_example_trainer.vcxproj
D:\...\build> Release\tf_tutorials_example_trainer.exe
```
To build the PIP package, which will be created as a `.whl` file in the
subdirectory `.\tf_python\dist`:
```
D:\...\build> MSBuild /p:Configuration=Release tf_python_build_pip_package.vcxproj
```
Linux Continuous Integration build
==================================
This build requires [Docker](https://www.docker.com/) to be installed on the
local machine.
```bash
$ git clone --recursive https://github.com/tensorflow/tensorflow.git
$ cd tensorflow
$ tensorflow/tools/ci_build/ci_build.sh CMAKE tensorflow/tools/ci_build/builds/cmake.sh
```
That's it. Dependencies included.

View File

@ -1,6 +1,6 @@
include (ExternalProject)
set(farmhash_INCLUDE_DIR ${CMAKE_CURRENT_BINARY_DIR}/external/farmhash_archive ${CMAKE_CURRENT_BINARY_DIR}/external/farmhash_archive/util)
set(farmhash_URL https://github.com/google/farmhash/archive/34c13ddfab0e35422f4c3979f360635a8c050260.zip)
set(farmhash_HASH SHA256=e3d37a59101f38fd58fb799ed404d630f0eee18bfc2a2433910977cc8fea9c28)
set(farmhash_BUILD ${CMAKE_BINARY_DIR}/farmhash/src/farmhash)

View File

@ -4,28 +4,58 @@ set(gif_INCLUDE_DIR ${CMAKE_CURRENT_BINARY_DIR}/external/gif_archive/giflib-5.1.
set(gif_URL http://ufpr.dl.sourceforge.net/project/giflib/giflib-5.1.4.tar.gz)
set(gif_HASH SHA256=34a7377ba834397db019e8eb122e551a49c98f49df75ec3fcc92b9a794a4f6d1)
set(gif_INSTALL ${CMAKE_BINARY_DIR}/gif/install)
set(gif_BUILD ${CMAKE_BINARY_DIR}/gif/src/gif)
set(gif_HEADERS
"${gif_INSTALL}/include/gif_lib.h"
)
if(WIN32)
set(gif_STATIC_LIBRARIES ${gif_INSTALL}/lib/giflib.lib)
ExternalProject_Add(gif
PREFIX gif
URL ${gif_URL}
URL_HASH ${gif_HASH}
PATCH_COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_SOURCE_DIR}/patches/gif/CMakeLists.txt ${gif_BUILD}
INSTALL_DIR ${gif_INSTALL}
DOWNLOAD_DIR "${DOWNLOAD_LOCATION}"
CMAKE_CACHE_ARGS
-DCMAKE_BUILD_TYPE:STRING=Release
-DCMAKE_VERBOSE_MAKEFILE:BOOL=OFF
-DCMAKE_INSTALL_PREFIX:STRING=${gif_INSTALL}
)
ExternalProject_Add_Step(gif copy_unistd
COMMAND ${CMAKE_COMMAND} -E copy
${CMAKE_SOURCE_DIR}/patches/gif/unistd.h ${gif_BUILD}/lib/unistd.h
DEPENDEES patch
DEPENDERS build
)
else()
set(gif_STATIC_LIBRARIES ${gif_INSTALL}/lib/libgif.a)
set(ENV{CFLAGS} "$ENV{CFLAGS} -fPIC")
ExternalProject_Add(gif
PREFIX gif
URL ${gif_URL}
URL_HASH ${gif_HASH}
INSTALL_DIR ${gif_INSTALL}
DOWNLOAD_DIR "${DOWNLOAD_LOCATION}"
BUILD_COMMAND $(MAKE)
INSTALL_COMMAND $(MAKE) install
CONFIGURE_COMMAND
${CMAKE_CURRENT_BINARY_DIR}/gif/src/gif/configure
--with-pic
--prefix=${gif_INSTALL}
--enable-shared=yes
)
endif()
# put gif includes in the directory where they are expected
add_custom_target(gif_create_destination_dir

View File

@ -6,12 +6,12 @@ set(GRPC_BUILD ${CMAKE_CURRENT_BINARY_DIR}/grpc/src/grpc)
set(GRPC_TAG 3bc78cd0b5bd784a235c01612d634b1ec5f8fb97)
if(WIN32)
set(grpc_STATIC_LIBRARIES
${CMAKE_CURRENT_BINARY_DIR}/grpc/src/grpc/${CMAKE_BUILD_TYPE}/grpc++_unsecure.lib
${CMAKE_CURRENT_BINARY_DIR}/grpc/src/grpc/${CMAKE_BUILD_TYPE}/grpc_unsecure.lib
${CMAKE_CURRENT_BINARY_DIR}/grpc/src/grpc/${CMAKE_BUILD_TYPE}/gpr.lib)
else()
set(grpc_STATIC_LIBRARIES
${CMAKE_CURRENT_BINARY_DIR}/grpc/src/grpc/libgrpc++_unsecure.a
${CMAKE_CURRENT_BINARY_DIR}/grpc/src/grpc/libgrpc_unsecure.a
${CMAKE_CURRENT_BINARY_DIR}/grpc/src/grpc/libgpr.a)
@ -30,6 +30,6 @@ ExternalProject_Add(grpc
-DCMAKE_BUILD_TYPE:STRING=Release
-DCMAKE_VERBOSE_MAKEFILE:BOOL=OFF
-DPROTOBUF_INCLUDE_DIRS:STRING=${PROTOBUF_INCLUDE_DIRS}
-DPROTOBUF_LIBRARIES:STRING=${protobuf_STATIC_LIBRARIES}
)

View File

@ -17,41 +17,23 @@ add_custom_target(highwayhash_copy_headers_to_destination
if(WIN32)
set(highwayhash_HEADERS "${highwayhash_BUILD}/highwayhash/*.h")
set(highwayhash_STATIC_LIBRARIES ${highwayhash_INSTALL}/lib/highwayhash.lib)
else()
set(highwayhash_HEADERS "${highwayhash_BUILD}/highwayhash/*.h")
set(highwayhash_STATIC_LIBRARIES ${highwayhash_INSTALL}/lib/libhighwayhash.a)
endif()
ExternalProject_Add(highwayhash
PREFIX highwayhash
GIT_REPOSITORY ${highwayhash_URL}
GIT_TAG ${highwayhash_TAG}
DOWNLOAD_DIR "${DOWNLOAD_LOCATION}"
BUILD_IN_SOURCE 1
PATCH_COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_SOURCE_DIR}/patches/highwayhash/CMakeLists.txt ${highwayhash_BUILD}
INSTALL_DIR ${highwayhash_INSTALL}
CMAKE_CACHE_ARGS
-DCMAKE_BUILD_TYPE:STRING=Release
-DCMAKE_VERBOSE_MAKEFILE:BOOL=OFF
-DCMAKE_INSTALL_PREFIX:STRING=${highwayhash_INSTALL})
add_custom_command(TARGET highwayhash_copy_headers_to_destination PRE_BUILD
COMMAND ${CMAKE_COMMAND} -E copy_directory ${highwayhash_INSTALL}/include/ ${highwayhash_INCLUDE_DIR}/highwayhash)

View File

@ -1,28 +1,32 @@
include (ExternalProject)
set(PROTOBUF_INCLUDE_DIRS ${CMAKE_CURRENT_BINARY_DIR}/protobuf/src/protobuf/src)
set(PROTOBUF_URL https://github.com/mrry/protobuf.git) # Includes MSVC fix.
set(PROTOBUF_TAG 1d2c7b6c7376f396c8c7dd9b6afd2d4f83f3cb05)
if(WIN32)
set(protobuf_STATIC_LIBRARIES ${CMAKE_CURRENT_BINARY_DIR}/protobuf/src/protobuf/${CMAKE_BUILD_TYPE}/libprotobuf.lib)
set(PROTOBUF_PROTOC_EXECUTABLE ${CMAKE_CURRENT_BINARY_DIR}/protobuf/src/protobuf/${CMAKE_BUILD_TYPE}/protoc.exe)
set(PROTOBUF_ADDITIONAL_CMAKE_OPTIONS -Dprotobuf_MSVC_STATIC_RUNTIME:BOOL=OFF -A x64)
else()
set(protobuf_STATIC_LIBRARIES ${CMAKE_CURRENT_BINARY_DIR}/protobuf/src/protobuf/libprotobuf.a)
set(PROTOBUF_PROTOC_EXECUTABLE ${CMAKE_CURRENT_BINARY_DIR}/protobuf/src/protobuf/protoc)
endif()
ExternalProject_Add(protobuf
PREFIX protobuf
GIT_REPOSITORY ${PROTOBUF_URL}
GIT_TAG ${PROTOBUF_TAG}
DOWNLOAD_DIR "${DOWNLOAD_LOCATION}"
BUILD_IN_SOURCE 1
SOURCE_DIR ${CMAKE_BINARY_DIR}/protobuf/src/protobuf
CONFIGURE_COMMAND ${CMAKE_COMMAND} cmake/
-Dprotobuf_BUILD_TESTS=OFF
-DCMAKE_POSITION_INDEPENDENT_CODE=ON
${PROTOBUF_ADDITIONAL_CMAKE_OPTIONS}
INSTALL_COMMAND ""
CMAKE_CACHE_ARGS
-DCMAKE_BUILD_TYPE:STRING=Release
-DCMAKE_VERBOSE_MAKEFILE:BOOL=OFF
)

View File

@ -1 +0,0 @@
# [TODO]

View File

@ -0,0 +1,33 @@
cmake_minimum_required(VERSION 2.8.3)
project(giflib)
set(GIFLIB_SRCS
"lib/dgif_lib.c"
"lib/egif_lib.c"
"lib/gif_font.c"
"lib/gif_hash.h"
"lib/gifalloc.c"
"lib/openbsd-reallocarray.c"
"lib/gif_err.c"
"lib/quantize.c"
"lib/gif_hash.c"
"lib/gif_lib.h"
"lib/gif_lib_private.h"
)
set(GIFLIB_INCLUDES
"lib/gif_lib.h"
)
include_directories("${CMAKE_CURRENT_SOURCE_DIR}/lib")
add_library(giflib ${GIFLIB_SRCS})
install(TARGETS giflib
RUNTIME DESTINATION bin COMPONENT RuntimeLibraries
LIBRARY DESTINATION lib COMPONENT RuntimeLibraries
ARCHIVE DESTINATION lib COMPONENT Development)
foreach(GIFLIB_INCLUDE ${GIFLIB_INCLUDES})
install(FILES ${GIFLIB_INCLUDE} DESTINATION include COMPONENT Development)
endforeach()

View File

@ -40,6 +40,11 @@ include_directories("${CMAKE_CURRENT_SOURCE_DIR}")
add_library(highwayhash ${HIGHWAYHASH_SRCS})
# C++11
target_compile_features(highwayhash PRIVATE
cxx_rvalue_references
)
install(TARGETS highwayhash
LIBRARY DESTINATION lib COMPONENT RuntimeLibraries
ARCHIVE DESTINATION lib COMPONENT Development)

View File

@ -26,7 +26,7 @@ from setuptools import find_packages, setup, Command
from setuptools.command.install import install as InstallCommandBase
from setuptools.dist import Distribution
_VERSION = '0.11.0rc0-cmake-experimental'
REQUIRED_PACKAGES = [
'numpy >= 1.11.0',
@ -140,6 +140,10 @@ def find_files(pattern, root):
matches = ['../' + x for x in find_files('*', 'external') if '.py' not in x]
if os.name == 'nt':
EXTENSION_NAME = 'python/_pywrap_tensorflow.pyd'
else:
EXTENSION_NAME = 'python/_pywrap_tensorflow.so'
# TODO(mrry): Add support for development headers.
@ -168,8 +172,7 @@ setup(
# Add in any packaged data.
include_package_data=True,
package_data={
'tensorflow': [EXTENSION_NAME] + matches,
},
zip_safe=False,
distclass=BinaryDistribution,
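The platform switch in the setup.py hunk above can be sketched as a
self-contained snippet (illustrative only; it mirrors the `os.name` check
rather than reproducing setup.py itself):

```python
import os

# setup.py picks the native extension filename by platform:
# Windows ('nt') packages a .pyd, other platforms package a .so.
if os.name == 'nt':
    extension_name = 'python/_pywrap_tensorflow.pyd'
else:
    extension_name = 'python/_pywrap_tensorflow.so'

# The selected name is then passed to setup() via package_data.
package_data = {'tensorflow': [extension_name]}
print(package_data['tensorflow'][0])
```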

View File

@ -1 +0,0 @@
# [TODO]

View File

@ -12,21 +12,6 @@ add_library(tf_cc_framework OBJECT ${tf_cc_framework_srcs})
add_dependencies(tf_cc_framework tf_core_framework)
target_include_directories(tf_cc_framework PRIVATE
${tensorflow_source_dir}
${eigen_INCLUDE_DIRS}
)
target_compile_options(tf_cc_framework PRIVATE
-fno-exceptions
-DEIGEN_AVOID_STL_ARRAY
)
# C++11
target_compile_features(tf_cc_framework PRIVATE
cxx_rvalue_references
)
########################################################
# tf_cc_op_gen_main library
########################################################
@ -40,67 +25,10 @@ add_library(tf_cc_op_gen_main OBJECT ${tf_cc_op_gen_main_srcs})
add_dependencies(tf_cc_op_gen_main tf_core_framework)
target_include_directories(tf_cc_op_gen_main PRIVATE
${tensorflow_source_dir}
${eigen_INCLUDE_DIRS}
)
#target_link_libraries(tf_cc_op_gen_main
# ${CMAKE_THREAD_LIBS_INIT}
# ${PROTOBUF_LIBRARIES}
# tf_protos_cc
# tf_core_lib
# tf_core_framework
#)
target_compile_options(tf_cc_op_gen_main PRIVATE
-fno-exceptions
-DEIGEN_AVOID_STL_ARRAY
)
# C++11
target_compile_features(tf_cc_op_gen_main PRIVATE
cxx_rvalue_references
)
########################################################
# tf_gen_op_wrapper_cc executables
########################################################
#
# # Run the op generator.
# if name == "sendrecv_ops":
# include_internal = "1"
# else:
# include_internal = "0"
# native.genrule(
# name=name + "_genrule",
# outs=[out_ops_file + ".h", out_ops_file + ".cc"],
# tools=[":" + tool],
# cmd=("$(location :" + tool + ") $(location :" + out_ops_file + ".h) " +
# "$(location :" + out_ops_file + ".cc) " + include_internal))
#def tf_gen_op_wrappers_cc(name,
# op_lib_names=[],
# other_srcs=[],
# other_hdrs=[],
# pkg=""):
# subsrcs = other_srcs
# subhdrs = other_hdrs
# for n in op_lib_names:
# tf_gen_op_wrapper_cc(n, "ops/" + n, pkg=pkg)
# subsrcs += ["ops/" + n + ".cc"]
# subhdrs += ["ops/" + n + ".h"]
#
# native.cc_library(name=name,
# srcs=subsrcs,
# hdrs=subhdrs,
# deps=["//tensorflow/core:core_cpu"],
# copts=tf_copts(),
# alwayslink=1,)
# create directory for ops generated files
set(cc_ops_target_dir ${CMAKE_CURRENT_BINARY_DIR}/tensorflow/cc/ops)
@ -115,18 +43,6 @@ set(tf_cc_op_lib_names
"user_ops"
)
foreach(tf_cc_op_lib_name ${tf_cc_op_lib_names})
#tf_gen_op_wrapper_cc(name, out_ops_file, pkg=""):
# # Construct an op generator binary for these ops.
# tool = out_ops_file + "_gen_cc" #example ops/array_ops_gen_cc
# native.cc_binary(
# name = tool,
# copts = tf_copts(),
# linkopts = ["-lm"],
# linkstatic = 1, # Faster to link this one-time-use binary dynamically
# deps = (["//tensorflow/cc:cc_op_gen_main",
# pkg + ":" + name + "_op_lib"])
# )
# Using <TARGET_OBJECTS:...> to work around an issue where no ops were
# registered (static initializers dropped by the linker because the ops
# are not used explicitly in the *_gen_cc executables).
@ -137,39 +53,9 @@ foreach(tf_cc_op_lib_name ${tf_cc_op_lib_names})
$<TARGET_OBJECTS:tf_core_framework>
)
target_include_directories(${tf_cc_op_lib_name}_gen_cc PRIVATE
${tensorflow_source_dir}
${eigen_INCLUDE_DIRS}
)
find_package(ZLIB REQUIRED)
target_link_libraries(${tf_cc_op_lib_name}_gen_cc PRIVATE
${CMAKE_THREAD_LIBS_INIT}
${PROTOBUF_LIBRARIES}
tf_protos_cc
${gif_STATIC_LIBRARIES}
${jpeg_STATIC_LIBRARIES}
${png_STATIC_LIBRARIES}
${ZLIB_LIBRARIES}
${jsoncpp_STATIC_LIBRARIES}
${boringssl_STATIC_LIBRARIES}
${CMAKE_DL_LIBS}
)
if(tensorflow_ENABLE_SSL_SUPPORT)
target_link_libraries(${tf_cc_op_lib_name}_gen_cc PRIVATE
${boringssl_STATIC_LIBRARIES})
endif()
target_compile_options(${tf_cc_op_lib_name}_gen_cc PRIVATE
-fno-exceptions
-DEIGEN_AVOID_STL_ARRAY
-lm
)
# C++11
target_compile_features(${tf_cc_op_lib_name}_gen_cc PRIVATE
cxx_rvalue_references
${tensorflow_EXTERNAL_LIBRARIES}
)
set(cc_ops_include_internal 0)
@ -198,43 +84,3 @@ add_library(tf_cc_ops OBJECT
"${tensorflow_source_dir}/tensorflow/cc/ops/const_op.cc"
"${tensorflow_source_dir}/tensorflow/cc/ops/standard_ops.h"
)
target_include_directories(tf_cc_ops PRIVATE
${tensorflow_source_dir}
${eigen_INCLUDE_DIRS}
)
#target_link_libraries(tf_cc_ops
# ${CMAKE_THREAD_LIBS_INIT}
# ${PROTOBUF_LIBRARIES}
# tf_protos_cc
# tf_core_lib
# tf_core_cpu
# tf_models_word2vec_ops
#)
target_compile_options(tf_cc_ops PRIVATE
-fno-exceptions
-DEIGEN_AVOID_STL_ARRAY
)
# C++11
target_compile_features(tf_cc_ops PRIVATE
cxx_rvalue_references
)
#tf_gen_op_wrappers_cc(
# name = "cc_ops",
# op_lib_names = [
# ...
# ],
# other_hdrs = [
# "ops/const_op.h",
# "ops/standard_ops.h",
# ],
# other_srcs = [
# "ops/const_op.cc",
# ] + glob(["ops/*_grad.cc"]),
# pkg = "//tensorflow/core",
#)

View File

@ -30,30 +30,4 @@ list(APPEND tf_core_cpu_srcs
)
add_library(tf_core_cpu OBJECT ${tf_core_cpu_srcs})
target_include_directories(tf_core_cpu PRIVATE
${tensorflow_source_dir}
${eigen_INCLUDE_DIRS}
)
add_dependencies(tf_core_cpu
tf_core_framework
)
#target_link_libraries(tf_core_cpu
# ${CMAKE_THREAD_LIBS_INIT}
# ${PROTOBUF_LIBRARIES}
# tf_core_framework
# tf_core_lib
# tf_protos_cc
#)
target_compile_options(tf_core_cpu PRIVATE
-fno-exceptions
-DEIGEN_AVOID_STL_ARRAY
)
# C++11
target_compile_features(tf_core_cpu PRIVATE
cxx_rvalue_references
)
add_dependencies(tf_core_cpu tf_core_framework)

View File

@ -18,27 +18,3 @@ list(REMOVE_ITEM tf_core_direct_session_srcs ${tf_core_direct_session_test_srcs}
add_library(tf_core_direct_session OBJECT ${tf_core_direct_session_srcs})
add_dependencies(tf_core_direct_session tf_core_cpu)
target_include_directories(tf_core_direct_session PRIVATE
${tensorflow_source_dir}
${eigen_INCLUDE_DIRS}
)
#target_link_libraries(tf_core_direct_session
# ${CMAKE_THREAD_LIBS_INIT}
# ${PROTOBUF_LIBRARIES}
# tf_core_cpu
# tf_core_framework
# tf_core_lib
# tf_protos_cc
#)
target_compile_options(tf_core_direct_session PRIVATE
-fno-exceptions
-DEIGEN_AVOID_STL_ARRAY
)
# C++11
target_compile_features(tf_core_direct_session PRIVATE
cxx_rvalue_references
)

View File

@ -20,22 +20,6 @@ add_dependencies(tf_core_distributed_runtime
tf_core_cpu grpc
)
target_include_directories(tf_core_distributed_runtime PRIVATE
${tensorflow_source_dir}
${eigen_INCLUDE_DIRS}
${GRPC_INCLUDE_DIRS}
)
target_compile_options(tf_core_distributed_runtime PRIVATE
-fno-exceptions
-DEIGEN_AVOID_STL_ARRAY
)
# C++11
target_compile_features(tf_core_distributed_runtime PRIVATE
cxx_rvalue_references
)
########################################################
# grpc_tensorflow_server executable
########################################################
@ -56,42 +40,7 @@ add_executable(grpc_tensorflow_server
$<TARGET_OBJECTS:tf_core_distributed_runtime>
)
add_dependencies(tf_core_distributed_runtime
grpc
)
target_include_directories(grpc_tensorflow_server PUBLIC
${tensorflow_source_dir}
${eigen_INCLUDE_DIRS}
${GRPC_INCLUDE_DIRS}
)
find_package(ZLIB REQUIRED)
target_link_libraries(grpc_tensorflow_server PUBLIC
${CMAKE_THREAD_LIBS_INIT}
${PROTOBUF_LIBRARIES}
${GRPC_LIBRARIES}
tf_protos_cc
${farmhash_STATIC_LIBRARIES}
${gif_STATIC_LIBRARIES}
${jpeg_STATIC_LIBRARIES}
${jsoncpp_STATIC_LIBRARIES}
${png_STATIC_LIBRARIES}
${ZLIB_LIBRARIES}
${CMAKE_DL_LIBS}
)
if(tensorflow_ENABLE_SSL_SUPPORT)
target_link_libraries(grpc_tensorflow_server PUBLIC
${boringssl_STATIC_LIBRARIES})
endif()
target_compile_options(grpc_tensorflow_server PRIVATE
-fno-exceptions
-DEIGEN_AVOID_STL_ARRAY
)
# C++11
target_compile_features(grpc_tensorflow_server PRIVATE
cxx_rvalue_references
${tensorflow_EXTERNAL_LIBRARIES}
)


@@ -71,8 +71,6 @@ endfunction()
# tf_protos_cc library
########################################################
include_directories(${PROTOBUF_INCLUDE_DIRS})
include_directories(${CMAKE_CURRENT_BINARY_DIR})
file(GLOB_RECURSE tf_protos_cc_srcs RELATIVE ${tensorflow_source_dir}
"${tensorflow_source_dir}/tensorflow/core/*.proto"
)
@@ -114,16 +112,6 @@ RELATIVE_PROTOBUF_TEXT_GENERATE_CPP(PROTO_TEXT_SRCS PROTO_TEXT_HDRS
)
add_library(tf_protos_cc ${PROTO_SRCS} ${PROTO_HDRS})
target_include_directories(tf_protos_cc PUBLIC
${CMAKE_CURRENT_BINARY_DIR}
)
target_link_libraries(tf_protos_cc PUBLIC
${PROTOBUF_LIBRARIES}
)
# C++11
target_compile_features(tf_protos_cc PRIVATE
cxx_rvalue_references
)
########################################################
# tf_core_lib library
@@ -131,11 +119,43 @@ target_compile_features(tf_protos_cc PRIVATE
file(GLOB_RECURSE tf_core_lib_srcs
"${tensorflow_source_dir}/tensorflow/core/lib/*.h"
"${tensorflow_source_dir}/tensorflow/core/lib/*.cc"
"${tensorflow_source_dir}/tensorflow/core/platform/*.h"
"${tensorflow_source_dir}/tensorflow/core/platform/*.cc"
"${tensorflow_source_dir}/tensorflow/core/public/*.h"
)
file(GLOB tf_core_platform_srcs
"${tensorflow_source_dir}/tensorflow/core/platform/*.h"
"${tensorflow_source_dir}/tensorflow/core/platform/*.cc"
"${tensorflow_source_dir}/tensorflow/core/platform/default/*.h"
"${tensorflow_source_dir}/tensorflow/core/platform/default/*.cc")
list(APPEND tf_core_lib_srcs ${tf_core_platform_srcs})
if(UNIX)
file(GLOB tf_core_platform_posix_srcs
"${tensorflow_source_dir}/tensorflow/core/platform/posix/*.h"
"${tensorflow_source_dir}/tensorflow/core/platform/posix/*.cc"
)
list(APPEND tf_core_lib_srcs ${tf_core_platform_posix_srcs})
endif(UNIX)
if(WIN32)
file(GLOB tf_core_platform_windows_srcs
"${tensorflow_source_dir}/tensorflow/core/platform/windows/*.h"
"${tensorflow_source_dir}/tensorflow/core/platform/windows/*.cc"
"${tensorflow_source_dir}/tensorflow/core/platform/posix/error.h"
"${tensorflow_source_dir}/tensorflow/core/platform/posix/error.cc"
)
list(APPEND tf_core_lib_srcs ${tf_core_platform_windows_srcs})
endif(WIN32)
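The globs above follow a common CMake pattern for multi-platform projects: start from a shared source list, then append a platform-specific glob. A minimal sketch of the idiom, using a hypothetical layout (`src/common`, `src/windows`, `src/posix` are illustrative names, not paths from this tree):

```cmake
# Hypothetical layout: common sources plus one per-platform directory.
file(GLOB mylib_srcs
  "src/common/*.h"
  "src/common/*.cc"
)
if(WIN32)
  file(GLOB mylib_platform_srcs "src/windows/*.cc")
else()
  file(GLOB mylib_platform_srcs "src/posix/*.cc")
endif()
list(APPEND mylib_srcs ${mylib_platform_srcs})
add_library(mylib OBJECT ${mylib_srcs})
```

One caveat of `file(GLOB ...)`: the glob is evaluated at configure time, so newly added source files are not picked up until CMake is re-run.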
if(tensorflow_ENABLE_SSL_SUPPORT)
# Cloud libraries require boringssl.
file(GLOB tf_core_platform_cloud_srcs
"${tensorflow_source_dir}/tensorflow/core/platform/cloud/*.h"
"${tensorflow_source_dir}/tensorflow/core/platform/cloud/*.cc"
)
list(APPEND tf_core_lib_srcs ${tf_core_platform_cloud_srcs})
endif()
file(GLOB_RECURSE tf_core_lib_test_srcs
"${tensorflow_source_dir}/tensorflow/core/lib/*test*.h"
"${tensorflow_source_dir}/tensorflow/core/lib/*test*.cc"
@@ -143,50 +163,10 @@ file(GLOB_RECURSE tf_core_lib_test_srcs
"${tensorflow_source_dir}/tensorflow/core/platform/*test*.cc"
"${tensorflow_source_dir}/tensorflow/core/public/*test*.h"
)
list(REMOVE_ITEM tf_core_lib_srcs ${tf_core_lib_test_srcs})
if(NOT tensorflow_ENABLE_SSL_SUPPORT)
file(GLOB_RECURSE tf_core_lib_cloud_srcs
"${tensorflow_source_dir}/tensorflow/core/platform/cloud/*.h"
"${tensorflow_source_dir}/tensorflow/core/platform/cloud/*.cc"
)
list(REMOVE_ITEM tf_core_lib_srcs ${tf_core_lib_cloud_srcs})
endif()
list(REMOVE_ITEM tf_core_lib_srcs ${tf_core_lib_test_srcs})
add_library(tf_core_lib OBJECT ${tf_core_lib_srcs})
target_include_directories(tf_core_lib PUBLIC
${tensorflow_source_dir}
${gif_INCLUDE_DIR}
${jpeg_INCLUDE_DIR}
${png_INCLUDE_DIR}
${eigen_INCLUDE_DIRS}
${jsoncpp_INCLUDE_DIR}
)
target_compile_options(tf_core_lib PRIVATE
-fno-exceptions
-DEIGEN_AVOID_STL_ARRAY
)
# C++11
target_compile_features(tf_core_lib PRIVATE
cxx_rvalue_references
)
add_dependencies(tf_core_lib
gif_copy_headers_to_destination
jpeg_copy_headers_to_destination
png_copy_headers_to_destination
eigen
tf_protos_cc
jsoncpp
)
if(tensorflow_ENABLE_SSL_SUPPORT)
target_include_directories(tf_core_lib PUBLIC ${boringssl_INCLUDE_DIR})
add_dependencies(tf_core_lib boringssl)
endif()
add_dependencies(tf_core_lib ${tensorflow_EXTERNAL_DEPENDENCIES} tf_protos_cc)
# Tricky setup to force always rebuilding
# force_rebuild always runs forcing ${VERSION_INFO_CC} target to run
@@ -197,13 +177,12 @@ add_custom_target(force_rebuild_target ALL DEPENDS ${VERSION_INFO_CC})
add_custom_command(OUTPUT __force_rebuild COMMAND cmake -E echo)
add_custom_command(OUTPUT
${VERSION_INFO_CC}
COMMAND ${tensorflow_source_dir}/tensorflow/tools/git/gen_git_source.py
COMMAND ${PYTHON_EXECUTABLE} ${tensorflow_source_dir}/tensorflow/tools/git/gen_git_source.py
--raw_generate ${VERSION_INFO_CC}
DEPENDS __force_rebuild)
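The "tricky setup" above relies on the fact that an `add_custom_command` whose declared `OUTPUT` is never actually created is considered perpetually out of date, so everything depending on it re-runs every build. A reduced sketch of the trick (file and target names hypothetical):

```cmake
# __always_stale is declared as an output, but `cmake -E echo` never
# creates it, so the command below is out of date on every build.
add_custom_command(OUTPUT __always_stale COMMAND ${CMAKE_COMMAND} -E echo)

# Because it depends on the never-created file, version_info.cc is
# regenerated on every build.
add_custom_command(OUTPUT version_info.cc
  COMMAND ${CMAKE_COMMAND} -E touch version_info.cc
  DEPENDS __always_stale)

add_custom_target(force_rebuild ALL DEPENDS version_info.cc)
```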
set(tf_version_srcs ${tensorflow_source_dir}/tensorflow/core/util/version_info.cc)
########################################################
# tf_core_framework library
########################################################
@@ -212,7 +191,6 @@ file(GLOB_RECURSE tf_core_framework_srcs
"${tensorflow_source_dir}/tensorflow/core/framework/*.cc"
"${tensorflow_source_dir}/tensorflow/core/util/*.h"
"${tensorflow_source_dir}/tensorflow/core/util/*.cc"
"${tensorflow_source_dir}/tensorflow/core/client/tensor_c_api.cc"
"${tensorflow_source_dir}/tensorflow/core/common_runtime/session.cc"
"${tensorflow_source_dir}/tensorflow/core/common_runtime/session_factory.cc"
"${tensorflow_source_dir}/tensorflow/core/common_runtime/session_options.cc"
@@ -230,26 +208,18 @@ file(GLOB_RECURSE tf_core_framework_test_srcs
"${tensorflow_source_dir}/tensorflow/core/util/*main.cc"
)
list(REMOVE_ITEM tf_core_framework_srcs ${tf_core_framework_test_srcs})
list(REMOVE_ITEM tf_core_framework_srcs ${tf_core_framework_test_srcs}
"${tensorflow_source_dir}/tensorflow/core/util/memmapped_file_system.cc"
"${tensorflow_source_dir}/tensorflow/core/util/memmapped_file_system.h"
"${tensorflow_source_dir}/tensorflow/core/util/memmapped_file_system_writer.cc"
)
add_library(tf_core_framework OBJECT
${tf_core_framework_srcs}
${tf_version_srcs}
${PROTO_TEXT_HDRS}
${PROTO_TEXT_SRCS})
target_include_directories(tf_core_framework PUBLIC
${tensorflow_source_dir}
${eigen_INCLUDE_DIRS}
)
add_dependencies(tf_core_framework
tf_core_lib
proto_text
)
target_compile_options(tf_core_framework PRIVATE
-fno-exceptions
-DEIGEN_AVOID_STL_ARRAY
)
# C++11
target_compile_features(tf_core_framework PRIVATE
cxx_rvalue_references
)


@@ -1,10 +1,73 @@
########################################################
# tf_core_kernels library
########################################################
file(GLOB_RECURSE tf_core_kernels_srcs
"${tensorflow_source_dir}/tensorflow/core/kernels/*.h"
"${tensorflow_source_dir}/tensorflow/core/kernels/*.cc"
)
if(tensorflow_BUILD_ALL_KERNELS)
file(GLOB_RECURSE tf_core_kernels_srcs
"${tensorflow_source_dir}/tensorflow/core/kernels/*.h"
"${tensorflow_source_dir}/tensorflow/core/kernels/*.cc"
)
else(tensorflow_BUILD_ALL_KERNELS)
# Build a minimal subset of kernels to be able to run a test program.
set(tf_core_kernels_srcs
"${tensorflow_source_dir}/tensorflow/core/kernels/bounds_check.h"
"${tensorflow_source_dir}/tensorflow/core/kernels/constant_op.h"
"${tensorflow_source_dir}/tensorflow/core/kernels/constant_op.cc"
"${tensorflow_source_dir}/tensorflow/core/kernels/fill_functor.h"
"${tensorflow_source_dir}/tensorflow/core/kernels/fill_functor.cc"
"${tensorflow_source_dir}/tensorflow/core/kernels/matmul_op.h"
"${tensorflow_source_dir}/tensorflow/core/kernels/matmul_op.cc"
"${tensorflow_source_dir}/tensorflow/core/kernels/no_op.h"
"${tensorflow_source_dir}/tensorflow/core/kernels/no_op.cc"
"${tensorflow_source_dir}/tensorflow/core/kernels/sendrecv_ops.h"
"${tensorflow_source_dir}/tensorflow/core/kernels/sendrecv_ops.cc"
)
endif(tensorflow_BUILD_ALL_KERNELS)
if(tensorflow_BUILD_CONTRIB_KERNELS)
set(tf_contrib_kernels_srcs
"${tensorflow_source_dir}/tensorflow/contrib/factorization/kernels/clustering_ops.cc"
"${tensorflow_source_dir}/tensorflow/contrib/factorization/kernels/wals_solver_ops.cc"
"${tensorflow_source_dir}/tensorflow/contrib/factorization/ops/clustering_ops.cc"
"${tensorflow_source_dir}/tensorflow/contrib/factorization/ops/factorization_ops.cc"
#"${tensorflow_source_dir}/tensorflow/contrib/ffmpeg/decode_audio_op.cc"
#"${tensorflow_source_dir}/tensorflow/contrib/ffmpeg/encode_audio_op.cc"
"${tensorflow_source_dir}/tensorflow/contrib/layers/kernels/bucketization_kernel.cc"
"${tensorflow_source_dir}/tensorflow/contrib/layers/kernels/sparse_feature_cross_kernel.cc"
"${tensorflow_source_dir}/tensorflow/contrib/layers/ops/bucketization_op.cc"
"${tensorflow_source_dir}/tensorflow/contrib/layers/ops/sparse_feature_cross_op.cc"
"${tensorflow_source_dir}/tensorflow/contrib/metrics/kernels/set_kernels.cc"
"${tensorflow_source_dir}/tensorflow/contrib/metrics/ops/set_ops.cc"
"${tensorflow_source_dir}/tensorflow/contrib/rnn/kernels/gru_ops.cc"
"${tensorflow_source_dir}/tensorflow/contrib/rnn/kernels/lstm_ops.cc"
"${tensorflow_source_dir}/tensorflow/contrib/rnn/ops/gru_ops.cc"
"${tensorflow_source_dir}/tensorflow/contrib/rnn/ops/lstm_ops.cc"
"${tensorflow_source_dir}/tensorflow/contrib/tensor_forest"
"${tensorflow_source_dir}/tensorflow/contrib/tensor_forest/core/ops/best_splits_op.cc"
"${tensorflow_source_dir}/tensorflow/contrib/tensor_forest/core/ops/count_extremely_random_stats_op.cc"
"${tensorflow_source_dir}/tensorflow/contrib/tensor_forest/core/ops/finished_nodes_op.cc"
"${tensorflow_source_dir}/tensorflow/contrib/tensor_forest/core/ops/grow_tree_op.cc"
"${tensorflow_source_dir}/tensorflow/contrib/tensor_forest/core/ops/sample_inputs_op.cc"
"${tensorflow_source_dir}/tensorflow/contrib/tensor_forest/core/ops/scatter_add_ndim_op.cc"
"${tensorflow_source_dir}/tensorflow/contrib/tensor_forest/core/ops/topn_ops.cc"
"${tensorflow_source_dir}/tensorflow/contrib/tensor_forest/core/ops/tree_predictions_op.cc"
"${tensorflow_source_dir}/tensorflow/contrib/tensor_forest/core/ops/tree_utils.cc"
"${tensorflow_source_dir}/tensorflow/contrib/tensor_forest/core/ops/update_fertile_slots_op.cc"
"${tensorflow_source_dir}/tensorflow/contrib/tensor_forest/data/sparse_values_to_indices.cc"
"${tensorflow_source_dir}/tensorflow/contrib/tensor_forest/data/string_to_float_op.cc"
"${tensorflow_source_dir}/tensorflow/contrib/tensor_forest/hybrid/core/ops/hard_routing_function_op.cc"
"${tensorflow_source_dir}/tensorflow/contrib/tensor_forest/hybrid/core/ops/k_feature_gradient_op.cc"
"${tensorflow_source_dir}/tensorflow/contrib/tensor_forest/hybrid/core/ops/k_feature_routing_function_op.cc"
"${tensorflow_source_dir}/tensorflow/contrib/tensor_forest/hybrid/core/ops/routing_function_op.cc"
"${tensorflow_source_dir}/tensorflow/contrib/tensor_forest/hybrid/core/ops/routing_gradient_op.cc"
"${tensorflow_source_dir}/tensorflow/contrib/tensor_forest/hybrid/core/ops/stochastic_hard_routing_function_op.cc"
"${tensorflow_source_dir}/tensorflow/contrib/tensor_forest/hybrid/core/ops/stochastic_hard_routing_gradient_op.cc"
"${tensorflow_source_dir}/tensorflow/contrib/tensor_forest/hybrid/core/ops/unpack_path_op.cc"
"${tensorflow_source_dir}/tensorflow/contrib/tensor_forest/hybrid/core/ops/utils.cc"
)
list(APPEND tf_core_kernels_srcs ${tf_contrib_kernels_srcs})
endif(tensorflow_BUILD_CONTRIB_KERNELS)
file(GLOB_RECURSE tf_core_kernels_exclude_srcs
"${tensorflow_source_dir}/tensorflow/core/kernels/*test*.h"
@@ -13,51 +76,28 @@ file(GLOB_RECURSE tf_core_kernels_exclude_srcs
"${tensorflow_source_dir}/tensorflow/core/kernels/*testutil.cc"
"${tensorflow_source_dir}/tensorflow/core/kernels/*main.cc"
"${tensorflow_source_dir}/tensorflow/core/kernels/*.cu.cc"
"${tensorflow_source_dir}/tensorflow/core/kernels/debug_ops.h"
"${tensorflow_source_dir}/tensorflow/core/kernels/debug_ops.cc"
"${tensorflow_source_dir}/tensorflow/core/kernels/debug_ops.h" # stream_executor dependency
"${tensorflow_source_dir}/tensorflow/core/kernels/debug_ops.cc" # stream_executor dependency
)
list(REMOVE_ITEM tf_core_kernels_srcs ${tf_core_kernels_exclude_srcs})
list(REMOVE_ITEM tf_core_kernels_srcs ${tf_core_kernels_exclude_srcs})
if(WIN32)
file(GLOB_RECURSE tf_core_kernels_windows_exclude_srcs
# Not currently working on Windows:
"${tensorflow_source_dir}/tensorflow/core/kernels/depthwise_conv_op.cc" # Cannot find symbol: tensorflow::LaunchConv2DOp<struct Eigen::ThreadPoolDevice, double>::launch(...).
"${tensorflow_source_dir}/tensorflow/core/kernels/fact_op.cc"
"${tensorflow_source_dir}/tensorflow/core/kernels/immutable_constant_op.cc"
"${tensorflow_source_dir}/tensorflow/core/kernels/immutable_constant_op.h"
"${tensorflow_source_dir}/tensorflow/core/kernels/sparse_matmul_op.cc"
"${tensorflow_source_dir}/tensorflow/core/kernels/sparse_matmul_op.h"
)
list(REMOVE_ITEM tf_core_kernels_srcs ${tf_core_kernels_windows_exclude_srcs})
endif(WIN32)
add_library(tf_core_kernels OBJECT ${tf_core_kernels_srcs})
add_dependencies(tf_core_kernels
tf_core_cpu
farmhash
highwayhash
farmhash_copy_headers_to_destination
highwayhash_copy_headers_to_destination
)
if(WIN32)
target_compile_options(tf_core_kernels PRIVATE /MP)
endif()
target_include_directories(tf_core_kernels PRIVATE
${tensorflow_source_dir}
${png_INCLUDE_DIR}
${eigen_INCLUDE_DIRS}
${farmhash_INCLUDE_DIR}
${highwayhash_INCLUDE_DIR}
)
#target_link_libraries(tf_core_kernels
# ${CMAKE_THREAD_LIBS_INIT}
# ${PROTOBUF_LIBRARIES}
# tf_core_cpu
# tf_core_framework
# tf_core_lib
# tf_protos_cc
# tf_models_word2vec_kernels
# tf_stream_executor
# tf_core_ops
# tf_core_cpu
#)
# "@gemmlowp//:eight_bit_int_gemm",
target_compile_options(tf_core_kernels PRIVATE
-fno-exceptions
-DEIGEN_AVOID_STL_ARRAY
)
# C++11
target_compile_features(tf_core_kernels PRIVATE
cxx_rvalue_references
)
add_dependencies(tf_core_kernels tf_core_cpu)


@@ -1,39 +1,25 @@
#def tf_gen_op_libs(op_lib_names):
# # Make library out of each op so it can also be used to generate wrappers
# # for various languages.
# for n in op_lib_names:
# native.cc_library(name=n + "_op_lib"
# copts=tf_copts(),
# srcs=["ops/" + n + ".cc"],
# deps=(["//tensorflow/core:framework"]),
# visibility=["//visibility:public"],
# alwayslink=1,
# linkstatic=1,)
set(tf_op_lib_names
"array_ops"
"attention_ops"
"candidate_sampling_ops"
"control_flow_ops"
"ctc_ops"
"data_flow_ops"
"functional_ops"
"image_ops"
"io_ops"
"linalg_ops"
"logging_ops"
"functional_ops"
"math_ops"
"nn_ops"
"no_op"
"parsing_ops"
"random_ops"
"script_ops"
"sdca_ops"
"sendrecv_ops"
"sparse_ops"
"state_ops"
"string_ops"
"summary_ops"
"training_ops"
)
@@ -48,32 +34,8 @@ foreach(tf_op_lib_name ${tf_op_lib_names})
add_library(tf_${tf_op_lib_name} OBJECT ${tf_${tf_op_lib_name}_srcs})
add_dependencies(tf_${tf_op_lib_name} tf_core_framework)
target_include_directories(tf_${tf_op_lib_name} PRIVATE
${tensorflow_source_dir}
${eigen_INCLUDE_DIRS}
)
target_compile_options(tf_${tf_op_lib_name} PRIVATE
-fno-exceptions
-DEIGEN_AVOID_STL_ARRAY
)
# C++11
target_compile_features(tf_${tf_op_lib_name} PRIVATE
cxx_rvalue_references
)
endforeach()
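The loop above is the CMake replacement for Bazel's `tf_gen_op_libs` macro quoted in the removed comment: each name in the list becomes its own OBJECT library built from `ops/<name>.cc`. A trimmed sketch of the same pattern with hypothetical names (`my_source_dir` and the op names are illustrative):

```cmake
set(my_op_lib_names
  "array_ops"
  "math_ops"
)
foreach(op_name ${my_op_lib_names})
  # One OBJECT library per op file, mirroring Bazel's per-op cc_library.
  file(GLOB my_${op_name}_srcs
    "${my_source_dir}/ops/${op_name}.cc"
  )
  add_library(my_${op_name} OBJECT ${my_${op_name}_srcs})
endforeach()
```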
#cc_library(
# name = "user_ops_op_lib"
# srcs = glob(["user_ops/**/*.cc"]),
# copts = tf_copts(),
# linkstatic = 1,
# visibility = ["//visibility:public"],
# deps = [":framework"],
# alwayslink = 1,
#)
########################################################
# tf_user_ops library
########################################################
@@ -85,50 +47,6 @@ add_library(tf_user_ops OBJECT ${tf_user_ops_srcs})
add_dependencies(tf_user_ops tf_core_framework)
target_include_directories(tf_user_ops PRIVATE
${tensorflow_source_dir}
${eigen_INCLUDE_DIRS}
)
target_compile_options(tf_user_ops PRIVATE
-fno-exceptions
-DEIGEN_AVOID_STL_ARRAY
)
# C++11
target_compile_features(tf_user_ops PRIVATE
cxx_rvalue_references
)
#tf_cuda_library(
# name = "ops"
# srcs = glob(
# [
# "ops/**/*.h"
# "ops/**/*.cc"
# "user_ops/**/*.h"
# "user_ops/**/*.cc"
# ],
# exclude = [
# "**/*test*"
# "**/*main.cc"
# "user_ops/**/*.cu.cc"
# ],
# ),
# copts = tf_copts(),
# linkstatic = 1,
# visibility = ["//visibility:public"],
# deps = [
# ":core"
# ":lib"
# ":protos_cc"
# "//tensorflow/models/embedding:word2vec_ops"
# "//third_party/eigen3"
# ],
# alwayslink = 1,
#)
########################################################
# tf_core_ops library
########################################################
@@ -154,29 +72,3 @@ list(REMOVE_ITEM tf_core_ops_srcs ${tf_core_ops_exclude_srcs})
add_library(tf_core_ops OBJECT ${tf_core_ops_srcs})
add_dependencies(tf_core_ops tf_core_cpu)
target_include_directories(tf_core_ops PRIVATE
${tensorflow_source_dir}
${eigen_INCLUDE_DIRS}
)
#target_link_libraries(tf_core_ops
# ${CMAKE_THREAD_LIBS_INIT}
# ${PROTOBUF_LIBRARIES}
# tf_protos_cc
# tf_core_lib
# tf_core_cpu
# tf_models_word2vec_ops
#)
target_compile_options(tf_core_ops PRIVATE
-fno-exceptions
-DEIGEN_AVOID_STL_ARRAY
)
# C++11
target_compile_features(tf_core_ops PRIVATE
cxx_rvalue_references
)


@@ -1,15 +1,3 @@
#cc_library(
# name = "word2vec_ops",
# srcs = [
# "word2vec_ops.cc",
# ],
# visibility = ["//tensorflow:internal"],
# deps = [
# "//tensorflow/core:framework",
# ],
# alwayslink = 1,
#)
########################################################
# tf_models_word2vec_ops library
########################################################
@@ -19,43 +7,8 @@ file(GLOB tf_models_word2vec_ops_srcs
add_library(tf_models_word2vec_ops OBJECT ${tf_models_word2vec_ops_srcs})
target_include_directories(tf_models_word2vec_ops PRIVATE
${tensorflow_source_dir}
${eigen_INCLUDE_DIRS}
)
add_dependencies(tf_models_word2vec_ops tf_core_framework)
add_dependencies(tf_models_word2vec_ops
tf_core_framework
)
#target_link_libraries(tf_models_word2vec_ops
# ${CMAKE_THREAD_LIBS_INIT}
# ${PROTOBUF_LIBRARIES}
# tf_core_framework
# tf_core_lib
# tf_protos_cc
#)
target_compile_options(tf_models_word2vec_ops PRIVATE
-fno-exceptions
-DEIGEN_AVOID_STL_ARRAY
)
# C++11
target_compile_features(tf_models_word2vec_ops PRIVATE
cxx_rvalue_references
)
#cc_library(
# name = "word2vec_kernels",
# srcs = [
# "word2vec_kernels.cc",
# ],
# visibility = ["//tensorflow:internal"],
# deps = [
# "//tensorflow/core",
# ],
# alwayslink = 1,
#)
########################################################
# tf_models_word2vec_kernels library
########################################################
@@ -65,30 +18,4 @@ file(GLOB tf_models_word2vec_kernels_srcs
add_library(tf_models_word2vec_kernels OBJECT ${tf_models_word2vec_kernels_srcs})
target_include_directories(tf_models_word2vec_kernels PRIVATE
${tensorflow_source_dir}
${eigen_INCLUDE_DIRS}
)
add_dependencies(tf_models_word2vec_kernels
tf_core_cpu
)
#target_link_libraries(tf_models_word2vec_kernels
# ${CMAKE_THREAD_LIBS_INIT}
# ${PROTOBUF_LIBRARIES}
# tf_core_framework
# tf_core_lib
# tf_protos_cc
# tf_core_cpu
#)
target_compile_options(tf_models_word2vec_kernels PRIVATE
-fno-exceptions
-DEIGEN_AVOID_STL_ARRAY
)
# C++11
target_compile_features(tf_models_word2vec_kernels PRIVATE
cxx_rvalue_references
)
add_dependencies(tf_models_word2vec_kernels tf_core_cpu)


@@ -18,7 +18,7 @@ include(FindPythonInterp)
if(NOT PYTHON_INCLUDE_DIR)
set(PYTHON_NOT_FOUND false)
exec_program("${PYTHON_EXECUTABLE}"
ARGS "-c 'import distutils.sysconfig; print distutils.sysconfig.get_python_inc()'"
ARGS "-c \"import distutils.sysconfig; print(distutils.sysconfig.get_python_inc())\""
OUTPUT_VARIABLE PYTHON_INCLUDE_DIR
RETURN_VALUE PYTHON_NOT_FOUND)
if(${PYTHON_NOT_FOUND})
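The quoting change in this hunk does two things at once: it moves the inline script to the `print(...)` function form, which both Python 2 and Python 3 accept (the statement form `print x` is a syntax error on Python 3), and it uses escaped double quotes so the `-c` argument survives `exec_program`'s command-line handling across platforms. A hedged sketch of the same probe (variable names hypothetical; note that `exec_program` is deprecated in favor of `execute_process` in later CMake versions):

```cmake
# Query the interpreter for its include directory in a way that works
# under both Python 2 and Python 3.
exec_program("${PYTHON_EXECUTABLE}"
  ARGS "-c \"import distutils.sysconfig; print(distutils.sysconfig.get_python_inc())\""
  OUTPUT_VARIABLE MY_PYTHON_INCLUDE_DIR
  RETURN_VALUE MY_PYTHON_PROBE_FAILED)
if(${MY_PYTHON_PROBE_FAILED})
  message(FATAL_ERROR "Could not query the Python include directory.")
endif()
```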
@@ -32,7 +32,7 @@ FIND_PACKAGE(PythonLibs)
if(NOT NUMPY_INCLUDE_DIR)
set(NUMPY_NOT_FOUND false)
exec_program("${PYTHON_EXECUTABLE}"
ARGS "-c 'import numpy; print numpy.get_include()'"
ARGS "-c \"import numpy; print(numpy.get_include())\""
OUTPUT_VARIABLE NUMPY_INCLUDE_DIR
RETURN_VALUE NUMPY_NOT_FOUND)
if(${NUMPY_NOT_FOUND})
@@ -50,7 +50,6 @@ find_package(ZLIB REQUIRED)
########################################################
# TODO(mrry): Configure this to build in a directory other than tf_python/
# TODO(mrry): Assemble the Python files into a PIP package.
# tf_python_srcs contains all static .py files
file(GLOB_RECURSE tf_python_srcs RELATIVE ${tensorflow_source_dir}
@@ -172,21 +171,6 @@ add_library(tf_python_op_gen_main OBJECT ${tf_python_op_gen_main_srcs})
add_dependencies(tf_python_op_gen_main tf_core_framework)
target_include_directories(tf_python_op_gen_main PRIVATE
${tensorflow_source_dir}
${eigen_INCLUDE_DIRS}
)
target_compile_options(tf_python_op_gen_main PRIVATE
-fno-exceptions
-DEIGEN_AVOID_STL_ARRAY
)
# C++11
target_compile_features(tf_python_op_gen_main PRIVATE
cxx_rvalue_references
)
# create directory for ops generated files
set(python_ops_target_dir ${CMAKE_CURRENT_BINARY_DIR}/tf_python/tensorflow/python/ops)
@@ -216,37 +200,13 @@ function(GENERATE_PYTHON_OP_LIB tf_python_op_lib_name)
$<TARGET_OBJECTS:tf_${tf_python_op_lib_name}>
$<TARGET_OBJECTS:tf_core_lib>
$<TARGET_OBJECTS:tf_core_framework>
${GENERATE_PYTHON_OP_LIB_ADDITIONAL_LIBRARIES}
)
target_include_directories(${tf_python_op_lib_name}_gen_python PRIVATE
${tensorflow_source_dir}
${eigen_INCLUDE_DIRS}
${GENERATE_PYTHON_OP_LIB_ADDITIONAL_LIBRARIES}
)
target_link_libraries(${tf_python_op_lib_name}_gen_python PRIVATE
${CMAKE_THREAD_LIBS_INIT}
${PROTOBUF_LIBRARIES}
tf_protos_cc
${gif_STATIC_LIBRARIES}
${jpeg_STATIC_LIBRARIES}
${png_STATIC_LIBRARIES}
${ZLIB_LIBRARIES}
${jsoncpp_STATIC_LIBRARIES}
${CMAKE_DL_LIBS}
${tensorflow_EXTERNAL_LIBRARIES}
)
target_compile_options(${tf_python_op_lib_name}_gen_python PRIVATE
-fno-exceptions
-DEIGEN_AVOID_STL_ARRAY
-lm
)
# C++11
target_compile_features(${tf_python_op_lib_name}_gen_python PRIVATE
cxx_rvalue_references
)
if(tensorflow_ENABLE_SSL_SUPPORT)
target_link_libraries(${tf_python_op_lib_name}_gen_python PRIVATE
${boringssl_STATIC_LIBRARIES})
endif()
# Use the generated C++ executable to create a Python file
# containing the wrappers.
add_custom_command(
@@ -275,6 +235,7 @@ GENERATE_PYTHON_OP_LIB("nn_ops")
GENERATE_PYTHON_OP_LIB("parsing_ops")
GENERATE_PYTHON_OP_LIB("random_ops")
GENERATE_PYTHON_OP_LIB("script_ops")
GENERATE_PYTHON_OP_LIB("sdca_ops")
GENERATE_PYTHON_OP_LIB("state_ops")
GENERATE_PYTHON_OP_LIB("sparse_ops")
GENERATE_PYTHON_OP_LIB("string_ops")
@@ -328,6 +289,8 @@ add_library(pywrap_tensorflow SHARED
"${tensorflow_source_dir}/tensorflow/python/lib/io/py_record_reader.cc"
"${tensorflow_source_dir}/tensorflow/python/lib/io/py_record_writer.h"
"${tensorflow_source_dir}/tensorflow/python/lib/io/py_record_writer.cc"
"${tensorflow_source_dir}/tensorflow/python/util/kernel_registry.h"
"${tensorflow_source_dir}/tensorflow/python/util/kernel_registry.cc"
"${tensorflow_source_dir}/tensorflow/c/c_api.cc"
"${tensorflow_source_dir}/tensorflow/c/c_api.h"
"${tensorflow_source_dir}/tensorflow/c/checkpoint_reader.cc"
@@ -340,38 +303,18 @@ add_library(pywrap_tensorflow SHARED
$<TARGET_OBJECTS:tf_core_framework>
$<TARGET_OBJECTS:tf_core_ops>
$<TARGET_OBJECTS:tf_core_direct_session>
$<TARGET_OBJECTS:tf_core_distributed_runtime>
$<$<BOOL:${tensorflow_ENABLE_GRPC_SUPPORT}>:$<TARGET_OBJECTS:tf_core_distributed_runtime>>
$<TARGET_OBJECTS:tf_core_kernels>
)
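The `$<$<BOOL:...>:...>` line added in this hunk is a nested generator expression: the inner `$<BOOL:var>` collapses the variable to `1` or `0`, and the outer conditional then expands to the object files of `tf_core_distributed_runtime` only when gRPC support is enabled, and to nothing otherwise. A minimal sketch of the idiom with hypothetical option and target names:

```cmake
option(MYPROJ_ENABLE_EXTRA "Build the optional extra component" OFF)

add_library(extra_objs OBJECT extra.cc)

add_library(mylib SHARED
  core.cc
  # Expands to extra_objs' object files when the option is ON,
  # and to nothing at all when it is OFF.
  $<$<BOOL:${MYPROJ_ENABLE_EXTRA}>:$<TARGET_OBJECTS:extra_objs>>
)
```

This keeps the decision inside the target definition rather than wrapping the whole `add_library` call in an `if()` block.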
target_link_libraries(pywrap_tensorflow
${CMAKE_THREAD_LIBS_INIT}
tf_protos_cc
${GRPC_LIBRARIES}
${PROTOBUF_LIBRARY}
${farmhash_STATIC_LIBRARIES}
${gif_STATIC_LIBRARIES}
${jpeg_STATIC_LIBRARIES}
${jsoncpp_STATIC_LIBRARIES}
${png_STATIC_LIBRARIES}
${ZLIB_LIBRARIES}
${CMAKE_DL_LIBS}
)
target_include_directories(pywrap_tensorflow PUBLIC
${tensorflow_source_dir}
${CMAKE_CURRENT_BINARY_DIR}
${eigen_INCLUDE_DIRS}
${PYTHON_INCLUDE_DIR}
${NUMPY_INCLUDE_DIR}
)
# C++11
target_compile_features(pywrap_tensorflow PRIVATE
cxx_rvalue_references
target_link_libraries(pywrap_tensorflow
${tensorflow_EXTERNAL_LIBRARIES}
tf_protos_cc
${PYTHON_LIBRARIES}
)
if(tensorflow_ENABLE_SSL_SUPPORT)
target_link_libraries(pywrap_tensorflow ${boringssl_STATIC_LIBRARIES})
endif()
############################################################
# Build a PIP package containing the TensorFlow runtime.
@@ -385,9 +328,15 @@ add_dependencies(tf_python_build_pip_package
add_custom_command(TARGET tf_python_build_pip_package POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy ${tensorflow_source_dir}/tensorflow/contrib/cmake/setup.py
${CMAKE_CURRENT_BINARY_DIR}/tf_python/)
add_custom_command(TARGET tf_python_build_pip_package POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_BINARY_DIR}/libpywrap_tensorflow.so
${CMAKE_CURRENT_BINARY_DIR}/tf_python/tensorflow/python/_pywrap_tensorflow.so)
if(WIN32)
add_custom_command(TARGET tf_python_build_pip_package POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_BINARY_DIR}/${CMAKE_BUILD_TYPE}/pywrap_tensorflow.dll
${CMAKE_CURRENT_BINARY_DIR}/tf_python/tensorflow/python/_pywrap_tensorflow.pyd)
else()
add_custom_command(TARGET tf_python_build_pip_package POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_BINARY_DIR}/libpywrap_tensorflow.so
${CMAKE_CURRENT_BINARY_DIR}/tf_python/tensorflow/python/_pywrap_tensorflow.so)
endif()
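The branch added above accounts for two platform differences at once: CPython loads extension modules as `.pyd` on Windows but `.so` elsewhere, and multi-configuration generators such as Visual Studio place build outputs under a per-configuration subdirectory (`${CMAKE_BUILD_TYPE}`). A sketch of the same copy step with purely illustrative target and module names (`my_pkg`, `mymodule`):

```cmake
if(WIN32)
  # Windows: output lives under the configuration directory and must be
  # renamed to .pyd for the Python import machinery to load it.
  add_custom_command(TARGET my_pkg POST_BUILD
    COMMAND ${CMAKE_COMMAND} -E copy
      "${CMAKE_CURRENT_BINARY_DIR}/${CMAKE_BUILD_TYPE}/mymodule.dll"
      "${CMAKE_CURRENT_BINARY_DIR}/my_pkg/_mymodule.pyd")
else()
  # Unix: a shared object renamed with the leading-underscore convention.
  add_custom_command(TARGET my_pkg POST_BUILD
    COMMAND ${CMAKE_COMMAND} -E copy
      "${CMAKE_CURRENT_BINARY_DIR}/libmymodule.so"
      "${CMAKE_CURRENT_BINARY_DIR}/my_pkg/_mymodule.so")
endif()
```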
add_custom_command(TARGET tf_python_build_pip_package POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy ${tensorflow_source_dir}/tensorflow/tools/pip_package/README
${CMAKE_CURRENT_BINARY_DIR}/tf_python/)


@@ -56,10 +56,6 @@ file(GLOB tf_stream_executor_srcs
add_library(tf_stream_executor OBJECT ${tf_stream_executor_srcs})
target_include_directories(tf_stream_executor PRIVATE
${tensorflow_source_dir}
${eigen_INCLUDE_DIRS}
)
add_dependencies(tf_stream_executor
tf_core_lib
)
@@ -69,14 +65,3 @@ add_dependencies(tf_stream_executor
# tf_protos_cc
# tf_core_lib
#)
target_compile_options(tf_stream_executor PRIVATE
-fno-exceptions
-DEIGEN_AVOID_STL_ARRAY
)
# C++11
target_compile_features(tf_stream_executor PRIVATE
cxx_rvalue_references
)


@@ -13,37 +13,9 @@ add_executable(${proto_text}
$<TARGET_OBJECTS:tf_core_lib>
)
target_include_directories(${proto_text} PUBLIC
${tensorflow_source_dir}
)
# TODO(mrry): Cut down the dependencies of this tool.
target_link_libraries(${proto_text} PUBLIC
${CMAKE_THREAD_LIBS_INIT}
${PROTOBUF_LIBRARIES}
${gif_STATIC_LIBRARIES}
${jpeg_STATIC_LIBRARIES}
${png_STATIC_LIBRARIES}
${ZLIB_LIBRARIES}
${jsoncpp_STATIC_LIBRARIES}
${CMAKE_DL_LIBS}
)
if(tensorflow_ENABLE_SSL_SUPPORT)
target_link_libraries(${proto_text} PUBLIC ${boringssl_STATIC_LIBRARIES})
endif()
target_link_libraries(${proto_text} PUBLIC ${tensorflow_EXTERNAL_LIBRARIES})
add_dependencies(${proto_text}
tf_core_lib
protobuf
)
target_compile_options(${proto_text} PRIVATE
-fno-exceptions
-DEIGEN_AVOID_STL_ARRAY
)
# C++11
target_compile_features(${proto_text} PRIVATE
cxx_rvalue_references
grpc
)


@@ -1,18 +1,3 @@
#cc_binary(
# name = "tutorials_example_trainer",
# srcs = ["tutorials/example_trainer.cc"],
# copts = tf_copts(),
# linkopts = [
# "-lpthread",
# "-lm",
# ],
# deps = [
# ":cc_ops",
# "//tensorflow/core:kernels",
# "//tensorflow/core:tensorflow",
# ],
#)
set(tf_tutorials_example_trainer_srcs
"${tensorflow_source_dir}/tensorflow/cc/tutorials/example_trainer.cc"
)
@@ -29,31 +14,7 @@ add_executable(tf_tutorials_example_trainer
$<TARGET_OBJECTS:tf_core_direct_session>
)
target_include_directories(tf_tutorials_example_trainer PUBLIC
${tensorflow_source_dir}
${eigen_INCLUDE_DIRS}
)
target_link_libraries(tf_tutorials_example_trainer PUBLIC
${CMAKE_THREAD_LIBS_INIT}
${PROTOBUF_STATIC_LIBRARIES}
tf_protos_cc
${boringssl_STATIC_LIBRARIES}
${farmhash_STATIC_LIBRARIES}
${gif_STATIC_LIBRARIES}
${jpeg_STATIC_LIBRARIES}
${jsoncpp_STATIC_LIBRARIES}
${png_STATIC_LIBRARIES}
${ZLIB_LIBRARIES}
${CMAKE_DL_LIBS}
)
target_compile_options(tf_tutorials_example_trainer PRIVATE
-fno-exceptions
-DEIGEN_AVOID_STL_ARRAY
)
# C++11
target_compile_features(tf_tutorials_example_trainer PRIVATE
cxx_rvalue_references
${tensorflow_EXTERNAL_LIBRARIES}
)


@@ -1409,7 +1409,7 @@ class WeightedSumTest(tf.test.TestCase):
self.assertAllClose(output.eval(), [[1.6]])
def testMultivalentCrossUsageInPredictionsWithPartition(self):
# bucket size has to be big enough to allwo sharding.
# bucket size has to be big enough to allow sharding.
language = tf.contrib.layers.sparse_column_with_hash_bucket(
"language", hash_bucket_size=64 << 19)
country = tf.contrib.layers.sparse_column_with_hash_bucket(


@@ -143,7 +143,7 @@ def batch_norm(inputs,
updates = tf.group(*update_ops)
total_loss = control_flow_ops.with_dependencies([updates], total_loss)
One can set update_collections=None to force the updates in place, but that
One can set updates_collections=None to force the updates in place, but that
can have speed penalty, specially in distributed settings.
Args:


@@ -491,7 +491,7 @@ class BaseEstimator(
string key to `Tensor` and targets is a `Tensor` that's currently not
used (and so can be `None`).
input_feature_key: Only used if `use_deprecated_input_fn` is false. String
key into the features dict returned by `input_fn` that corresponds to
key into the features dict returned by `input_fn` that corresponds toa
the raw `Example` strings `Tensor` that the exported model will take as
input. Can only be `None` if you're using a custom `signature_fn` that
does not use the first arg (examples).


@@ -665,7 +665,7 @@ class LinearClassifierTest(tf.test.TestCase):
classifier = tf.contrib.learn.LinearClassifier(
feature_columns=[age, language])
# Evaluate on trained mdoel
# Evaluate on trained model
classifier.fit(input_fn=input_fn, steps=100)
classifier.evaluate(input_fn=input_fn, steps=1)


@@ -507,6 +507,8 @@ class StreamingDataFeeder(DataFeeder):
inp[i, :] = six.next(self._x)
except StopIteration:
self.stopped = True
if i == 0:
raise
inp = inp[:i, :]
if self._y is not None:
out = out[:i]


@@ -84,7 +84,7 @@ class BaseTest(tf.test.TestCase):
classifier.fit(iris.data, iris.target, max_steps=100)
score = accuracy_score(iris.target, classifier.predict(iris.data))
self.assertGreater(score, 0.5, "Failed with score = {0}".format(score))
# TODO(ipolosukhin): Check that summaries are correclty written.
# TODO(ipolosukhin): Check that summaries are correctly written.
def testIrisContinueTraining(self):
iris = datasets.load_iris()


@@ -30,7 +30,7 @@ def _get_input_fn(x, y, batch_size=None):
# We use a null optimizer since we can't get deterministic results out of
# supervisor's mulitple threads.
# supervisor's multiple threads.
class _NullOptimizer(tf.train.Optimizer):
def __init__(self):


@@ -454,6 +454,7 @@ $(wildcard tensorflow/core/platform/google/*/*) \
$(wildcard tensorflow/core/platform/jpeg.*) \
$(wildcard tensorflow/core/platform/png.*) \
$(wildcard tensorflow/core/platform/stream_executor.*) \
$(wildcard tensorflow/core/platform/windows/*) \
$(wildcard tensorflow/core/user_ops/*.cu.cc) \
$(wildcard tensorflow/core/common_runtime/gpu/*) \
$(wildcard tensorflow/core/common_runtime/gpu_device_factory.*)


@@ -48,7 +48,7 @@ download_and_extract() {
local dir="${2:?${usage}}"
echo "downloading ${url}" >&2
mkdir -p "${dir}"
tar -C "${dir}" --strip-components=1 -xz < <(curl -Ls "${url}")
curl -Ls "${url}" | tar -C "${dir}" --strip-components=1 -xz
}
download_and_extract "${EIGEN_URL}" "${DOWNLOADS_DIR}/eigen"


@@ -46,7 +46,7 @@ def main(unused_args):
return -1
graph = graph_pb2.GraphDef()
with open(FLAGS.graph, "rb") as f:
with open(FLAGS.graph, "r") as f:
if FLAGS.input_binary:
graph.ParseFromString(f.read())
else:


@@ -213,7 +213,7 @@ def quantize_weight_rounded(input_node):
# Currently, the parameter FLAGS.bitdepth is used to compute the
# number of buckets as 1 << FLAGS.bitdepth, meaning the number of
# buckets can only be a power of 2.
# This could be fixed by intorducing a new parameter, num_buckets,
# This could be fixed by introducing a new parameter, num_buckets,
# which would allow for more flexibility in chosing the right model
# size/accuracy tradeoff. But I didn't want to add more parameters
# to this script than absolutely necessary.
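The comment above notes that the bucket count is pinned to `1 << FLAGS.bitdepth`, so it can only be a power of two. A minimal sketch of what power-of-2 bucketed weight rounding can look like (illustrative only; this does not reproduce `quantize_weight_rounded`'s exact arithmetic):

```python
import numpy as np

def quantize_weights_rounded(weights, bitdepth):
    """Round float weights to one of (1 << bitdepth) evenly spaced
    values spanning the tensor's [min, max] range."""
    num_buckets = 1 << bitdepth  # power of 2 by construction
    lo, hi = float(weights.min()), float(weights.max())
    if hi == lo:
        return weights.copy()
    step = (hi - lo) / (num_buckets - 1)
    # Snap each weight to the nearest bucket center.
    return lo + np.round((weights - lo) / step) * step
```

A `num_buckets` parameter, as the comment suggests, would simply replace the `1 << bitdepth` expression with an arbitrary count.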

View File

@ -136,46 +136,54 @@ class StackBidirectionalRNNTest(tf.test.TestCase):
# - Reset states, and iterate for 5 steps. Last state is state_5.
# - Reset the sets to state_3 and iterate for 2 more steps,
# last state will be state_5'.
# - Check that state_5 and state_5' are the same.
# (Check forward and backward).
# - Check output_5 and output_5' as well.
# - Check that the state_5 and state_5' (forward and backward) are the
# same for the first layer (it does not apply for the second layer since
# it has forward-backward dependencies).
with self.test_session(use_gpu=use_gpu, graph=tf.Graph()) as sess:
batch_size = 2
# Create states placeholders.
initial_states_fw = [tf.placeholder(tf.float32, shape=(batch_size, layer*2))
for layer in self.layers]
initial_states_bw = [tf.placeholder(tf.float32, shape=(batch_size, layer*2))
for layer in self.layers]
# Create the net
input_value, inputs, outputs, state_fw, state_bw, sequence_length = (
self._createStackBidirectionalRNN(use_gpu, True, True))
self._createStackBidirectionalRNN(use_gpu, True, True,
initial_states_fw, initial_states_bw))
tf.initialize_all_variables().run()
# Run 3 steps.
feed_dict = {inputs[0]: input_value, sequence_length: [3, 2]}
# Initialize to empty state.
for i, layer in enumerate(self.layers):
feed_dict[initial_states_fw[i]] = np.zeros((batch_size, layer*2),
dtype=np.float32)
feed_dict[initial_states_bw[i]] = np.zeros((batch_size, layer*2),
dtype=np.float32)
_, st_3_fw, st_3_bw = sess.run([outputs, state_fw, state_bw],
feed_dict={inputs[0]: input_value,
sequence_length: [3, 3]})
feed_dict=feed_dict)
# Reset the net and run 5 steps.
batch_size = 2
zero_state = [cell.zero_state(
batch_size, dtype=tf.float32).eval() for cell in self.cells_fw]
feed_dict = {inputs[0]: input_value, sequence_length: [5, 5]}
for i, _ in enumerate(self.layers):
feed_dict[state_fw[i]] = zero_state[i]
feed_dict[state_bw[i]] = zero_state[i]
out_5, st_5_fw, st_5_bw = sess.run([outputs, state_fw, state_bw],
feed_dict = {inputs[0]: input_value, sequence_length: [5, 3]}
for i, layer in enumerate(self.layers):
feed_dict[initial_states_fw[i]] = np.zeros((batch_size, layer*2),
dtype=np.float32)
feed_dict[initial_states_bw[i]] = np.zeros((batch_size, layer*2),
dtype=np.float32)
_, st_5_fw, st_5_bw = sess.run([outputs, state_fw, state_bw],
feed_dict=feed_dict)
# Reset the net to state_3 and run 2 more steps.
feed_dict = {inputs[0]: input_value, sequence_length: [2, 2]}
feed_dict = {inputs[0]: input_value, sequence_length: [2, 1]}
for i, _ in enumerate(self.layers):
feed_dict[state_fw[i]] = st_3_fw[i]
feed_dict[state_bw[i]] = st_3_bw[i]
out_5, st_5_fw, st_5_bw = sess.run([outputs, state_fw, state_bw],
feed_dict=feed_dict)
feed_dict[initial_states_fw[i]] = st_3_fw[i]
feed_dict[initial_states_bw[i]] = st_3_bw[i]
out_5p, st_5p_fw, st_5p_bw = sess.run([outputs, state_fw, state_bw],
feed_dict=feed_dict)
# Check that the 3+2 and 5 outputs are the same.
self.assertAllEqual(out_5p[-1][0], out_5[-1][0])
# Check that the 3+2 and 5 last states are the same.
for i, _ in enumerate(self.layers):
self.assertAllEqual(st_5_fw[i], st_5p_fw[i])
self.assertAllEqual(st_5_bw[i], st_5p_bw[i])
# Check that the 3+2 and 5 first layer states.
self.assertAllEqual(st_5_fw[0], st_5p_fw[0])
self.assertAllEqual(st_5_bw[0], st_5p_bw[0])
def testStackBidirectionalRNN(self):
self._testStackBidirectionalRNN(use_gpu=False, use_shape=False)
@ -288,54 +296,65 @@ class StackBidirectionalRNNTest(tf.test.TestCase):
self.assertNotEqual(out[2][1][1], out[0][1][4])
self.assertNotEqual(out[2][1][2], out[0][1][5])
def _testStackBidirectionalDynamicRNNStates(self, use_gpu,
use_state_tuple):
def _testStackBidirectionalDynamicRNNStates(self, use_gpu):
# Check that the states are correctly initialized.
# - Create a net and iterate for 3 states. Keep the state (state_3).
# - Reset states, and iterate for 5 steps. Last state is state_5.
# - Reset the sets to state_3 and iterate for 2 more steps,
# last state will be state_5'.
# - Check that state_5 and state_5' are the same.
# (Check forward and backward).
# - Check output_5 and output_5' as well.
# - Check that the state_5 and state_5' (forward and backward) are the
# same for the first layer (it does not apply for the second layer since
# it has forward-backward dependencies).
with self.test_session(use_gpu=use_gpu, graph=tf.Graph()) as sess:
batch_size=2
# Create states placeholders.
initial_states_fw = [tf.placeholder(tf.float32, shape=(batch_size, layer*2))
for layer in self.layers]
initial_states_bw = [tf.placeholder(tf.float32, shape=(batch_size, layer*2))
for layer in self.layers]
# Create the net
input_value, inputs, outputs, state_fw, state_bw, sequence_length = (
self._createStackBidirectionalDynamicRNN(use_gpu, False,
use_state_tuple))
self._createStackBidirectionalDynamicRNN(
use_gpu,
use_shape=True,
use_state_tuple=False,
initial_states_fw=initial_states_fw,
initial_states_bw=initial_states_bw))
tf.initialize_all_variables().run()
# Run 3 steps.
feed_dict = {inputs[0]: input_value, sequence_length: [3, 2]}
# Initialize to empty state.
for i, layer in enumerate(self.layers):
feed_dict[initial_states_fw[i]] = np.zeros((batch_size, layer*2),
dtype=np.float32)
feed_dict[initial_states_bw[i]] = np.zeros((batch_size, layer*2),
dtype=np.float32)
_, st_3_fw, st_3_bw = sess.run([outputs, state_fw, state_bw],
feed_dict={inputs[0]: input_value,
sequence_length: [3, 3]})
feed_dict=feed_dict)
# Reset the net and run 5 steps.
batch_size = 2
zero_state = [cell.zero_state(
batch_size, dtype=tf.float32).eval() for cell in self.cells_fw]
feed_dict = {inputs[0]: input_value, sequence_length: [5, 5]}
for i, _ in enumerate(self.layers):
feed_dict[state_fw[i]] = zero_state[i]
feed_dict[state_bw[i]] = zero_state[i]
out_5, st_5_fw, st_5_bw = sess.run([outputs, state_fw, state_bw],
feed_dict = {inputs[0]: input_value, sequence_length: [5, 3]}
for i, layer in enumerate(self.layers):
feed_dict[initial_states_fw[i]] = np.zeros((batch_size, layer*2),
dtype=np.float32)
feed_dict[initial_states_bw[i]] = np.zeros((batch_size, layer*2),
dtype=np.float32)
_, st_5_fw, st_5_bw = sess.run([outputs, state_fw, state_bw],
feed_dict=feed_dict)
# Reset the net to state_3 and run 2 more steps.
feed_dict = {inputs[0]: input_value, sequence_length: [2, 2]}
feed_dict = {inputs[0]: input_value, sequence_length: [2, 1]}
for i, _ in enumerate(self.layers):
feed_dict[state_fw[i]] = st_3_fw[i]
feed_dict[state_bw[i]] = st_3_bw[i]
out_5, st_5_fw, st_5_bw = sess.run([outputs, state_fw, state_bw],
feed_dict=feed_dict)
feed_dict[initial_states_fw[i]] = st_3_fw[i]
feed_dict[initial_states_bw[i]] = st_3_bw[i]
out_5p, st_5p_fw, st_5p_bw = sess.run([outputs, state_fw, state_bw],
feed_dict=feed_dict)
# Check that the 3+2 and 5 outputs are the same.
self.assertAllEqual(out_5p[-1][0], out_5[-1][0])
# Check that the 3+2 and 5 last states are the same.
for i, _ in enumerate(self.layers):
self.assertAllEqual(st_5_fw[i], st_5p_fw[i])
self.assertAllEqual(st_5_bw[i], st_5p_bw[i])
# Check that the 3+2 and 5 first layer states.
self.assertAllEqual(st_5_fw[0], st_5p_fw[0])
self.assertAllEqual(st_5_bw[0], st_5p_bw[0])
def testBidirectionalRNN(self):
# Generate 2^3 option values
@ -346,13 +365,9 @@ class StackBidirectionalRNNTest(tf.test.TestCase):
use_gpu=option[0], use_shape=option[1], use_state_tuple=option[2])
# Check States.
self._testStackBidirectionalDynamicRNNStates(
use_gpu=False, use_state_tuple=False)
use_gpu=False)
self._testStackBidirectionalDynamicRNNStates(
use_gpu=True, use_state_tuple=False)
self._testStackBidirectionalDynamicRNNStates(
use_gpu=False, use_state_tuple=True)
self._testStackBidirectionalDynamicRNNStates(
use_gpu=True, use_state_tuple=False)
use_gpu=True)
def _testScope(self, factory, prefix="prefix", use_outer_scope=True):
# REMARKS: factory(scope) is a function accepting a scope

View File

@ -4,8 +4,8 @@
## Overview
This document describes the data formats and layouts for exporting [TensorFlow]
(https://www.tensorflow.org/) models for inference.
This document describes the data formats and layouts for exporting
[TensorFlow](https://www.tensorflow.org/) models for inference.
These exports have the following properties:
@ -50,8 +50,8 @@ binary.
### Exporting TF.learn models
TF.learn uses an [Exporter wrapper]
(https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/learn/python/learn/utils/export.py)
TF.learn uses an
[Exporter wrapper](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/learn/python/learn/utils/export.py)
that can be used for building signatures. Use the `BaseEstimator.export`
function to export your Estimator with a signature.

View File

@ -362,7 +362,7 @@ statistics for those ops without accidently missing or including extra ops.
tfprof exposes the following Python API to add op information and logging.
```python
def write_op_log(graph, log_dir, op_log=None)
tf.contrib.tfprof.tfprof_logger.write_op_log(graph, log_dir, op_log=None)
```
<b>--checkpoint_path:</b>

View File

@ -679,6 +679,7 @@ filegroup(
"platform/png.*",
"platform/gif.*",
"platform/stream_executor.*",
"platform/windows/**/*",
"user_ops/**/*.cu.cc",
"common_runtime/gpu/**/*",
"common_runtime/gpu_device_factory.*",

View File

@ -62,4 +62,15 @@ EIGEN_STRONG_INLINE bool operator==(const tensorflow::bfloat16 a,
} // namespace Eigen
#ifdef COMPILER_MSVC
namespace std {
template <>
struct hash<Eigen::half> {
std::size_t operator()(const Eigen::half& a) const {
return static_cast<std::size_t>(a.x);
}
};
} // namespace std
#endif // COMPILER_MSVC
#endif // TENSORFLOW_FRAMEWORK_NUMERIC_TYPES_H_
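The MSVC-only `std::hash` specialization above hashes the half's raw 16-bit pattern, which is consistent with equality defined on the same bits. The same idea expressed in Python, as a hypothetical stand-in class:

```python
class Half:
    """Toy stand-in for Eigen::half: stores only the raw 16-bit pattern.
    Hashing the bit pattern (mirroring static_cast<std::size_t>(a.x) in
    the specialization above) guarantees equal values hash equally,
    since equality is also defined on the bits."""

    def __init__(self, bits):
        self.x = bits & 0xFFFF  # raw 16-bit representation

    def __eq__(self, other):
        return isinstance(other, Half) and self.x == other.x

    def __hash__(self):
        return hash(self.x)
```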

View File

@ -43,8 +43,8 @@ namespace {
// going to be extremely large, so break it into chunks if it's bigger than
// a limit. Each chunk will be processed serially, so we can refill the
// buffer for the next chunk and reuse it, keeping maximum memory size down.
// In this case, we've picked 16 megabytes as a reasonable limit.
const size_t kMaxChunkSize = (16 * 1024 * 1024);
// In this case, we've picked 1 megabyte as a reasonable limit.
const size_t kMaxChunkSize = (1 * 1024 * 1024);
// Lookup method used when resizing.
enum SamplingMode {
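This kernel and the Im2Col convolution path cap their scratch buffer at the same 1 MB limit and walk the input serially, refilling and reusing the buffer per chunk. A language-agnostic sketch of that chunking pattern (names hypothetical):

```python
import numpy as np

MAX_CHUNK_BYTES = 1 * 1024 * 1024  # the same 1 MB cap as the kernels

def process_in_chunks(values, item_bytes, process_chunk):
    """Apply `process_chunk` to slices of `values` whose byte size stays
    under MAX_CHUNK_BYTES, so a real kernel's scratch buffer can be
    reused per chunk instead of sized to the whole input."""
    items_per_chunk = max(1, MAX_CHUNK_BYTES // item_bytes)
    out = []
    for start in range(0, len(values), items_per_chunk):
        out.append(process_chunk(values[start:start + items_per_chunk]))
    return np.concatenate(out)
```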

View File

@ -256,8 +256,8 @@ class Im2ColConvFunctor {
// going to be extremely large, so break it into chunks if it's bigger than
// a limit. Each chunk will be processed serially, so we can refill the
// buffer for the next chunk and reuse it, keeping maximum memory size down.
// In this case, we've picked 16 megabytes as a reasonable limit.
const size_t max_chunk_size = (16 * 1024 * 1024);
// In this case, we've picked 1 megabyte as a reasonable limit.
const size_t max_chunk_size = (1 * 1024 * 1024);
OP_REQUIRES(context, (filter_value_count * sizeof(T1)) <= max_chunk_size,
errors::InvalidArgument("Im2Col patch too large for buffer"));
const size_t patches_per_chunk =

View File

@ -31,6 +31,7 @@ limitations under the License.
#ifndef TENSORFLOW_LIB_GTL_INLINED_VECTOR_H_
#define TENSORFLOW_LIB_GTL_INLINED_VECTOR_H_
#include <cstddef>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>
@ -60,7 +61,7 @@ class InlinedVector {
typedef T& reference;
typedef const T& const_reference;
typedef size_t size_type;
typedef ssize_t difference_type;
typedef std::ptrdiff_t difference_type;
typedef pointer iterator;
typedef const_pointer const_iterator;

View File

@ -18,9 +18,9 @@ limitations under the License.
#define _USE_MATH_DEFINES
#include <cmath>
#include <math.h>
#undef _USE_MATH_DEFINES
#include <math.h>
#include <string.h>
#include <algorithm>

View File

@ -1071,8 +1071,7 @@ each component is divided by the weighted, squared sum of inputs within
output = input / (bias + alpha * sqr_sum) ** beta
For details, see [Krizhevsky et al., ImageNet classification with deep
convolutional neural networks (NIPS 2012)]
(http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks).
convolutional neural networks (NIPS 2012)](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks).
input: 4-D.
depth_radius: 0-D. Half-width of the 1-D normalization window.
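The normalization formula above can be checked with a few lines of NumPy (a sketch assuming the last axis is channels, as in the 4-D input described; this is not TensorFlow's kernel):

```python
import numpy as np

def lrn(x, depth_radius, bias=1.0, alpha=1.0, beta=0.5):
    """Local response normalization over the channel axis of a 4-D
    [batch, height, width, channels] array:
    output = input / (bias + alpha * sqr_sum) ** beta, where sqr_sum
    is the sum of squares in a window of 2*depth_radius+1 channels."""
    channels = x.shape[-1]
    out = np.empty_like(x)
    for c in range(channels):
        lo = max(0, c - depth_radius)
        hi = min(channels, c + depth_radius + 1)
        sqr_sum = np.sum(np.square(x[..., lo:hi]), axis=-1)
        out[..., c] = x[..., c] / (bias + alpha * sqr_sum) ** beta
    return out
```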
@ -1825,8 +1824,7 @@ Then, row_pooling_sequence should satisfy:
4. length(row_pooling_sequence) = output_row_length+1
For more details on fractional max pooling, see this paper:
[Benjamin Graham, Fractional Max-Pooling]
(http://arxiv.org/abs/1412.6071)
[Benjamin Graham, Fractional Max-Pooling](http://arxiv.org/abs/1412.6071)
value: 4-D with shape `[batch, height, width, channels]`.
pooling_ratio: Pooling ratio for each dimension of `value`, currently only

View File

@ -16,14 +16,18 @@ limitations under the License.
#include "tensorflow/core/platform/cloud/retrying_file_system.h"
#include <functional>
#include "tensorflow/core/lib/core/errors.h"
#include "tensorflow/core/lib/random/random.h"
#include "tensorflow/core/platform/env.h"
#include "tensorflow/core/platform/file_system.h"
namespace tensorflow {
namespace {
// In case of failure, every call will be retried kMaxAttempts-1 times.
constexpr int kMaxAttempts = 4;
// In case of failure, every call will be retried kMaxRetries times.
constexpr int kMaxRetries = 3;
// Maximum backoff time in microseconds.
constexpr int64 kMaximumBackoffMicroseconds = 32000000;
bool IsRetriable(Status status) {
switch (status.code()) {
@ -37,55 +41,76 @@ bool IsRetriable(Status status) {
}
}
Status CallWithRetries(const std::function<Status()>& f) {
int attempts = 0;
void WaitBeforeRetry(const int64 delay_micros) {
const int64 random_micros = random::New64() % 1000000;
Env::Default()->SleepForMicroseconds(std::min(delay_micros + random_micros,
kMaximumBackoffMicroseconds));
}
Status CallWithRetries(const std::function<Status()>& f,
const int64 initial_delay_microseconds) {
int retries = 0;
while (true) {
attempts++;
auto status = f();
if (!IsRetriable(status) || attempts >= kMaxAttempts) {
if (!IsRetriable(status) || retries >= kMaxRetries) {
return status;
}
LOG(ERROR) << "The operation resulted in an error and will be retried: "
<< status.ToString();
const int64 delay_micros = initial_delay_microseconds << retries;
LOG(ERROR) << "The operation resulted in an error: " << status.ToString()
<< " will be retried after " << delay_micros << " microseconds";
WaitBeforeRetry(delay_micros);
retries++;
}
}
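The rewritten `CallWithRetries` above doubles the delay on each retry (`initial_delay << retries`), adds up to one second of random jitter, caps the total wait at 32 seconds, and gives up after `kMaxRetries` retries. A Python sketch of the same policy, with the sleep function injectable so tests run instantly (names are illustrative):

```python
import random
import time

MAX_RETRIES = 3            # kMaxRetries in the C++ above
MAX_BACKOFF_SECONDS = 32.0  # kMaximumBackoffMicroseconds, in seconds

def call_with_retries(f, initial_delay=1.0,
                      is_retriable=lambda e: True, sleep=time.sleep):
    """Call f(), retrying retriable failures with exponential backoff
    plus up to 1 s of random jitter, capped at MAX_BACKOFF_SECONDS."""
    retries = 0
    while True:
        try:
            return f()
        except Exception as e:
            if not is_retriable(e) or retries >= MAX_RETRIES:
                raise
            # Delay doubles per retry: initial_delay << retries.
            delay = initial_delay * (1 << retries) + random.random()
            sleep(min(delay, MAX_BACKOFF_SECONDS))
            retries += 1
```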
class RetryingRandomAccessFile : public RandomAccessFile {
public:
RetryingRandomAccessFile(std::unique_ptr<RandomAccessFile> base_file)
: base_file_(std::move(base_file)) {}
RetryingRandomAccessFile(std::unique_ptr<RandomAccessFile> base_file,
int64 delay_microseconds = 1000000)
: base_file_(std::move(base_file)),
initial_delay_microseconds_(delay_microseconds) {}
Status Read(uint64 offset, size_t n, StringPiece* result,
char* scratch) const override {
return CallWithRetries(std::bind(&RandomAccessFile::Read, base_file_.get(),
offset, n, result, scratch));
offset, n, result, scratch),
initial_delay_microseconds_);
}
private:
std::unique_ptr<RandomAccessFile> base_file_;
const int64 initial_delay_microseconds_;
};
class RetryingWritableFile : public WritableFile {
public:
RetryingWritableFile(std::unique_ptr<WritableFile> base_file)
: base_file_(std::move(base_file)) {}
RetryingWritableFile(std::unique_ptr<WritableFile> base_file,
int64 delay_microseconds = 1000000)
: base_file_(std::move(base_file)),
initial_delay_microseconds_(delay_microseconds) {}
Status Append(const StringPiece& data) override {
return CallWithRetries(
std::bind(&WritableFile::Append, base_file_.get(), data));
std::bind(&WritableFile::Append, base_file_.get(), data),
initial_delay_microseconds_);
}
Status Close() override {
return CallWithRetries(std::bind(&WritableFile::Close, base_file_.get()));
return CallWithRetries(std::bind(&WritableFile::Close, base_file_.get()),
initial_delay_microseconds_);
}
Status Flush() override {
return CallWithRetries(std::bind(&WritableFile::Flush, base_file_.get()));
return CallWithRetries(std::bind(&WritableFile::Flush, base_file_.get()),
initial_delay_microseconds_);
}
Status Sync() override {
return CallWithRetries(std::bind(&WritableFile::Sync, base_file_.get()));
return CallWithRetries(std::bind(&WritableFile::Sync, base_file_.get()),
initial_delay_microseconds_);
}
private:
std::unique_ptr<WritableFile> base_file_;
const int64 initial_delay_microseconds_;
};
} // namespace
@ -95,7 +120,8 @@ Status RetryingFileSystem::NewRandomAccessFile(
std::unique_ptr<RandomAccessFile> base_file;
TF_RETURN_IF_ERROR(CallWithRetries(std::bind(&FileSystem::NewRandomAccessFile,
base_file_system_.get(),
filename, &base_file)));
filename, &base_file),
initial_delay_microseconds_));
result->reset(new RetryingRandomAccessFile(std::move(base_file)));
return Status::OK();
}
@ -105,7 +131,8 @@ Status RetryingFileSystem::NewWritableFile(
std::unique_ptr<WritableFile> base_file;
TF_RETURN_IF_ERROR(CallWithRetries(std::bind(&FileSystem::NewWritableFile,
base_file_system_.get(),
filename, &base_file)));
filename, &base_file),
initial_delay_microseconds_));
result->reset(new RetryingWritableFile(std::move(base_file)));
return Status::OK();
}
@ -115,7 +142,8 @@ Status RetryingFileSystem::NewAppendableFile(
std::unique_ptr<WritableFile> base_file;
TF_RETURN_IF_ERROR(CallWithRetries(std::bind(&FileSystem::NewAppendableFile,
base_file_system_.get(),
filename, &base_file)));
filename, &base_file),
initial_delay_microseconds_));
result->reset(new RetryingWritableFile(std::move(base_file)));
return Status::OK();
}
@ -123,7 +151,8 @@ Status RetryingFileSystem::NewAppendableFile(
Status RetryingFileSystem::NewReadOnlyMemoryRegionFromFile(
const string& filename, std::unique_ptr<ReadOnlyMemoryRegion>* result) {
return CallWithRetries(std::bind(&FileSystem::NewReadOnlyMemoryRegionFromFile,
base_file_system_.get(), filename, result));
base_file_system_.get(), filename, result),
initial_delay_microseconds_);
}
bool RetryingFileSystem::FileExists(const string& fname) {
@ -133,49 +162,58 @@ bool RetryingFileSystem::FileExists(const string& fname) {
Status RetryingFileSystem::Stat(const string& fname, FileStatistics* stat) {
return CallWithRetries(
std::bind(&FileSystem::Stat, base_file_system_.get(), fname, stat));
std::bind(&FileSystem::Stat, base_file_system_.get(), fname, stat),
initial_delay_microseconds_);
}
Status RetryingFileSystem::GetChildren(const string& dir,
std::vector<string>* result) {
return CallWithRetries(std::bind(&FileSystem::GetChildren,
base_file_system_.get(), dir, result));
base_file_system_.get(), dir, result),
initial_delay_microseconds_);
}
Status RetryingFileSystem::GetMatchingPaths(const string& pattern,
std::vector<string>* result) {
return CallWithRetries(std::bind(&FileSystem::GetMatchingPaths,
base_file_system_.get(), pattern, result));
base_file_system_.get(), pattern, result),
initial_delay_microseconds_);
}
Status RetryingFileSystem::DeleteFile(const string& fname) {
return CallWithRetries(
std::bind(&FileSystem::DeleteFile, base_file_system_.get(), fname));
std::bind(&FileSystem::DeleteFile, base_file_system_.get(), fname),
initial_delay_microseconds_);
}
Status RetryingFileSystem::CreateDir(const string& dirname) {
return CallWithRetries(
std::bind(&FileSystem::CreateDir, base_file_system_.get(), dirname));
std::bind(&FileSystem::CreateDir, base_file_system_.get(), dirname),
initial_delay_microseconds_);
}
Status RetryingFileSystem::DeleteDir(const string& dirname) {
return CallWithRetries(
std::bind(&FileSystem::DeleteDir, base_file_system_.get(), dirname));
std::bind(&FileSystem::DeleteDir, base_file_system_.get(), dirname),
initial_delay_microseconds_);
}
Status RetryingFileSystem::GetFileSize(const string& fname, uint64* file_size) {
return CallWithRetries(std::bind(&FileSystem::GetFileSize,
base_file_system_.get(), fname, file_size));
base_file_system_.get(), fname, file_size),
initial_delay_microseconds_);
}
Status RetryingFileSystem::RenameFile(const string& src, const string& target) {
return CallWithRetries(
std::bind(&FileSystem::RenameFile, base_file_system_.get(), src, target));
std::bind(&FileSystem::RenameFile, base_file_system_.get(), src, target),
initial_delay_microseconds_);
}
Status RetryingFileSystem::IsDirectory(const string& dirname) {
return CallWithRetries(
std::bind(&FileSystem::IsDirectory, base_file_system_.get(), dirname));
std::bind(&FileSystem::IsDirectory, base_file_system_.get(), dirname),
initial_delay_microseconds_);
}
} // namespace tensorflow

View File

@ -26,8 +26,10 @@ namespace tensorflow {
/// A wrapper to add retry logic to another file system.
class RetryingFileSystem : public FileSystem {
public:
RetryingFileSystem(std::unique_ptr<FileSystem> base_file_system)
: base_file_system_(std::move(base_file_system)) {}
RetryingFileSystem(std::unique_ptr<FileSystem> base_file_system,
int64 delay_microseconds = 1000000)
: base_file_system_(std::move(base_file_system)),
initial_delay_microseconds_(delay_microseconds) {}
Status NewRandomAccessFile(
const string& filename,
@ -66,6 +68,7 @@ class RetryingFileSystem : public FileSystem {
private:
std::unique_ptr<FileSystem> base_file_system_;
const int64 initial_delay_microseconds_;
TF_DISALLOW_COPY_AND_ASSIGN(RetryingFileSystem);
};

View File

@ -158,7 +158,7 @@ TEST(RetryingFileSystemTest, NewRandomAccessFile_ImmediateSuccess) {
std::unique_ptr<MockFileSystem> base_fs(
new MockFileSystem(expected_fs_calls));
base_fs->random_access_file_to_return = std::move(base_file);
RetryingFileSystem fs(std::move(base_fs));
RetryingFileSystem fs(std::move(base_fs), 0);
// Retrieve the wrapped random access file.
std::unique_ptr<RandomAccessFile> random_access_file;
@ -185,7 +185,7 @@ TEST(RetryingFileSystemTest, NewRandomAccessFile_SuccessWith3rdTry) {
std::unique_ptr<MockFileSystem> base_fs(
new MockFileSystem(expected_fs_calls));
base_fs->random_access_file_to_return = std::move(base_file);
RetryingFileSystem fs(std::move(base_fs));
RetryingFileSystem fs(std::move(base_fs), 0);
// Retrieve the wrapped random access file.
std::unique_ptr<RandomAccessFile> random_access_file;
@ -213,7 +213,7 @@ TEST(RetryingFileSystemTest, NewRandomAccessFile_AllRetriesFailed) {
std::unique_ptr<MockFileSystem> base_fs(
new MockFileSystem(expected_fs_calls));
base_fs->random_access_file_to_return = std::move(base_file);
RetryingFileSystem fs(std::move(base_fs));
RetryingFileSystem fs(std::move(base_fs), 0);
// Retrieve the wrapped random access file.
std::unique_ptr<RandomAccessFile> random_access_file;
@ -241,7 +241,7 @@ TEST(RetryingFileSystemTest, NewRandomAccessFile_NoRetriesForSomeErrors) {
std::unique_ptr<MockFileSystem> base_fs(
new MockFileSystem(expected_fs_calls));
base_fs->random_access_file_to_return = std::move(base_file);
RetryingFileSystem fs(std::move(base_fs));
RetryingFileSystem fs(std::move(base_fs), 0);
// Retrieve the wrapped random access file.
std::unique_ptr<RandomAccessFile> random_access_file;
@ -266,7 +266,7 @@ TEST(RetryingFileSystemTest, NewWritableFile_ImmediateSuccess) {
std::unique_ptr<MockFileSystem> base_fs(
new MockFileSystem(expected_fs_calls));
base_fs->writable_file_to_return = std::move(base_file);
RetryingFileSystem fs(std::move(base_fs));
RetryingFileSystem fs(std::move(base_fs), 0);
// Retrieve the wrapped writable file.
std::unique_ptr<WritableFile> writable_file;
@ -291,7 +291,7 @@ TEST(RetryingFileSystemTest, NewWritableFile_SuccessWith3rdTry) {
std::unique_ptr<MockFileSystem> base_fs(
new MockFileSystem(expected_fs_calls));
base_fs->writable_file_to_return = std::move(base_file);
RetryingFileSystem fs(std::move(base_fs));
RetryingFileSystem fs(std::move(base_fs), 0);
// Retrieve the wrapped writable file.
std::unique_ptr<WritableFile> writable_file;
@ -316,7 +316,7 @@ TEST(RetryingFileSystemTest, NewAppendableFile_SuccessWith3rdTry) {
std::unique_ptr<MockFileSystem> base_fs(
new MockFileSystem(expected_fs_calls));
base_fs->writable_file_to_return = std::move(base_file);
RetryingFileSystem fs(std::move(base_fs));
RetryingFileSystem fs(std::move(base_fs), 0);
// Retrieve the wrapped appendable file.
std::unique_ptr<WritableFile> writable_file;
@ -342,7 +342,7 @@ TEST(RetryingFileSystemTest, NewWritableFile_AllRetriesFailed) {
std::unique_ptr<MockFileSystem> base_fs(
new MockFileSystem(expected_fs_calls));
base_fs->writable_file_to_return = std::move(base_file);
RetryingFileSystem fs(std::move(base_fs));
RetryingFileSystem fs(std::move(base_fs), 0);
// Retrieve the wrapped writable file.
std::unique_ptr<WritableFile> writable_file;
@ -360,7 +360,7 @@ TEST(RetryingFileSystemTest,
std::make_tuple("NewReadOnlyMemoryRegionFromFile", Status::OK())});
std::unique_ptr<MockFileSystem> base_fs(
new MockFileSystem(expected_fs_calls));
RetryingFileSystem fs(std::move(base_fs));
RetryingFileSystem fs(std::move(base_fs), 0);
std::unique_ptr<ReadOnlyMemoryRegion> result;
TF_EXPECT_OK(fs.NewReadOnlyMemoryRegionFromFile("filename.txt", &result));
@ -378,7 +378,7 @@ TEST(RetryingFileSystemTest, NewReadOnlyMemoryRegionFromFile_AllRetriesFailed) {
errors::Unavailable("Last error"))});
std::unique_ptr<MockFileSystem> base_fs(
new MockFileSystem(expected_fs_calls));
RetryingFileSystem fs(std::move(base_fs));
RetryingFileSystem fs(std::move(base_fs), 0);
std::unique_ptr<ReadOnlyMemoryRegion> result;
EXPECT_EQ("Last error",
@ -393,7 +393,7 @@ TEST(RetryingFileSystemTest, GetChildren_SuccessWith2ndTry) {
std::make_tuple("GetChildren", Status::OK())});
std::unique_ptr<MockFileSystem> base_fs(
new MockFileSystem(expected_fs_calls));
RetryingFileSystem fs(std::move(base_fs));
RetryingFileSystem fs(std::move(base_fs), 0);
std::vector<string> result;
TF_EXPECT_OK(fs.GetChildren("gs://path", &result));
@ -409,7 +409,7 @@ TEST(RetryingFileSystemTest, GetChildren_AllRetriesFailed) {
std::make_tuple("GetChildren", errors::Unavailable("Last error"))});
std::unique_ptr<MockFileSystem> base_fs(
new MockFileSystem(expected_fs_calls));
RetryingFileSystem fs(std::move(base_fs));
RetryingFileSystem fs(std::move(base_fs), 0);
std::vector<string> result;
EXPECT_EQ("Last error", fs.GetChildren("gs://path", &result).error_message());
@ -422,7 +422,7 @@ TEST(RetryingFileSystemTest, GetMatchingPaths_SuccessWith2ndTry) {
std::make_tuple("GetMatchingPaths", Status::OK())});
std::unique_ptr<MockFileSystem> base_fs(
new MockFileSystem(expected_fs_calls));
RetryingFileSystem fs(std::move(base_fs));
RetryingFileSystem fs(std::move(base_fs), 0);
std::vector<string> result;
TF_EXPECT_OK(fs.GetMatchingPaths("gs://path/dir", &result));
@ -438,7 +438,7 @@ TEST(RetryingFileSystemTest, GetMatchingPaths_AllRetriesFailed) {
std::make_tuple("GetMatchingPaths", errors::Unavailable("Last error"))});
std::unique_ptr<MockFileSystem> base_fs(
new MockFileSystem(expected_fs_calls));
RetryingFileSystem fs(std::move(base_fs));
RetryingFileSystem fs(std::move(base_fs), 0);
std::vector<string> result;
EXPECT_EQ("Last error",
@ -451,7 +451,7 @@ TEST(RetryingFileSystemTest, DeleteFile_SuccessWith2ndTry) {
std::make_tuple("DeleteFile", Status::OK())});
std::unique_ptr<MockFileSystem> base_fs(
new MockFileSystem(expected_fs_calls));
RetryingFileSystem fs(std::move(base_fs));
RetryingFileSystem fs(std::move(base_fs), 0);
std::vector<string> result;
TF_EXPECT_OK(fs.DeleteFile("gs://path/file.txt"));
@ -466,7 +466,7 @@ TEST(RetryingFileSystemTest, DeleteFile_AllRetriesFailed) {
std::make_tuple("DeleteFile", errors::Unavailable("Last error"))});
std::unique_ptr<MockFileSystem> base_fs(
new MockFileSystem(expected_fs_calls));
RetryingFileSystem fs(std::move(base_fs));
RetryingFileSystem fs(std::move(base_fs), 0);
std::vector<string> result;
EXPECT_EQ("Last error", fs.DeleteFile("gs://path/file.txt").error_message());
@ -478,7 +478,7 @@ TEST(RetryingFileSystemTest, CreateDir_SuccessWith2ndTry) {
std::make_tuple("CreateDir", Status::OK())});
std::unique_ptr<MockFileSystem> base_fs(
new MockFileSystem(expected_fs_calls));
RetryingFileSystem fs(std::move(base_fs));
RetryingFileSystem fs(std::move(base_fs), 0);
std::vector<string> result;
TF_EXPECT_OK(fs.CreateDir("gs://path/newdir"));
@ -493,7 +493,7 @@ TEST(RetryingFileSystemTest, CreateDir_AllRetriesFailed) {
std::make_tuple("CreateDir", errors::Unavailable("Last error"))});
std::unique_ptr<MockFileSystem> base_fs(
new MockFileSystem(expected_fs_calls));
RetryingFileSystem fs(std::move(base_fs));
RetryingFileSystem fs(std::move(base_fs), 0);
std::vector<string> result;
EXPECT_EQ("Last error", fs.CreateDir("gs://path/newdir").error_message());
@ -505,7 +505,7 @@ TEST(RetryingFileSystemTest, DeleteDir_SuccessWith2ndTry) {
std::make_tuple("DeleteDir", Status::OK())});
std::unique_ptr<MockFileSystem> base_fs(
new MockFileSystem(expected_fs_calls));
RetryingFileSystem fs(std::move(base_fs));
RetryingFileSystem fs(std::move(base_fs), 0);
std::vector<string> result;
TF_EXPECT_OK(fs.DeleteDir("gs://path/dir"));
@ -520,7 +520,7 @@ TEST(RetryingFileSystemTest, DeleteDir_AllRetriesFailed) {
std::make_tuple("DeleteDir", errors::Unavailable("Last error"))});
std::unique_ptr<MockFileSystem> base_fs(
new MockFileSystem(expected_fs_calls));
RetryingFileSystem fs(std::move(base_fs));
RetryingFileSystem fs(std::move(base_fs), 0);
std::vector<string> result;
EXPECT_EQ("Last error", fs.DeleteDir("gs://path/dir").error_message());
@ -533,7 +533,7 @@ TEST(RetryingFileSystemTest, GetFileSize_SuccessWith2ndTry) {
std::make_tuple("GetFileSize", Status::OK())});
std::unique_ptr<MockFileSystem> base_fs(
new MockFileSystem(expected_fs_calls));
RetryingFileSystem fs(std::move(base_fs));
RetryingFileSystem fs(std::move(base_fs), 0);
uint64 size;
TF_EXPECT_OK(fs.GetFileSize("gs://path/file.txt", &size));
@ -549,7 +549,7 @@ TEST(RetryingFileSystemTest, GetFileSize_AllRetriesFailed) {
std::make_tuple("GetFileSize", errors::Unavailable("Last error"))});
std::unique_ptr<MockFileSystem> base_fs(
new MockFileSystem(expected_fs_calls));
RetryingFileSystem fs(std::move(base_fs));
RetryingFileSystem fs(std::move(base_fs), 0);
uint64 size;
EXPECT_EQ("Last error",
@ -562,7 +562,7 @@ TEST(RetryingFileSystemTest, RenameFile_SuccessWith2ndTry) {
std::make_tuple("RenameFile", Status::OK())});
std::unique_ptr<MockFileSystem> base_fs(
new MockFileSystem(expected_fs_calls));
RetryingFileSystem fs(std::move(base_fs));
RetryingFileSystem fs(std::move(base_fs), 0);
TF_EXPECT_OK(fs.RenameFile("old_name", "new_name"));
}
@ -577,7 +577,7 @@ TEST(RetryingFileSystemTest, RenameFile_AllRetriesFailed) {
});
std::unique_ptr<MockFileSystem> base_fs(
new MockFileSystem(expected_fs_calls));
RetryingFileSystem fs(std::move(base_fs));
RetryingFileSystem fs(std::move(base_fs), 0);
EXPECT_EQ("Last error",
fs.RenameFile("old_name", "new_name").error_message());
@ -589,7 +589,7 @@ TEST(RetryingFileSystemTest, Stat_SuccessWith2ndTry) {
std::make_tuple("Stat", Status::OK())});
std::unique_ptr<MockFileSystem> base_fs(
new MockFileSystem(expected_fs_calls));
RetryingFileSystem fs(std::move(base_fs));
RetryingFileSystem fs(std::move(base_fs), 0);
FileStatistics stat;
TF_EXPECT_OK(fs.Stat("file_name", &stat));
@ -604,7 +604,7 @@ TEST(RetryingFileSystemTest, Stat_AllRetriesFailed) {
});
std::unique_ptr<MockFileSystem> base_fs(
new MockFileSystem(expected_fs_calls));
RetryingFileSystem fs(std::move(base_fs));
RetryingFileSystem fs(std::move(base_fs), 0);
FileStatistics stat;
EXPECT_EQ("Last error", fs.Stat("file_name", &stat).error_message());
@ -617,7 +617,7 @@ TEST(RetryingFileSystemTest, IsDirectory_SuccessWith2ndTry) {
std::make_tuple("IsDirectory", Status::OK())});
std::unique_ptr<MockFileSystem> base_fs(
new MockFileSystem(expected_fs_calls));
RetryingFileSystem fs(std::move(base_fs));
RetryingFileSystem fs(std::move(base_fs), 0);
TF_EXPECT_OK(fs.IsDirectory("gs://path/dir"));
}
@ -632,7 +632,7 @@ TEST(RetryingFileSystemTest, IsDirectory_AllRetriesFailed) {
std::make_tuple("IsDirectory", errors::Unavailable("Last error"))});
std::unique_ptr<MockFileSystem> base_fs(
new MockFileSystem(expected_fs_calls));
RetryingFileSystem fs(std::move(base_fs));
RetryingFileSystem fs(std::move(base_fs), 0);
EXPECT_EQ("Last error", fs.IsDirectory("gs://path/dir").error_message());
}
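The `0` passed as the second constructor argument in each test above is the initial retry delay in microseconds, so retries fire immediately and the suite never sleeps. A rough sketch of that retry-with-backoff shape (hypothetical helper names, not the TensorFlow API; an empty string stands in for `Status::OK()`):

```cpp
#include <chrono>
#include <functional>
#include <string>
#include <thread>

// Hypothetical sketch: retry a failing call with exponential backoff
// starting at `initial_delay_us`. Passing 0, as the tests above do,
// makes every retry immediate.
std::string CallWithRetries(const std::function<std::string()>& call,
                            int max_retries, long long initial_delay_us) {
  long long delay_us = initial_delay_us;
  for (int attempt = 0;; ++attempt) {
    std::string err = call();  // empty string means success
    if (err.empty() || attempt >= max_retries) return err;
    std::this_thread::sleep_for(std::chrono::microseconds(delay_us));
    delay_us *= 2;  // exponential backoff between attempts
  }
}
```

With this shape, a call that fails once and then succeeds returns OK on the second attempt (the `*_SuccessWith2ndTry` pattern), while a call that always fails surfaces its last error (the `*_AllRetriesFailed` pattern).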

View File

@ -24,6 +24,9 @@ limitations under the License.
#include "tensorflow/core/platform/macros.h"
#include "tensorflow/core/platform/types.h"
// TODO(mrry): Prevent this Windows.h #define from leaking out of our headers.
#undef ERROR
namespace tensorflow {
const int INFO = 0; // base_logging::INFO;
const int WARNING = 1; // base_logging::WARNING;

View File

@ -22,7 +22,7 @@ limitations under the License.
#if defined(PLATFORM_GOOGLE)
#include "tensorflow/core/platform/google/build_config/dynamic_annotations.h"
#elif defined(PLATFORM_POSIX) || defined(PLATFORM_POSIX_ANDROID) || \
defined(PLATFORM_GOOGLE_ANDROID)
defined(PLATFORM_GOOGLE_ANDROID) || defined(PLATFORM_WINDOWS)
#include "tensorflow/core/platform/default/dynamic_annotations.h"
#else
#error Define the appropriate PLATFORM_<foo> macro for this platform

View File

@ -26,9 +26,14 @@ limitations under the License.
#include "tensorflow/core/lib/core/stringpiece.h"
#include "tensorflow/core/platform/file_statistics.h"
#include "tensorflow/core/platform/macros.h"
#include "tensorflow/core/platform/platform.h"
#include "tensorflow/core/platform/protobuf.h"
#include "tensorflow/core/platform/types.h"
#ifdef PLATFORM_WINDOWS
#undef DeleteFile
#endif
namespace tensorflow {
class RandomAccessFile;

View File

@ -20,7 +20,7 @@ limitations under the License.
#if defined(PLATFORM_GOOGLE)
#include "tensorflow/core/platform/google/build_config/gif.h"
#elif defined(PLATFORM_POSIX) && !defined(IS_MOBILE_PLATFORM)
#elif (defined(PLATFORM_POSIX) && !defined(IS_MOBILE_PLATFORM)) || defined(PLATFORM_WINDOWS)
#include <gif_lib.h>
#else
#error Define the appropriate PLATFORM_<foo> macro for this platform

View File

@ -20,7 +20,7 @@ limitations under the License.
#if defined(PLATFORM_GOOGLE)
#include "tensorflow/core/platform/google/build_config/jpeg.h"
#elif defined(PLATFORM_POSIX) && !defined(IS_MOBILE_PLATFORM)
#elif (defined(PLATFORM_POSIX) && !defined(IS_MOBILE_PLATFORM)) || defined(PLATFORM_WINDOWS)
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

View File

@ -30,7 +30,16 @@ limitations under the License.
__attribute__((__format__(__printf__, string_index, first_to_check)))
#define TF_SCANF_ATTRIBUTE(string_index, first_to_check) \
__attribute__((__format__(__scanf__, string_index, first_to_check)))
#elif defined(COMPILER_MSVC)
// Non-GCC equivalents
#define TF_ATTRIBUTE_NORETURN __declspec(noreturn)
#define TF_ATTRIBUTE_NOINLINE
#define TF_ATTRIBUTE_UNUSED
#define TF_ATTRIBUTE_COLD
#define TF_MUST_USE_RESULT
#define TF_PACKED
#define TF_PRINTF_ATTRIBUTE(string_index, first_to_check)
#define TF_SCANF_ATTRIBUTE(string_index, first_to_check)
#else
// Non-GCC equivalents
#define TF_ATTRIBUTE_NORETURN

View File

@ -27,7 +27,7 @@ enum ConditionResult { kCond_Timeout, kCond_MaybeNotified };
#if defined(PLATFORM_GOOGLE)
#include "tensorflow/core/platform/google/mutex.h"
#elif defined(PLATFORM_POSIX) || defined(PLATFORM_POSIX_ANDROID) || \
defined(PLATFORM_GOOGLE_ANDROID)
defined(PLATFORM_GOOGLE_ANDROID) || defined(PLATFORM_WINDOWS)
#include "tensorflow/core/platform/default/mutex.h"
#else
#error Define the appropriate PLATFORM_<foo> macro for this platform

View File

@ -22,7 +22,7 @@ limitations under the License.
#if defined(PLATFORM_GOOGLE)
#include "tensorflow/core/platform/google/notification.h"
#elif defined(PLATFORM_POSIX) || defined(PLATFORM_POSIX_ANDROID) || \
defined(PLATFORM_GOOGLE_ANDROID)
defined(PLATFORM_GOOGLE_ANDROID) || defined(PLATFORM_WINDOWS)
#include "tensorflow/core/platform/default/notification.h"
#else
#error Define the appropriate PLATFORM_<foo> macro for this platform

View File

@ -29,7 +29,6 @@ limitations under the License.
#elif defined(__APPLE__)
#define PLATFORM_POSIX
#include "TargetConditionals.h"
#if TARGET_IPHONE_SIMULATOR
#define IS_MOBILE_PLATFORM
@ -37,6 +36,9 @@ limitations under the License.
#define IS_MOBILE_PLATFORM
#endif
#elif defined(_WIN32)
#define PLATFORM_WINDOWS
#elif defined(__arm__)
#define PLATFORM_POSIX

View File

@ -20,7 +20,7 @@ limitations under the License.
#if defined(PLATFORM_GOOGLE)
#include "tensorflow/core/platform/google/build_config/png.h"
#elif defined(PLATFORM_POSIX) && !defined(IS_MOBILE_PLATFORM)
#elif (defined(PLATFORM_POSIX) && !defined(IS_MOBILE_PLATFORM)) || defined(PLATFORM_WINDOWS)
#include <png.h>
#else
#error Define the appropriate PLATFORM_<foo> macro for this platform

View File

@ -72,15 +72,21 @@ error::Code ErrnoToCode(int err_number) {
case EBUSY: // Device or resource busy
case ECHILD: // No child processes
case EISCONN: // Socket is connected
#if !defined(_WIN32)
case ENOTBLK: // Block device required
#endif
case ENOTCONN: // The socket is not connected
case EPIPE: // Broken pipe
#if !defined(_WIN32)
case ESHUTDOWN: // Cannot send after transport endpoint shutdown
#endif
case ETXTBSY: // Text file busy
code = error::FAILED_PRECONDITION;
break;
case ENOSPC: // No space left on device
#if !defined(_WIN32)
case EDQUOT: // Disk quota exceeded
#endif
case EMFILE: // Too many open files
case EMLINK: // Too many links
case ENFILE: // Too many open files in system
@ -88,7 +94,9 @@ error::Code ErrnoToCode(int err_number) {
case ENODATA: // No message is available on the STREAM read queue
case ENOMEM: // Not enough space
case ENOSR: // No STREAM resources
#if !defined(_WIN32)
case EUSERS: // Too many users
#endif
code = error::RESOURCE_EXHAUSTED;
break;
case EFBIG: // File too large
@ -99,9 +107,13 @@ error::Code ErrnoToCode(int err_number) {
case ENOSYS: // Function not implemented
case ENOTSUP: // Operation not supported
case EAFNOSUPPORT: // Address family not supported
#if !defined(_WIN32)
case EPFNOSUPPORT: // Protocol family not supported
#endif
case EPROTONOSUPPORT: // Protocol not supported
#if !defined(_WIN32)
case ESOCKTNOSUPPORT: // Socket type not supported
#endif
case EXDEV: // Improper link
code = error::UNIMPLEMENTED;
break;
@ -110,20 +122,24 @@ error::Code ErrnoToCode(int err_number) {
case ECONNABORTED: // Connection aborted
case ECONNRESET: // Connection reset
case EINTR: // Interrupted function call
#if !defined(_WIN32)
case EHOSTDOWN: // Host is down
#endif
case EHOSTUNREACH: // Host is unreachable
case ENETDOWN: // Network is down
case ENETRESET: // Connection aborted by network
case ENETUNREACH: // Network unreachable
case ENOLCK: // No locks available
case ENOLINK: // Link has been severed
#if !defined(__APPLE__)
#if !(defined(__APPLE__) || defined(_WIN32))
case ENONET: // Machine is not on the network
#endif
code = error::UNAVAILABLE;
break;
case EDEADLK: // Resource deadlock avoided
#if !defined(_WIN32)
case ESTALE: // Stale file handle
#endif
code = error::ABORTED;
break;
case ECANCELED: // Operation cancelled
@ -140,7 +156,9 @@ error::Code ErrnoToCode(int err_number) {
case ENOEXEC: // Exec format error
case ENOMSG: // No message of the desired type
case EPROTO: // Protocol error
#if !defined(_WIN32)
case EREMOTE: // Object is remote
#endif
code = error::UNKNOWN;
break;
default: {
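The guards added above follow one pattern: errno values that MSVC's headers do not define (`ENOTBLK`, `ESHUTDOWN`, `EDQUOT`, `EUSERS`, …) are fenced with `#if !defined(_WIN32)` so the `switch` still compiles on Windows. A minimal sketch of that pattern (the `Code` enum here is illustrative, not the real `error::Code`):

```cpp
#include <cerrno>

enum Code { kOk, kFailedPrecondition, kResourceExhausted, kUnknown };

// Sketch of the conditional-case pattern: POSIX-only errno values are
// compiled out on Windows, and everything unrecognized maps to UNKNOWN.
Code ErrnoToCodeSketch(int err) {
  switch (err) {
    case EBUSY:    // Device or resource busy
    case EISCONN:  // Socket is connected
#if !defined(_WIN32)
    case ENOTBLK:  // Block device required: not defined by MSVC
#endif
      return kFailedPrecondition;
    case ENOSPC:  // No space left on device
    case EMFILE:  // Too many open files
      return kResourceExhausted;
    default:
      return kUnknown;
  }
}
```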

View File

@ -21,7 +21,7 @@ limitations under the License.
#if defined(PLATFORM_GOOGLE)
#include "tensorflow/core/platform/google/build_config/thread_annotations.h"
#elif defined(PLATFORM_POSIX) || defined(PLATFORM_POSIX_ANDROID) || \
defined(PLATFORM_GOOGLE_ANDROID)
defined(PLATFORM_GOOGLE_ANDROID) || defined(PLATFORM_WINDOWS)
#include "tensorflow/core/platform/default/thread_annotations.h"
#else
#error Define the appropriate PLATFORM_<foo> macro for this platform

View File

@ -23,7 +23,7 @@ limitations under the License.
#if defined(PLATFORM_GOOGLE) || defined(GOOGLE_INTEGRAL_TYPES)
#include "tensorflow/core/platform/google/integral_types.h"
#elif defined(PLATFORM_POSIX) || defined(PLATFORM_POSIX_ANDROID) || \
defined(PLATFORM_GOOGLE_ANDROID)
defined(PLATFORM_GOOGLE_ANDROID) || defined(PLATFORM_WINDOWS)
#include "tensorflow/core/platform/default/integral_types.h"
#else
#error Define the appropriate PLATFORM_<foo> macro for this platform

View File

@ -0,0 +1,113 @@
/* Copyright 2015 The TensorFlow Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
#include "tensorflow/core/platform/env.h"
#include <Shlwapi.h>
#include <Windows.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#undef LoadLibrary
#undef ERROR
#include <thread>
#include <vector>
#include "tensorflow/core/lib/core/error_codes.pb.h"
#include "tensorflow/core/platform/load_library.h"
#include "tensorflow/core/platform/logging.h"
#include "tensorflow/core/platform/windows/windows_file_system.h"
namespace tensorflow {
namespace {
class StdThread : public Thread {
public:
// name and thread_options are both ignored.
StdThread(const ThreadOptions& thread_options, const string& name,
std::function<void()> fn)
: thread_(fn) {}
~StdThread() { thread_.join(); }
private:
std::thread thread_;
};
class WindowsEnv : public Env {
public:
WindowsEnv() {}
~WindowsEnv() override {
LOG(FATAL) << "Env::Default() must not be destroyed";
}
bool MatchPath(const string& path, const string& pattern) override {
return PathMatchSpec(path.c_str(), pattern.c_str()) == S_OK;
}
uint64 NowMicros() override {
FILETIME temp;
GetSystemTimeAsFileTime(&temp);
uint64 now_ticks =
(uint64)temp.dwLowDateTime + ((uint64)(temp.dwHighDateTime) << 32LL);
return now_ticks / 10LL;
}
void SleepForMicroseconds(int64 micros) override { Sleep(micros / 1000); }
Thread* StartThread(const ThreadOptions& thread_options, const string& name,
std::function<void()> fn) override {
return new StdThread(thread_options, name, fn);
}
void SchedClosure(std::function<void()> closure) override {
// TODO(b/27290852): Spawning a new thread here is wasteful, but
// needed to deal with the fact that many `closure` functions are
// blocking in the current codebase.
std::thread closure_thread(closure);
closure_thread.detach();
}
void SchedClosureAfter(int64 micros, std::function<void()> closure) override {
// TODO(b/27290852): Consuming a thread here is wasteful, but this
// code is (currently) only used in the case where a step fails
// (AbortStep). This could be replaced by a timer thread
SchedClosure([this, micros, closure]() {
SleepForMicroseconds(micros);
closure();
});
}
Status LoadLibrary(const char* library_filename, void** handle) override {
return errors::Unimplemented("WindowsEnv::LoadLibrary");
}
Status GetSymbolFromLibrary(void* handle, const char* symbol_name,
void** symbol) override {
return errors::Unimplemented("WindowsEnv::GetSymbolFromLibrary");
}
};
} // namespace
REGISTER_FILE_SYSTEM("", WindowsFileSystem);
Env* Env::Default() {
static Env* default_env = new WindowsEnv;
return default_env;
}
} // namespace tensorflow
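The `NowMicros` implementation above relies on `FILETIME` counting 100-nanosecond ticks: assembling the two 32-bit halves and dividing by 10 yields microseconds (since the Windows epoch, 1601-01-01, not the Unix epoch). The arithmetic in isolation:

```cpp
#include <cstdint>

// Sketch of the FILETIME arithmetic in WindowsEnv::NowMicros: combine
// the low/high 32-bit words into a 64-bit tick count, then convert
// 100 ns ticks to microseconds by dividing by 10.
uint64_t FiletimeToMicros(uint32_t low_date_time, uint32_t high_date_time) {
  uint64_t ticks =
      (uint64_t)low_date_time | ((uint64_t)high_date_time << 32);
  return ticks / 10;  // 100-nanosecond ticks -> microseconds
}
```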

View File

@ -0,0 +1,131 @@
/* Copyright 2016 The TensorFlow Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
#include "tensorflow/core/platform/net.h"
#include <cerrno>
#include <cstdlib>
#include <unordered_set>
#include <sys/types.h>
#include <winsock.h>
#include "tensorflow/core/lib/strings/strcat.h"
#include "tensorflow/core/platform/logging.h"
#undef ERROR
namespace tensorflow {
namespace internal {
namespace {
bool IsPortAvailable(int* port, bool is_tcp) {
const int protocol = is_tcp ? IPPROTO_TCP : 0;
const int fd = socket(AF_INET, is_tcp ? SOCK_STREAM : SOCK_DGRAM, protocol);
struct sockaddr_in addr;
int addr_len = static_cast<int>(sizeof(addr));
int actual_port;
CHECK_GE(*port, 0);
CHECK_LE(*port, 65535);
if (fd < 0) {
LOG(ERROR) << "socket() failed: " << strerror(errno);
return false;
}
// SO_REUSEADDR lets us start up a server immediately after it exits.

int one = 1;
if (setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, (const char*)&one, sizeof(one)) <
0) {
LOG(ERROR) << "setsockopt() failed: " << strerror(errno);
closesocket(fd);
return false;
}
// Try binding to port.
addr.sin_family = AF_INET;
addr.sin_addr.s_addr = INADDR_ANY;
addr.sin_port = htons((uint16_t)*port);
if (bind(fd, (struct sockaddr*)&addr, sizeof(addr)) < 0) {
LOG(WARNING) << "bind(port=" << *port << ") failed: " << strerror(errno);
closesocket(fd);
return false;
}
// Get the bound port number.
if (getsockname(fd, (struct sockaddr*)&addr, &addr_len) < 0) {
LOG(WARNING) << "getsockname() failed: " << strerror(errno);
closesocket(fd);
return false;
}
CHECK_LE(addr_len, sizeof(addr));
actual_port = ntohs(addr.sin_port);
CHECK_GT(actual_port, 0);
if (*port == 0) {
*port = actual_port;
} else {
CHECK_EQ(*port, actual_port);
}
closesocket(fd);
return true;
}
const int kNumRandomPortsToPick = 100;
const int kMaximumTrials = 1000;
} // namespace
int PickUnusedPortOrDie() {
static std::unordered_set<int> chosen_ports;
// Type of port to first pick in the next iteration.
bool is_tcp = true;
int trial = 0;
while (true) {
int port;
trial++;
CHECK_LE(trial, kMaximumTrials)
<< "Failed to pick an unused port for testing.";
if (trial == 1) {
port = GetCurrentProcessId() % (65536 - 30000) + 30000;
} else if (trial <= kNumRandomPortsToPick) {
port = rand() % (65536 - 30000) + 30000;
} else {
port = 0;
}
if (chosen_ports.find(port) != chosen_ports.end()) {
continue;
}
if (!IsPortAvailable(&port, is_tcp)) {
continue;
}
CHECK_GT(port, 0);
if (!IsPortAvailable(&port, !is_tcp)) {
is_tcp = !is_tcp;
continue;
}
chosen_ports.insert(port);
return port;
}
return 0;
}
} // namespace internal
} // namespace tensorflow
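`PickUnusedPortOrDie` above tries candidates in a fixed schedule: trial 1 derives a port from the process id, trials 2 through `kNumRandomPortsToPick` draw random ports in [30000, 65536), and later trials pass port 0 so the OS chooses. The schedule in isolation (`pid` and `rnd` stand in for `GetCurrentProcessId()` and `rand()`; a hypothetical helper, not the committed function):

```cpp
// Sketch of the candidate-port schedule in PickUnusedPortOrDie.
int CandidatePort(int trial, int pid, int rnd) {
  const int kNumRandomPortsToPick = 100;
  if (trial == 1) return pid % (65536 - 30000) + 30000;
  if (trial <= kNumRandomPortsToPick) return rnd % (65536 - 30000) + 30000;
  return 0;  // let the OS pick an ephemeral port
}
```

Each candidate is then checked for availability on both TCP and UDP before being returned, as in the loop above.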

View File

@ -0,0 +1,99 @@
/* Copyright 2016 The TensorFlow Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#ifdef SNAPPY
#include <snappy.h>
#endif
#include <WinSock2.h>
#include "tensorflow/core/platform/cpu_info.h"
#include "tensorflow/core/platform/demangle.h"
#include "tensorflow/core/platform/host_info.h"
#include "tensorflow/core/platform/init_main.h"
#include "tensorflow/core/platform/logging.h"
#include "tensorflow/core/platform/mem.h"
#include "tensorflow/core/platform/snappy.h"
#include "tensorflow/core/platform/types.h"
namespace tensorflow {
namespace port {
void InitMain(const char* usage, int* argc, char*** argv) {}
string Hostname() {
char hostname[1024];
gethostname(hostname, sizeof hostname);
hostname[sizeof hostname - 1] = 0;
return string(hostname);
}
int NumSchedulableCPUs() {
SYSTEM_INFO system_info;
GetSystemInfo(&system_info);
return system_info.dwNumberOfProcessors;
}
void* aligned_malloc(size_t size, int minimum_alignment) {
return _aligned_malloc(size, minimum_alignment);
}
void aligned_free(void* aligned_memory) { _aligned_free(aligned_memory); }
void MallocExtension_ReleaseToSystem(std::size_t num_bytes) {
// No-op.
}
std::size_t MallocExtension_GetAllocatedSize(const void* p) { return 0; }
void AdjustFilenameForLogging(string* filename) {
// Nothing to do
}
bool Snappy_Compress(const char* input, size_t length, string* output) {
#ifdef SNAPPY
output->resize(snappy::MaxCompressedLength(length));
size_t outlen;
snappy::RawCompress(input, length, &(*output)[0], &outlen);
output->resize(outlen);
return true;
#else
return false;
#endif
}
bool Snappy_GetUncompressedLength(const char* input, size_t length,
size_t* result) {
#ifdef SNAPPY
return snappy::GetUncompressedLength(input, length, result);
#else
return false;
#endif
}
bool Snappy_Uncompress(const char* input, size_t length, char* output) {
#ifdef SNAPPY
return snappy::RawUncompress(input, length, output);
#else
return false;
#endif
}
string Demangle(const char* mangled) { return mangled; }
} // namespace port
} // namespace tensorflow
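The three `Snappy_*` wrappers above share one fallback pattern: when the optional snappy dependency is not compiled in (`SNAPPY` undefined), each wrapper returns `false` instead of failing to link, and callers degrade to uncompressed I/O. A minimal sketch of that shape (hypothetical function name; `SNAPPY` is the same compile-time flag the diff tests):

```cpp
#include <string>

// Sketch of the compile-time fallback: report "unavailable" when the
// optional dependency is absent rather than aborting.
bool SnappyCompressSketch(const std::string& input, std::string* output) {
#ifdef SNAPPY
  // Real code would size *output with snappy::MaxCompressedLength and
  // fill it with snappy::RawCompress here.
  *output = input;
  return true;
#else
  output->clear();
  return false;  // "compression unavailable" signal to the caller
#endif
}
```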

View File

@ -0,0 +1,266 @@
/* Copyright 2015 Google Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
#include <Windows.h>
#include <direct.h>
#include <errno.h>
#include <fcntl.h>
#include <io.h>
#include <Shlwapi.h>
#undef StrCat
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <time.h>
#include "tensorflow/core/lib/core/error_codes.pb.h"
#include "tensorflow/core/lib/strings/strcat.h"
#include "tensorflow/core/platform/env.h"
#include "tensorflow/core/platform/logging.h"
#include "tensorflow/core/platform/posix/error.h"
#include "tensorflow/core/platform/windows/windows_file_system.h"
// TODO(mrry): Prevent this Windows.h #define from leaking out of our headers.
#undef DeleteFile
namespace tensorflow {
namespace {
// read() based random-access
class WindowsRandomAccessFile : public RandomAccessFile {
private:
string filename_;
FILE* file_;
public:
WindowsRandomAccessFile(const string& fname, FILE* f)
: filename_(fname), file_(f) {}
~WindowsRandomAccessFile() override {
if (file_ != NULL) {
// Ignoring any potential errors
fclose(file_);
}
}
Status Read(uint64 offset, size_t n, StringPiece* result,
char* scratch) const override {
Status s;
char* dst = scratch;
int seek_result = fseek(file_, offset, SEEK_SET);
if (seek_result) {
return IOError(filename_, errno);
}
while (n > 0 && s.ok()) {
size_t r = fread(dst, 1, n, file_);
if (r > 0) {
dst += r;
n -= r;
} else if (r == 0) {
s = Status(error::OUT_OF_RANGE, "Read fewer bytes than requested");
} else if (errno == EINTR || errno == EAGAIN) {
// Retry
} else {
s = IOError(filename_, errno);
}
}
*result = StringPiece(scratch, dst - scratch);
return s;
}
};
class WindowsWritableFile : public WritableFile {
private:
string filename_;
FILE* file_;
public:
WindowsWritableFile(const string& fname, FILE* f)
: filename_(fname), file_(f) {}
~WindowsWritableFile() override {
if (file_ != NULL) {
// Ignoring any potential errors
fclose(file_);
}
}
Status Append(const StringPiece& data) override {
size_t r = fwrite(data.data(), 1, data.size(), file_);
if (r != data.size()) {
return IOError(filename_, errno);
}
return Status::OK();
}
Status Close() override {
Status result;
if (fclose(file_) != 0) {
result = IOError(filename_, errno);
}
file_ = NULL;
return result;
}
Status Flush() override {
if (fflush(file_) != 0) {
return IOError(filename_, errno);
}
return Status::OK();
}
Status Sync() override {
Status s;
if (fflush(file_) != 0) {
s = IOError(filename_, errno);
}
return s;
}
};
} // namespace
Status WindowsFileSystem::NewRandomAccessFile(
const string& fname, std::unique_ptr<RandomAccessFile>* result) {
string translated_fname = TranslateName(fname);
result->reset();
Status s;
FILE* f = fopen(translated_fname.c_str(), "r");
if (f == NULL) {
s = IOError(fname, errno);
} else {
result->reset(new WindowsRandomAccessFile(translated_fname, f));
}
return s;
}
Status WindowsFileSystem::NewWritableFile(
const string& fname, std::unique_ptr<WritableFile>* result) {
string translated_fname = TranslateName(fname);
Status s;
FILE* f = fopen(translated_fname.c_str(), "w");
if (f == NULL) {
result->reset();
s = IOError(fname, errno);
} else {
result->reset(new WindowsWritableFile(translated_fname, f));
}
return s;
}
Status WindowsFileSystem::NewAppendableFile(
const string& fname, std::unique_ptr<WritableFile>* result) {
string translated_fname = TranslateName(fname);
Status s;
FILE* f = fopen(translated_fname.c_str(), "a");
if (f == NULL) {
result->reset();
s = IOError(fname, errno);
} else {
result->reset(new WindowsWritableFile(translated_fname, f));
}
return s;
}
Status WindowsFileSystem::NewReadOnlyMemoryRegionFromFile(
const string& fname, std::unique_ptr<ReadOnlyMemoryRegion>* result) {
return errors::Unimplemented(
"WindowsFileSystem::NewReadOnlyMemoryRegionFromFile");
}
bool WindowsFileSystem::FileExists(const string& fname) {
return _access(TranslateName(fname).c_str(), 0) == 0;
}
Status WindowsFileSystem::GetChildren(const string& dir,
std::vector<string>* result) {
string translated_dir = TranslateName(dir);
result->clear();
WIN32_FIND_DATA find_data;
HANDLE find_handle = FindFirstFile(translated_dir.c_str(), &find_data);
if (find_handle == INVALID_HANDLE_VALUE) {
// TODO(mrry): Convert to a more specific error.
return errors::Unknown("Error code: ", GetLastError());
}
result->push_back(find_data.cFileName);
while (FindNextFile(find_handle, &find_data)) {
result->push_back(find_data.cFileName);
}
if (!FindClose(find_handle)) {
// TODO(mrry): Convert to a more specific error.
return errors::Unknown("Error closing find handle: ", GetLastError());
}
return Status::OK();
}
Status WindowsFileSystem::DeleteFile(const string& fname) {
Status result;
if (unlink(TranslateName(fname).c_str()) != 0) {
result = IOError(fname, errno);
}
return result;
}
Status WindowsFileSystem::CreateDir(const string& name) {
Status result;
if (_mkdir(TranslateName(name).c_str()) != 0) {
result = IOError(name, errno);
}
return result;
}
Status WindowsFileSystem::DeleteDir(const string& name) {
Status result;
if (_rmdir(TranslateName(name).c_str()) != 0) {
result = IOError(name, errno);
}
return result;
}
Status WindowsFileSystem::GetFileSize(const string& fname, uint64* size) {
Status s;
struct _stat sbuf;
if (_stat(TranslateName(fname).c_str(), &sbuf) != 0) {
*size = 0;
s = IOError(fname, errno);
} else {
*size = sbuf.st_size;
}
return s;
}
Status WindowsFileSystem::RenameFile(const string& src, const string& target) {
Status result;
if (rename(TranslateName(src).c_str(), TranslateName(target).c_str()) != 0) {
result = IOError(src, errno);
}
return result;
}
Status WindowsFileSystem::Stat(const string& fname, FileStatistics* stat) {
Status s;
struct _stat sbuf;
if (_stat(TranslateName(fname).c_str(), &sbuf) != 0) {
s = IOError(fname, errno);
} else {
stat->mtime_nsec = sbuf.st_mtime * 1e9;
stat->length = sbuf.st_size;
stat->is_directory = PathIsDirectory(TranslateName(fname).c_str());
}
return s;
}
} // namespace tensorflow
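`WindowsRandomAccessFile::Read` above loops on `fread` until the requested byte count arrives, treats a short read at EOF as `OUT_OF_RANGE`, and retries transient `EINTR`/`EAGAIN` failures. The loop in isolation (a hypothetical standalone helper returning the byte count instead of a `Status`):

```cpp
#include <cerrno>
#include <cstdio>

// Sketch of the fread loop in WindowsRandomAccessFile::Read: keep
// reading until `n` bytes arrive, stop early at EOF, and retry
// transient errors. Returns the number of bytes actually read.
size_t ReadFully(FILE* f, long offset, size_t n, char* scratch) {
  if (fseek(f, offset, SEEK_SET) != 0) return 0;
  char* dst = scratch;
  while (n > 0) {
    size_t r = fread(dst, 1, n, f);
    if (r > 0) {
      dst += r;
      n -= r;
    } else if (feof(f)) {
      break;  // short read: fewer bytes than requested
    } else if (errno == EINTR || errno == EAGAIN) {
      clearerr(f);  // transient failure: retry
    } else {
      break;  // real I/O error
    }
  }
  return static_cast<size_t>(dst - scratch);
}
```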

View File

@ -0,0 +1,71 @@
/* Copyright 2015 Google Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
#ifndef TENSORFLOW_CORE_PLATFORM_WINDOWS_WINDOWS_FILE_SYSTEM_H_
#define TENSORFLOW_CORE_PLATFORM_WINDOWS_WINDOWS_FILE_SYSTEM_H_
#include "tensorflow/core/platform/file_system.h"
#ifdef PLATFORM_WINDOWS
#undef DeleteFile
#endif
namespace tensorflow {
class WindowsFileSystem : public FileSystem {
public:
WindowsFileSystem() {}
~WindowsFileSystem() {}
Status NewRandomAccessFile(
const string& fname, std::unique_ptr<RandomAccessFile>* result) override;
Status NewWritableFile(const string& fname,
std::unique_ptr<WritableFile>* result) override;
Status NewAppendableFile(const string& fname,
std::unique_ptr<WritableFile>* result) override;
Status NewReadOnlyMemoryRegionFromFile(
const string& fname,
std::unique_ptr<ReadOnlyMemoryRegion>* result) override;
bool FileExists(const string& fname) override;
Status GetChildren(const string& dir, std::vector<string>* result) override;
Status Stat(const string& fname, FileStatistics* stat) override;
Status DeleteFile(const string& fname) override;
Status CreateDir(const string& name) override;
Status DeleteDir(const string& name) override;
Status GetFileSize(const string& fname, uint64* size) override;
Status RenameFile(const string& src, const string& target) override;
string TranslateName(const string& name) const override {
return name;
}
};
Status IOError(const string& context, int err_number);
} // namespace tensorflow
#endif // TENSORFLOW_CORE_PLATFORM_WINDOWS_WINDOWS_FILE_SYSTEM_H_

View File

@ -19,8 +19,8 @@ limitations under the License.
// TensorFlow uses semantic versioning, see http://semver.org/.
#define TF_MAJOR_VERSION 0
#define TF_MINOR_VERSION 10
#define TF_PATCH_VERSION 0
#define TF_MINOR_VERSION 11
#define TF_PATCH_VERSION 0rc0
// TF_VERSION_SUFFIX is non-empty for pre-releases (e.g. "-alpha", "-alpha.1",
// "-beta", "-rc", "-rc.1")

View File

@ -57,7 +57,7 @@
"from six.moves.urllib.request import urlretrieve\n",
"from six.moves import cPickle as pickle\n",
"\n",
"# Config the matlotlib backend as plotting inline in IPython\n",
"# Config the matplotlib backend as plotting inline in IPython\n",
"%matplotlib inline"
],
"outputs": [],

View File

@ -180,7 +180,7 @@
"\n",
"def reformat(dataset, labels):\n",
" dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)\n",
" # Map 2 to [0.0, 1.0, 0.0 ...], 3 to [0.0, 0.0, 1.0 ...]\n",
" # Map 1 to [0.0, 1.0, 0.0 ...], 2 to [0.0, 0.0, 1.0 ...]\n",
" labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)\n",
" return dataset, labels\n",
"train_dataset, train_labels = reformat(train_dataset, train_labels)\n",

View File

@ -1,6 +1,6 @@
FROM gcr.io/tensorflow/tensorflow:latest
MAINTAINER Vincent Vanhoucke <vanhoucke@google.com>
RUN pip install scikit-learn
RUN pip install scikit-learn pyreadline Pillow
RUN rm -rf /notebooks/*
ADD *.ipynb /notebooks/
WORKDIR /notebooks

View File

@ -85,7 +85,7 @@ The signature of the input_fn accepted by export is changing to be consistent wi
string key to `Tensor` and targets is a `Tensor` that's currently not
used (and so can be `None`).
input_feature_key: Only used if `use_deprecated_input_fn` is false. String
key into the features dict returned by `input_fn` that corresponds to
key into the features dict returned by `input_fn` that corresponds to
the raw `Example` strings `Tensor` that the exported model will take as
input. Can only be `None` if you're using a custom `signature_fn` that
does not use the first arg (examples).
@ -384,7 +384,7 @@ The signature of the input_fn accepted by export is changing to be consistent wi
string key to `Tensor` and targets is a `Tensor` that's currently not
used (and so can be `None`).
input_feature_key: Only used if `use_deprecated_input_fn` is false. String
key into the features dict returned by `input_fn` that corresponds to
key into the features dict returned by `input_fn` that corresponds to
the raw `Example` strings `Tensor` that the exported model will take as
input. Can only be `None` if you're using a custom `signature_fn` that
does not use the first arg (examples).
@ -1018,7 +1018,7 @@ The signature of the input_fn accepted by export is changing to be consistent wi
string key to `Tensor` and targets is a `Tensor` that's currently not
used (and so can be `None`).
input_feature_key: Only used if `use_deprecated_input_fn` is false. String
key into the features dict returned by `input_fn` that corresponds to
key into the features dict returned by `input_fn` that corresponds to
the raw `Example` strings `Tensor` that the exported model will take as
input. Can only be `None` if you're using a custom `signature_fn` that
does not use the first arg (examples).
@ -1355,7 +1355,7 @@ The signature of the input_fn accepted by export is changing to be consistent wi
string key to `Tensor` and targets is a `Tensor` that's currently not
used (and so can be `None`).
input_feature_key: Only used if `use_deprecated_input_fn` is false. String
key into the features dict returned by `input_fn` that corresponds to
key into the features dict returned by `input_fn` that corresponds to
the raw `Example` strings `Tensor` that the exported model will take as
input. Can only be `None` if you're using a custom `signature_fn` that
does not use the first arg (examples).
@ -1996,7 +1996,7 @@ The signature of the input_fn accepted by export is changing to be consistent wi
string key to `Tensor` and targets is a `Tensor` that's currently not
used (and so can be `None`).
input_feature_key: Only used if `use_deprecated_input_fn` is false. String
key into the features dict returned by `input_fn` that corresponds to
key into the features dict returned by `input_fn` that corresponds to
the raw `Example` strings `Tensor` that the exported model will take as
input. Can only be `None` if you're using a custom `signature_fn` that
does not use the first arg (examples).
@ -2347,7 +2347,7 @@ The signature of the input_fn accepted by export is changing to be consistent wi
string key to `Tensor` and targets is a `Tensor` that's currently not
used (and so can be `None`).
input_feature_key: Only used if `use_deprecated_input_fn` is false. String
key into the features dict returned by `input_fn` that corresponds to
key into the features dict returned by `input_fn` that corresponds to
the raw `Example` strings `Tensor` that the exported model will take as
input. Can only be `None` if you're using a custom `signature_fn` that
does not use the first arg (examples).
@ -2744,7 +2744,7 @@ The signature of the input_fn accepted by export is changing to be consistent wi
string key to `Tensor` and targets is a `Tensor` that's currently not
used (and so can be `None`).
input_feature_key: Only used if `use_deprecated_input_fn` is false. String
key into the features dict returned by `input_fn` that corresponds to
key into the features dict returned by `input_fn` that corresponds to
the raw `Example` strings `Tensor` that the exported model will take as
input. Can only be `None` if you're using a custom `signature_fn` that
does not use the first arg (examples).

View File

@ -163,7 +163,7 @@ The signature of the input_fn accepted by export is changing to be consistent wi
string key to `Tensor` and targets is a `Tensor` that's currently not
used (and so can be `None`).
input_feature_key: Only used if `use_deprecated_input_fn` is false. String
key into the features dict returned by `input_fn` that corresponds to
key into the features dict returned by `input_fn` that corresponds to
the raw `Example` strings `Tensor` that the exported model will take as
input. Can only be `None` if you're using a custom `signature_fn` that
does not use the first arg (examples).
@@ -11,8 +11,8 @@ the full softmax loss.
At inference time, you can compute full softmax probabilities with the
expression `tf.nn.softmax(tf.matmul(inputs, tf.transpose(weights)) + biases)`.
See our [Candidate Sampling Algorithms Reference]
(../../extras/candidate_sampling.pdf)
See our
[Candidate Sampling Algorithms Reference](../../extras/candidate_sampling.pdf)
Also see Section 3 of [Jean et al., 2014](http://arxiv.org/abs/1412.2007)
([pdf](http://arxiv.org/pdf/1412.2007.pdf)) for the math.
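A NumPy sketch of that inference-time expression (shapes assumed per the sampled-softmax convention: `inputs` is `[batch, dim]`, `weights` is `[num_classes, dim]`, `biases` is `[num_classes]`):

```python
import numpy as np

# NumPy equivalent of
# tf.nn.softmax(tf.matmul(inputs, tf.transpose(weights)) + biases).
rng = np.random.default_rng(0)
inputs = rng.standard_normal((4, 8))
weights = rng.standard_normal((10, 8))
biases = rng.standard_normal(10)

logits = inputs @ weights.T + biases
logits -= logits.max(axis=1, keepdims=True)  # numerical stability
probs = np.exp(logits)
probs /= probs.sum(axis=1, keepdims=True)    # each row sums to 1
```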
@@ -632,8 +632,8 @@ Note that this is unrelated to the
The GraphDef version information of this graph.
For details on the meaning of each version, see [`GraphDef`]
(https://www.tensorflow.org/code/tensorflow/core/framework/graph.proto).
For details on the meaning of each version, see
[`GraphDef`](https://www.tensorflow.org/code/tensorflow/core/framework/graph.proto).
##### Returns:
@@ -70,7 +70,7 @@ The signature of the input_fn accepted by export is changing to be consistent wi
string key to `Tensor` and targets is a `Tensor` that's currently not
used (and so can be `None`).
input_feature_key: Only used if `use_deprecated_input_fn` is false. String
key into the features dict returned by `input_fn` that corresponds to
the raw `Example` strings `Tensor` that the exported model will take as
input. Can only be `None` if you're using a custom `signature_fn` that
does not use the first arg (examples).
@@ -98,7 +98,7 @@ The signature of the input_fn accepted by export is changing to be consistent wi
string key to `Tensor` and targets is a `Tensor` that's currently not
used (and so can be `None`).
input_feature_key: Only used if `use_deprecated_input_fn` is false. String
key into the features dict returned by `input_fn` that corresponds to
the raw `Example` strings `Tensor` that the exported model will take as
input. Can only be `None` if you're using a custom `signature_fn` that
does not use the first arg (examples).
@@ -29,8 +29,7 @@ Then, row_pooling_sequence should satisfy:
4. length(row_pooling_sequence) = output_row_length+1
For more details on fractional max pooling, see this paper:
[Benjamin Graham, Fractional Max-Pooling]
(http://arxiv.org/abs/1412.6071)
[Benjamin Graham, Fractional Max-Pooling](http://arxiv.org/abs/1412.6071)
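A small NumPy sketch (illustrative only, not the TF kernel) of how a row pooling sequence of length `output_row_length + 1` partitions the input rows:

```python
import numpy as np

# Each output row i is the max over input rows [seq[i], seq[i+1]).
def pool_rows(x, row_pooling_sequence):
    return np.stack([x[a:b].max(axis=0)
                     for a, b in zip(row_pooling_sequence[:-1],
                                     row_pooling_sequence[1:])])

x = np.arange(12.0).reshape(6, 2)   # 6 input rows
seq = [0, 2, 3, 6]                  # length 4 -> output_row_length 3
pooled = pool_rows(x, seq)
```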
##### Args:
@@ -47,7 +46,7 @@ For more details on fractional max pooling, see this paper:
* <b>`pseudo_random`</b>: An optional `bool`. Defaults to `False`.
When set to True, generates the pooling sequence in a
pseudorandom fashion, otherwise, in a random fashion. Check paper [Benjamin
Graham, Fractional Max-Pooling] (http://arxiv.org/abs/1412.6071) for
Graham, Fractional Max-Pooling](http://arxiv.org/abs/1412.6071) for
difference between pseudorandom and random.
* <b>`overlapping`</b>: An optional `bool`. Defaults to `False`.
When set to True, it means when pooling, the values at the boundary
@@ -103,7 +103,7 @@ The signature of the input_fn accepted by export is changing to be consistent wi
string key to `Tensor` and targets is a `Tensor` that's currently not
used (and so can be `None`).
input_feature_key: Only used if `use_deprecated_input_fn` is false. String
key into the features dict returned by `input_fn` that corresponds to
the raw `Example` strings `Tensor` that the exported model will take as
input. Can only be `None` if you're using a custom `signature_fn` that
does not use the first arg (examples).
@@ -118,7 +118,7 @@ The signature of the input_fn accepted by export is changing to be consistent wi
string key to `Tensor` and targets is a `Tensor` that's currently not
used (and so can be `None`).
input_feature_key: Only used if `use_deprecated_input_fn` is false. String
key into the features dict returned by `input_fn` that corresponds to
the raw `Example` strings `Tensor` that the exported model will take as
input. Can only be `None` if you're using a custom `signature_fn` that
does not use the first arg (examples).
@@ -2,11 +2,10 @@
Computes and returns the noise-contrastive estimation training loss.
See [Noise-contrastive estimation: A new estimation principle for
unnormalized statistical models]
(http://www.jmlr.org/proceedings/papers/v9/gutmann10a/gutmann10a.pdf).
Also see our [Candidate Sampling Algorithms Reference]
(../../extras/candidate_sampling.pdf)
See
[Noise-contrastive estimation: A new estimation principle for unnormalized statistical models](http://www.jmlr.org/proceedings/papers/v9/gutmann10a/gutmann10a.pdf).
Also see our
[Candidate Sampling Algorithms Reference](../../extras/candidate_sampling.pdf)
Note: By default this uses a log-uniform (Zipfian) distribution for sampling,
so your labels must be sorted in order of decreasing frequency to achieve
@@ -44,8 +43,7 @@ with an otherwise unused class.
where a sampled class equals one of the target classes. If set to
`True`, this is a "Sampled Logistic" loss instead of NCE, and we are
learning to generate log-odds instead of log probabilities. See
our [Candidate Sampling Algorithms Reference]
(../../extras/candidate_sampling.pdf).
our [Candidate Sampling Algorithms Reference](../../extras/candidate_sampling.pdf).
Default is False.
* <b>`partition_strategy`</b>: A string specifying the partitioning strategy, relevant
if `len(weights) > 1`. Currently `"div"` and `"mod"` are supported.
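A quick sketch of the log-uniform (Zipfian) distribution that the note above refers to, using the formula documented for `tf.nn.log_uniform_candidate_sampler` (treat the exact formula as an assumption here); it shows why labels should be sorted by decreasing frequency:

```python
import math

def log_uniform_prob(c, range_max):
    # P(c) for 0-based class id c under the log-uniform sampler:
    # (log(c + 2) - log(c + 1)) / log(range_max + 1).
    return (math.log(c + 2) - math.log(c + 1)) / math.log(range_max + 1)

probs = [log_uniform_prob(c, 1000) for c in range(1000)]
# The terms telescope to 1 and decrease monotonically in c, so the most
# frequent labels should occupy the smallest ids.
```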
@@ -2,8 +2,8 @@
Parses `Example` protos into a `dict` of tensors.
Parses a number of serialized [`Example`]
(https://www.tensorflow.org/code/tensorflow/core/example/example.proto)
Parses a number of serialized
[`Example`](https://www.tensorflow.org/code/tensorflow/core/example/example.proto)
protos given in `serialized`.
`example_names` may contain descriptive names for the corresponding serialized
@@ -1,7 +1,6 @@
Optimizer that implements the RMSProp algorithm.
See the [paper]
(http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf).
See the [paper](http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf).
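A minimal NumPy sketch of the update rule from those slides (the exact placement of the stabilizing epsilon varies between implementations, and the hyperparameters below are illustrative):

```python
import numpy as np

# mean_square <- decay * mean_square + (1 - decay) * grad^2
# var         <- var - lr * grad / (sqrt(mean_square) + epsilon)
def rmsprop_step(var, grad, mean_square, lr=0.05, decay=0.9, epsilon=1e-8):
    mean_square = decay * mean_square + (1.0 - decay) * grad ** 2
    var = var - lr * grad / (np.sqrt(mean_square) + epsilon)
    return var, mean_square

# Drive f(x) = x^2 toward its minimum at 0.
x = np.array([5.0])
ms = np.zeros_like(x)
for _ in range(500):
    x, ms = rmsprop_step(x, 2.0 * x, ms)
```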
- - -
@@ -32,11 +32,10 @@ Convolutional Nets and Fully Connected CRFs](http://arxiv.org/abs/1412.7062).
The same operation is investigated further in [Multi-Scale Context Aggregation
by Dilated Convolutions](http://arxiv.org/abs/1511.07122). Previous works
that effectively use atrous convolution in different ways are, among others,
[OverFeat: Integrated Recognition, Localization and Detection using
Convolutional Networks](http://arxiv.org/abs/1312.6229) and [Fast Image
Scanning with Deep Max-Pooling Convolutional Neural Networks]
(http://arxiv.org/abs/1302.1700). Atrous convolution is also closely related
to the so-called noble identities in multi-rate signal processing.
[OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks](http://arxiv.org/abs/1312.6229)
and [Fast Image Scanning with Deep Max-Pooling Convolutional Neural Networks](http://arxiv.org/abs/1302.1700).
Atrous convolution is also closely related to the so-called noble identities in
multi-rate signal processing.
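One standard construction (a 1-D NumPy sketch, not necessarily the implementation used here): dilate the filter by inserting `rate - 1` zeros between taps, then apply an ordinary convolution. `np.correlate` is used so the filter is not flipped, matching the conv-net convention:

```python
import numpy as np

def atrous_conv1d(x, w, rate):
    # Insert rate - 1 zeros between filter taps, then cross-correlate.
    dilated = np.zeros((len(w) - 1) * rate + 1)
    dilated[::rate] = w
    return np.correlate(x, dilated, mode="valid")

x = np.arange(8.0)
w = np.array([1.0, 1.0])
y = atrous_conv1d(x, w, rate=2)   # y[i] = x[i] + x[i + 2]
```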
There are many different ways to implement atrous convolution (see the refs
above). The implementation here reduces
@@ -22,7 +22,7 @@ pooling region.
* <b>`pseudo_random`</b>: An optional `bool`. Defaults to `False`.
When set to True, generates the pooling sequence in a
pseudorandom fashion, otherwise, in a random fashion. Check paper [Benjamin
Graham, Fractional Max-Pooling] (http://arxiv.org/abs/1412.6071) for
Graham, Fractional Max-Pooling](http://arxiv.org/abs/1412.6071) for
difference between pseudorandom and random.
* <b>`overlapping`</b>: An optional `bool`. Defaults to `False`.
When set to True, it means when pooling, the values at the boundary
@@ -11,9 +11,8 @@ each component is divided by the weighted, squared sum of inputs within
sum(input[a, b, c, d - depth_radius : d + depth_radius + 1] ** 2)
output = input / (bias + alpha * sqr_sum) ** beta
For details, see [Krizhevsky et al., ImageNet classification with deep
convolutional neural networks (NIPS 2012)]
(http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks).
For details, see
[Krizhevsky et al., ImageNet classification with deep convolutional neural networks (NIPS 2012)](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks).
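A direct NumPy transcription of the two formula lines above (NHWC layout; the parameter values below are illustrative defaults, not TF's):

```python
import numpy as np

def local_response_normalization(x, depth_radius=2, bias=1.0,
                                 alpha=1e-3, beta=0.75):
    out = np.empty_like(x)
    depth = x.shape[-1]
    for d in range(depth):
        # Window of depth_radius channels on each side of channel d.
        lo = max(0, d - depth_radius)
        hi = min(depth, d + depth_radius + 1)
        sqr_sum = (x[..., lo:hi] ** 2).sum(axis=-1)
        out[..., d] = x[..., d] / (bias + alpha * sqr_sum) ** beta
    return out

x = np.ones((1, 2, 2, 4))
y = local_response_normalization(x)
```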
##### Args:
@@ -36,8 +36,7 @@ with tf.Session() as sess:
sess.run(...)
```
The [`ConfigProto`]
(https://www.tensorflow.org/code/tensorflow/core/protobuf/config.proto)
The [`ConfigProto`](https://www.tensorflow.org/code/tensorflow/core/protobuf/config.proto)
protocol buffer exposes various configuration options for a
session. For example, to create a session that uses soft constraints
for device placement, and log the resulting placement decisions,
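a configuration along these lines can be passed in (a sketch against the 0.x-era API; the doc's own code block is not shown in this hunk):

```python
import tensorflow as tf

# Soft placement lets TF fall back to a supported device instead of failing;
# log_device_placement prints where each op ends up.
config = tf.ConfigProto(allow_soft_placement=True,
                        log_device_placement=True)
sess = tf.Session(config=config)
```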
@@ -68,8 +67,8 @@ the session constructor.
* <b>`target`</b>: (Optional.) The execution engine to connect to.
Defaults to using an in-process engine. See [Distributed Tensorflow]
(https://www.tensorflow.org/how_tos/distributed/index.html)
Defaults to using an in-process engine. See
[Distributed Tensorflow](https://www.tensorflow.org/how_tos/distributed/index.html)
for more examples.
* <b>`graph`</b>: (Optional.) The `Graph` to be launched (described above).
* <b>`config`</b>: (Optional.) A [`ConfigProto`](https://www.tensorflow.org/code/tensorflow/core/protobuf/config.proto)
@@ -8,12 +8,9 @@ the same as `size`. To avoid distortions see
`method` can be one of:
* <b>`ResizeMethod.BILINEAR`</b>: [Bilinear interpolation.]
(https://en.wikipedia.org/wiki/Bilinear_interpolation)
* <b>`ResizeMethod.NEAREST_NEIGHBOR`</b>: [Nearest neighbor interpolation.]
(https://en.wikipedia.org/wiki/Nearest-neighbor_interpolation)
* <b>`ResizeMethod.BICUBIC`</b>: [Bicubic interpolation.]
(https://en.wikipedia.org/wiki/Bicubic_interpolation)
* <b>`ResizeMethod.BILINEAR`</b>: [Bilinear interpolation.](https://en.wikipedia.org/wiki/Bilinear_interpolation)
* <b>`ResizeMethod.NEAREST_NEIGHBOR`</b>: [Nearest neighbor interpolation.](https://en.wikipedia.org/wiki/Nearest-neighbor_interpolation)
* <b>`ResizeMethod.BICUBIC`</b>: [Bicubic interpolation.](https://en.wikipedia.org/wiki/Bicubic_interpolation)
* <b>`ResizeMethod.AREA`</b>: Area interpolation.
##### Args:
@@ -176,7 +176,7 @@ The signature of the input_fn accepted by export is changing to be consistent wi
string key to `Tensor` and targets is a `Tensor` that's currently not
used (and so can be `None`).
input_feature_key: Only used if `use_deprecated_input_fn` is false. String
key into the features dict returned by `input_fn` that corresponds to
the raw `Example` strings `Tensor` that the exported model will take as
input. Can only be `None` if you're using a custom `signature_fn` that
does not use the first arg (examples).
@@ -117,7 +117,7 @@ The signature of the input_fn accepted by export is changing to be consistent wi
string key to `Tensor` and targets is a `Tensor` that's currently not
used (and so can be `None`).
input_feature_key: Only used if `use_deprecated_input_fn` is false. String
key into the features dict returned by `input_fn` that corresponds to
the raw `Example` strings `Tensor` that the exported model will take as
input. Can only be `None` if you're using a custom `signature_fn` that
does not use the first arg (examples).
@@ -63,37 +63,37 @@ Then, select the correct binary to install:
```bash
# Ubuntu/Linux 64-bit, CPU only, Python 2.7
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.10.0-cp27-none-linux_x86_64.whl
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.11.0rc0-cp27-none-linux_x86_64.whl
# Ubuntu/Linux 64-bit, GPU enabled, Python 2.7
# Requires CUDA toolkit 7.5 and CuDNN v5. For other versions, see "Install from sources" below.
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.10.0-cp27-none-linux_x86_64.whl
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.11.0rc0-cp27-none-linux_x86_64.whl
# Mac OS X, CPU only, Python 2.7:
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.10.0-py2-none-any.whl
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.11.0rc0-py2-none-any.whl
# Mac OS X, GPU enabled, Python 2.7:
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow-0.10.0-py2-none-any.whl
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow-0.11.0rc0-py2-none-any.whl
# Ubuntu/Linux 64-bit, CPU only, Python 3.4
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.10.0-cp34-cp34m-linux_x86_64.whl
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.11.0rc0-cp34-cp34m-linux_x86_64.whl
# Ubuntu/Linux 64-bit, GPU enabled, Python 3.4
# Requires CUDA toolkit 7.5 and CuDNN v5. For other versions, see "Install from sources" below.
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.10.0-cp34-cp34m-linux_x86_64.whl
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.11.0rc0-cp34-cp34m-linux_x86_64.whl
# Ubuntu/Linux 64-bit, CPU only, Python 3.5
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.10.0-cp35-cp35m-linux_x86_64.whl
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.11.0rc0-cp35-cp35m-linux_x86_64.whl
# Ubuntu/Linux 64-bit, GPU enabled, Python 3.5
# Requires CUDA toolkit 7.5 and CuDNN v5. For other versions, see "Install from sources" below.
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.10.0-cp35-cp35m-linux_x86_64.whl
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.11.0rc0-cp35-cp35m-linux_x86_64.whl
# Mac OS X, CPU only, Python 3.4 or 3.5:
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.10.0-py3-none-any.whl
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.11.0rc0-py3-none-any.whl
# Mac OS X, GPU enabled, Python 3.4 or 3.5:
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow-0.10.0-py3-none-any.whl
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow-0.11.0rc0-py3-none-any.whl
```
Install TensorFlow:
@@ -159,37 +159,37 @@ Now, install TensorFlow just as you would for a regular Pip installation. First
```bash
# Ubuntu/Linux 64-bit, CPU only, Python 2.7
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.10.0-cp27-none-linux_x86_64.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.11.0rc0-cp27-none-linux_x86_64.whl
# Ubuntu/Linux 64-bit, GPU enabled, Python 2.7
# Requires CUDA toolkit 7.5 and CuDNN v5. For other versions, see "Install from sources" below.
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.10.0-cp27-none-linux_x86_64.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.11.0rc0-cp27-none-linux_x86_64.whl
# Mac OS X, CPU only, Python 2.7:
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.10.0-py2-none-any.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.11.0rc0-py2-none-any.whl
# Mac OS X, GPU enabled, Python 2.7:
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow-0.10.0-py2-none-any.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow-0.11.0rc0-py2-none-any.whl
# Ubuntu/Linux 64-bit, CPU only, Python 3.4
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.10.0-cp34-cp34m-linux_x86_64.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.11.0rc0-cp34-cp34m-linux_x86_64.whl
# Ubuntu/Linux 64-bit, GPU enabled, Python 3.4
# Requires CUDA toolkit 7.5 and CuDNN v5. For other versions, see "Install from sources" below.
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.10.0-cp34-cp34m-linux_x86_64.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.11.0rc0-cp34-cp34m-linux_x86_64.whl
# Ubuntu/Linux 64-bit, CPU only, Python 3.5
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.10.0-cp35-cp35m-linux_x86_64.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.11.0rc0-cp35-cp35m-linux_x86_64.whl
# Ubuntu/Linux 64-bit, GPU enabled, Python 3.5
# Requires CUDA toolkit 7.5 and CuDNN v5. For other versions, see "Install from sources" below.
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.10.0-cp35-cp35m-linux_x86_64.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.11.0rc0-cp35-cp35m-linux_x86_64.whl
# Mac OS X, CPU only, Python 3.4 or 3.5:
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.10.0-py3-none-any.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.11.0rc0-py3-none-any.whl
# Mac OS X, GPU enabled, Python 3.4 or 3.5:
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow-0.10.0-py3-none-any.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow-0.11.0rc0-py3-none-any.whl
```
Finally install TensorFlow:
@@ -298,37 +298,37 @@ select the correct binary to install:
```bash
# Ubuntu/Linux 64-bit, CPU only, Python 2.7
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.10.0-cp27-none-linux_x86_64.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.11.0rc0-cp27-none-linux_x86_64.whl
# Ubuntu/Linux 64-bit, GPU enabled, Python 2.7
# Requires CUDA toolkit 7.5 and CuDNN v5. For other versions, see "Install from sources" below.
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.10.0-cp27-none-linux_x86_64.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.11.0rc0-cp27-none-linux_x86_64.whl
# Mac OS X, CPU only, Python 2.7:
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.10.0-py2-none-any.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.11.0rc0-py2-none-any.whl
# Mac OS X, GPU enabled, Python 2.7:
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow-0.10.0-py2-none-any.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow-0.11.0rc0-py2-none-any.whl
# Ubuntu/Linux 64-bit, CPU only, Python 3.4
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.10.0-cp34-cp34m-linux_x86_64.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.11.0rc0-cp34-cp34m-linux_x86_64.whl
# Ubuntu/Linux 64-bit, GPU enabled, Python 3.4
# Requires CUDA toolkit 7.5 and CuDNN v5. For other versions, see "Install from sources" below.
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.10.0-cp34-cp34m-linux_x86_64.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.11.0rc0-cp34-cp34m-linux_x86_64.whl
# Ubuntu/Linux 64-bit, CPU only, Python 3.5
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.10.0-cp35-cp35m-linux_x86_64.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.11.0rc0-cp35-cp35m-linux_x86_64.whl
# Ubuntu/Linux 64-bit, GPU enabled, Python 3.5
# Requires CUDA toolkit 7.5 and CuDNN v5. For other versions, see "Install from sources" below.
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.10.0-cp35-cp35m-linux_x86_64.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.11.0rc0-cp35-cp35m-linux_x86_64.whl
# Mac OS X, CPU only, Python 3.4 or 3.5:
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.10.0-py3-none-any.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.11.0rc0-py3-none-any.whl
# Mac OS X, GPU enabled, Python 3.4 or 3.5:
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow-0.10.0-py3-none-any.whl
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow-0.11.0rc0-py3-none-any.whl
```
Finally install TensorFlow:
@@ -396,7 +396,7 @@ code.
code.
We also have tags with `latest` replaced by a released version (e.g.,
`0.10.0-gpu`).
`0.11.0-gpu`).
With Docker the installation is as follows:
@@ -784,7 +784,7 @@ $ bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_pack
$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
# The name of the .whl file will depend on your platform.
$ sudo pip install /tmp/tensorflow_pkg/tensorflow-0.10.0-py2-none-any.whl
$ sudo pip install /tmp/tensorflow_pkg/tensorflow-0.11.0rc0-py2-none-any.whl
```
## Setting up TensorFlow for Development
@@ -199,8 +199,8 @@ will end up basing its prediction on the background color, not the features of
the object you actually care about. To avoid this, try to take pictures in as
wide a variety of situations as you can, at different times, and with different
devices. If you want to know more about this problem, you can read about the
classic (and possibly apocryphal) [tank recognition problem]
(http://www.jefftk.com/p/detecting-tanks).
classic (and possibly apocryphal)
[tank recognition problem](http://www.jefftk.com/p/detecting-tanks).
You may also want to think about the categories you use. It might be worth
splitting big categories that cover a lot of different physical forms into
@@ -200,10 +200,9 @@ Quantized | Float
The advantages of this format are that it can represent arbitrary magnitudes of
ranges, they don't have to be symmetrical, it can represent signed and unsigned
values, and the linear spread makes doing multiplications straightforward. There
are alternatives like [Song Han's code books]
(http://arxiv.org/pdf/1510.00149.pdf) that can use lower bit depths by
non-linearly distributing the float values across the representation, but these
tend to be more expensive to calculate on.
are alternatives like [Song Han's code books](http://arxiv.org/pdf/1510.00149.pdf)
that can use lower bit depths by non-linearly distributing the float values
across the representation, but these tend to be more expensive to calculate on.
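A NumPy sketch of the linear min/max scheme described above (the range and values are illustrative):

```python
import numpy as np

# Map floats in [min_val, max_val] linearly onto 0..255 and back.
def quantize(x, min_val, max_val):
    scale = (max_val - min_val) / 255.0
    q = np.clip(np.round((x - min_val) / scale), 0, 255).astype(np.uint8)
    return q, scale

def dequantize(q, min_val, scale):
    return q.astype(np.float32) * scale + min_val

x = np.array([-10.0, 0.0, 5.0, 10.0], dtype=np.float32)
q, scale = quantize(x, -10.0, 10.0)
x_hat = dequantize(q, -10.0, scale)  # round-trip error bounded by scale / 2
```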
The advantage of having a strong and clear definition of the quantized format is
that it's always possible to convert back and forth from float for operations
@@ -226,11 +225,11 @@ results from 8-bit inputs.
We've found that we can get extremely good performance on mobile and embedded
devices by using eight-bit arithmetic rather than floating-point. You can see
the framework we use to optimize matrix multiplications at [gemmlowp]
(https://github.com/google/gemmlowp). We still need to apply all the lessons
we've learned to the TensorFlow ops to get maximum performance on mobile, but
we're actively working on that. Right now, this quantized implementation is a
reasonably fast and accurate reference implementation that we're hoping will
enable wider support for our eight-bit models on a wider variety of devices. We
also hope that this demonstration will encourage the community to explore what's
possible with low-precision neural networks.
the framework we use to optimize matrix multiplications at
[gemmlowp](https://github.com/google/gemmlowp). We still need to apply all the
lessons we've learned to the TensorFlow ops to get maximum performance on
mobile, but we're actively working on that. Right now, this quantized
implementation is a reasonably fast and accurate reference implementation that
we're hoping will enable wider support for our eight-bit models on a wider
variety of devices. We also hope that this demonstration will encourage the
community to explore what's possible with low-precision neural networks.