Merge changes from github.

END_PUBLIC

I dropped the following commit because it doesn't compile.
I will follow up with Andrew to fix it or revert it.
Commit 003deb88b authored by osdamv<osdamv@gmail.com>
Committed by Vijay Vasudevan<vrv@google.com>:
Refactor and implement the camera API 1; fixes #8736 (#10771)

List of commits in this CL:
---
Commit 446450369 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Use the identity of the param variable in cudnn_rnn.RNNParamsSaveable instead of the
parameter variable directly. The RNNParamsSaveable is usually used in a graph that also
has a saver for the cudnn param variable itself; if the same op is used for
both, saving fails with a "two savers for the same op" error.

PiperOrigin-RevId: 163431826
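As a hedged, framework-free analogy of the conflict described above (not TensorFlow's actual saver code): if two savers try to claim the same op, registration fails, while wrapping the op in an identity gives the second saver a distinct op to own. All names here are hypothetical.

```python
# Hypothetical sketch: a registry that rejects two savers claiming the
# same op, and an identity wrapper (analogous to tf.identity) that
# gives the second saver a distinct op.

class Op:
    def __init__(self, name):
        self.name = name

def identity(op):
    # A new op that forwards another op's value.
    return Op(f"{op.name}/Identity")

class SaverRegistry:
    def __init__(self):
        self._owned = {}

    def register(self, op, saver):
        if op in self._owned:
            raise ValueError(f"two savers for same op: {op.name}")
        self._owned[op] = saver

registry = SaverRegistry()
params = Op("cudnn_params")

registry.register(params, "default_saver")
# Registering the raw op again would raise; registering an identity
# of it succeeds because the identity is a distinct op.
registry.register(identity(params), "rnn_params_saveable")
```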

---
Commit d629a8316 authored by RJ Ryan<rjryan@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Increase bound on tf.contrib.signal.inverse_stft gradient error to avoid flakiness on macOS.

PiperOrigin-RevId: 163426631

---
Commit 253bcbb71 authored by Kay Zhu<kayzhu@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
[XLA] Use HloEvaluator for convolution in reference_util.

Also speed up HloEvaluator's HandleConvolution in non-opt builds, by moving calls
to HloInstruction::shape() out of the inner loop.

PiperOrigin-RevId: 163416183
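The optimization above is the classic hoisting of a loop-invariant accessor call out of an inner loop. A minimal, hypothetical sketch of the general technique (not the actual XLA code):

```python
# Hypothetical sketch: in a debug (non-opt) build an accessor like
# shape() is not inlined, so calling it on every iteration dominates
# runtime; fetching the invariant result once avoids that.

class Instruction:
    def __init__(self, shape):
        self._shape = shape
        self.shape_calls = 0

    def shape(self):
        # Imagine this is cheap when optimized but not inlined in debug.
        self.shape_calls += 1
        return self._shape

def output_size_slow(instr):
    total = 1
    for dim_index in range(len(instr.shape())):   # calls shape() every iteration
        total *= instr.shape()[dim_index]
    return total

def output_size_fast(instr):
    shape = instr.shape()                         # hoisted out of the loop
    total = 1
    for extent in shape:
        total *= extent
    return total
```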

---
Commit 569a00e68 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Update API to traffic in unique_ptrs rather than owning raw pointers

PiperOrigin-RevId: 163414320

---
Commit 31a77bc77 authored by Asim Shankar<ashankar@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Java: Update release to 1.3.0-rc1

PiperOrigin-RevId: 163413736

---
Commit 1ebbf4325 authored by Jonathan Hseu<vomjom@vomjom.net>
Committed by GitHub<noreply@github.com>:
Add missing grpc dependency (#11828)

---
Commit 905abb1f9 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Test asserts should have `expected` first.

PiperOrigin-RevId: 163409348
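The ordering matters because many xUnit-style frameworks (googletest's EXPECT_EQ, for example) document the first argument as the expected value and build their failure messages around that. A minimal Python sketch of the convention, with a hypothetical function under test:

```python
def assert_equal(expected, actual):
    # Convention: expected value first, so the failure message reads
    # "expected X, got Y" rather than the reverse.
    if expected != actual:
        raise AssertionError(f"expected {expected!r}, got {actual!r}")

def double(x):
    # Hypothetical function under test.
    return 2 * x

assert_equal(8, double(4))  # passes silently

try:
    assert_equal(8, double(5))
except AssertionError as e:
    message = str(e)
```

With the arguments swapped, the same failure would misreport which value was expected.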

---
Commit d5cc143e2 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Increase timeout to deflake the test.

PiperOrigin-RevId: 163407824

---
Commit ce1c7f02a authored by Eli Bendersky<eliben@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Properly include logging header in xla_internal_test_main

PiperOrigin-RevId: 163405986

---
Commit 22241cd42 authored by joetoth<joetoth@gmail.com>
Committed by Vijay Vasudevan<vrv@google.com>:
External leveldb link changed (#11833)

table_format.txt was renamed to table_format.md
---
Commit 6b7314de4 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Consolidate the code that fills in the partition's function library
into one place. Previously, Partition() and MasterSession::RegisterPartition()
both filled in the partitioned graph's function library.

PiperOrigin-RevId: 163400992

---
Commit 28373cfe7 authored by Frank Chen<frankchn@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Adds preliminary support for Cloud TPUs with Cluster Resolvers. This aims to give users a better experience when specifying one or more Cloud TPUs for their training jobs, by allowing them to use names rather than IP addresses.

PiperOrigin-RevId: 163393443

---
Commit e5353c941 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Don't prune nodes that have reference inputs.

PiperOrigin-RevId: 163390862

---
Commit 226510834 authored by Asim Shankar<ashankar@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
C API: Groundwork for experimenting with TF_Tensor in device memory.

TF_Tensor objects are always backed by host memory. This commit lays
the groundwork for allowing TF_Tensor objects to refer to tensor data
on device (e.g., GPU) memory.

PiperOrigin-RevId: 163388079

---
Commit 613bf1c7c authored by Yuefeng Zhou<yuefengz@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Fix ASAN test failure in SingleMachineTest::ReleaseMemoryAfterDestruction.

PiperOrigin-RevId: 163386941

---
Commit 4653d37a3 authored by Eli Bendersky<eliben@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
[XLA] Change type to appease GPU builds.

PiperOrigin-RevId: 163384927

---
Commit 9f131bd15 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Internal change

PiperOrigin-RevId: 163378484

---
Commit 8bc0236c8 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
PiperOrigin-RevId: 163366493

---
Commit 3b97f1f9b authored by Yangzihao Wang<yangzihao@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Change to only run one round of matmul benchmark.

PiperOrigin-RevId: 163364341

---
Commit a4a3a3335 authored by Yun Peng<pcloudy@google.com>
Committed by Vijay Vasudevan<vrv@google.com>:
Fix ./configure on Windows (#11775)

* Fix ./configure on Windows

* Disable bitwise_ops_test on Windows

---
Commit ae3119d16 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Small changes to op framework.

PiperOrigin-RevId: 163361071

---
Commit f40189d26 authored by qjivy<ji.qiu@spreadtrum.com>
Committed by Vijay Vasudevan<vrv@google.com>:
PR again: Enable building label_image with jpeg/gif/png decoder for Android.  (#11475)

* Enable building label_image with jpeg/gif/png decoder for Android.
Add the dependency "android_tensorflow_image_op" to label_image, which
does not overlap with android_tensorflow_kernels.

* Running buildifier to reformat the BUILD files for
sanity check.

---
Commit 599165861 authored by KB Sriram<kbsriram@gmail.com>
Committed by Vijay Vasudevan<vrv@google.com>:
Add the Constant operator class (#11559)

Create a custom operator class to create constants in the Graph,
and introduce the Operator marker annotation to identify
operator classes.

Please see #7149 for the master tracking issue.
---
Commit 86ca3506f authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Further BUILD cleanup

PiperOrigin-RevId: 163360750

---
Commit 376bb063b authored by Pete Warden<petewarden@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Look inside functions to see which node types are used.

PiperOrigin-RevId: 163360375

---
Commit 2139e7d8b authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
[tf.contrib.data] map expects a nested structure.

Fixes #11786

PiperOrigin-RevId: 163359134

---
Commit d09304fca authored by Jonathan Hseu<vomjom@vomjom.net>
Committed by Vijay Vasudevan<vrv@google.com>:
Upgrade gRPC (#11768)

* BUILD rule modifications

* More build fixes

* Code changes

* More code fixes

* Working tests

* CMake build

* Fix pprof

* Fix header includes

* CMake fix test

* Bazel clean

* Fix verbs

* More verbs fixes

* bazel clean for XLA

* Windows build fix test

* Add openssl/rand.h

* New cmake build command

* --config Release

---
Commit 3cd828474 authored by David Norman<DavidNorman@users.noreply.github.com>
Committed by Vijay Vasudevan<vrv@google.com>:
Fix error with default python path selection (#11814)

* Fix error with default python path selection

* Move setting of environment var outside if / else

---
Commit ddd8e21b7 authored by Eli Bendersky<eliben@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
[XLA] Consolidate all similar main()s in tests into a single target.

PiperOrigin-RevId: 163354724

---
Commit a36bca25b authored by Tayo Oguntebi<tayo@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Remove ShapeWithoutPadding() utility function, as it is no longer needed.

PiperOrigin-RevId: 163353430

---
Commit b26f9cd44 authored by David Norman<DavidNorman@users.noreply.github.com>
Committed by Vijay Vasudevan<vrv@google.com>:
Ensure that the multi-instruction fuse can take shared inputs (#11748)

* Ensure that the multi-instruction fuse can take shared inputs

Note that the fuse action only works when the shared input / constant
appears after all of its consumers in the list of instructions.

* Add a comment describing the test

---
Commit 34cbf161d authored by Jiri Simsa<jsimsa@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Update Dataset API documentation.

PiperOrigin-RevId: 163349457

---
Commit 2381ce5c3 authored by Abdullah Alrasheed<a.rasheed@tc-sa.com>
Committed by Vijay Vasudevan<vrv@google.com>:
DOC: Fix typo. (#11813)

you could could be I/O bottlenecked.
TO:
you could be I/O bottlenecked.
---
Commit e4a5c5356 authored by Toby Boyd<tobyboyd@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
["Variable", "VariableV2", "VarHandleOp"] is the default for ps_ops=None

PiperOrigin-RevId: 163344629
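This is the common Python idiom of resolving a `None` default to a documented default list at call time. A hedged sketch of the pattern: the op-type names come from the commit message above, but the surrounding function is hypothetical, not TensorFlow's actual signature.

```python
# Default set of op types treated as parameter-server ops; the names
# come from the commit message, the function itself is hypothetical.
DEFAULT_PS_OPS = ("Variable", "VariableV2", "VarHandleOp")

def choose_ps_ops(ps_ops=None):
    # The default is resolved inside the function body, so callers
    # passing ps_ops=None get the documented default list.
    if ps_ops is None:
        ps_ops = list(DEFAULT_PS_OPS)
    return ps_ops
```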

---
Commit 722f6f361 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Fix TensorForest's saveable object names so loading a savedmodel works.

PiperOrigin-RevId: 163332598

---
Commit cda80a785 authored by Eric Liu<ioeric@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
[tpu profiler] Dump HLO graphs in profile responses to the log directory.

PiperOrigin-RevId: 163318992

---
Commit cea9ef6f5 authored by horance<horance-liu@users.noreply.github.com>
Committed by Vijay Vasudevan<vrv@google.com>:
Refactoring device name utils (#11797)

* remove duplicated code for full_name and legacy_name for DeviceNameUtils

* replace tabs

* Real->Device

---
Commit 1f7c0f917 authored by Kongsea<kongsea@gmail.com>
Committed by Vijay Vasudevan<vrv@google.com>:
Refine docstrings (#11800)

---
Commit dd1f0cddd authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Supports looking up devices by full name, in either the canonical form or the
legacy form. This makes DeviceSet behave the same as DeviceMgr's
FindDevice method.

PiperOrigin-RevId: 163300346
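A hedged sketch of the dual-spelling lookup described above: the same device is indexed under both its canonical full name and a legacy spelling, so a FindDevice-style lookup succeeds with either form. The class and the exact legacy format here are illustrative assumptions, not TensorFlow's actual DeviceSet implementation.

```python
# Hypothetical sketch (not TensorFlow's actual code): resolve a device
# under both its canonical full name and an assumed legacy spelling.

def canonical_name(job, replica, task, device_type, device_id):
    return f"/job:{job}/replica:{replica}/task:{task}/device:{device_type}:{device_id}"

def legacy_name(job, replica, task, device_type, device_id):
    # Assumed legacy form with a lowercase type segment.
    return f"/job:{job}/replica:{replica}/task:{task}/{device_type.lower()}:{device_id}"

class DeviceSet:
    def __init__(self):
        self._by_name = {}

    def add(self, job, replica, task, device_type, device_id, device):
        # Index the same device under both spellings so lookups succeed
        # regardless of which form the caller uses.
        for name in (canonical_name(job, replica, task, device_type, device_id),
                     legacy_name(job, replica, task, device_type, device_id)):
            self._by_name[name] = device

    def find_device(self, fullname):
        return self._by_name.get(fullname)

devices = DeviceSet()
devices.add("worker", 0, 0, "GPU", 0, device="gpu0")
```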

---
Commit 631a364cd authored by Kay Zhu<kayzhu@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
[XLA] Add Reduce, DynamicSlice and DynamicSliceUpdate to HloEvaluator.

- Reduce is explicitly disabled for constant folding, as not all types of
embedded computation are currently supported by the evaluator.

- Added support to evaluate HloModule to HloEvaluator.

- Minor signature change to Evaluate().

PiperOrigin-RevId: 163299238

---
Commit a52470172 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Sets the incarnation number even when the attribute is set.

PiperOrigin-RevId: 163299121

---
Commit a49fe0366 authored by Suharsh Sivakumar<suharshs@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Remove platform bridge for grpc_response_reader.

PiperOrigin-RevId: 163295986

---
Commit 4404aa7cb authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
[XLA] Add TODO comment explaining why the IsScalar check exists.

PiperOrigin-RevId: 163292777

---
Commit 43036ac16 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Remove unnecessary break statements.

PiperOrigin-RevId: 163291947

---
Commit fd5de4690 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
[XLA] Add regression test for a corner case using Reduce that currently fails with the GPU backend.

PiperOrigin-RevId: 163287986

---
Commit 32e198f2d authored by Chris Leary<leary@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
[TF:XLA] Add tf.cross support.

See #11788

PiperOrigin-RevId: 163287731

---
Commit 88abddbc3 authored by Alan Yee<alyee@ucsd.edu>
Committed by Vijay Vasudevan<vrv@google.com>:
Update README.md (#11793)

Remove the bad practice of sudo pip installs and use safer pip install commands
---
Commit 9b30dc3a8 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Remove final mentions of `get_shape` in docstring.

PiperOrigin-RevId: 163282839

---
Commit 423c1eea0 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
BREAKING CHANGE: Fix semantic error in how maybe_batch* handles sparse tensors.

PiperOrigin-RevId: 163276613

---
Commit 6028c071b authored by Justin Lebar<jlebar@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Highlight incoming/outgoing edges on hover in HLO graphviz dumps, and other improvements.

Other improvements:

 - Don't show tooltips for nodes and clusters.  Previously we'd show a
   tooltip containing a pointer value expressed as decimal.  Not so
   useful.

 - Show tooltips on edges with the to/from node names.

 - Fix bug wherein if we had

   - a node at the "edge" of the graph (so its operands aren't included
     unless they're referenced by another node),
   - with all of its operands included in the graph save one or more
     constants, and
   - those constants weren't referenced by any nodes not at the edge of
     the graph,

   we would incorrectly draw the node as "grayed out", indicating that
   one of its operands (namely, its constant operand) wasn't present in
   the graph.

   This is wrong because constants are inlined into their users, so they
   should always count as "displayed" for the purposes of determining
   whether a node is grayed out.

PiperOrigin-RevId: 163276108

---
Commit ce7a355bd authored by Joshua V. Dillon<jvdillon@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Update contrib/distributions/estimator_test build dependency.

PiperOrigin-RevId: 163272464

---
Commit 1b8458a1c authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Shorten docstring line.

PiperOrigin-RevId: 163269709

---
Commit 69e323cc6 authored by Asim Shankar<ashankar@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Fix comment typo

PiperOrigin-RevId: 163266376

---
Commit 08790e73d authored by Chris Leary<leary@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
[XLA] Fix a bug in cloning outfeeds, carried the wrong shape.

PiperOrigin-RevId: 163265592

---
Commit 1bad826d6 authored by Yangzihao Wang<yangzihao@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Rollback of GPU kernel implementation of transpose for tensors with one small dimension.
END_PUBLIC

BEGIN_PUBLIC
Automated g4 rollback of changelist 162525519

PiperOrigin-RevId: 163490703
This commit is contained in:
Vijay Vasudevan 2017-07-28 10:58:56 -07:00 committed by TensorFlower Gardener
parent efc63f6248
commit a1fba7f5ac
150 changed files with 17418 additions and 2973 deletions

CODE_OF_CONDUCT.md (new file)

@@ -0,0 +1,70 @@
# TensorFlow Code of Conduct
In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic address, without explicit permission
* Conduct which could reasonably be considered inappropriate for the forum in which it occurs.
All TensorFlow forums and spaces are meant for professional interactions, and any behavior which could reasonably be considered inappropriate in a professional setting is unacceptable.
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
## Scope
This Code of Conduct applies to all content on tensorflow.org, TensorFlow's GitHub organization, or any other official TensorFlow web presence allowing for community interactions, as well as at all official TensorFlow events, whether offline or online.
The Code of Conduct also applies within project spaces and in public spaces whenever an individual is representing TensorFlow or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed or de facto representative at an online or offline event.
## Conflict Resolution
Conflicts in an open source project can take many forms, from someone having a bad day and using harsh and hurtful language in the issue queue, to more serious instances such as sexist/racist statements or threats of violence, and everything in between.
If the behavior is threatening or harassing, or for other reasons requires immediate escalation, please see below.
However, for the vast majority of issues, we aim to empower individuals to first resolve conflicts themselves, asking for help when needed, and only after that fails to escalate further. This approach gives people more control over the outcome of their dispute.
If you are experiencing or witnessing conflict, we ask you to use the following escalation strategy to address the conflict:
1. Address the perceived conflict directly with those involved, preferably in a real-time medium.
2. If this fails, get a third party (e.g. a mutual friend or someone with background on the issue, but not involved in the conflict) to intercede.
3. If you are still unable to resolve the conflict, and you believe it rises to harassment or another code of conduct violation, report it.
## Reporting Violations
Violations of the Code of Conduct can be reported to TensorFlow's Project Steward at conduct@tensorflow.org. The Project Steward will determine whether the Code of Conduct was violated, and will issue an appropriate sanction, possibly including a written warning or expulsion from the project, project sponsored spaces, or project forums. We ask that you make a good-faith effort to resolve your conflict via the conflict resolution policy before submitting a report.
Violations of the Code of Conduct can occur in any setting, even those unrelated to the project. We will only consider complaints about conduct that has occurred within one year of the report.
## Enforcement
If the Project Steward receives a report alleging a violation of the Code of Conduct, the Project Steward will notify the accused of the report, and provide them an opportunity to discuss the report before a sanction is issued. The Project Steward will do their utmost to keep the reporter anonymous. If the act is ongoing (such as someone engaging in harassment), or involves a threat to anyone's safety (e.g. threats of violence), the Project Steward may issue sanctions without notice.
## Attribution
This Code of Conduct is adapted from the Contributor Covenant, version 1.4, available at http://contributor-covenant.org/version/1/4, and includes some aspects of the Geek Feminism Code of Conduct and the Drupal Code of Conduct.

@@ -1,6 +1,6 @@
Please go to Stack Overflow for help and support:
-http://stackoverflow.com/questions/tagged/tensorflow
+https://stackoverflow.com/questions/tagged/tensorflow
If you open a GitHub issue, here is our policy:

@@ -9,7 +9,7 @@
| [![Build Status](https://ci.tensorflow.org/buildStatus/icon?job=tensorflow-master-cpu)](https://ci.tensorflow.org/job/tensorflow-master-cpu) | [![Build Status](https://ci.tensorflow.org/buildStatus/icon?job=tensorflow-master-linux-gpu)](https://ci.tensorflow.org/job/tensorflow-master-linux-gpu) | [![Build Status](https://ci.tensorflow.org/buildStatus/icon?job=tensorflow-master-mac)](https://ci.tensorflow.org/job/tensorflow-master-mac) | [![Build Status](https://ci.tensorflow.org/buildStatus/icon?job=tensorflow-master-win-cmake-py)](https://ci.tensorflow.org/job/tensorflow-master-win-cmake-py) | [![Build Status](https://ci.tensorflow.org/buildStatus/icon?job=tensorflow-master-android)](https://ci.tensorflow.org/job/tensorflow-master-android) |
**TensorFlow** is an open source software library for numerical computation using
-data flow graphs. Nodes in the graph represent mathematical operations, while
+data flow graphs. The graph nodes represent mathematical operations, while
the graph edges represent the multidimensional data arrays (tensors) that flow
between them. This flexible architecture lets you deploy computation to one
or more CPUs or GPUs in a desktop, server, or mobile device without rewriting
@@ -21,25 +21,26 @@ organization for the purposes of conducting machine learning and deep neural
networks research. The system is general enough to be applicable in a wide
variety of other domains, as well.
-**If you'd like to contribute to TensorFlow, be sure to review the [contribution
+**If you want to contribute to TensorFlow, be sure to review the [contribution
guidelines](CONTRIBUTING.md).**
**We use [GitHub issues](https://github.com/tensorflow/tensorflow/issues) for
-tracking requests and bugs, but please see
-[Community](https://www.tensorflow.org/community/) for general questions
-and discussion.**
+tracking requests and bugs. So please see
+[TensorFlow Discuss](https://groups.google.com/a/tensorflow.org/forum/#!forum/discuss) for general questions
+and discussion, and please direct specific questions to [Stack Overflow](https://stackoverflow.com/questions/tagged/tensorflow).**
## Installation
-*See [Installing TensorFlow](https://www.tensorflow.org/install/) for instructions on how to install our release binaries or how to build from source.*
+*See [Installing TensorFlow](https://www.tensorflow.org/install) for instructions on how to install our release binaries or how to build from source.*
People who are a little more adventurous can also try our nightly binaries:
-* Linux CPU-only: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=cpu-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.2.1-cp27-none-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=cpu-slave)) / [Python 3.4](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=cpu-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.2.1-cp34-cp34m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=cpu-slave/)) / [Python 3.5](https://ci.tensorflow.org/view/Nightly/job/nightly-python35-linux-cpu/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.2.1-cp35-cp35m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-python35-linux-cpu/))
-* Linux GPU: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-linux/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.2.1-cp27-none-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-linux/)) / [Python 3.4](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-linux/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.2.1-cp34-cp34m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-linux/)) / [Python 3.5](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3.5,label=gpu-linux/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.2.1-cp35-cp35m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3.5,label=gpu-linux/))
-* Mac CPU-only: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=mac-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.2.1-py2-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=mac-slave/)) / [Python 3](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=mac-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.2.1-py3-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=mac-slave/))
-* Mac GPU: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-mac/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.2.1-py2-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-mac/)) / [Python 3](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-mac/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.2.1-py3-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-mac/))
-* Windows CPU-only: [Python 3.5 64-bit](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows,PY=35/lastSuccessfulBuild/artifact/cmake_build/tf_python/dist/tensorflow-1.2.1-cp35-cp35m-win_amd64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows,PY=35/)) / [Python 3.6 64-bit](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows,PY=36/lastSuccessfulBuild/artifact/cmake_build/tf_python/dist/tensorflow-1.2.1-cp36-cp36m-win_amd64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows,PY=36/))
-* Windows GPU: [Python 3.5 64-bit](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows-gpu,PY=35/lastSuccessfulBuild/artifact/cmake_build/tf_python/dist/tensorflow_gpu-1.2.1-cp35-cp35m-win_amd64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows-gpu,PY=35/)) / [Python 3.6 64-bit](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows-gpu,PY=36/lastSuccessfulBuild/artifact/cmake_build/tf_python/dist/tensorflow_gpu-1.2.1-cp36-cp36m-win_amd64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows-gpu,PY=36/))
+* Linux CPU-only: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=cpu-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.3.0rc0-cp27-none-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=cpu-slave)) / [Python 3.4](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=cpu-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.3.0rc0-cp34-cp34m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=cpu-slave/)) / [Python 3.5](https://ci.tensorflow.org/view/Nightly/job/nightly-python35-linux-cpu/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.3.0rc0-cp35-cp35m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-python35-linux-cpu/))
+* Linux GPU: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-linux/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.3.0rc0-cp27-none-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-linux/)) / [Python 3.4](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-linux/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.3.0rc0-cp34-cp34m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-linux/)) / [Python 3.5](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3.5,label=gpu-linux/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.3.0rc0-cp35-cp35m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3.5,label=gpu-linux/))
+* Mac CPU-only: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=mac-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.3.0rc0-py2-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=mac-slave/)) / [Python 3](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=mac-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.3.0rc0-py3-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=mac-slave/))
+* Mac GPU: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-mac/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.3.0rc0-py2-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-mac/)) / [Python 3](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-mac/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.3.0rc0-py3-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-mac/))
+* Windows CPU-only: [Python 3.5 64-bit](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows,PY=35/lastSuccessfulBuild/artifact/cmake_build/tf_python/dist/tensorflow-1.3.0rc0-cp35-cp35m-win_amd64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows,PY=35/)) / [Python 3.6 64-bit](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows,PY=36/lastSuccessfulBuild/artifact/cmake_build/tf_python/dist/tensorflow-1.3.0rc0-cp36-cp36m-win_amd64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows,PY=36/))
+* Windows GPU: [Python 3.5 64-bit](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows-gpu,PY=35/lastSuccessfulBuild/artifact/cmake_build/tf_python/dist/tensorflow_gpu-1.3.0rc0-cp35-cp35m-win_amd64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows-gpu,PY=35/)) / [Python 3.6 64-bit](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows-gpu,PY=36/lastSuccessfulBuild/artifact/cmake_build/tf_python/dist/tensorflow_gpu-1.3.0rc0-cp36-cp36m-win_amd64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows-gpu,PY=36/))
* Android: [demo APK](https://ci.tensorflow.org/view/Nightly/job/nightly-android/lastSuccessfulBuild/artifact/out/tensorflow_demo.apk), [native libs](http://ci.tensorflow.org/view/Nightly/job/nightly-android/lastSuccessfulBuild/artifact/out/native/)
([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-android/))
@ -55,7 +56,7 @@ $ python
'Hello, TensorFlow!'
>>> a = tf.constant(10)
>>> b = tf.constant(32)
>>> sess.run(a+b)
>>> sess.run(a + b)
42
>>>
```
@ -63,8 +64,9 @@ $ python
## For more information
* [TensorFlow website](https://www.tensorflow.org)
* [TensorFlow whitepaper](http://download.tensorflow.org/paper/whitepaper2015.pdf)
* [TensorFlow White Papers](https://www.tensorflow.org/about/bib)
* [TensorFlow Model Zoo](https://github.com/tensorflow/models)
* [TensorFlow MOOC on Udacity](https://www.udacity.com/course/deep-learning--ud730)
* [TensorFlow course at Stanford](https://web.stanford.edu/class/cs20si)
The TensorFlow community has created amazing things with TensorFlow; please see the [resources section of tensorflow.org](https://www.tensorflow.org/about/#community) for an incomplete list.
Visit the [community page of tensorflow.org](https://www.tensorflow.org/community) for a few ways to participate.


@ -1,3 +1,106 @@
# Release 1.3.0
## Major Features and Improvements
* Added canned estimators to the TensorFlow library. List of added estimators: `DNNClassifier`, `DNNRegressor`, `LinearClassifier`, `LinearRegressor`, `DNNLinearCombinedClassifier`, `DNNLinearCombinedRegressor`.
* All our prebuilt binaries have been built with cuDNN 6.
* Adds a file cache to the GCS filesystem with configurable max staleness for file contents. This permits caching of file contents across close/open boundaries.
* Added an axis parameter to `tf.gather`.
* Added a `constant_values` keyword argument to `tf.pad`.
* Adds `Dataset.interleave` transformation.
* Add `ConcatenateDataset` to concatenate two datasets.
* Added Mobilenet support to TensorFlow for Poets training script.
* Adds a block cache to the GCS filesystem with configurable block size and count.
* SinhArcSinh bijector added.
* Added `Dataset.list_files` API.
* Introduces new operations and Python bindings for the Cloud TPU.
* Adding TensorFlow-iOS CocoaPod for symmetry with tensorflow-android.
* Introduces base implementations of ClusterResolvers.
* Unify memory representations of TensorShape and PartialTensorShape. As a consequence, tensors now have a maximum of 254 dimensions, not 255.
* Changed references to LIBXSMM to use version 1.8.1.
* TensorFlow Debugger (tfdbg): Display summaries of numeric tensor values with the `-s` flag to command `print_tensor` or `pt`.
* Initial release of the statistical distribution library `tf.distributions`.
* GPU kernels and speed improvements for unary `tf.where` and `tf.nn.top_k`.
* Monotonic Attention wrappers added to `tf.contrib.seq2seq`.
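Of the features above, the new `axis` parameter to `tf.gather` is easiest to picture with a small sketch. The following is plain Python illustrating the semantics only (the `gather` helper is hypothetical, not TensorFlow code): `axis=0` selects rows, `axis=1` selects columns within each row.

```python
# Pure-Python sketch of what the new `axis` argument to tf.gather selects.
# Illustrative only -- this `gather` helper is not the TensorFlow API.

def gather(params, indices, axis=0):
    """Gather slices from a nested (2-D) list along axis 0 or 1."""
    if axis == 0:
        # Select whole rows by index.
        return [params[i] for i in indices]
    # axis == 1: select the given columns from every row.
    return [[row[i] for i in indices] for row in params]

params = [[1, 2, 3],
          [4, 5, 6]]

print(gather(params, [1, 0], axis=0))  # [[4, 5, 6], [1, 2, 3]]
print(gather(params, [2, 0], axis=1))  # [[3, 1], [6, 4]]
```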
## Breaking Changes to the API
* `tf.RewriterConfig` was removed from the Python API after being available in 1.2 release candidates (it was never in an actual release). Graph rewriting is still available, just not as `tf.RewriterConfig`. Instead add an explicit import.
* Breaking change to `tf.contrib.data.Dataset` APIs that expect a nested structure. Lists are now converted to `tf.Tensor` implicitly. You may need to change uses of lists to tuples in existing code. In addition, dicts are now supported as a nested structure.
## Changes to contrib APIs
* Adds `tf.contrib.nn.rank_sampled_softmax_loss`, a sampled-softmax variant that can improve rank loss.
* `tf.contrib.metrics.{streaming_covariance, streaming_pearson_correlation}` modified to return NaN when they have seen less than or equal to 1 unit of weight.
* Adds time series models to contrib. See contrib/timeseries/README.md for details.
* Adds `FULLY_CONNECTED` Op to tensorflow/contrib/lite/schema.fbs.
## Bug Fixes and Other Changes
* Fixes `strides` and `begin` dtype mismatch when slicing using an int64 Tensor index in Python.
* Improved convolution padding documentation.
* Add a tag constant, `gpu`, to present graphs with GPU support.
* `saved_model.utils` now supports SparseTensors transparently.
* A more efficient implementation of non-max suppression.
* Add support for the shrinkage-type L2 to FtrlOptimizer in addition to the online L2 it already supports.
* Fix negative variance in moments calculation.
* Expand UniqueOp Benchmark Tests to cover more collision cases.
* Improves stability of GCS filesystem on Mac.
* Add time estimation to HloCostAnalysis.
* Fixed a bug in Estimator where `params` in the constructor was not a deep copy of the user-provided one. This bug inadvertently allowed users to mutate the params after the creation of the Estimator, leading to potentially undefined behavior.
* Added None check for save_path in `saver.restore`.
* Register devices under their legacy names in device_mgr to ease the transition to clusterspec-propagated configurations.
* VectorExponential added to distributions.
* Add a bitwise module with bitwise_and, bitwise_or, bitwise_xor, and invert functions.
* Add fixed-grid ODE integration routines.
* Allow passing bounds to ScipyOptimizerInterface.
* Correctness fixes for fft_length parameter to `tf.spectral.rfft` & `tf.spectral.irfft`.
* Exported model signatures using the 'predict' method will no longer have their input and output keys silently ignored and rewritten to 'inputs' and 'outputs'. If a model was exported with different names before 1.2, and is now served with tensorflow/serving, it will accept requests using 'inputs' and 'outputs'. Starting at 1.2, such a model will accept the keys specified during export. Therefore, inference requests using 'inputs' and 'outputs' may start to fail. To fix this, either update any inference clients to send requests with the actual input and output keys used by the trainer code, or conversely, update the trainer code to name the input and output Tensors 'inputs' and 'outputs', respectively. Signatures using the 'classify' and 'regress' methods are not affected by this change; they will continue to standardize their input and output keys as before.
* Add in-memory caching to the Dataset API.
* Set default end_of_sequence variable in dataset iterators to false.
* [Performance] Increase performance of `tf.layers.conv2d` when setting use_bias=True by 2x by using nn.bias_add.
* Update iOS examples to use CocoaPods, and moved to tensorflow/examples/ios.
* Adds a family= attribute in `tf.summary` ops to allow controlling the tab name used in Tensorboard for organizing summaries.
* When GPU is configured, do not require --config=cuda, instead, automatically build for GPU if this is requested in the configure script.
* Fix incorrect sampling of small probabilities in CPU/GPU multinomial.
* Add a list_devices() API on sessions to list devices within a cluster. Additionally, this change augments the ListDevices master API to support specifying a session.
* Allow uses of over-parameterized separable convolution.
* TensorForest multi-regression bug fix.
* Framework now supports armv7, cocoapods.org now displays correct page.
* Script to create iOS framework for CocoaPods.
* Android releases of TensorFlow are now pushed to jcenter for easier integration into apps. See https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/android/README.md for more details.
* Fixed a bug that prevented tfdbg from functioning with multi-GPU setups.
* Fixed a bug that prevented tfdbg from working with `tf.Session.make_callable`.
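Among the changes above, the new bitwise module (`bitwise_and`, `bitwise_or`, `bitwise_xor`, `invert`) is simple enough to sketch in plain Python. This is an illustration of the elementwise semantics for an assumed uint8 dtype, not the TensorFlow ops themselves; in particular, the assumption here is that `invert` flips every bit within the dtype's width.

```python
# Sketch of the bitwise module's semantics for uint8 operands (illustrative,
# pure Python; the TensorFlow ops apply these elementwise to integer tensors).

WIDTH = 8                  # bit width of the assumed uint8 dtype
MASK = (1 << WIDTH) - 1    # 0xFF

def bitwise_and(x, y): return x & y
def bitwise_or(x, y):  return x | y
def bitwise_xor(x, y): return x ^ y
def invert(x):         return ~x & MASK  # flip all bits within the dtype width

print(bitwise_and(0b1100, 0b1010))  # 8  (0b1000)
print(bitwise_xor(0b1100, 0b1010))  # 6  (0b0110)
print(invert(0))                    # 255 for uint8
```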
## Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
4F2E4A2E, Adriano Carmezim, Adrià Arrufat, Alan Yee, Alex Lattas, Alex Rothberg,
Alexandr Baranezky, Ali Siddiqui, Andreas Solleder, Andrei Costinescu, Andrew Hundt,
Androbin, Andy Kernahan, Anish Shah, Anthony Platanios, Arvinds-Ds, b1rd, Baptiste
Arnaud, Ben Mabey, Benedikt Linse, Beomsu Kim, Bo Wang, Boyuan Deng, Brett Koonce,
Bruno Rosa, Carl Thomé, Changming Sun, Chase Roberts, Chirag Bhatia, Chris Antaki,
Chris Hoyean Song, Chris Tava, Christos Nikolaou, Croath Liu, cxx, Czxck001, Daniel
Ylitalo, Danny Goodman, Darren Garvey, David Brailovsky, David Norman, DavidNorman,
davidpham87, ddurham2, Dhruv, DimanNe, Drew Hintz, Dustin Tran, Earthson Lu, ethiraj,
Fabian Winnen, Fei Sun, Freedom" Koan-Sin Tan, Fritz Obermeyer, Gao, Xiang, Gautam,
Guenther Schmuelling, Gyu-Ho Lee, Hauke Brammer, horance, Humanity123, J Alammar,
Jayeol Chun, Jeroen BéDorf, Jianfei Wang, jiefangxuanyan, Jing Jun Yin, Joan Puigcerver,
Joel Hestness, Johannes Mayer, John Lawson, Johnson145, Jon Malmaud, Jonathan Alvarez-Gutierrez,
Juang, Yi-Lin, Julian Viereck, Kaarthik Sivashanmugam, Karl Lessard, karl@kubx.ca, Kevin
Carbone, Kevin Van Der Burgt, Kongsea, ksellesk, lanhin, Lef Ioannidis, Liangliang He,
Louis Tiao, Luke Iwanski, LáSzló Csomor, magixsno, Mahmoud Abuzaina, Marcel Hlopko, Mark
Neumann, Maxwell Paul Brickner, mdfaijul, MichaëL Defferrard, Michał JastrzęBski, Michele
Colombo, Mike Brodie, Mosnoi Ion, mouradmourafiq, myPrecious, Nayana Thorat,
Neeraj Kashyap, Nelson Liu, Niranjan Hasabnis, Olivier Moindrot, orome, Pankaj Gupta, Paul
Van Eck, peeyush18, Peng Yu, Pierre, preciousdp11, qjivy, Raingo, raoqiyu, ribx, Richard S.
Imaoka, Rishabh Patel, Robert Walecki, Rockford Wei, Ryan Kung, Sahil Dua, Sandip Giri, Sayed
Hadi Hashemi, sgt101, Shitian Ni, Shuolongbj, Siim PõDer, Simon Perkins, sj6077, SOLARIS,
Spotlight0xff, Steffen Eberbach, Stephen Fox, superryanguo, Sven Mayer, Tapan Prakash,
Tiago Morais Morgado, Till Hoffmann, Tj Rana, Vadim Markovtsev, vhasanov, Wei Wu,
windead, Yan (Asta) Li, Yan Chen, Yann Henon, Yi Wang, Yong Tang, yorkie, Yuan (Terry)
Tang, Yuxin Wu, zhengjiajin, zhongzyd, 黄璞
We are also grateful to all who filed issues or helped resolve them, asked and
answered questions, and were part of inspiring discussions.
# Release 1.2.1
## Bug Fixes and Other Changes


@ -32,6 +32,9 @@ load("//tensorflow:workspace.bzl", "tf_workspace")
# name="androidndk",
# path="<PATH_TO_NDK>",
# # This needs to be 14 or higher to compile TensorFlow.
# # Please specify an API level >= 21 to build for 64-bit
# # architectures, or the Android NDK will automatically select the biggest
# # API level that it supports without notice.
# # Note that the NDK version is not the API level.
# api_level=14)

arm_compiler.BUILD (new file)

@ -0,0 +1,81 @@
package(default_visibility = ["//visibility:public"])
filegroup(
name = "gcc",
srcs = [
"bin/arm-linux-gnueabihf-gcc",
],
)
filegroup(
name = "ar",
srcs = [
"bin/arm-linux-gnueabihf-ar",
],
)
filegroup(
name = "ld",
srcs = [
"bin/arm-linux-gnueabihf-ld",
],
)
filegroup(
name = "nm",
srcs = [
"bin/arm-linux-gnueabihf-nm",
],
)
filegroup(
name = "objcopy",
srcs = [
"bin/arm-linux-gnueabihf-objcopy",
],
)
filegroup(
name = "objdump",
srcs = [
"bin/arm-linux-gnueabihf-objdump",
],
)
filegroup(
name = "strip",
srcs = [
"bin/arm-linux-gnueabihf-strip",
],
)
filegroup(
name = "as",
srcs = [
"bin/arm-linux-gnueabihf-as",
],
)
filegroup(
name = "compiler_pieces",
srcs = glob([
"arm-linux-gnueabihf/**",
"libexec/**",
"lib/gcc/arm-linux-gnueabihf/**",
"include/**",
]),
)
filegroup(
name = "compiler_components",
srcs = [
":ar",
":as",
":gcc",
":ld",
":nm",
":objcopy",
":objdump",
":strip",
],
)

configure (vendored)

@ -8,7 +8,7 @@ if [ -z "$PYTHON_BIN_PATH" ]; then
fi
# Set all env variables
$PYTHON_BIN_PATH configure.py
"$PYTHON_BIN_PATH" configure.py
echo "Configuration finished"
echo "Configuration finished"


@ -175,7 +175,7 @@ def setup_python(environ_cp):
if not python_lib_path:
python_lib_paths = get_python_path(environ_cp)
if environ_cp.get('USE_DEFAULT_PYTHON_LIB_PATH') == '1':
environ_cp['PYTHON_LIB_PATH'] = python_lib_paths[0]
python_lib_path = python_lib_paths[0]
else:
print('Found possible Python library paths:\n%s' %
'\n'.join(python_lib_paths))
@ -185,7 +185,7 @@ def setup_python(environ_cp):
% python_lib_paths[0])
if not python_lib_path:
python_lib_path = default_python_lib_path
environ_cp['PYTHON_LIB_PATH'] = python_lib_path
environ_cp['PYTHON_LIB_PATH'] = python_lib_path
python_major_version = sys.version_info[0]
# Convert python path to Windows style before writing into bazel.rc
@ -240,7 +240,7 @@ def run_gen_git_source(environ_cp):
Args:
environ_cp: copy of the os.environ.
"""
cmd = '%s tensorflow/tools/git/gen_git_source.py --configure %s' % (
cmd = '"%s" tensorflow/tools/git/gen_git_source.py --configure %s' % (
environ_cp.get('PYTHON_BIN_PATH'), os.getcwd())
os.system(cmd)
@ -379,7 +379,7 @@ def check_bazel_version(min_version):
min_version: string for minimum bazel version.
"""
try:
curr_version = run_shell('bazel version')
curr_version = run_shell('bazel --batch version')
except subprocess.CalledProcessError:
print('Cannot find bazel. Please install bazel.')
sys.exit(0)


@ -63,6 +63,24 @@ config_setting(
visibility = ["//visibility:public"],
)
config_setting(
name = "android_mips",
values = {
"crosstool_top": "//external:android/crosstool",
"cpu": "mips",
},
visibility = ["//visibility:public"],
)
config_setting(
name = "android_mips64",
values = {
"crosstool_top": "//external:android/crosstool",
"cpu": "mips64",
},
visibility = ["//visibility:public"],
)
config_setting(
name = "darwin",
values = {"cpu": "darwin"},


@ -86,6 +86,15 @@ Status EluGradHelper(const Scope& scope, const Operation& op,
}
REGISTER_GRADIENT_OP("Elu", EluGradHelper);
Status SeluGradHelper(const Scope& scope, const Operation& op,
const std::vector<Output>& grad_inputs,
std::vector<Output>* grad_outputs) {
auto dx = internal::SeluGrad(scope, grad_inputs[0], op.output(0));
grad_outputs->push_back(dx);
return scope.status();
}
REGISTER_GRADIENT_OP("Selu", SeluGradHelper);
} // anonymous namespace
} // namespace ops
} // namespace tensorflow


@ -103,5 +103,15 @@ TEST_F(NNGradTest, EluGrad) {
RunTest(x, x_init_value, y, shape);
}
TEST_F(NNGradTest, SeluGrad) {
TensorShape shape({5, 2});
auto x = Placeholder(scope_, DT_FLOAT, Placeholder::Shape(shape));
auto y = Selu(scope_, x);
Tensor x_init_value = test::AsTensor<float>(
{-0.9f, -0.7f, -0.5f, -0.3f, -0.1f, 0.1f, 0.3f, 0.5f, 0.7f, 0.9f},
{5, 2});
RunTest(x, x_init_value, y, shape);
}
} // namespace
} // namespace tensorflow


@ -177,6 +177,7 @@ op { name: "MaxPoolGradWithArgmax" hide: true }
op { name: "ReluGrad" hide: true }
op { name: "Relu6Grad" hide: true }
op { name: "EluGrad" hide: true }
op { name: "SeluGrad" hide: true }
op { name: "SoftplusGrad" hide: true }
op { name: "SoftsignGrad" hide: true }
op { name: "FractionalAvgPoolGrad" hide: true }


@ -18,7 +18,7 @@ cc_library(
"//tensorflow/compiler/xla/service",
"//third_party/eigen3",
"@local_config_cuda//cuda:cuda_headers",
"@protobuf//:protobuf_headers",
"@protobuf_archive//:protobuf_headers",
],
)


@ -47,13 +47,14 @@ static se::DeviceMemoryBase AllocateOutputBuffer(
} else {
int64 size(xla::ShapeUtil::ByteSizeOf(shape, sizeof(void*)));
void** buf = reinterpret_cast<void**>(executor->Allocate(size));
void** buf_rc = buf;
for (int64 n = 0; n < xla::ShapeUtil::TupleElementCount(shape); n++) {
se::DeviceMemoryBase out =
AllocateSingleOutput(executor, literal.tuple_literals(n));
*buf++ = out.opaque();
}
return se::DeviceMemoryBase(buf, size);
return se::DeviceMemoryBase(buf_rc, size);
}
}


@ -113,6 +113,14 @@ class BinaryOpsTest(XLATestCase):
np.array([-.6, -.4, -.2, 0, .2, .4], dtype=dtype),
expected=np.array([0.4, 1.2, 2.4, 4, 5, 6], dtype=dtype))
self._testBinary(
gen_nn_ops._selu_grad,
np.array([1, 2, 3, 4, 5, 6], dtype=dtype),
np.array([-.6, -.4, -.2, .2, .4, .6], dtype=dtype),
expected=np.array(
[1.158099340847, 2.7161986816948, 4.67429802254,
4.202803949422, 5.2535049367774, 6.30420592413], dtype=dtype))
self._testBinary(
gen_nn_ops._relu_grad,
np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=dtype),


@ -1434,6 +1434,23 @@ TEST_F(OpTest, EluGrad) {
});
}
TEST_F(OpTest, Selu) {
Repeatedly([this]() {
return ExpectTfAndXlaOutputsAreClose(
OpTestBuilder("Selu").RandomInput(DT_FLOAT).Attr("T", DT_FLOAT));
});
}
TEST_F(OpTest, SeluGrad) {
Repeatedly([this]() {
auto dims = RandomDims();
return ExpectTfAndXlaOutputsAreClose(OpTestBuilder("SeluGrad")
.RandomInput(DT_FLOAT, dims)
.RandomInput(DT_FLOAT, dims)
.Attr("T", DT_FLOAT));
});
}
TEST_F(OpTest, Equal) {
Repeatedly([this]() {
DataType type = Choose<DataType>({DT_INT32, DT_FLOAT});


@ -229,6 +229,11 @@ class UnaryOpsTest(XLATestCase):
np.array([[-1, 0, 1]], dtype=dtype),
expected=np.array([[-0.63212056, 0, 1]], dtype=dtype))
self._assertOpOutputMatchesExpected(
nn_ops.selu,
np.array([[-1, 0, 1]], dtype=dtype),
expected=np.array([[-1.11133074, 0., 1.05070099]], dtype=dtype))
self._assertOpOutputMatchesExpected(
nn_ops.relu,
np.array([[-1, 1]], dtype=dtype),


@ -61,5 +61,49 @@ class EluGradOp : public XlaOpKernel {
REGISTER_XLA_OP(Name("Elu"), EluOp);
REGISTER_XLA_OP(Name("EluGrad"), EluGradOp);
class SeluOp : public XlaOpKernel {
public:
explicit SeluOp(OpKernelConstruction* ctx) : XlaOpKernel(ctx) {}
// Computes SELU: scale * x if x > 0, scale_alpha * (exp(x) - 1) otherwise.
void Compile(XlaOpKernelContext* ctx) override {
xla::ComputationBuilder* b = ctx->builder();
const auto zero = XlaHelpers::Zero(b, input_type(0));
const auto one = XlaHelpers::One(b, input_type(0));
const auto scale = XlaHelpers::FloatLiteral(b, input_type(0),
1.0507009873554804934193349852946);
const auto scale_alpha = XlaHelpers::FloatLiteral(b, input_type(0),
1.7580993408473768599402175208123);
const auto pred = b->Gt(ctx->Input(0), zero);
const auto expm1 = b->Sub(b->Exp(ctx->Input(0)), one);
ctx->SetOutput(0, b->Select(pred, b->Mul(scale, ctx->Input(0)),
b->Mul(scale_alpha, expm1)));
}
};
class SeluGradOp : public XlaOpKernel {
public:
explicit SeluGradOp(OpKernelConstruction* ctx) : XlaOpKernel(ctx) {}
// Returns the incoming gradient scaled by `scale` where the activation is
// positive, and grad * (activation + scale_alpha) elsewhere.
void Compile(XlaOpKernelContext* ctx) override {
xla::ComputationBuilder* b = ctx->builder();
const auto zero = XlaHelpers::Zero(b, input_type(0));
const auto one = XlaHelpers::One(b, input_type(0));
const auto scale = XlaHelpers::FloatLiteral(b, input_type(0),
1.0507009873554804934193349852946);
const auto scale_alpha = XlaHelpers::FloatLiteral(b, input_type(0),
1.7580993408473768599402175208123);
const auto grad = ctx->Input(0);
const auto activation = ctx->Input(1);
const auto lin_grad = b->Mul(grad, scale);
const auto exp_grad = b->Mul(grad, b->Add(activation, scale_alpha));
const auto pred = b->Gt(activation, zero);
ctx->SetOutput(0, b->Select(pred, lin_grad, exp_grad));
}
};
REGISTER_XLA_OP(Name("Selu"), SeluOp);
REGISTER_XLA_OP(Name("SeluGrad"), SeluGradOp);
} // namespace
} // namespace tensorflow
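The SeluOp/SeluGradOp kernels above encode SELU as `scale * x` on the positive branch and `scale_alpha * (exp(x) - 1)` otherwise, with `scale ≈ 1.0507` and `scale_alpha = scale * alpha ≈ 1.7581`. A small Python sketch of the same math, useful for checking the unit-test expectations earlier in this diff (e.g. selu(1) ≈ 1.05070099 and selu(-1) ≈ -1.11133074):

```python
# Reference sketch of the SELU math used by the XLA SeluOp/SeluGradOp kernels.
import math

SCALE = 1.0507009873554804934193349852946
SCALE_ALPHA = 1.7580993408473768599402175208123  # scale * alpha

def selu(x):
    # scale * x on the positive branch, scale_alpha * expm1(x) otherwise.
    return SCALE * x if x > 0 else SCALE_ALPHA * math.expm1(x)

def selu_grad(grad, activation):
    # The linear branch passes grad * scale; the exponential branch reuses
    # the activation value itself: grad * (activation + scale_alpha).
    return grad * SCALE if activation > 0 else grad * (activation + SCALE_ALPHA)

print(round(selu(1.0), 8))   # 1.05070099
print(round(selu(-1.0), 8))  # -1.11133074
```

Note how SeluGradOp avoids recomputing `exp(x)` by expressing the negative-branch derivative in terms of the saved activation, which is why the gradient test feeds activations rather than pre-activation features.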


@ -204,12 +204,12 @@ TEST_F(RandomShapePartitionIteratorTest, RandomShapeAndPartitions) {
// Choose random dimensions for R4 shape.
Shape shape = ShapeUtil::MakeShapeWithLayout(F32, RandR4Dims(), {3, 2, 1, 0});
// Choose random number of outer dimensions to partition.
const int num_outer_dims_to_partiton = 1 + (Rand() % 3);
// Choose random outer dimension partiton counts.
std::vector<int64> dim_sizes(num_outer_dims_to_partiton);
std::vector<int64> dim_partition_counts(num_outer_dims_to_partiton);
const int num_outer_dims_to_partition = 1 + (Rand() % 3);
// Choose random outer dimension partition counts.
std::vector<int64> dim_sizes(num_outer_dims_to_partition);
std::vector<int64> dim_partition_counts(num_outer_dims_to_partition);
int64 total_dim_size = 1;
for (int i = 0; i < num_outer_dims_to_partiton; ++i) {
for (int i = 0; i < num_outer_dims_to_partition; ++i) {
const int64 dimension = shape.layout().minor_to_major(
shape.layout().minor_to_major_size() - 1 - i);
dim_sizes[i] = shape.dimensions(dimension);
@ -220,7 +220,7 @@ TEST_F(RandomShapePartitionIteratorTest, RandomShapeAndPartitions) {
}
// Iterate through all partition: for each partition record covered
// index ranges by dimension.
std::vector<std::map<int64, int64>> ranges(num_outer_dims_to_partiton);
std::vector<std::map<int64, int64>> ranges(num_outer_dims_to_partition);
ShapePartitionIterator partition_iterator(shape, dim_partition_counts);
const int64 partition_count = partition_iterator.GetTotalPartitionCount();
for (int64 i = 0; i < partition_count; ++i) {


@ -521,6 +521,42 @@ XLA_TEST_F(FusionTest, DISABLED_ON_CPU(ReduceWindow)) {
*ExecuteAndTransfer(std::move(hlo_module), {}));
}
// When a constant (or other op) which has multiple users is imported
// into a fusion, it should remain shared, rather than being duplicated
// within the fusion.
XLA_TEST_F(FusionTest, SharedConstant) {
auto hlo_module = CreateNewModule();
auto builder = HloComputation::Builder(TestName());
auto const0 = builder.AddInstruction(
HloInstruction::CreateConstant(Literal::CreateR1<int32>({0})));
auto const1 = builder.AddInstruction(
HloInstruction::CreateConstant(Literal::CreateR1<int32>({2})));
auto add1 = builder.AddInstruction(HloInstruction::CreateBinary(
ShapeUtil::MakeShape(S32, {1}), HloOpcode::kAdd, const1, const0));
auto add2 = builder.AddInstruction(HloInstruction::CreateBinary(
ShapeUtil::MakeShape(S32, {1}), HloOpcode::kAdd, const1, add1));
auto add3 = builder.AddInstruction(HloInstruction::CreateBinary(
ShapeUtil::MakeShape(S32, {1}), HloOpcode::kAdd, const1, add2));
auto add4 = builder.AddInstruction(HloInstruction::CreateBinary(
ShapeUtil::MakeShape(S32, {1}), HloOpcode::kAdd, const1, add3));
hlo_module->AddEntryComputation(builder.Build())
->CreateFusionInstruction(
{add4, add3, add2, add1, const1},
HloInstruction::FusionKind::kLoop);
HloComputation* entry_comp = hlo_module->entry_computation();
// entry computation contains the constant(0) and the fusion
EXPECT_EQ(entry_comp->instructions().size(), 2);
// fused instruction contains the constant(2), the parameter, and 4 adds
EXPECT_EQ(entry_comp->root_instruction()->fused_instructions().size(), 6);
LiteralTestUtil::ExpectEqual(*Literal::CreateR1<int32>({8}),
*ExecuteAndTransfer(std::move(hlo_module), {}));
}
XLA_TEST_F(FusionTest, Add2D) { TestElementwise2D<float, 2>(HloOpcode::kAdd); }
XLA_TEST_F(FusionTest, Subtract2D) {


@ -1,15 +1,15 @@
"""Wrapper around cc_proto_library used inside the XLA codebase."""
load("@protobuf//:protobuf.bzl", "cc_proto_library")
load("@protobuf_archive//:protobuf.bzl", "cc_proto_library")
# xla_proto_library() is a convenience wrapper around cc_proto_library.
def xla_proto_library(name, srcs=[], deps=[], visibility=None, testonly=0):
cc_proto_library(name=name,
srcs=srcs,
deps=deps,
cc_libs = ["@protobuf//:protobuf"],
protoc="@protobuf//:protoc",
default_runtime="@protobuf//:protobuf",
cc_libs = ["@protobuf_archive//:protobuf"],
protoc="@protobuf_archive//:protoc",
default_runtime="@protobuf_archive//:protobuf",
testonly=testonly,
visibility=visibility,)


@ -11,6 +11,7 @@ load(
"//tensorflow:tensorflow.bzl",
"tf_copts",
"if_android",
"if_android_mips",
)
exports_files([
@ -85,7 +86,7 @@ cc_binary(
"-Wl,--gc-sections",
"-Wl,--version-script", # This line must be directly followed by LINKER_SCRIPT.
LINKER_SCRIPT,
]),
]) + if_android_mips(["-latomic"]),
linkshared = 1,
linkstatic = 1,
tags = [


@ -27,6 +27,7 @@ import java.nio.ByteBuffer;
import java.nio.DoubleBuffer;
import java.nio.FloatBuffer;
import java.nio.IntBuffer;
import java.nio.LongBuffer;
import java.util.ArrayList;
import java.util.List;
import org.tensorflow.DataType;
@ -226,6 +227,16 @@ public class TensorFlowInferenceInterface {
addFeed(inputName, Tensor.create(dims, IntBuffer.wrap(src)));
}
/**
* Given a source array with shape {@link dims} and content {@link src}, copy the contents into
* the input Tensor with name {@link inputName}. The source array {@link src} must have at least
* as many elements as that of the destination Tensor. If {@link src} has more elements than the
* destination has capacity, the copy is truncated.
*/
public void feed(String inputName, long[] src, long... dims) {
addFeed(inputName, Tensor.create(dims, LongBuffer.wrap(src)));
}
/**
* Given a source array with shape {@link dims} and content {@link src}, copy the contents into
* the input Tensor with name {@link inputName}. The source array {@link src} must have at least
@ -270,6 +281,17 @@ public class TensorFlowInferenceInterface {
addFeed(inputName, Tensor.create(dims, src));
}
/**
* Given a source buffer with shape {@link dims} and content {@link src}, both stored as
* <b>direct</b> and <b>native ordered</b> java.nio buffers, copy the contents into the input
* Tensor with name {@link inputName}. The source buffer {@link src} must have at least as many
* elements as that of the destination Tensor. If {@link src} has more elements than the
* destination has capacity, the copy is truncated.
*/
public void feed(String inputName, LongBuffer src, long... dims) {
addFeed(inputName, Tensor.create(dims, src));
}
/**
* Given a source buffer with shape {@link dims} and content {@link src}, both stored as
* <b>direct</b> and <b>native ordered</b> java.nio buffers, copy the contents into the input
@ -310,6 +332,15 @@ public class TensorFlowInferenceInterface {
fetch(outputName, IntBuffer.wrap(dst));
}
/**
* Read from a Tensor named {@link outputName} and copy the contents into a Java array. {@link
* dst} must have length greater than or equal to that of the source Tensor. This operation will
* not affect dst's content past the source Tensor's size.
*/
public void fetch(String outputName, long[] dst) {
fetch(outputName, LongBuffer.wrap(dst));
}
/**
* Read from a Tensor named {@link outputName} and copy the contents into a Java array. {@link
* dst} must have length greater than or equal to that of the source Tensor. This operation will
@ -348,6 +379,16 @@ public class TensorFlowInferenceInterface {
getTensor(outputName).writeTo(dst);
}
/**
* Read from a Tensor named {@link outputName} and copy the contents into the <b>direct</b> and
* <b>native ordered</b> java.nio buffer {@link dst}. {@link dst} must have capacity greater than
* or equal to that of the source Tensor. This operation will not affect dst's content past the
* source Tensor's size.
*/
public void fetch(String outputName, LongBuffer dst) {
getTensor(outputName).writeTo(dst);
}
/**
* Read from a Tensor named {@link outputName} and copy the contents into the <b>direct</b> and
* <b>native ordered</b> java.nio buffer {@link dst}. {@link dst} must have capacity greater than


@ -45,7 +45,7 @@ else()
DOWNLOAD_DIR "${DOWNLOAD_LOCATION}"
BUILD_IN_SOURCE 1
PATCH_COMMAND ${CMAKE_COMMAND} -E copy_if_different ${CMAKE_CURRENT_SOURCE_DIR}/patches/fft2d/CMakeLists.txt ${fft2d_BUILD}/src/fft2d/CMakeLists.txt
INSTALL_DIR $(fft2d_INSTALL)
INSTALL_DIR ${fft2d_INSTALL}
INSTALL_COMMAND echo
BUILD_COMMAND $(MAKE))


@ -17,7 +17,7 @@ include (ExternalProject)
set(GRPC_INCLUDE_DIRS ${CMAKE_CURRENT_BINARY_DIR}/grpc/src/grpc/include)
set(GRPC_URL https://github.com/grpc/grpc.git)
set(GRPC_BUILD ${CMAKE_CURRENT_BINARY_DIR}/grpc/src/grpc)
set(GRPC_TAG 3bc78cd0b5bd784a235c01612d634b1ec5f8fb97)
set(GRPC_TAG 781fd6f6ea03645a520cd5c675da67ab61f87e4b)
if(WIN32)
set(grpc_STATIC_LIBRARIES
@ -38,7 +38,10 @@ ExternalProject_Add(grpc
GIT_TAG ${GRPC_TAG}
DOWNLOAD_DIR "${DOWNLOAD_LOCATION}"
BUILD_IN_SOURCE 1
# TODO(jhseu): Remove this PATCH_COMMAND once grpc removes the dependency
# on "grpc" from the "grpc++_unsecure" rule.
PATCH_COMMAND ${CMAKE_COMMAND} -E copy_if_different ${CMAKE_CURRENT_SOURCE_DIR}/patches/grpc/CMakeLists.txt ${GRPC_BUILD}
BUILD_COMMAND ${CMAKE_COMMAND} --build . --config Release --target grpc++_unsecure
INSTALL_COMMAND ""
CMAKE_CACHE_ARGS
-DCMAKE_BUILD_TYPE:STRING=Release
@ -46,5 +49,13 @@ ExternalProject_Add(grpc
-DPROTOBUF_INCLUDE_DIRS:STRING=${PROTOBUF_INCLUDE_DIRS}
-DPROTOBUF_LIBRARIES:STRING=${protobuf_STATIC_LIBRARIES}
-DZLIB_ROOT:STRING=${ZLIB_INSTALL}
-DgRPC_SSL_PROVIDER:STRING=NONE
)
# grpc/src/core/ext/census/tracing.c depends on the existence of openssl/rand.h.
ExternalProject_Add_Step(grpc copy_rand
COMMAND ${CMAKE_COMMAND} -E copy
${CMAKE_SOURCE_DIR}/patches/grpc/rand.h ${GRPC_BUILD}/include/openssl/rand.h
DEPENDEES patch
DEPENDERS build
)

File diff suppressed because it is too large


@ -0,0 +1,14 @@
/* Copyright 2017 The TensorFlow Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/


@ -61,6 +61,7 @@ if (tensorflow_ENABLE_GPU)
file(GLOB_RECURSE tf_core_gpu_srcs
"${tensorflow_source_dir}/tensorflow/core/common_runtime/gpu/*.cc"
"${tensorflow_source_dir}/tensorflow/core/platform/default/gpu/cupti_wrapper.cc"
"${tensorflow_source_dir}/tensorflow/core/platform/default/gpu_tracer.cc"
"${tensorflow_source_dir}/tensorflow/core/common_runtime/gpu_device_factory.cc"
"${tensorflow_source_dir}/tensorflow/core/grappler/devices.h"
"${tensorflow_source_dir}/tensorflow/core/grappler/devices.cc"


@ -155,6 +155,10 @@ if (NOT tensorflow_ENABLE_GPU)
"${tensorflow_source_dir}/tensorflow/core/platform/cuda_libdevice_path.*"
"${tensorflow_source_dir}/tensorflow/core/platform/default/cuda_libdevice_path.*")
list(REMOVE_ITEM tf_core_platform_srcs ${tf_core_platform_gpu_srcs})
else()
file(GLOB tf_core_platform_srcs_exclude
"${tensorflow_source_dir}/tensorflow/core/platform/default/gpu_tracer.cc")
list(REMOVE_ITEM tf_core_platform_srcs ${tf_core_platform_srcs_exclude})
endif()
file(GLOB tf_core_platform_exclude_srcs

@@ -4,7 +4,7 @@
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
@@ -14,15 +14,20 @@
# ==============================================================================
"""Non-core alias for the deprecated tf.X_summary ops.
For TensorFlow 1.0, we have re-organized the TensorFlow summary ops into a
For TensorFlow 1.0, we have reorganized the TensorFlow summary ops into a
submodule, and made some semantic tweaks. The first thing to note is that we
moved the APIs around as follows:
tf.scalar_summary -> tf.summary.scalar
tf.histogram_summary -> tf.summary.histogram
tf.audio_summary -> tf.summary.audio
tf.image_summary -> tf.summary.image
tf.merge_summary -> tf.summary.merge
tf.scalar_summary -> tf.summary.scalar
tf.histogram_summary -> tf.summary.histogram
tf.audio_summary -> tf.summary.audio
tf.image_summary -> tf.summary.image
tf.merge_summary -> tf.summary.merge
tf.merge_all_summaries -> tf.summary.merge_all
We think this is a cleaner API and will improve long-term discoverability and
@@ -35,14 +40,14 @@ Previously, the tag was allowed to be any unique string, and had no relation
to the summary op generating it, and no relation to the TensorFlow name system.
This made it very difficult to write re-usable code that would add summary
ops to the graph. If you had a function that would add summary ops, you would
need to manually pass in a name scope to that function to create de-duplicated
need to manually pass in a name scope to that function to create deduplicated
tags, otherwise your program would fail with a runtime error due to tag
collision.
The new summary APIs under tf.summary throw away the "tag" as an independent
concept; instead, the first argument is the node name. This means that summary
tags now automatically inherit the surrounding TF name scope, and automatically
are deduplicated if there is a conflict. However, now the only allowed
concept; instead, the first argument is the node name. So summary tags now
automatically inherit the surrounding TF name scope, and automatically
are deduplicated if there is a conflict. Now however, the only allowed
characters are alphanumerics, underscores, and forward slashes. To make
migration easier, the new APIs automatically convert illegal characters to
underscores.
@@ -75,7 +80,7 @@ to the new summary ops:
tf.summary.scalar requires a single scalar name and scalar value. In most
cases, you can create tf.summary.scalars in a loop to get the same behavior
As before, TensorBoard will group charts by the top-level name scope. This may
As before, TensorBoard groups charts by the top-level name scope. This may
be inconvenient, since in the new summary ops the summary will inherit that
name scope without user control. We plan to add more grouping mechanisms to
TensorBoard, so it will be possible to specify the TensorBoard group for

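[Editor's note] The tag-sanitization rule described in the docstring above (only alphanumerics, underscores, and forward slashes are allowed; illegal characters are converted to underscores) can be sketched in plain Python. `sanitize_summary_name` is a hypothetical helper for illustration, not TensorFlow's actual implementation:

```python
import re

def sanitize_summary_name(name):
    # Keep only alphanumerics, underscores, and forward slashes;
    # replace every other character with an underscore, as the
    # tf.summary docstring above describes.
    return re.sub(r"[^A-Za-z0-9_/]", "_", name)

print(sanitize_summary_name("loss (train)"))     # loss__train_
print(sanitize_summary_name("layer1/act:mean"))  # layer1/act_mean
```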
@@ -12,7 +12,7 @@ cc_library(
":clustering_ops",
":masked_matmul_ops",
":wals_solver_ops",
"@protobuf//:protobuf_headers",
"@protobuf_archive//:protobuf_headers",
],
)
@@ -22,7 +22,7 @@ cc_library(
deps = [
"//tensorflow/core:framework_headers_lib",
"//third_party/eigen3",
"@protobuf//:protobuf_headers",
"@protobuf_archive//:protobuf_headers",
],
alwayslink = 1,
)
@@ -33,7 +33,7 @@ cc_library(
deps = [
"//tensorflow/core:framework_headers_lib",
"//third_party/eigen3",
"@protobuf//:protobuf_headers",
"@protobuf_archive//:protobuf_headers",
],
alwayslink = 1,
)
@@ -45,7 +45,7 @@ cc_library(
"//tensorflow/core:framework_headers_lib",
"//tensorflow/core/kernels:bounds_check",
"//third_party/eigen3",
"@protobuf//:protobuf_headers",
"@protobuf_archive//:protobuf_headers",
],
alwayslink = 1,
)

@@ -17,7 +17,7 @@ cc_library(
],
deps = [
"//tensorflow/core:framework_headers_lib",
"@protobuf//:protobuf_headers",
"@protobuf_archive//:protobuf_headers",
],
)

@@ -105,6 +105,9 @@ tf_custom_op_library(
"kernels/single_image_random_dot_stereograms_ops.cc",
"ops/single_image_random_dot_stereograms_ops.cc",
],
deps = [
"@protobuf_archive//:protobuf",
],
)
tf_gen_op_libs(

@@ -13,7 +13,7 @@ cc_library(
deps = [
"//tensorflow/core:framework_headers_lib",
"//third_party/eigen3",
"@protobuf//:protobuf_headers",
"@protobuf_archive//:protobuf_headers",
],
alwayslink = 1,
)

@@ -619,11 +619,12 @@ class Conv3DTranspose(tf_convolutional_layers.Conv3D, Layer):
filters: Integer, the dimensionality of the output space
(i.e. the number of output filters in the convolution).
kernel_size: An integer or tuple/list of 3 integers, specifying the
width and height of the 3D convolution window.
depth, height and width of the 3D convolution window.
Can be a single integer to specify the same value for
all spatial dimensions.
strides: An integer or tuple/list of 3 integers,
specifying the strides of the convolution along the width and height.
specifying the strides of the convolution along the depth, height
and width.
Can be a single integer to specify the same value for
all spatial dimensions.
Specifying any stride value != 1 is incompatible with specifying

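[Editor's note] The corrected docstring above says `kernel_size` and `strides` accept either a single integer or a tuple/list of 3 integers ordered as (depth, height, width). A minimal sketch of that normalization, assuming a hypothetical `normalize_tuple` helper (the real Keras/TensorFlow utility differs in details):

```python
def normalize_tuple(value, n=3):
    # Expand a lone integer to an n-tuple, or validate an existing
    # tuple/list of length n; for 3D convolution layers such as
    # Conv3DTranspose the three entries are (depth, height, width).
    if isinstance(value, int):
        return (value,) * n
    value = tuple(value)
    if len(value) != n:
        raise ValueError("expected an int or %d ints, got %r" % (n, value))
    return value

print(normalize_tuple(3))          # (3, 3, 3)
print(normalize_tuple([2, 3, 4]))  # (2, 3, 4)
```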
@@ -14,7 +14,7 @@ cc_library(
"//tensorflow/core:framework_headers_lib",
"//third_party/eigen3",
"@farmhash_archive//:farmhash",
"@protobuf//:protobuf_headers",
"@protobuf_archive//:protobuf_headers",
],
alwayslink = 1,
)

@@ -73,6 +73,10 @@ See the @{$python/contrib.learn} guide.
@@read_batch_examples
@@read_batch_features
@@read_batch_record_features
@@read_keyed_batch_examples
@@read_keyed_batch_examples_shared_queue
@@read_keyed_batch_features
@@read_keyed_batch_features_shared_queue
@@InputFnOps
@@ProblemType

@@ -26,12 +26,12 @@ from tensorflow.contrib.layers.python.layers import optimizers
from tensorflow.contrib.learn.python.learn.datasets import base
from tensorflow.contrib.learn.python.learn.estimators import logistic_regressor
from tensorflow.contrib.learn.python.learn.estimators import metric_key
from tensorflow.contrib.losses.python.losses import loss_ops
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import init_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.ops.losses import losses
from tensorflow.python.platform import test
@ -55,7 +55,7 @@ def _logistic_regression_model_fn(features, labels, mode):
# AUC/precision/recall/etc will change meaningfully even on a toy dataset.
biases_initializer=init_ops.constant_initializer(-10.0))
predictions = math_ops.sigmoid(logits)
loss = loss_ops.sigmoid_cross_entropy(logits, labels)
loss = losses.sigmoid_cross_entropy(labels, logits)
train_op = optimizers.optimize_loss(
loss, variables.get_global_step(), optimizer='Adagrad', learning_rate=0.1)
return predictions, loss, train_op

@@ -20,14 +20,14 @@ from __future__ import division
from __future__ import print_function
from tensorflow.contrib.framework import deprecated
from tensorflow.contrib.losses.python.losses import loss_ops
from tensorflow.python.framework import ops
from tensorflow.python.ops import array_ops as array_ops_
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import nn
from tensorflow.python.ops.losses import losses
@deprecated('2016-12-01', 'Use `tf.contrib.losses.mean_squared_error` '
@deprecated('2016-12-01', 'Use `tf.losses.mean_squared_error` '
'and explicit logits computation.')
def mean_squared_error_regressor(tensor_in, labels, weights, biases, name=None):
"""Returns prediction and loss for mean squared error regression."""
@@ -36,10 +36,10 @@ def mean_squared_error_regressor(tensor_in, labels, weights, biases, name=None):
predictions = nn.xw_plus_b(tensor_in, weights, biases)
if len(labels.get_shape()) == 1 and len(predictions.get_shape()) == 2:
predictions = array_ops_.squeeze(predictions, squeeze_dims=[1])
return predictions, loss_ops.mean_squared_error(predictions, labels)
return predictions, losses.mean_squared_error(labels, predictions)
@deprecated('2016-12-01', 'Use `tf.contrib.losses.softmax_cross_entropy` '
@deprecated('2016-12-01', 'Use `tf.losses.softmax_cross_entropy` '
'and explicit logits computation.')
def softmax_classifier(tensor_in,
labels,
@@ -72,4 +72,4 @@ def softmax_classifier(tensor_in,
logits = nn.xw_plus_b(tensor_in, weights, biases)
if class_weight is not None:
logits = math_ops.multiply(logits, class_weight)
return nn.softmax(logits), loss_ops.softmax_cross_entropy(logits, labels)
return nn.softmax(logits), losses.softmax_cross_entropy(labels, logits)

@@ -1,7 +1,13 @@
# TensorFlow contrib losses.
## Deprecated
This module is deprecated. Instructions for updating: Use tf.losses instead.
## losses
Note: By default all the losses are collected into the GraphKeys.LOSSES collection.
Loss operations for use in training models, typically with signature like the
following:

@@ -301,7 +301,7 @@ def absolute_difference(predictions, labels=None, weights=1.0, scope=None):
@deprecated("2016-12-30",
"Use tf.losses.sigmoid_cross_entropy instead. Note that the order "
"of the predictions and labels arguments was changed.")
"of the predictions and labels arguments has been changed.")
def sigmoid_cross_entropy(
logits, multi_class_labels, weights=1.0, label_smoothing=0, scope=None):
"""Creates a cross-entropy loss using tf.nn.sigmoid_cross_entropy_with_logits.
@@ -436,7 +436,7 @@ def sparse_softmax_cross_entropy(logits, labels, weights=1.0, scope=None):
@deprecated("2016-12-30",
"Use tf.losses.log_loss instead. Note that the order of the "
"predictions and labels arguments was changed.")
"predictions and labels arguments has been changed.")
def log_loss(predictions, labels=None, weights=1.0, epsilon=1e-7, scope=None):
"""Adds a Log Loss term to the training procedure.
@@ -477,7 +477,8 @@ def log_loss(predictions, labels=None, weights=1.0, epsilon=1e-7, scope=None):
@deprecated("2016-12-30",
"Use tf.losses.hinge_loss instead. Note that the order of the "
"predictions and labels arguments were changed.")
"logits and labels arguments has been changed, and to stay "
"unweighted, reduction=Reduction.NONE")
def hinge_loss(logits, labels=None, scope=None):
"""Method that returns the loss tensor for hinge loss.
@@ -488,8 +489,8 @@ def hinge_loss(logits, labels=None, scope=None):
scope: The scope for the operations performed in computing the loss.
Returns:
A `Tensor` of same shape as `logits` and `labels` representing the loss
values across the batch.
An unweighted `Tensor` of same shape as `logits` and `labels` representing the
loss values across the batch.
Raises:
ValueError: If the shapes of `logits` and `labels` don't match.
@@ -541,7 +542,7 @@ def mean_squared_error(predictions, labels=None, weights=1.0, scope=None):
@deprecated("2016-12-30",
"Use tf.losses.mean_pairwise_squared_error instead. Note that the "
"order of the predictions and labels arguments was changed.")
"order of the predictions and labels arguments has been changed.")
def mean_pairwise_squared_error(
predictions, labels=None, weights=1.0, scope=None):
"""Adds a pairwise-errors-squared loss to the training procedure.

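[Editor's note] The deprecation notices above all flag the same change: the `tf.losses` replacements take `labels` first and `logits`/`predictions` second. A numerically naive, pure-Python sketch of sigmoid cross-entropy in the new argument order (not TensorFlow's implementation, which is numerically stabilized and operates on tensors):

```python
import math

def sigmoid_cross_entropy(labels, logits):
    # New-style argument order: labels first, logits second.
    # Standard formula: -(y*log(p) + (1-y)*log(1-p)) with p = sigmoid(x).
    p = 1.0 / (1.0 + math.exp(-logits))
    return -(labels * math.log(p) + (1.0 - labels) * math.log(1.0 - p))

# The loss shrinks as the logit agrees more strongly with the label.
print(sigmoid_cross_entropy(1.0, 0.0) > sigmoid_cross_entropy(1.0, 4.0))  # True
```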
@@ -202,7 +202,7 @@ ifeq ($(TARGET),LINUX)
endif
# If we're cross-compiling for the Raspberry Pi, use the right gcc.
ifeq ($(TARGET),PI)
CXXFLAGS += -D__ANDROID_TYPES_SLIM__
CXXFLAGS += -D__ANDROID_TYPES_SLIM__ -DRASPBERRY_PI
LDFLAGS := -Wl,--no-whole-archive
LIBS += -ldl -lpthread
LIBFLAGS += -Wl,--allow-multiple-definition -Wl,--whole-archive

@@ -143,8 +143,10 @@ tensorflow/core/kernels/cwise_op_minimum.cc
tensorflow/core/kernels/cwise_op_maximum.cc
tensorflow/core/kernels/cwise_op_logical_not.cc
tensorflow/core/kernels/cwise_op_logical_and.cc
tensorflow/core/kernels/cwise_op_logical_or.cc
tensorflow/core/kernels/cwise_op_log.cc
tensorflow/core/kernels/cwise_op_less.cc
tensorflow/core/kernels/cwise_op_less_equal.cc
tensorflow/core/kernels/cwise_op_isfinite.cc
tensorflow/core/kernels/cwise_op_invert.cc
tensorflow/core/kernels/cwise_op_greater_equal.cc

@@ -53,7 +53,7 @@ cc_library(
":tree_utils",
"//tensorflow/core:framework_headers_lib",
"//third_party/eigen3",
"@protobuf//:protobuf_headers",
"@protobuf_archive//:protobuf_headers",
],
alwayslink = 1,
)
@@ -356,7 +356,7 @@ cc_library(
deps = [
"//tensorflow/core:framework_headers_lib",
"//third_party/eigen3",
"@protobuf//:protobuf_headers",
"@protobuf_archive//:protobuf_headers",
],
)

@@ -88,7 +88,7 @@ cc_library(
deps = [
"//tensorflow/core:framework_headers_lib",
"//third_party/eigen3",
"@protobuf//:protobuf_headers",
"@protobuf_archive//:protobuf_headers",
],
)

@@ -43,20 +43,22 @@ VerbsService::Stub::Stub(
const std::shared_ptr< ::grpc::ChannelInterface>& channel)
: channel_(channel),
rpcmethod_GetRemoteAddress_(grpcVerbsService_method_names[0],
::grpc::RpcMethod::NORMAL_RPC, channel) {}
::grpc::internal::RpcMethod::NORMAL_RPC,
channel) {}
::grpc::Status VerbsService::Stub::GetRemoteAddress(
::grpc::ClientContext* context, const GetRemoteAddressRequest& request,
GetRemoteAddressResponse* response) {
return ::grpc::BlockingUnaryCall(channel_.get(), rpcmethod_GetRemoteAddress_,
context, request, response);
return ::grpc::internal::BlockingUnaryCall(
channel_.get(), rpcmethod_GetRemoteAddress_, context, request, response);
}
VerbsService::AsyncService::AsyncService() {
for (int i = 0; i < 1; ++i) {
AddMethod(new ::grpc::RpcServiceMethod(grpcVerbsService_method_names[i],
::grpc::RpcMethod::NORMAL_RPC,
nullptr));
AddMethod(new ::grpc::internal::RpcServiceMethod(
grpcVerbsService_method_names[i],
::grpc::internal::RpcMethod::NORMAL_RPC,
nullptr));
::grpc::Service::MarkMethodAsync(i);
}
}

@@ -61,7 +61,7 @@ class VerbsService GRPC_FINAL {
private:
std::shared_ptr< ::grpc::ChannelInterface> channel_;
const ::grpc::RpcMethod rpcmethod_GetRemoteAddress_;
const ::grpc::internal::RpcMethod rpcmethod_GetRemoteAddress_;
};
static std::unique_ptr<Stub> NewStub(
const std::shared_ptr< ::grpc::ChannelInterface>& channel,

@@ -64,6 +64,7 @@ load(
"//tensorflow:tensorflow.bzl",
"full_path",
"if_android",
"if_not_android_mips_and_mips64",
"if_ios",
"if_linux_x86_64",
"if_not_mobile",
@@ -932,9 +933,7 @@ filegroup(
cc_library(
name = "android_tensorflow_lib_lite",
srcs = if_android(["//tensorflow/core:android_srcs"]),
copts = tf_copts() + [
"-Os",
],
copts = tf_copts() + if_not_android_mips_and_mips64(["-Os"]),
linkopts = ["-lz"],
tags = [
"manual",
@@ -1429,6 +1428,86 @@ cc_library(
],
)
cc_library(
name = "android_jpeg_internal",
srcs = [
"lib/jpeg/jpeg_handle.cc",
"lib/jpeg/jpeg_mem.cc",
"platform/jpeg.h",
],
hdrs = [
"lib/core/stringpiece.h",
"lib/jpeg/jpeg_handle.h",
"lib/jpeg/jpeg_mem.h",
"platform/default/dynamic_annotations.h",
"platform/default/integral_types.h",
"platform/default/logging.h",
"platform/dynamic_annotations.h",
"platform/logging.h",
"platform/macros.h",
"platform/mem.h",
"platform/platform.h",
"platform/types.h",
],
copts = tf_copts(),
linkopts = ["-ldl"],
deps = [
"//tensorflow/core/platform/default/build_config:jpeg",
],
)
cc_library(
name = "android_gif_internal",
srcs = [
"lib/gif/gif_io.cc",
"platform/gif.h",
],
hdrs = [
"lib/core/stringpiece.h",
"lib/gif/gif_io.h",
"lib/gtl/cleanup.h",
"platform/default/dynamic_annotations.h",
"platform/default/integral_types.h",
"platform/default/logging.h",
"platform/dynamic_annotations.h",
"platform/logging.h",
"platform/macros.h",
"platform/mem.h",
"platform/platform.h",
"platform/types.h",
],
copts = tf_copts(),
linkopts = ["-ldl"],
deps = [
"//tensorflow/core/platform/default/build_config:gif",
],
)
cc_library(
name = "android_png_internal",
srcs = [
"lib/png/png_io.cc",
"platform/png.h",
],
hdrs = [
"lib/core/casts.h",
"lib/core/stringpiece.h",
"lib/png/png_io.h",
"platform/cpu_info.h",
"platform/default/integral_types.h",
"platform/default/logging.h",
"platform/logging.h",
"platform/macros.h",
"platform/platform.h",
"platform/types.h",
],
copts = tf_copts(),
linkopts = ["-ldl"],
deps = [
"@png_archive//:png",
],
)
proto_text_hdrs_and_srcs = tf_generate_proto_text_sources(
name = "proto_text_srcs_all",
srcs = CORE_PROTO_SRCS,

@@ -49,74 +49,75 @@ MasterService::Stub::Stub(
const std::shared_ptr< ::grpc::ChannelInterface>& channel)
: channel_(channel),
rpcmethod_CreateSession_(grpcMasterService_method_names[0],
::grpc::RpcMethod::NORMAL_RPC, channel),
::grpc::internal::RpcMethod::NORMAL_RPC, channel),
rpcmethod_ExtendSession_(grpcMasterService_method_names[1],
::grpc::RpcMethod::NORMAL_RPC, channel),
::grpc::internal::RpcMethod::NORMAL_RPC, channel),
rpcmethod_PartialRunSetup_(grpcMasterService_method_names[2],
::grpc::RpcMethod::NORMAL_RPC, channel),
::grpc::internal::RpcMethod::NORMAL_RPC, channel),
rpcmethod_RunStep_(grpcMasterService_method_names[3],
::grpc::RpcMethod::NORMAL_RPC, channel),
::grpc::internal::RpcMethod::NORMAL_RPC, channel),
rpcmethod_CloseSession_(grpcMasterService_method_names[4],
::grpc::RpcMethod::NORMAL_RPC, channel),
::grpc::internal::RpcMethod::NORMAL_RPC, channel),
rpcmethod_ListDevices_(grpcMasterService_method_names[5],
::grpc::RpcMethod::NORMAL_RPC, channel),
::grpc::internal::RpcMethod::NORMAL_RPC, channel),
rpcmethod_Reset_(grpcMasterService_method_names[6],
::grpc::RpcMethod::NORMAL_RPC, channel) {}
::grpc::internal::RpcMethod::NORMAL_RPC, channel) {}
::grpc::Status MasterService::Stub::CreateSession(
::grpc::ClientContext* context, const CreateSessionRequest& request,
CreateSessionResponse* response) {
return ::grpc::BlockingUnaryCall(channel_.get(), rpcmethod_CreateSession_,
return ::grpc::internal::BlockingUnaryCall(channel_.get(), rpcmethod_CreateSession_,
context, request, response);
}
::grpc::Status MasterService::Stub::ExtendSession(
::grpc::ClientContext* context, const ExtendSessionRequest& request,
ExtendSessionResponse* response) {
return ::grpc::BlockingUnaryCall(channel_.get(), rpcmethod_ExtendSession_,
return ::grpc::internal::BlockingUnaryCall(channel_.get(), rpcmethod_ExtendSession_,
context, request, response);
}
::grpc::Status MasterService::Stub::PartialRunSetup(
::grpc::ClientContext* context, const PartialRunSetupRequest& request,
PartialRunSetupResponse* response) {
return ::grpc::BlockingUnaryCall(channel_.get(), rpcmethod_PartialRunSetup_,
return ::grpc::internal::BlockingUnaryCall(channel_.get(), rpcmethod_PartialRunSetup_,
context, request, response);
}
::grpc::Status MasterService::Stub::RunStep(::grpc::ClientContext* context,
const RunStepRequest& request,
RunStepResponse* response) {
return ::grpc::BlockingUnaryCall(channel_.get(), rpcmethod_RunStep_, context,
return ::grpc::internal::BlockingUnaryCall(channel_.get(), rpcmethod_RunStep_, context,
request, response);
}
::grpc::Status MasterService::Stub::CloseSession(
::grpc::ClientContext* context, const CloseSessionRequest& request,
CloseSessionResponse* response) {
return ::grpc::BlockingUnaryCall(channel_.get(), rpcmethod_CloseSession_,
return ::grpc::internal::BlockingUnaryCall(channel_.get(), rpcmethod_CloseSession_,
context, request, response);
}
::grpc::Status MasterService::Stub::ListDevices(
::grpc::ClientContext* context, const ListDevicesRequest& request,
ListDevicesResponse* response) {
return ::grpc::BlockingUnaryCall(channel_.get(), rpcmethod_ListDevices_,
return ::grpc::internal::BlockingUnaryCall(channel_.get(), rpcmethod_ListDevices_,
context, request, response);
}
::grpc::Status MasterService::Stub::Reset(::grpc::ClientContext* context,
const ResetRequest& request,
ResetResponse* response) {
return ::grpc::BlockingUnaryCall(channel_.get(), rpcmethod_Reset_, context,
return ::grpc::internal::BlockingUnaryCall(channel_.get(), rpcmethod_Reset_, context,
request, response);
}
MasterService::AsyncService::AsyncService() {
for (int i = 0; i < 7; ++i) {
AddMethod(new ::grpc::RpcServiceMethod(grpcMasterService_method_names[i],
::grpc::RpcMethod::NORMAL_RPC,
nullptr));
AddMethod(new ::grpc::internal::RpcServiceMethod(
grpcMasterService_method_names[i],
::grpc::internal::RpcMethod::NORMAL_RPC,
nullptr));
::grpc::Service::MarkMethodAsync(i);
}
}

@@ -53,7 +53,7 @@ namespace grpc {
// definition in "//tensorflow/core/protobuf/master_service.proto",
// and the gRPC generated stub and service classes.
// See that file for the definition of methods and messages.
class MasterService GRPC_FINAL {
class MasterService final {
public:
class StubInterface {
public:
@@ -80,40 +80,40 @@ class MasterService GRPC_FINAL {
const ResetRequest& request,
ResetResponse* response) = 0;
};
class Stub GRPC_FINAL : public StubInterface {
class Stub final : public StubInterface {
public:
Stub(const std::shared_ptr< ::grpc::ChannelInterface>& channel);
::grpc::Status CreateSession(::grpc::ClientContext* context,
const CreateSessionRequest& request,
CreateSessionResponse* response) GRPC_OVERRIDE;
CreateSessionResponse* response) override;
::grpc::Status ExtendSession(::grpc::ClientContext* context,
const ExtendSessionRequest& request,
ExtendSessionResponse* response) GRPC_OVERRIDE;
ExtendSessionResponse* response) override;
::grpc::Status PartialRunSetup(
::grpc::ClientContext* context, const PartialRunSetupRequest& request,
PartialRunSetupResponse* response) GRPC_OVERRIDE;
PartialRunSetupResponse* response) override;
::grpc::Status RunStep(::grpc::ClientContext* context,
const RunStepRequest& request,
RunStepResponse* response) GRPC_OVERRIDE;
RunStepResponse* response) override;
::grpc::Status CloseSession(::grpc::ClientContext* context,
const CloseSessionRequest& request,
CloseSessionResponse* response) GRPC_OVERRIDE;
CloseSessionResponse* response) override;
::grpc::Status ListDevices(::grpc::ClientContext* context,
const ListDevicesRequest& request,
ListDevicesResponse* response) GRPC_OVERRIDE;
ListDevicesResponse* response) override;
::grpc::Status Reset(::grpc::ClientContext* context,
const ResetRequest& request,
ResetResponse* response) GRPC_OVERRIDE;
ResetResponse* response) override;
private:
std::shared_ptr< ::grpc::ChannelInterface> channel_;
const ::grpc::RpcMethod rpcmethod_CreateSession_;
const ::grpc::RpcMethod rpcmethod_ExtendSession_;
const ::grpc::RpcMethod rpcmethod_PartialRunSetup_;
const ::grpc::RpcMethod rpcmethod_RunStep_;
const ::grpc::RpcMethod rpcmethod_CloseSession_;
const ::grpc::RpcMethod rpcmethod_ListDevices_;
const ::grpc::RpcMethod rpcmethod_Reset_;
const ::grpc::internal::RpcMethod rpcmethod_CreateSession_;
const ::grpc::internal::RpcMethod rpcmethod_ExtendSession_;
const ::grpc::internal::RpcMethod rpcmethod_PartialRunSetup_;
const ::grpc::internal::RpcMethod rpcmethod_RunStep_;
const ::grpc::internal::RpcMethod rpcmethod_CloseSession_;
const ::grpc::internal::RpcMethod rpcmethod_ListDevices_;
const ::grpc::internal::RpcMethod rpcmethod_Reset_;
};
static std::unique_ptr<Stub> NewStub(
const std::shared_ptr< ::grpc::ChannelInterface>& channel,

@@ -17,6 +17,7 @@ limitations under the License.
#define THIRD_PARTY_TENSORFLOW_CORE_DISTRIBUTED_RUNTIME_RPC_GRPC_SERIALIZATION_TRAITS_H_
#include "grpc++/impl/codegen/proto_utils.h"
#include "grpc++/support/slice.h"
namespace grpc {
@@ -24,7 +25,7 @@ namespace tensorflow_helper {
const int kGrpcBufferWriterMaxBufferLength = 8192;
class GrpcBufferWriter GRPC_FINAL
class GrpcBufferWriter final
: public ::grpc::protobuf::io::ZeroCopyOutputStream {
public:
explicit GrpcBufferWriter(grpc_byte_buffer** bp, int block_size)
@@ -33,35 +34,35 @@ class GrpcBufferWriter GRPC_FINAL
slice_buffer_ = &(*bp)->data.raw.slice_buffer;
}
~GrpcBufferWriter() GRPC_OVERRIDE {
~GrpcBufferWriter() override {
if (have_backup_) {
g_core_codegen_interface->gpr_slice_unref(backup_slice_);
g_core_codegen_interface->grpc_slice_unref(backup_slice_);
}
}
bool Next(void** data, int* size) GRPC_OVERRIDE {
bool Next(void** data, int* size) override {
if (have_backup_) {
slice_ = backup_slice_;
have_backup_ = false;
} else {
slice_ = g_core_codegen_interface->gpr_slice_malloc(block_size_);
slice_ = g_core_codegen_interface->grpc_slice_malloc(block_size_);
}
*data = GPR_SLICE_START_PTR(slice_);
*data = GRPC_SLICE_START_PTR(slice_);
// On win x64, int is only 32bit
GPR_CODEGEN_ASSERT(GPR_SLICE_LENGTH(slice_) <= INT_MAX);
byte_count_ += * size = (int)GPR_SLICE_LENGTH(slice_);
g_core_codegen_interface->gpr_slice_buffer_add(slice_buffer_, slice_);
GPR_CODEGEN_ASSERT(GRPC_SLICE_LENGTH(slice_) <= INT_MAX);
byte_count_ += * size = (int)GRPC_SLICE_LENGTH(slice_);
g_core_codegen_interface->grpc_slice_buffer_add(slice_buffer_, slice_);
return true;
}
void BackUp(int count) GRPC_OVERRIDE {
g_core_codegen_interface->gpr_slice_buffer_pop(slice_buffer_);
void BackUp(int count) override {
g_core_codegen_interface->grpc_slice_buffer_pop(slice_buffer_);
if (count == block_size_) {
backup_slice_ = slice_;
} else {
backup_slice_ = g_core_codegen_interface->gpr_slice_split_tail(
&slice_, GPR_SLICE_LENGTH(slice_) - count);
g_core_codegen_interface->gpr_slice_buffer_add(slice_buffer_, slice_);
backup_slice_ = g_core_codegen_interface->grpc_slice_split_tail(
&slice_, GRPC_SLICE_LENGTH(slice_) - count);
g_core_codegen_interface->grpc_slice_buffer_add(slice_buffer_, slice_);
}
// It's dangerous to keep an inlined grpc_slice as the backup slice, since
// on a following Next() call, a reference will be returned to this slice
@@ -71,18 +72,18 @@ class GrpcBufferWriter GRPC_FINAL
byte_count_ -= count;
}
grpc::protobuf::int64 ByteCount() const GRPC_OVERRIDE { return byte_count_; }
grpc::protobuf::int64 ByteCount() const override { return byte_count_; }
private:
const int block_size_;
int64_t byte_count_;
gpr_slice_buffer* slice_buffer_;
grpc_slice_buffer* slice_buffer_;
bool have_backup_;
gpr_slice backup_slice_;
gpr_slice slice_;
grpc_slice backup_slice_;
grpc_slice slice_;
};
class GrpcBufferReader GRPC_FINAL
class GrpcBufferReader final
: public ::grpc::protobuf::io::ZeroCopyInputStream {
typedef void (CoreCodegenInterface::*OldReaderInitAPI)(
grpc_byte_buffer_reader* reader, grpc_byte_buffer* buffer);
@@ -104,13 +105,13 @@ class GrpcBufferReader GRPC_FINAL
ReaderInit(&CoreCodegenInterface::grpc_byte_buffer_reader_init, &reader_,
buffer);
}
~GrpcBufferReader() GRPC_OVERRIDE {
~GrpcBufferReader() override {
g_core_codegen_interface->grpc_byte_buffer_reader_destroy(&reader_);
}
bool Next(const void** data, int* size) GRPC_OVERRIDE {
bool Next(const void** data, int* size) override {
if (backup_count_ > 0) {
*data = GPR_SLICE_START_PTR(slice_) + GPR_SLICE_LENGTH(slice_) -
*data = GRPC_SLICE_START_PTR(slice_) + GRPC_SLICE_LENGTH(slice_) -
backup_count_;
GPR_CODEGEN_ASSERT(backup_count_ <= INT_MAX);
*size = (int)backup_count_;
@@ -121,17 +122,17 @@ class GrpcBufferReader GRPC_FINAL
&slice_)) {
return false;
}
g_core_codegen_interface->gpr_slice_unref(slice_);
*data = GPR_SLICE_START_PTR(slice_);
g_core_codegen_interface->grpc_slice_unref(slice_);
*data = GRPC_SLICE_START_PTR(slice_);
// On win x64, int is only 32bit
GPR_CODEGEN_ASSERT(GPR_SLICE_LENGTH(slice_) <= INT_MAX);
byte_count_ += * size = (int)GPR_SLICE_LENGTH(slice_);
GPR_CODEGEN_ASSERT(GRPC_SLICE_LENGTH(slice_) <= INT_MAX);
byte_count_ += * size = (int)GRPC_SLICE_LENGTH(slice_);
return true;
}
void BackUp(int count) GRPC_OVERRIDE { backup_count_ = count; }
void BackUp(int count) override { backup_count_ = count; }
bool Skip(int count) GRPC_OVERRIDE {
bool Skip(int count) override {
const void* data;
int size;
while (Next(&data, &size)) {
@@ -146,7 +147,7 @@ class GrpcBufferReader GRPC_FINAL
return false;
}
grpc::protobuf::int64 ByteCount() const GRPC_OVERRIDE {
grpc::protobuf::int64 ByteCount() const override {
return byte_count_ - backup_count_;
}
@@ -154,7 +155,7 @@ class GrpcBufferReader GRPC_FINAL
int64_t byte_count_;
int64_t backup_count_;
grpc_byte_buffer_reader reader_;
gpr_slice slice_;
grpc_slice slice_;
};
} // namespace tensorflow_helper
@@ -175,12 +176,12 @@ class UnlimitedSizeProtoSerializationTraits {
return Status(StatusCode::INTERNAL, "Message length was negative");
} else if (byte_size <=
tensorflow_helper::kGrpcBufferWriterMaxBufferLength) {
gpr_slice slice = g_core_codegen_interface->gpr_slice_malloc(byte_size);
grpc_slice slice = g_core_codegen_interface->grpc_slice_malloc(byte_size);
GPR_CODEGEN_ASSERT(
GPR_SLICE_END_PTR(slice) ==
msg.SerializeWithCachedSizesToArray(GPR_SLICE_START_PTR(slice)));
GRPC_SLICE_END_PTR(slice) ==
msg.SerializeWithCachedSizesToArray(GRPC_SLICE_START_PTR(slice)));
*bp = g_core_codegen_interface->grpc_raw_byte_buffer_create(&slice, 1);
g_core_codegen_interface->gpr_slice_unref(slice);
g_core_codegen_interface->grpc_slice_unref(slice);
return g_core_codegen_interface->ok();
} else {
tensorflow_helper::GrpcBufferWriter writer(

@@ -38,9 +38,9 @@ static void unref_tensorbuffer(void* raw) {
void EncodeRecvTensorResponseToByteBuffer(const RecvTensorResponse& proto,
::grpc::ByteBuffer* result) {
size_t len = proto.ByteSizeLong();
gpr_slice s = gpr_slice_malloc(len);
grpc_slice s = grpc_slice_malloc(len);
proto.SerializeWithCachedSizesToArray(
reinterpret_cast<uint8*>(GPR_SLICE_START_PTR(s)));
reinterpret_cast<uint8*>(GRPC_SLICE_START_PTR(s)));
::grpc::Slice slice(s, ::grpc::Slice::STEAL_REF);
*result = ::grpc::ByteBuffer(&slice, 1);
}
@@ -68,12 +68,12 @@ void EncodeRecvTensorResponseToByteBuffer(const RecvTensorResponse& proto,
// E: <actual data for val's representation>
//
// If the tensor data is up to "kLargeTensorBytes", then A
// through E will all be encoded into "*result" in a single gpr_slice.
// through E will all be encoded into "*result" in a single grpc_slice.
//
// If the tensor data is larger than "kLargeTensorBytes", then A through
// D2 will be encoded in one gpr_slice, and E will be encoded in a second
// gpr_slice that points to the backing store for the tensor data, to avoid
// copying the tensor data (and the gpr_slice setup will be arrange so as
// D2 will be encoded in one grpc_slice, and E will be encoded in a second
// grpc_slice that points to the backing store for the tensor data, to avoid
// copying the tensor data (and the grpc_slice setup will be arrange so as
// to dereference the underlying tensor data buffer when it is no longer
// needed in the "*result" ByteBuffer).
static int VarLengthEncodingSize(uint32 tag, size_t bytes) {
@@ -209,11 +209,11 @@ void EncodeTensorToByteBuffer(bool is_dead, const Tensor& val,
int num_slices = 0;
{
size_t slice_len = e.size() + (tensor_data_is_large ? 0 : tdata.size());
gpr_slice s0 = gpr_slice_malloc(slice_len);
memcpy(GPR_SLICE_START_PTR(s0), e.data(), e.size());
grpc_slice s0 = grpc_slice_malloc(slice_len);
memcpy(GRPC_SLICE_START_PTR(s0), e.data(), e.size());
if (!tensor_data_is_large) {
// (E)
memcpy(GPR_SLICE_START_PTR(s0) + e.size(), tdata.data(), tdata.size());
memcpy(GRPC_SLICE_START_PTR(s0) + e.size(), tdata.data(), tdata.size());
}
slices[0] = ::grpc::Slice(s0, ::grpc::Slice::STEAL_REF);
num_slices += 1;
@@ -230,7 +230,7 @@ void EncodeTensorToByteBuffer(bool is_dead, const Tensor& val,
// hypothetical grpc_slice-related changes (e.g. the
// implementation could decide to destroy 0-length slices
// eagerly). In practice, this does not happen with the current
// implementation, and the gpr_slice interface at the moment does
// implementation, and the grpc_slice interface at the moment does
// not allow us to do the Tensor-unreferencing in the right way
// (since the Tensor pointer is different than the backing store
// array pointer).
@@ -245,13 +245,13 @@ void EncodeTensorToByteBuffer(bool is_dead, const Tensor& val,
const TensorBuffer* buf = DMAHelper::buffer(&val);
buf->Ref();
gpr_slice s1 = gpr_slice_new(
grpc_slice s1 = grpc_slice_new(
const_cast<void*>(static_cast<const void*>(tdata.data())),
tdata.size(), do_nothing);
slices[1] = ::grpc::Slice(s1, ::grpc::Slice::STEAL_REF);
gpr_slice s2 =
gpr_slice_new(const_cast<TensorBuffer*>(buf), 0, unref_tensorbuffer);
grpc_slice s2 =
grpc_slice_new(const_cast<TensorBuffer*>(buf), 0, unref_tensorbuffer);
slices[2] = ::grpc::Slice(s2, ::grpc::Slice::STEAL_REF);
num_slices += 2;
}


@@ -80,9 +80,9 @@ grpc::protobuf::int64 GrpcByteBufferSource::ByteCount() const {
void GrpcUnparseProto(const protobuf::Message& src, grpc::ByteBuffer* dst) {
// TODO(sanjay): For bigger protos, serialize into a ZeroCopyOutputStream.
size_t len = src.ByteSizeLong();
gpr_slice s = gpr_slice_malloc(len);
grpc_slice s = grpc_slice_malloc(len);
src.SerializeWithCachedSizesToArray(
reinterpret_cast<uint8*>(GPR_SLICE_START_PTR(s)));
reinterpret_cast<uint8*>(GRPC_SLICE_START_PTR(s)));
::grpc::Slice slice(s, ::grpc::Slice::STEAL_REF);
::grpc::ByteBuffer buffer(&slice, 1);
// TODO(sanjay): Use Swap() when grpc version we are using is new enough.


@@ -38,7 +38,7 @@ grpc::ByteBuffer MakeBuffer(const string& str, int num_slices) {
const size_t per_slice = (str.size() + num_slices - 1) / num_slices;
for (size_t pos = 0; pos < str.size();) {
const size_t n = std::min(str.size() - pos, per_slice);
auto slice = gpr_slice_from_copied_buffer(&str[pos], n);
auto slice = grpc_slice_from_copied_buffer(&str[pos], n);
slices.push_back(::grpc::Slice(slice, ::grpc::Slice::STEAL_REF));
pos += n;
}
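The slicing arithmetic in the hunk above (ceiling division, with the last slice absorbing the remainder) can be sketched in Python; `make_slices` is a hypothetical helper for illustration, not part of the test or of gRPC:

```python
def make_slices(data: bytes, num_slices: int) -> list:
    # Ceiling division: each slice carries at most per_slice bytes,
    # so the final slice absorbs any remainder.
    per_slice = (len(data) + num_slices - 1) // num_slices
    slices, pos = [], 0
    while pos < len(data):
        n = min(len(data) - pos, per_slice)
        slices.append(data[pos:pos + n])
        pos += n
    return slices
```

With `num_slices = 3` and a 7-byte input, the slices come out as 3 + 3 + 1 bytes, and concatenating them reproduces the input.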


@@ -58,9 +58,9 @@ namespace grpc {
WorkerService::AsyncService::AsyncService() {
for (int i = 0; i < kGrpcNumWorkerMethods; ++i) {
AddMethod(new ::grpc::RpcServiceMethod(
AddMethod(new ::grpc::internal::RpcServiceMethod(
GrpcWorkerMethodName(static_cast<GrpcWorkerMethod>(i)),
::grpc::RpcMethod::NORMAL_RPC, nullptr));
::grpc::internal::RpcMethod::NORMAL_RPC, nullptr));
::grpc::Service::MarkMethodAsync(i);
}
}


@@ -130,7 +130,7 @@ namespace grpc {
// definition in "//tensorflow/core/protobuf/worker_service.proto",
// and the gRPC generated stub and service classes.
// See the proto file for the definition of methods and messages.
class WorkerService GRPC_FINAL {
class WorkerService final {
public:
class AsyncService : public ::grpc::Service {
public:


@@ -33,7 +33,7 @@ void InitializePending(const Graph* graph, std::vector<int>* pending) {
const int id = node->id();
int num_in_edges = 0;
if (IsMerge(node)) {
// For forward executon order, Merge nodes are special. We process
// For forward execution order, Merge nodes are special. We process
// them only once when one of its inputs is processed.
for (const Edge* edge : node->in_edges()) {
if (edge->IsControlEdge()) {
@@ -122,7 +122,7 @@ Microseconds SlackAnalysis::ComputeAlap(std::vector<Microseconds>* alap_times) {
std::vector<int> pending_count;
pending_count.resize(graph_->num_node_ids());
for (const Node* n : graph_->nodes()) {
// For reverse executon order, Switch nodes are special. We process
// For reverse execution order, Switch nodes are special. We process
// them only once when one of its outputs is processed.
if (IsSwitch(n)) {
int32 num_control_edges = 0;


@@ -490,6 +490,9 @@ std::pair<NodeDef*, NodeDef*> BuildSwapPair(NodeDef* node, int input_to_swap,
(*swap_in_node->mutable_attr())["_class"].mutable_list()->add_s(coloc_group);
(*node->mutable_attr())["_class"].mutable_list()->add_s(coloc_group);
const DataType input_type = node->attr().at("T").type();
(*swap_in_node->mutable_attr())["T"].set_type(input_type);
(*swap_out_node->mutable_attr())["T"].set_type(input_type);
return std::make_pair(swap_out_node, swap_in_node);
}


@@ -4307,9 +4307,11 @@ filegroup(
"cwise_op_invert.cc",
"cwise_op_isfinite.cc",
"cwise_op_less.cc",
"cwise_op_less_equal.cc",
"cwise_op_log.cc",
"cwise_op_logical_and.cc",
"cwise_op_logical_not.cc",
"cwise_op_logical_or.cc",
"cwise_op_maximum.cc",
"cwise_op_minimum.cc",
"cwise_op_mul_1.cc",
@@ -4534,6 +4536,32 @@ cc_library(
alwayslink = 1,
)
cc_library(
name = "android_tensorflow_image_op",
srcs = [
"decode_image_op.cc",
],
copts = tf_copts(),
linkopts = select({
"//tensorflow:android": [
"-ldl",
],
"//conditions:default": [],
}),
tags = [
"manual",
"notap",
],
visibility = ["//visibility:public"],
deps = [
"//tensorflow/core:android_gif_internal",
"//tensorflow/core:android_jpeg_internal",
"//tensorflow/core:android_png_internal",
"//tensorflow/core:android_tensorflow_lib_lite",
],
alwayslink = 1,
)
# Quantization-specific OpKernels
tf_kernel_library(


@@ -24,6 +24,9 @@ limitations under the License.
namespace tensorflow {
typedef Eigen::ThreadPoolDevice CPUDevice;
#ifdef TENSORFLOW_USE_SYCL
typedef Eigen::SyclDevice SYCLDevice;
#endif // TENSORFLOW_USE_SYCL
enum DenseUpdateType { ADD, SUB, ASSIGN };
@@ -59,6 +62,32 @@ struct DenseUpdate<CPUDevice, T, ASSIGN> {
}
};
#ifdef TENSORFLOW_USE_SYCL
template <typename T>
struct DenseUpdate<SYCLDevice, T, ADD> {
void operator()(const SYCLDevice& d, typename TTypes<T>::Flat params,
typename TTypes<T>::ConstFlat update) {
params.device(d) += update;
}
};
template <typename T>
struct DenseUpdate<SYCLDevice, T, SUB> {
void operator()(const SYCLDevice& d, typename TTypes<T>::Flat params,
typename TTypes<T>::ConstFlat update) {
params.device(d) -= update;
}
};
template <typename T>
struct DenseUpdate<SYCLDevice, T, ASSIGN> {
void operator()(const SYCLDevice& d, typename TTypes<T>::Flat params,
typename TTypes<T>::ConstFlat update) {
params.device(d) = update;
}
};
#endif // TENSORFLOW_USE_SYCL
} // end namespace functor
} // end namespace tensorflow
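The three `DenseUpdate` specializations added above are plain elementwise `+=`, `-=`, and `=` on the device. A minimal Python sketch of the same dispatch (hypothetical `dense_update` helper, lists standing in for flat tensors):

```python
def dense_update(params, update, op):
    # Elementwise in-place update, mirroring the ADD/SUB/ASSIGN functors.
    assert len(params) == len(update)
    if op == "ASSIGN":
        params[:] = update
    elif op == "ADD":
        for i, u in enumerate(update):
            params[i] += u
    elif op == "SUB":
        for i, u in enumerate(update):
            params[i] -= u
    else:
        raise ValueError("unknown update type: " + op)
    return params
```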


@@ -42,7 +42,7 @@ class GroupByWindowDatasetOp : public UnaryDatasetOpKernel {
void MakeDataset(OpKernelContext* ctx, DatasetBase* input,
DatasetBase** output) override {
int64 window_size;
int64 window_size = 0;
OP_REQUIRES_OK(
ctx, ParseScalarArgument<int64>(ctx, "window_size", &window_size));
OP_REQUIRES(


@@ -17,11 +17,11 @@ limitations under the License.
#define EIGEN_USE_THREADS
#include "third_party/eigen3/unsupported/Eigen/CXX11/Tensor"
#include "tensorflow/core/framework/op_kernel.h"
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/framework/tensor_shape.h"
#include "tensorflow/core/kernels/bounds_check.h"
#include "third_party/eigen3/unsupported/Eigen/CXX11/Tensor"
namespace tensorflow {
@@ -29,12 +29,29 @@ template <typename T, typename TARGET_T>
class InTopK : public OpKernel {
public:
explicit InTopK(OpKernelConstruction* context) : OpKernel(context) {
OP_REQUIRES_OK(context, context->GetAttr("k", &k_));
if (context->num_inputs() == 2) {
OP_REQUIRES_OK(context, context->GetAttr("k", &k_));
}
}
void Compute(OpKernelContext* context) override {
const auto& predictions_in = context->input(0);
const auto& targets_in = context->input(1);
int64 k_val = k_;
if (context->num_inputs() == 3) {
const auto& k_in = context->input(2);
OP_REQUIRES(context, TensorShapeUtils::IsScalar(k_in.shape()),
errors::InvalidArgument("k must be 0-D, got shape ",
k_in.shape().DebugString()));
if (k_in.dtype() == DT_INT32) {
k_val = k_in.scalar<int32>()();
} else {
k_val = k_in.scalar<int64>()();
}
}
OP_REQUIRES(context, predictions_in.dims() == 2,
errors::InvalidArgument("predictions must be 2-dimensional"));
OP_REQUIRES(context, targets_in.dims() == 1,
@@ -73,7 +90,7 @@ class InTopK : public OpKernel {
}
}
}
out(b) = cannot_say ? false : (more_probable_classes < k_);
out(b) = cannot_say ? false : (more_probable_classes < k_val);
}
}
@@ -82,10 +99,35 @@ class InTopK : public OpKernel {
};
REGISTER_KERNEL_BUILDER(
Name("InTopK").Device(DEVICE_CPU).TypeConstraint<int32>("T"),
Name("InTopK").Device(DEVICE_CPU)
.HostMemory("predictions")
.HostMemory("targets")
.HostMemory("precision")
.TypeConstraint<int32>("T"),
InTopK<float, int32>);
REGISTER_KERNEL_BUILDER(
Name("InTopK").Device(DEVICE_CPU).TypeConstraint<int64>("T"),
Name("InTopK").Device(DEVICE_CPU)
.HostMemory("predictions")
.HostMemory("targets")
.HostMemory("precision")
.TypeConstraint<int64>("T"),
InTopK<float, int64>);
REGISTER_KERNEL_BUILDER(
Name("InTopKV2").Device(DEVICE_CPU)
.HostMemory("predictions")
.HostMemory("targets")
.HostMemory("k")
.HostMemory("precision")
.TypeConstraint<int32>("T"),
InTopK<float, int32>);
REGISTER_KERNEL_BUILDER(
Name("InTopKV2").Device(DEVICE_CPU)
.HostMemory("predictions")
.HostMemory("targets")
.HostMemory("k")
.HostMemory("precision")
.TypeConstraint<int64>("T"),
InTopK<float, int64>);
} // namespace tensorflow


@@ -823,9 +823,9 @@ void QuantizedAddUsingEigen(const Eigen::ThreadPoolDevice& device,
const int64 input_element_count = input.NumElements();
const int64 smaller_input_element_count = smaller_input.NumElements();
QuantizedToFloatStruct<T1> smaller_input_q2f(smaller_input_min,
QuantizedToFloatStruct<T1> input_q2f(input_min, input_max);
QuantizedToFloatStruct<T2> smaller_input_q2f(smaller_input_min,
smaller_input_max);
QuantizedToFloatStruct<T2> input_q2f(input_min, input_max);
FloatToQuantizedStruct<T3> f2q(*output_min, *output_max);
auto smaller_input_float =


@@ -50,15 +50,21 @@ typedef Eigen::SyclDevice SYCLDevice;
TF_CALL_REAL_NUMBER_TYPES(REGISTER_RELU_KERNELS);
#undef REGISTER_RELU_KERNELS
#define REGISTER_ELU_KERNELS(type) \
REGISTER_KERNEL_BUILDER( \
Name("Elu").Device(DEVICE_CPU).TypeConstraint<type>("T"), \
EluOp<CPUDevice, type>); \
REGISTER_KERNEL_BUILDER( \
Name("EluGrad").Device(DEVICE_CPU).TypeConstraint<type>("T"), \
EluGradOp<CPUDevice, type>)
#define REGISTER_ELU_KERNELS(type) \
REGISTER_KERNEL_BUILDER( \
Name("Elu").Device(DEVICE_CPU).TypeConstraint<type>("T"), \
EluOp<CPUDevice, type>); \
REGISTER_KERNEL_BUILDER( \
Name("EluGrad").Device(DEVICE_CPU).TypeConstraint<type>("T"), \
EluGradOp<CPUDevice, type>); \
REGISTER_KERNEL_BUILDER( \
Name("Selu").Device(DEVICE_CPU).TypeConstraint<type>("T"), \
SeluOp<CPUDevice, type>); \
REGISTER_KERNEL_BUILDER( \
Name("SeluGrad").Device(DEVICE_CPU).TypeConstraint<type>("T"), \
SeluGradOp<CPUDevice, type>)
// Elu only makes sense with float or double.
// Elu and Selu only make sense with float or double.
TF_CALL_GPU_NUMBER_TYPES(REGISTER_ELU_KERNELS);
#undef REGISTER_ELU_KERNELS
@@ -103,7 +109,23 @@ namespace functor {
const GPUDevice& d, typename TTypes<T>::ConstTensor gradients, \
typename TTypes<T>::ConstTensor activations, \
typename TTypes<T>::Tensor backprops); \
extern template struct EluGrad<GPUDevice, T>;
extern template struct EluGrad<GPUDevice, T>; \
\
template <> \
void Selu<GPUDevice, T>::operator()( \
const GPUDevice& d, \
typename TTypes<T>::ConstTensor features, \
typename TTypes<T>::Tensor activations); \
extern template struct Selu<GPUDevice, T>; \
\
template <> \
void SeluGrad<GPUDevice, T>::operator()( \
const GPUDevice& d, typename TTypes<T>::ConstTensor gradients, \
typename TTypes<T>::ConstTensor activations, \
typename TTypes<T>::Tensor backprops); \
extern template struct SeluGrad<GPUDevice, T>;
TF_CALL_GPU_NUMBER_TYPES(DECLARE_GPU_SPEC);
} // namespace functor
@@ -127,7 +149,15 @@ TF_CALL_GPU_NUMBER_TYPES(DECLARE_GPU_SPEC);
EluOp<GPUDevice, type>); \
REGISTER_KERNEL_BUILDER( \
Name("EluGrad").Device(DEVICE_GPU).TypeConstraint<type>("T"), \
EluGradOp<GPUDevice, type>)
EluGradOp<GPUDevice, type>); \
REGISTER_KERNEL_BUILDER( \
Name("Selu").Device(DEVICE_GPU).TypeConstraint<type>("T"), \
SeluOp<GPUDevice, type>); \
REGISTER_KERNEL_BUILDER( \
Name("SeluGrad").Device(DEVICE_GPU).TypeConstraint<type>("T"), \
SeluGradOp<GPUDevice, type>)
TF_CALL_GPU_NUMBER_TYPES(REGISTER_GPU_KERNELS);
#undef REGISTER_GPU_KERNELS
@@ -154,7 +184,15 @@ TF_CALL_GPU_NUMBER_TYPES(REGISTER_GPU_KERNELS);
EluOp<SYCLDevice, type>); \
REGISTER_KERNEL_BUILDER( \
Name("EluGrad").Device(DEVICE_SYCL).TypeConstraint<type>("T"), \
EluGradOp<SYCLDevice, type>)
EluGradOp<SYCLDevice, type>); \
REGISTER_KERNEL_BUILDER( \
Name("Selu").Device(DEVICE_SYCL).TypeConstraint<type>("T"), \
SeluOp<SYCLDevice, type>); \
REGISTER_KERNEL_BUILDER( \
Name("SeluGrad").Device(DEVICE_SYCL).TypeConstraint<type>("T"), \
SeluGradOp<SYCLDevice, type>)
TF_CALL_GPU_NUMBER_TYPES_NO_HALF(REGISTER_SYCL_KERNELS);
#undef REGISTER_SYCL_KERNELS


@@ -173,6 +173,48 @@ void EluGradOp<Device, T>::OperateNoTemplate(OpKernelContext* context,
output->flat<T>());
}
template <typename Device, typename T>
class SeluOp : public UnaryElementWiseOp<T, SeluOp<Device, T>> {
public:
using UnaryElementWiseOp<T, SeluOp<Device, T>>::UnaryElementWiseOp;
void Operate(OpKernelContext* context, const Tensor& input, Tensor* output) {
functor::Selu<Device, T> functor;
functor(context->eigen_device<Device>(), input.flat<T>(),
output->flat<T>());
}
};
template <typename Device, typename T>
class SeluGradOp : public BinaryElementWiseOp<T, SeluGradOp<Device, T>> {
public:
using BinaryElementWiseOp<T, SeluGradOp<Device, T>>::BinaryElementWiseOp;
void OperateNoTemplate(OpKernelContext* context, const Tensor& g,
const Tensor& a, Tensor* output);
// INPUTS:
// g (gradients): backpropagated gradients
// a (outputs): outputs of the SeluOp()
// OUTPUT:
// gradients to backprop
template <int NDIMS>
void Operate(OpKernelContext* context, const Tensor& g, const Tensor& a,
Tensor* output) {
OperateNoTemplate(context, g, a, output);
}
};
template <typename Device, typename T>
void SeluGradOp<Device, T>::OperateNoTemplate(OpKernelContext* context,
const Tensor& g, const Tensor& a,
Tensor* output) {
if (!ReluHelpers::ValidateSameSize(context, g, a)) return;
functor::SeluGrad<Device, T> functor;
functor(context->eigen_device<Device>(), g.flat<T>(), a.flat<T>(),
output->flat<T>());
}
} // namespace tensorflow
#undef EIGEN_USE_THREADS


@@ -125,6 +125,46 @@ struct EluGrad {
}
};
// Functor used by SeluOp to do the computations.
template <typename Device, typename T>
struct Selu {
// Computes Selu activation.
//
// features: any shape.
// activations: same shape as "features".
void operator()(const Device& d, typename TTypes<T>::ConstTensor features,
typename TTypes<T>::Tensor activations) {
// features.constant(?)
const auto scale = static_cast<T>(1.0507009873554804934193349852946);
const auto scale_alpha = static_cast<T>(1.7580993408473768599402175208123);
const auto one = static_cast<T>(1);
const auto zero = static_cast<T>(0);
activations.device(d) =
(features < zero)
.select(scale_alpha * (features.exp() - features.constant(one)),
scale * features);
}
};
// Functor used by SeluGradOp to do the computations.
template <typename Device, typename T>
struct SeluGrad {
// Computes SeluGrad backprops.
//
// gradients: gradients backpropagated to the Selu op.
// activations: outputs of the Selu op.
// backprops: gradients to backpropagate to the Selu inputs.
void operator()(const Device& d, typename TTypes<T>::ConstTensor gradients,
typename TTypes<T>::ConstTensor activations,
typename TTypes<T>::Tensor backprops) {
const auto scale = static_cast<T>(1.0507009873554804934193349852946);
const auto scale_alpha = static_cast<T>(1.7580993408473768599402175208123);
backprops.device(d) =
(activations < static_cast<T>(0)).select(
gradients * (activations + scale_alpha), gradients * scale);
}
};
} // namespace functor
} // namespace tensorflow
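The Selu/SeluGrad functors above can be checked scalar-by-scalar in Python. The two constants are copied from the functor (`scale_alpha == scale * alpha` for alpha ≈ 1.6732632423543773, the values from the self-normalizing networks paper); the function names are illustrative only:

```python
import math

# Constants copied from the Selu functor above.
SCALE = 1.0507009873554804934193349852946
SCALE_ALPHA = 1.7580993408473768599402175208123

def selu(x):
    # scale_alpha * (exp(x) - 1) for x < 0, scale * x otherwise.
    return SCALE_ALPHA * (math.exp(x) - 1.0) if x < 0 else SCALE * x

def selu_grad(g, activation):
    # As in the SeluGrad functor, the gradient is written in terms of the
    # FORWARD OUTPUT `activation`, not the original input:
    #   g * (activation + scale_alpha) if activation < 0, g * scale otherwise.
    return g * (activation + SCALE_ALPHA) if activation < 0 else g * SCALE
```

At `x = -1` the analytic derivative is `scale_alpha * exp(-1)`, and `selu_grad` recovers exactly that from the saved activation alone, which is why the op only needs to keep its outputs.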


@@ -35,7 +35,9 @@ typedef Eigen::GpuDevice GPUDevice;
template struct functor::Relu6<GPUDevice, T>; \
template struct functor::Relu6Grad<GPUDevice, T>; \
template struct functor::Elu<GPUDevice, T>; \
template struct functor::EluGrad<GPUDevice, T>;
template struct functor::EluGrad<GPUDevice, T>; \
template struct functor::Selu<GPUDevice, T>; \
template struct functor::SeluGrad<GPUDevice, T>;
TF_CALL_GPU_NUMBER_TYPES(DEFINE_GPU_KERNELS);


@@ -5,4 +5,4 @@ The table format is similar to the table format for the LevelDB
open source key/value store, with the exception that our tables
do not support "filter" meta blocks (Bloom Filters). See:
https://github.com/google/leveldb/blob/master/doc/table_format.txt
https://github.com/google/leveldb/blob/master/doc/table_format.md


@@ -22,17 +22,28 @@ limitations under the License.
namespace tensorflow {
namespace random {
std::mt19937_64* InitRng() {
namespace {
std::mt19937_64* InitRngWithRandomSeed() {
std::random_device device("/dev/urandom");
return new std::mt19937_64(device());
}
std::mt19937_64 InitRngWithDefaultSeed() { return std::mt19937_64(); }
} // anonymous namespace
uint64 New64() {
static std::mt19937_64* rng = InitRng();
static std::mt19937_64* rng = InitRngWithRandomSeed();
static mutex mu;
mutex_lock l(mu);
return (*rng)();
}
uint64 New64DefaultSeed() {
static std::mt19937_64 rng = InitRngWithDefaultSeed();
static mutex mu;
mutex_lock l(mu);
return rng();
}
} // namespace random
} // namespace tensorflow


@@ -25,6 +25,10 @@ namespace random {
// in different processes.
uint64 New64();
// Return a 64-bit random value. Uses
// std::mersenne_twister_engine::default_seed as seed value.
uint64 New64DefaultSeed();
} // namespace random
} // namespace tensorflow
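The pattern introduced above — one generator seeded from OS entropy for `New64`, one with a fixed default seed for `New64DefaultSeed` — can be sketched in Python. The names are hypothetical, and Python's Mersenne Twister seeding differs from `std::mt19937_64`, so the values will not match the C++ sequence; only the two-generator pattern is being illustrated:

```python
import random

# Seeded from OS entropy, analogous to reading /dev/urandom.
_entropy_rng = random.SystemRandom()

# Fixed seed -> reproducible sequence across runs.
_default_rng = random.Random(0)

def new64():
    return _entropy_rng.getrandbits(64)

def new64_default_seed():
    return _default_rng.getrandbits(64)
```

The deterministic variant is useful when different processes must independently produce the same pseudo-random stream.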


@@ -1779,6 +1779,33 @@ backprops: The gradients: `gradients * (outputs + 1)` if outputs < 0,
`gradients` otherwise.
)doc");
REGISTER_OP("Selu")
.Input("features: T")
.Output("activations: T")
.Attr("T: {half, float, double}")
.SetShapeFn(shape_inference::UnchangedShape)
.Doc(R"doc(
Computes scaled exponential linear: `scale * alpha * (exp(features) - 1)`
if < 0, `scale * features` otherwise.
See [Self-Normalizing Neural Networks](https://arxiv.org/abs/1706.02515)
)doc");
REGISTER_OP("SeluGrad")
.Input("gradients: T")
.Input("outputs: T")
.Output("backprops: T")
.Attr("T: {half, float, double}")
.SetShapeFn(shape_inference::MergeBothInputsShapeFn)
.Doc(R"doc(
Computes gradients for the scaled exponential linear (Selu) operation.
gradients: The backpropagated gradients to the corresponding Selu operation.
outputs: The outputs of the corresponding Selu operation.
backprops: The gradients: `gradients * (outputs + scale * alpha)`
if outputs < 0, `scale * gradients` otherwise.
)doc");
REGISTER_OP("Softplus")
.Input("features: T")
.Output("activations: T")
@@ -1979,6 +2006,49 @@ precision: Computed Precision at `k` as a `bool Tensor`.
)doc");
// This is the same as `InTopK`, but takes `k` as an input rather than an attr.
REGISTER_OP("InTopKV2")
.Input("predictions: float")
.Input("targets: T")
.Input("k: T")
.Output("precision: bool")
.Attr("T: {int32, int64} = DT_INT32")
.SetShapeFn([](InferenceContext* c) {
ShapeHandle predictions;
ShapeHandle targets;
TF_RETURN_IF_ERROR(c->WithRank(c->input(0), 2, &predictions));
TF_RETURN_IF_ERROR(c->WithRank(c->input(1), 1, &targets));
DimensionHandle batch_size;
TF_RETURN_IF_ERROR(
c->Merge(c->Dim(predictions, 0), c->Dim(targets, 0), &batch_size));
c->set_output(0, c->Vector(batch_size));
return Status::OK();
})
.Doc(R"doc(
Says whether the targets are in the top `K` predictions.
This outputs a `batch_size` bool array, an entry `out[i]` is `true` if the
prediction for the target class is among the top `k` predictions among
all predictions for example `i`. Note that the behavior of `InTopK` differs
from the `TopK` op in its handling of ties; if multiple classes have the
same prediction value and straddle the top-`k` boundary, all of those
classes are considered to be in the top `k`.
More formally, let
\\(predictions_i\\) be the predictions for all classes for example `i`,
\\(targets_i\\) be the target class for example `i`,
\\(out_i\\) be the output for example `i`,
$$out_i = predictions_{i, targets_i} \in TopKIncludingTies(predictions_i)$$
predictions: A `batch_size` x `classes` tensor.
targets: A `batch_size` vector of class ids.
k: Number of top elements to look at for computing precision.
precision: Computed precision at `k` as a `bool Tensor`.
)doc");
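The tie-handling rule documented above (a target straddling the top-`k` boundary via a tie still counts as "in") reduces to counting classes that are strictly more probable than the target, as the kernel does. A small Python sketch under that reading; `in_top_k` here is a hypothetical stand-in, not the TensorFlow op:

```python
def in_top_k(predictions, targets, k):
    # Only classes with a STRICTLY higher score push the target out of
    # the top k, so ties at the boundary are counted as "in".
    out = []
    for preds, target in zip(predictions, targets):
        target_score = preds[target]
        more_probable = sum(1 for p in preds if p > target_score)
        out.append(more_probable < k)
    return out
```

For example, with predictions `[0.9, 0.9, 0.1]` and `k = 1`, target class 1 ties for first place and is reported as in the top 1.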
namespace {
Status TopKShapeFn(InferenceContext* c) {


@@ -412,7 +412,8 @@ TEST(NNOpsTest, Dilation2DBackpropFilter_ShapeFn) {
TEST(NNOpsTest, MergeBothInputs_ShapeFn) {
for (const char* op_name :
{"ReluGrad", "Relu6Grad", "EluGrad", "SoftplusGrad", "SoftsignGrad"}) {
{"ReluGrad", "Relu6Grad", "EluGrad", "SeluGrad", "SoftplusGrad",
"SoftsignGrad"}) {
ShapeInferenceTestOp op(op_name);
INFER_OK(op, "?;?", "in0|in1");


@@ -23383,6 +23383,60 @@ op {
summary: "Computes the eigen decomposition of one or more square self-adjoint matrices."
description: "Computes the eigenvalues and (optionally) eigenvectors of each inner matrix in\n`input` such that `input[..., :, :] = v[..., :, :] * diag(e[..., :])`.\n\n```python\n# a is a tensor.\n# e is a tensor of eigenvalues.\n# v is a tensor of eigenvectors.\ne, v = self_adjoint_eig(a)\ne = self_adjoint_eig(a, compute_v=False)\n```"
}
op {
name: "Selu"
input_arg {
name: "features"
type_attr: "T"
}
output_arg {
name: "activations"
type_attr: "T"
}
attr {
name: "T"
type: "type"
allowed_values {
list {
type: DT_HALF
type: DT_FLOAT
type: DT_DOUBLE
}
}
}
summary: "Computes scaled exponential linear: `scale * alpha * (exp(features) - 1)` if < 0, `scale * features` otherwise."
description: "See [Self-Normalizing Neural Networks](https://arxiv.org/abs/1706.02515)"
}
op {
name: "SeluGrad"
input_arg {
name: "gradients"
description: "The backpropagated gradients to the corresponding Selu operation."
type_attr: "T"
}
input_arg {
name: "outputs"
description: "The outputs of the corresponding Selu operation."
type_attr: "T"
}
output_arg {
name: "backprops"
description: "The gradients: `gradients * (outputs + scale * alpha)` if outputs < 0,\n`scale * gradients` otherwise."
type_attr: "T"
}
attr {
name: "T"
type: "type"
allowed_values {
list {
type: DT_HALF
type: DT_FLOAT
type: DT_DOUBLE
}
}
}
summary: "Computes gradients for the scaled exponential linear (Selu) operation."
}
op {
name: "SerializeManySparse"
input_arg {


@@ -1,7 +1,7 @@
# Platform-specific build configurations.
load("@protobuf//:protobuf.bzl", "cc_proto_library")
load("@protobuf//:protobuf.bzl", "py_proto_library")
load("@protobuf_archive//:protobuf.bzl", "cc_proto_library")
load("@protobuf_archive//:protobuf.bzl", "py_proto_library")
load("//tensorflow:tensorflow.bzl", "if_not_mobile")
load("//tensorflow:tensorflow.bzl", "if_not_windows")
@@ -44,15 +44,15 @@ def tf_proto_library_cc(name, srcs = [], has_services = None,
cc_proto_library(
name = name + "_cc",
srcs = srcs,
deps = tf_deps(protodeps, "_cc") + ["@protobuf//:cc_wkt_protos"],
cc_libs = cc_libs + ["@protobuf//:protobuf"],
deps = tf_deps(protodeps, "_cc") + ["@protobuf_archive//:cc_wkt_protos"],
cc_libs = cc_libs + ["@protobuf_archive//:protobuf"],
copts = if_not_windows([
"-Wno-unknown-warning-option",
"-Wno-unused-but-set-variable",
"-Wno-sign-compare",
]),
protoc = "@protobuf//:protoc",
default_runtime = "@protobuf//:protobuf",
protoc = "@protobuf_archive//:protoc",
default_runtime = "@protobuf_archive//:protobuf",
use_grpc_plugin = use_grpc_plugin,
testonly = testonly,
visibility = visibility,
@@ -65,9 +65,9 @@ def tf_proto_library_py(name, srcs=[], protodeps=[], deps=[], visibility=[],
name = name + "_py",
srcs = srcs,
srcs_version = srcs_version,
deps = deps + tf_deps(protodeps, "_py") + ["@protobuf//:protobuf_python"],
protoc = "@protobuf//:protoc",
default_runtime = "@protobuf//:protobuf_python",
deps = deps + tf_deps(protodeps, "_py") + ["@protobuf_archive//:protobuf_python"],
protoc = "@protobuf_archive//:protoc",
default_runtime = "@protobuf_archive//:protobuf_python",
visibility = visibility,
testonly = testonly,
)


@@ -28,7 +28,7 @@ def tf_additional_verbs_deps():
"//tensorflow:with_verbs_support": [
"//tensorflow/contrib/verbs:verbs_server_lib",
"//tensorflow/contrib/verbs:grpc_verbs_client",
],
],
"//conditions:default": [],
})


@@ -20,7 +20,7 @@ limitations under the License.
#if defined(PLATFORM_GOOGLE)
#include "tensorflow/core/platform/google/build_config/gif.h"
#elif (defined(PLATFORM_POSIX) && !defined(IS_MOBILE_PLATFORM)) || defined(PLATFORM_WINDOWS)
#elif defined(PLATFORM_POSIX)|| defined(PLATFORM_WINDOWS) ||defined(PLATFORM_POSIX_ANDROID)
#include <gif_lib.h>
#else
#error Define the appropriate PLATFORM_<foo> macro for this platform


@@ -20,7 +20,7 @@ limitations under the License.
#if defined(PLATFORM_GOOGLE)
#include "tensorflow/core/platform/google/build_config/jpeg.h"
#elif (defined(PLATFORM_POSIX) && !defined(IS_MOBILE_PLATFORM)) || defined(PLATFORM_WINDOWS)
#elif defined(PLATFORM_POSIX)|| defined(PLATFORM_WINDOWS) ||defined(PLATFORM_POSIX_ANDROID)
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>


@@ -43,9 +43,10 @@ limitations under the License.
#elif defined(__arm__)
#define PLATFORM_POSIX
// Since there's no macro for the Raspberry Pi, assume we're on a mobile
// platform if we're compiling for the ARM CPU.
// Require an outside macro to tell us if we're building for Raspberry Pi.
#if !defined(RASPBERRY_PI)
#define IS_MOBILE_PLATFORM
#endif // !defined(RASPBERRY_PI)
#else
// If no platform specified, use:


@@ -20,7 +20,7 @@ limitations under the License.
#if defined(PLATFORM_GOOGLE)
#include "tensorflow/core/platform/google/build_config/png.h"
#elif (defined(PLATFORM_POSIX) && !defined(IS_MOBILE_PLATFORM)) || defined(PLATFORM_WINDOWS)
#elif defined(PLATFORM_POSIX)|| defined(PLATFORM_WINDOWS) ||defined(PLATFORM_POSIX_ANDROID)
#include <png.h>
#else
#error Define the appropriate PLATFORM_<foo> macro for this platform


@@ -102,6 +102,8 @@ message OptimizerOptions {
L0 = -1;
}
// Overall optimization level. The actual optimizations applied will be the
// logical OR of the flags that this level implies and any flags already set.
Level opt_level = 3;
// Control the use of the compiler/jit. Experimental.


@@ -19,12 +19,12 @@ limitations under the License.
// TensorFlow uses semantic versioning, see http://semver.org/.
#define TF_MAJOR_VERSION 1
#define TF_MINOR_VERSION 2
#define TF_PATCH_VERSION 1
#define TF_MINOR_VERSION 3
#define TF_PATCH_VERSION 0
// TF_VERSION_SUFFIX is non-empty for pre-releases (e.g. "-alpha", "-alpha.1",
// "-beta", "-rc", "-rc.1")
#define TF_VERSION_SUFFIX ""
#define TF_VERSION_SUFFIX "-rc0"
#define TF_STR_HELPER(x) #x
#define TF_STR(x) TF_STR_HELPER(x)


@@ -85,28 +85,29 @@ static bool ConsumeNumber(StringPiece* in, int* val) {
}
}
/* static */
string DeviceNameUtils::FullName(const string& job, int replica, int task,
const string& type, int id) {
// Returns a fully qualified device name given the parameters.
static string DeviceName(const string& job, int replica, int task,
const string& device_prefix, const string& device_type,
int id) {
CHECK(IsJobName(job)) << job;
CHECK_LE(0, replica);
CHECK_LE(0, task);
CHECK(!type.empty());
CHECK(!device_type.empty());
CHECK_LE(0, id);
return strings::StrCat("/job:", job, "/replica:", replica, "/task:", task,
"/device:", type, ":", id);
device_prefix, device_type, ":", id);
}
/* static */
string DeviceNameUtils::FullName(const string& job, int replica, int task,
const string& type, int id) {
return DeviceName(job, replica, task, "/device:", type, id);
}
/* static */
string DeviceNameUtils::LegacyName(const string& job, int replica, int task,
const string& type, int id) {
CHECK(IsJobName(job)) << job;
CHECK_LE(0, replica);
CHECK_LE(0, task);
CHECK(!type.empty());
CHECK_LE(0, id);
return strings::StrCat("/job:", job, "/replica:", replica, "/task:", task,
"/", str_util::Lowercase(type), ":", id);
return DeviceName(job, replica, task, "/", str_util::Lowercase(type), id);
}
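The refactor above routes both name formats through one helper that differs only in the separator prefix and the casing of the device type. A Python sketch of the resulting strings (hypothetical function names, mirroring `FullName`/`LegacyName`):

```python
def device_name(job, replica, task, device_prefix, device_type, device_id):
    # Shared helper: the two public formats differ only in the prefix
    # and the casing of the device type.
    return "/job:%s/replica:%d/task:%d%s%s:%d" % (
        job, replica, task, device_prefix, device_type, device_id)

def full_name(job, replica, task, device_type, device_id):
    return device_name(job, replica, task, "/device:", device_type, device_id)

def legacy_name(job, replica, task, device_type, device_id):
    return device_name(job, replica, task, "/", device_type.lower(), device_id)
```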
bool DeviceNameUtils::ParseFullName(StringPiece fullname, ParsedName* p) {


@@ -1,8 +1,12 @@
# Losses (contrib)
## Deprecated
This module is deprecated. Instructions for updating: Use @{tf.losses} instead.
## Loss operations for use in neural networks.
Note: By default all the losses are collected into the `GraphKeys.LOSSES`
Note: By default, all the losses are collected into the `GraphKeys.LOSSES`
collection.
All of the loss functions take a pair of predictions and ground truth labels,


@@ -8,7 +8,7 @@ Note: Functions taking `Tensor` arguments can also take anything accepted by
## Activation Functions
The activation ops provide different types of nonlinearities for use in neural
networks. These include smooth nonlinearities (`sigmoid`, `tanh`, `elu`,
networks. These include smooth nonlinearities (`sigmoid`, `tanh`, `elu`, `selu`,
`softplus`, and `softsign`), continuous but not everywhere differentiable
functions (`relu`, `relu6`, `crelu` and `relu_x`), and random regularization
(`dropout`).
@@ -20,6 +20,7 @@ shape as the input tensor.
* @{tf.nn.relu6}
* @{tf.nn.crelu}
* @{tf.nn.elu}
* @{tf.nn.selu}
* @{tf.nn.softplus}
* @{tf.nn.softsign}
* @{tf.nn.dropout}


@@ -35,7 +35,7 @@ at [tensorflow.org/versions/master](https://www.tensorflow.org/versions/master).
If you want documentation changes to appear at root, you will need to also
contribute that change to the current stable binary branch (and/or
[cherrypick](https://www.google.com/url?sa=D&q=http%3A%2F%2Fstackoverflow.com%2Fquestions%2F9339429%2Fwhat-does-cherry-picking-a-commit-with-git-mean)).
[cherrypick](https://stackoverflow.com/questions/9339429/what-does-cherry-picking-a-commit-with-git-mean)).
## Reference vs. non-reference documentation


@@ -10,3 +10,5 @@ This section contains the following documents:
TensorFlow source code or documentation, please read this guide.
* @{$style_guide$TensorFlow Style Guide}, which identifies coding style
conventions that TensorFlow developers and users should follow.
* @{$benchmarks$Benchmarks}, a guide for defining and
running a TensorFlow benchmark.


@@ -63,7 +63,7 @@ There are actually two different formats that a ProtoBuf can be saved in.
TextFormat is a human-readable form, which makes it nice for debugging and
editing, but can get large when there's numerical data like weights stored in
it. You can see a small example of that in
[graph_run_run2.pbtxt](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tensorboard/demo/data/graph_run_run2.pbtxt).
[graph_run_run2.pbtxt](https://github.com/tensorflow/tensorboard/blob/master/tensorboard/demo/data/graph_run_run2.pbtxt).
Binary format files are a lot smaller than their text equivalents, even though
they're not as readable for us. In this script, we ask the user to supply a


@@ -35,7 +35,7 @@ enable TensorFlow for C:
OS="linux" # Change to "darwin" for Mac OS
TARGET_DIRECTORY="/usr/local"
curl -L \
"https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-${OS}-x86_64-1.2.1.tar.gz" |
"https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-${OS}-x86_64-1.3.0-rc0.tar.gz" |
sudo tar -C $TARGET_DIRECTORY -xz
The `tar` command extracts the TensorFlow C library into the `lib`


@@ -35,7 +35,7 @@ steps to install this library and enable TensorFlow for Go:
TF_TYPE="cpu" # Change to "gpu" for GPU support
TARGET_DIRECTORY='/usr/local'
curl -L \
"https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-$(go env GOOS)-x86_64-1.2.1.tar.gz" |
"https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-$(go env GOOS)-x86_64-1.3.0-rc0.tar.gz" |
sudo tar -C $TARGET_DIRECTORY -xz
The `tar` command extracts the TensorFlow C library into the `lib`


@@ -34,7 +34,7 @@ following to the project's `pom.xml` to use the TensorFlow Java APIs:
<dependency>
<groupId>org.tensorflow</groupId>
<artifactId>tensorflow</artifactId>
<version>1.2.1</version>
<version>1.3.0-rc0</version>
</dependency>
```
@@ -63,7 +63,7 @@ As an example, these steps will create a Maven project that uses TensorFlow:
<dependency>
<groupId>org.tensorflow</groupId>
<artifactId>tensorflow</artifactId>
<version>1.2.1</version>
<version>1.3.0-rc0</version>
</dependency>
</dependencies>
</project>
@@ -122,7 +122,7 @@ refer to the simpler instructions above instead.
Take the following steps to install TensorFlow for Java on Linux or Mac OS:
1. Download
[libtensorflow.jar](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-1.2.1.jar),
[libtensorflow.jar](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-1.3.0-rc0.jar),
which is the TensorFlow Java Archive (JAR).
2. Decide whether you will run TensorFlow for Java on CPU(s) only or with
@@ -141,7 +141,7 @@ Take the following steps to install TensorFlow for Java on Linux or Mac OS:
OS=$(uname -s | tr '[:upper:]' '[:lower:]')
mkdir -p ./jni
curl -L \
"https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-${TF_TYPE}-${OS}-x86_64-1.2.1.tar.gz" |
"https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-${TF_TYPE}-${OS}-x86_64-1.3.0-rc0.tar.gz" |
tar -xz -C ./jni
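The `uname`-based OS detection in the step above can be sketched in isolation; the JNI archive URL below uses the version from the updated docs and the `cpu` build as illustrative choices:

```shell
# Hedged sketch: normalize the OS name as the instructions above do
# ("linux" or "darwin"), then assemble the JNI archive URL from it.
OS=$(uname -s | tr '[:upper:]' '[:lower:]')
JNI_URL="https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-cpu-${OS}-x86_64-1.3.0-rc0.tar.gz"
echo "${JNI_URL}"
```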
### Install on Windows
@@ -149,10 +149,10 @@ Take the following steps to install TensorFlow for Java on Linux or Mac OS:
Take the following steps to install TensorFlow for Java on Windows:
1. Download
-[libtensorflow.jar](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-1.2.1.jar),
+[libtensorflow.jar](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-1.3.0-rc0.jar),
which is the TensorFlow Java Archive (JAR).
2. Download the following Java Native Interface (JNI) file appropriate for
-[TensorFlow for Java on Windows](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-cpu-windows-x86_64-1.2.1.zip).
+[TensorFlow for Java on Windows](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-cpu-windows-x86_64-1.3.0-rc0.zip).
3. Extract this .zip file.
@@ -200,7 +200,7 @@ must be part of your `classpath`. For example, you can include the
downloaded `.jar` in your `classpath` by using the `-cp` compilation flag
as follows:
-<pre><b>javac -cp libtensorflow-1.2.1.jar HelloTF.java</b></pre>
+<pre><b>javac -cp libtensorflow-1.3.0-rc0.jar HelloTF.java</b></pre>
### Running
@@ -214,11 +214,11 @@ two files are available to the JVM:
For example, the following command line executes the `HelloTF` program on Linux
and Mac OS X:
-<pre><b>java -cp libtensorflow-1.2.1.jar:. -Djava.library.path=./jni HelloTF</b></pre>
+<pre><b>java -cp libtensorflow-1.3.0-rc0.jar:. -Djava.library.path=./jni HelloTF</b></pre>
And the following command line executes the `HelloTF` program on Windows:
-<pre><b>java -cp libtensorflow-1.2.1.jar;. -Djava.library.path=jni HelloTF</b></pre>
+<pre><b>java -cp libtensorflow-1.3.0-rc0.jar;. -Djava.library.path=jni HelloTF</b></pre>
If the program prints <tt>Hello from <i>version</i></tt>, you've successfully
installed TensorFlow for Java and are ready to use the API. If the program
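The only difference between the Linux/macOS and Windows run commands above is the classpath separator (`:` versus `;`). A minimal sketch of choosing it portably (the `uname` patterns for Windows shells are an assumption):

```shell
# Hedged sketch: pick the Java classpath separator for the current OS.
# CYGWIN/MINGW/MSYS are assumed to indicate a Windows shell environment.
case "$(uname -s)" in
  CYGWIN*|MINGW*|MSYS*) SEP=';' ;;
  *)                    SEP=':' ;;
esac
echo "java -cp libtensorflow-1.3.0-rc0.jar${SEP}. -Djava.library.path=./jni HelloTF"
```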

---

@@ -172,7 +172,7 @@ Take the following steps to install TensorFlow with Virtualenv:
virtualenv environment:
<pre>(tensorflow)$ <b>pip3 install --upgrade \
-https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.2.1-cp34-cp34m-linux_x86_64.whl</b></pre>
+https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.3.0rc0-cp34-cp34m-linux_x86_64.whl</b></pre>
If you encounter installation problems, see
[Common Installation Problems](#common_installation_problems).
@@ -277,7 +277,7 @@ take the following steps:
<pre>
$ <b>sudo pip3 install --upgrade \
-https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.2.1-cp34-cp34m-linux_x86_64.whl</b>
+https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.3.0rc0-cp34-cp34m-linux_x86_64.whl</b>
</pre>
If this step fails, see
@@ -464,7 +464,7 @@ Take the following steps to install TensorFlow in an Anaconda environment:
<pre>
(tensorflow)$ <b>pip install --ignore-installed --upgrade \
-https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.2.1-cp34-cp34m-linux_x86_64.whl</b></pre>
+https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.3.0rc0-cp34-cp34m-linux_x86_64.whl</b></pre>
<a name="ValidateYourInstallation"></a>
@@ -632,14 +632,14 @@ This section documents the relevant values for Linux installations.
CPU only:
<pre>
-https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.2.1-cp27-none-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.3.0rc0-cp27-none-linux_x86_64.whl
</pre>
GPU support:
<pre>
-https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.2.1-cp27-none-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.3.0rc0-cp27-none-linux_x86_64.whl
</pre>
Note that GPU support requires the NVIDIA hardware and software described in
@@ -651,14 +651,14 @@ Note that GPU support requires the NVIDIA hardware and software described in
CPU only:
<pre>
-https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.2.1-cp34-cp34m-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.3.0rc0-cp34-cp34m-linux_x86_64.whl
</pre>
GPU support:
<pre>
-https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.2.1-cp34-cp34m-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.3.0rc0-cp34-cp34m-linux_x86_64.whl
</pre>
Note that GPU support requires the NVIDIA hardware and software described in
@@ -670,14 +670,14 @@ Note that GPU support requires the NVIDIA hardware and software described in
CPU only:
<pre>
-https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.2.1-cp35-cp35m-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.3.0rc0-cp35-cp35m-linux_x86_64.whl
</pre>
GPU support:
<pre>
-https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.2.1-cp35-cp35m-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.3.0rc0-cp35-cp35m-linux_x86_64.whl
</pre>
@@ -689,14 +689,14 @@ Note that GPU support requires the NVIDIA hardware and software described in
CPU only:
<pre>
-https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.2.1-cp36-cp36m-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.3.0rc0-cp36-cp36m-linux_x86_64.whl
</pre>
GPU support:
<pre>
-https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.2.1-cp36-cp36m-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.3.0rc0-cp36-cp36m-linux_x86_64.whl
</pre>

---

@@ -109,7 +109,7 @@ Take the following steps to install TensorFlow with Virtualenv:
TensorFlow in the active Virtualenv is as follows:
<pre> $ <b>pip3 install --upgrade \
-https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.2.1-py2-none-any.whl</b></pre>
+https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.3.0rc0-py2-none-any.whl</b></pre>
If you encounter installation problems, see
[Common Installation Problems](#common-installation-problems).
@@ -230,7 +230,7 @@ take the following steps:
issue the following command:
<pre> $ <b>sudo pip3 install --upgrade \
-https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.2.1-py2-none-any.whl</b> </pre>
+https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.3.0rc0-py2-none-any.whl</b> </pre>
If the preceding command fails, see
[installation problems](#common-installation-problems).
@@ -339,7 +339,7 @@ Take the following steps to install TensorFlow in an Anaconda environment:
TensorFlow for Python 2.7:
<pre> (tensorflow)$ <b>pip install --ignore-installed --upgrade \
-https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.2.1-py2-none-any.whl</b></pre>
+https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.3.0rc0-py2-none-any.whl</b></pre>
<a name="ValidateYourInstallation"></a>
@@ -512,7 +512,7 @@ This section documents the relevant values for Mac OS installations.
<pre>
-https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.2.1-py2-none-any.whl
+https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.3.0rc0-py2-none-any.whl
</pre>
@@ -520,7 +520,7 @@ https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.2.1-py2-none-any.
<pre>
-https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.2.1-py3-none-any.whl
+https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.3.0rc0-py3-none-any.whl
</pre>

---

@@ -342,10 +342,10 @@ Invoke `pip install` to install that pip package.
The filename of the `.whl` file depends on your platform.
For example, the following command will install the pip package
-for TensorFlow 1.2.1 on Linux:
+for TensorFlow 1.3.0rc0 on Linux:
<pre>
-$ <b>sudo pip install /tmp/tensorflow_pkg/tensorflow-1.2.1-py2-none-any.whl</b>
+$ <b>sudo pip install /tmp/tensorflow_pkg/tensorflow-1.3.0rc0-py2-none-any.whl</b>
</pre>
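The wheel filename in the command above encodes the package name and version as its first two dash-separated fields, which is a quick way to check which build you are about to install. A minimal sketch (the filename is the one from the updated docs):

```shell
# Hedged sketch: pull the version string out of a wheel filename.
# The second dash-separated field of a .whl name is the version.
WHEEL="tensorflow-1.3.0rc0-py2-none-any.whl"
VERSION=$(echo "$WHEEL" | cut -d- -f2)
echo "$VERSION"
```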
## Validate your installation

---

@@ -115,12 +115,12 @@ Take the following steps to install TensorFlow in an Anaconda environment:
environment. To install the CPU-only version of TensorFlow, enter the
following command:
-<pre>(tensorflow)C:\> <b>pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-1.2.1-cp35-cp35m-win_amd64.whl</b> </pre>
+<pre>(tensorflow)C:\> <b>pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-1.3.0rc0-cp35-cp35m-win_amd64.whl</b> </pre>
To install the GPU version of TensorFlow, enter the following command
(on a single line):
-<pre>(tensorflow)C:\> <b>pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/windows/gpu/tensorflow_gpu-1.2.1-cp35-cp35m-win_amd64.whl</b> </pre>
+<pre>(tensorflow)C:\> <b>pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/windows/gpu/tensorflow_gpu-1.3.0rc0-cp35-cp35m-win_amd64.whl</b> </pre>
## Validate your installation

---

@@ -104,7 +104,7 @@ with tf.device('/cpu:0'):
Under some circumstances, both the CPU and GPU can be starved for data by the
I/O system. If you are using many small files to form your input data set, you
may be limited by the speed of your filesystem. If your training loop runs
-faster when using SSDs vs HDDs for storing your input data, you could could be
+faster when using SSDs vs HDDs for storing your input data, you could be
I/O bottlenecked.
If this is the case, you should pre-process your input data, creating a few
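The pre-processing advice above can be sketched generically: bundling many small input files into one archive means reads hit a single large file instead of thousands of small ones. This is only an illustration with made-up paths (TensorFlow's own recommendation would typically use a record format rather than tar):

```shell
# Hedged sketch: pack small input files into one archive to reduce
# per-file filesystem overhead. Paths are illustrative.
mkdir -p /tmp/tf_io_demo
printf 'a' > /tmp/tf_io_demo/sample1.txt
printf 'b' > /tmp/tf_io_demo/sample2.txt
tar -czf /tmp/tf_io_demo.tar.gz -C /tmp/tf_io_demo .
# Count the bundled files to confirm the archive contents.
COUNT=$(tar -tzf /tmp/tf_io_demo.tar.gz | grep -c '\.txt$')
echo "$COUNT"
```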

---

@@ -93,7 +93,7 @@ curl http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.t
tar xzf /tmp/inceptionv3.tgz -C /tmp/
bazel build tensorflow/tools/graph_transforms:transform_graph
bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
---in_graph=/tmp/classify_image_graph_def.pb \
+--inputs="Mul" --in_graph=/tmp/classify_image_graph_def.pb \
--outputs="softmax" --out_graph=/tmp/quantized_graph.pb \
--transforms='add_default_attributes strip_unused_nodes(type=float, shape="1,299,299,3")
remove_nodes(op=Identity, op=CheckNumerics) fold_constants(ignore_errors=true)
@@ -108,12 +108,6 @@ versus 91MB). You can still run this model using exactly the same inputs and
outputs though, and you should get equivalent results. Here's an example:
```sh
-# Note: You need to add the dependencies of the quantization operation to the
-# cc_binary in the BUILD file of the label_image program:
-#
-# //tensorflow/contrib/quantization:cc_ops
-# //tensorflow/contrib/quantization/kernels:quantized_ops
bazel build tensorflow/examples/label_image:label_image
bazel-bin/tensorflow/examples/label_image/label_image \
--image=<input-image> \

---

@@ -106,8 +106,8 @@ Here's a list of columns available in the Census Income dataset:
: : : military, private, etc.). :
| fnlwgt | Continuous | The number of people the census |
: : : takers believe that observation :
-: : : represents (sample weight). This :
-: : : variable will not be used. :
+: : : represents (sample weight). Final :
+: : : weight will not be used. :
| education | Categorical | The highest level of education |
: : : achieved for that individual. :
| education_num | Continuous | The highest level of education in |

---

@@ -124,7 +124,7 @@ it and the Android NDK and SDK must be installed on your system.
##### Edit WORKSPACE
-The Android entries in [`<workspace_root>/WORKSPACE`](../../../WORKSPACE#L19-L32)
+The Android entries in [`<workspace_root>/WORKSPACE`](../../../WORKSPACE#L19-L36)
must be uncommented with the paths filled in appropriately depending on where
you installed the NDK and SDK. Otherwise an error such as:
"The external label '//external:android/sdk' is not bound to anything" will

---

@@ -31,6 +31,9 @@ cc_binary(
# Jpg, gif, and png related code won't be included
"//tensorflow/cc:cc_ops",
"//tensorflow/core:android_tensorflow_lib",
+# cc:android_tensorflow_image_op is for including jpeg/gif/png
+# decoder to enable real-image evaluation on Android
+"//tensorflow/core/kernels:android_tensorflow_image_op",
],
"//conditions:default": [
"//tensorflow/cc:cc_ops",

---

@@ -7,28 +7,28 @@ See the [Quickstart tutorial](https://www.tensorflow.org/get_started/estimator)
for an introduction to the API.
To run most of these examples, you need to install the `scikit learn` library
-(`sudo pip install sklearn`). Some examples use the `pandas` library for data
-processing (`sudo pip install pandas`).
+(`pip install -U scikit-learn`). Some examples use the `pandas` library for data
+processing (`pip install -U pandas`).
## Basics
-* [Deep Neural Network Regression with Boston Data]( https://www.tensorflow.org/code/tensorflow/examples/learn/boston.py)
-* [Deep Neural Network Classification with Iris Data]( https://www.tensorflow.org/code/tensorflow/examples/learn/iris.py)
-* [Building a Custom Model]( https://www.tensorflow.org/code/tensorflow/examples/learn/iris_custom_model.py)
-* [Building a Model Using Different GPU Configurations]( https://www.tensorflow.org/code/tensorflow/examples/learn/iris_run_config.py)
+* [Deep Neural Network Regression with Boston Data](https://www.tensorflow.org/code/tensorflow/examples/learn/boston.py)
+* [Deep Neural Network Classification with Iris Data](https://www.tensorflow.org/code/tensorflow/examples/learn/iris.py)
+* [Building a Custom Model](https://www.tensorflow.org/code/tensorflow/examples/learn/iris_custom_model.py)
+* [Building a Model Using Different GPU Configurations](https://www.tensorflow.org/code/tensorflow/examples/learn/iris_run_config.py)
## Techniques
-* [Deep Neural Network with Customized Decay Function]( https://www.tensorflow.org/code/tensorflow/examples/learn/iris_custom_decay_dnn.py)
+* [Deep Neural Network with Customized Decay Function](https://www.tensorflow.org/code/tensorflow/examples/learn/iris_custom_decay_dnn.py)
## Specialized Models
-* [Building a Random Forest Model]( https://www.tensorflow.org/code/tensorflow/examples/learn/random_forest_mnist.py)
-* [Building a Wide & Deep Model]( https://www.tensorflow.org/code/tensorflow/examples/learn/wide_n_deep_tutorial.py)
-* [Building a Residual Network Model]( https://www.tensorflow.org/code/tensorflow/examples/learn/resnet.py)
+* [Building a Random Forest Model](https://www.tensorflow.org/code/tensorflow/examples/learn/random_forest_mnist.py)
+* [Building a Wide & Deep Model](https://www.tensorflow.org/code/tensorflow/examples/learn/wide_n_deep_tutorial.py)
+* [Building a Residual Network Model](https://www.tensorflow.org/code/tensorflow/examples/learn/resnet.py)
## Text classification
-* [Text Classification Using Recurrent Neural Networks on Words]( https://www.tensorflow.org/code/tensorflow/examples/learn/text_classification.py)
-* [Text Classification Using Convolutional Neural Networks on Words]( https://www.tensorflow.org/code/tensorflow/examples/learn/text_classification_cnn.py)
-* [Text Classification Using Recurrent Neural Networks on Characters]( https://www.tensorflow.org/code/tensorflow/examples/learn/text_classification_character_rnn.py)
-* [Text Classification Using Convolutional Neural Networks on Characters]( https://www.tensorflow.org/code/tensorflow/examples/learn/text_classification_character_cnn.py)
+* [Text Classification Using Recurrent Neural Networks on Words](https://www.tensorflow.org/code/tensorflow/examples/learn/text_classification.py)
+* [Text Classification Using Convolutional Neural Networks on Words](https://www.tensorflow.org/code/tensorflow/examples/learn/text_classification_cnn.py)
+* [Text Classification Using Recurrent Neural Networks on Characters](https://www.tensorflow.org/code/tensorflow/examples/learn/text_classification_character_rnn.py)
+* [Text Classification Using Convolutional Neural Networks on Characters](https://www.tensorflow.org/code/tensorflow/examples/learn/text_classification_character_cnn.py)

Some files were not shown because too many files have changed in this diff.