Merge changes from github.

PiperOrigin-RevId: 186073337

parent 128572c316
commit 0e6f39d1bd
@@ -41,7 +41,7 @@ TensorFlow coding style.
 #### General guidelines and philosophy for contribution

 * Include unit tests when you contribute new features, as they help to
-  a) prove that your code works correctly, b) guard against future breaking
+  a) prove that your code works correctly, and b) guard against future breaking
   changes to lower the maintenance cost.
 * Bug fixes also generally require unit tests, because the presence of bugs
   usually indicates insufficient test coverage.
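The unit-test guideline above can be made concrete with a minimal sketch. The `add` function and its tests below are hypothetical illustrations of the a)/b) points, not TensorFlow code:

```python
import unittest

def add(a, b):
    """Hypothetical feature under test."""
    return a + b

class AddTest(unittest.TestCase):
    def test_basic(self):
        # a) prove that the code works correctly today
        self.assertEqual(add(2, 3), 5)

    def test_inverse(self):
        # b) guard against future breaking changes
        self.assertEqual(add(-1, 1), 0)
```

Run with `python -m unittest`; a test like this serves both purposes named above: proving correctness now and catching regressions later.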
@@ -51,7 +51,7 @@ TensorFlow coding style.
 non-backward-compatible API changes without a major release. Reviewers of your
 pull request will comment on any API compatibility issues.
 * When you contribute a new feature to TensorFlow, the maintenance burden is (by
-  default) transferred to the TensorFlow team. This means that benefit of
+  default) transferred to the TensorFlow team. This means that benefit of the
   contribution must be compared against the cost of maintaining the feature.
 * Full new features (e.g., a new op implementing a cutting-edge algorithm)
   typically will live in
@@ -68,8 +68,8 @@ Include a license at the top of new files.
 * [Java license example](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/java/src/main/java/org/tensorflow/Graph.java#L1)
 * [Go license example](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/go/operation.go#L1)
 * [Bash license example](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/ci_build/ci_sanity.sh#L2)
-* [HTML license example](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tensorboard/dist/index.html#L2)
-* [JavaScript/TypeScript license example](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tensorboard/components/tf_backend/backend.ts#L1)
+* [HTML license example](https://github.com/tensorflow/tensorboard/blob/master/tensorboard/components/tf_backend/tf-backend.html#L2)
+* [JavaScript/TypeScript license example](https://github.com/tensorflow/tensorboard/blob/master/tensorboard/components/tf_backend/backend.ts#L1)

 Bazel BUILD files also need to include a license section, e.g.,
 [BUILD example](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/BUILD#L61).
@@ -163,7 +163,7 @@ There are two ways to run TensorFlow unit tests.
    bazel test ${flags} //tensorflow/python/...
    ```

-2. Using [Docker](www.docker.com) and TensorFlow's CI scripts.
+2. Using [Docker](https://www.docker.com) and TensorFlow's CI scripts.

    ```bash
    # Install Docker first, then this will build and run cpu tests
RELEASE.md (114 changes)

@@ -1,9 +1,98 @@
+# Release 1.6.0
+
+## Breaking Changes
+* Prebuilt binaries are now built against CUDA 9.0 and cuDNN 7.
+* Prebuilt binaries will use AVX instructions. This may break TF on older CPUs.
+
+## Major Features And Improvements
+* New Optimizer internal API for non-slot variables. Descendants of AdamOptimizer
+  that access _beta[12]_power will need to be updated.
+* `tf.estimator.{FinalExporter,LatestExporter}` now export stripped SavedModels.
+  This improves forward compatibility of the SavedModel.
+* FFT support added to XLA CPU/GPU.
+
+## Bug Fixes and Other Changes
+* Documentation updates:
+  * Added a second version of Getting Started, which is aimed at ML newcomers.
+  * Clarified documentation on `resize_images.align_corners` parameter.
+  * Additional documentation for TPUs.
+* Google Cloud Storage (GCS):
+  * Add client-side throttle.
+  * Add a `FlushCaches()` method to the FileSystem interface, with an
+    implementation for GcsFileSystem.
+* Other:
+  * Add `tf.contrib.distributions.Kumaraswamy`.
+  * `RetryingFileSystem::FlushCaches()` calls the base FileSystem's `FlushCaches()`.
+  * Add `auto_correlation` to distributions.
+  * Add `tf.contrib.distributions.Autoregressive`.
+  * Add SeparableConv1D layer.
+  * Add convolutional Flipout layers.
+  * When both inputs of `tf.matmul` are bfloat16, it returns bfloat16, instead
+    of float32.
+  * Added `tf.contrib.image.connected_components`.
+  * Add `tf.contrib.framework.CriticalSection` that allows atomic variable access.
+  * Output variance over trees predictions for classification tasks.
+  * For `pt` and `eval` commands, allow writing tensor values to filesystem as
+    numpy files.
+  * gRPC: Propagate truncated errors (instead of returning gRPC internal error).
+  * Augment `parallel_interleave` to support 2 kinds of prefetching.
+  * Improved XLA support for C64-related ops log, pow, atan2, tanh.
+  * Add probabilistic convolutional layers.
+
+## API Changes
+* Introducing `prepare_variance` boolean with default setting to False for
+  backward compatibility.
+* Move `layers_dense_variational_impl.py` to `layers_dense_variational.py`.
+
+## Known Bugs
+* Using XLA:GPU with CUDA 9 and CUDA 9.1 results in garbage results and/or
+  `CUDA_ILLEGAL_ADDRESS` failures.
+
+  Google discovered in mid-December 2017 that the PTX-to-SASS compiler in CUDA 9
+  and CUDA 9.1 sometimes does not properly compute the carry bit when
+  decomposing 64-bit address calculations with large offsets (e.g. `load [x +
+  large_constant]`) into 32-bit arithmetic in SASS.
+
+  As a result, these versions of `ptxas` miscompile most XLA programs which use
+  more than 4GB of temp memory. This results in garbage results and/or
+  `CUDA_ERROR_ILLEGAL_ADDRESS` failures.
+
+  A fix in CUDA 9.1.121 is expected in late February 2018. We do not expect a
+  fix for CUDA 9.0.x. Until the fix is available, the only workaround is to
+  [downgrade](https://developer.nvidia.com/cuda-toolkit-archive) to CUDA 8.0.x
+  or disable XLA:GPU.
+
+  TensorFlow will print a warning if you use XLA:GPU with a known-bad version of
+  CUDA; see e00ba24c4038e7644da417ddc639169b6ea59122.
+
+## Thanks to our Contributors
+
+This release contains contributions from many people at Google, as well as:
+
+4d55397500, Ag Ramesh, Aiden Scandella, Akimasa Kimura, Alex Rothberg, Allen Goodman,
+amilioto, Andrei Costinescu, Andrei Nigmatulin, Anjum Sayed, Anthony Platanios,
+Anush Elangovan, Armando Fandango, Ashish Kumar Ram, Ashwini Shukla, Ben, Bhavani Subramanian,
+Brett Koonce, Carl Thomé, cclauss, Cesc, Changming Sun, Christoph Boeddeker, Clayne Robison,
+Clemens Schulz, Clint (Woonhyuk Baek), codrut3, Cole Gerdemann, Colin Raffel, Daniel Trebbien,
+Daniel Ylitalo, Daniel Zhang, Daniyar, Darjan Salaj, Dave Maclachlan, David Norman, Dong--Jian,
+dongsamb, dssgsra, Edward H, eladweiss, elilienstein, Eric Lilienstein, error.d, Eunji Jeong, fanlu,
+Florian Courtial, fo40225, Fred, Gregg Helt, Guozhong Zhuang, Hanchen Li, hsm207, hyunyoung2,
+ImSheridan, Ishant Mrinal Haloi, Jacky Ko, Jay Young, Jean Flaherty, Jerome, JerrikEph, Jesse
+Kinkead, jfaath, Jian Lin, jinghuangintel, Jiongyan Zhang, Joel Hestness, Joel Shor, Johnny Chan,
+Julian Niedermeier, Julian Wolff, JxKing, K-W-W, Karl Lessard, Kasper Marstal, Keiji Ariyama,
+Koan-Sin Tan, Loki Der Quaeler, Loo Rong Jie, Luke Schaefer, Lynn Jackson, ManHyuk, Matt Basta,
+Matt Smith, Matthew Schulkind, Michael, michaelkhan3, Miguel Piedrafita, Mikalai Drabovich,
+Mike Knapp, mjwen, mktozk, Mohamed Aly, Mohammad Ashraf Bhuiyan, Myungjoo Ham, Naman Bhalla,
+Namrata-Ibm, Nathan Luehr, nathansilberman, Netzeband, Niranjan Hasabnis, Omar Aflak, Ozge
+Yalcinkaya, Parth P Panchal, patrickzzy, Patryk Chrabaszcz, Paul Van Eck, Paweł Kapica, Peng Yu,
+Philip Yang, Pierre Blondeau, Po-Hsien Chu, powderluv, Puyu Wang, Rajendra Arora, Rasmus, Renat
+Idrisov, resec, Robin Richtsfeld, Ronald Eddy Jr, Sahil Singh, Sam Matzek, Sami Kama, sandipmgiri,
+Santiago Castro, Sayed Hadi Hashemi, Scott Tseng, Sergii Khomenko, Shahid, Shengpeng Liu, Shreyash
+Sharma, Shrinidhi Kl, Simone Cirillo, simsicon, Stanislav Levental, starsblinking, Stephen Lumenta,
+Steven Hickson, Su Tang, Taehoon Lee, Takuya Wakisaka, Ted Chang, Ted Ying, Tijmen Verhulsdonck,
+Timofey Kondrashov, vade, vaibhav, Valentin Khrulkov, vchigrin, Victor Costan, Viraj Navkal,
+Vivek Rane, wagonhelm, Yan Facai (颜发才), Yanbo Liang, Yaroslav Bulatov, yegord, Yong Tang,
+Yoni Tsafir, yordun, Yuan (Terry) Tang, Yuxin Wu, zhengdi, Zhengsheng Wei, 田传武
+
 # Release 1.5.0

 ## Breaking Changes
 * Prebuilt binaries are now built against CUDA 9.0 and cuDNN 7.
-* Our Linux binaries are built using ubuntu 16 containers, potentially
-  introducing glibc incompatibility issues with ubuntu 14.
 * Starting from 1.6 release, our prebuilt binaries will use AVX instructions.
   This may break TF on older CPUs.

@@ -146,6 +235,27 @@
 * Minor refactor: move stats files from `stochastic` to `common` and remove
   `stochastic`.

+## Known Bugs
+* Using XLA:GPU with CUDA 9 and CUDA 9.1 results in garbage results and/or
+  `CUDA_ILLEGAL_ADDRESS` failures.
+
+  Google discovered in mid-December 2017 that the PTX-to-SASS compiler in CUDA 9
+  and CUDA 9.1 sometimes does not properly compute the carry bit when
+  decomposing 64-bit address calculations with large offsets (e.g. `load [x +
+  large_constant]`) into 32-bit arithmetic in SASS.
+
+  As a result, these versions of `ptxas` miscompile most XLA programs which use
+  more than 4GB of temp memory. This results in garbage results and/or
+  `CUDA_ERROR_ILLEGAL_ADDRESS` failures.
+
+  A fix in CUDA 9.1.121 is expected in late February 2018. We do not expect a
+  fix for CUDA 9.0.x. Until the fix is available, the only workaround is to
+  [downgrade](https://developer.nvidia.com/cuda-toolkit-archive) to CUDA 8.0.x
+  or disable XLA:GPU.
+
+  TensorFlow will print a warning if you use XLA:GPU with a known-bad version of
+  CUDA; see e00ba24c4038e7644da417ddc639169b6ea59122.
+
 ## Thanks to our Contributors

 This release contains contributions from many people at Google, as well as:
configure.py (28 changes)

@@ -827,6 +827,28 @@ def set_gcc_host_compiler_path(environ_cp):
   write_action_env_to_bazelrc('GCC_HOST_COMPILER_PATH', gcc_host_compiler_path)


+def reformat_version_sequence(version_str, sequence_count):
+  """Reformat the version string to have the given number of sequences.
+
+  For example:
+  Given (7, 2) -> 7.0
+        (7.0.1, 2) -> 7.0
+        (5, 1) -> 5
+        (5.0.3.2, 1) -> 5
+
+  Args:
+    version_str: String, the version string.
+    sequence_count: int, an integer.
+  Returns:
+    string, reformatted version string.
+  """
+  v = version_str.split('.')
+  if len(v) < sequence_count:
+    v = v + (['0'] * (sequence_count - len(v)))
+
+  return '.'.join(v[:sequence_count])
+
+
 def set_tf_cuda_version(environ_cp):
   """Set CUDA_TOOLKIT_PATH and TF_CUDA_VERSION."""
   ask_cuda_version = (
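The new `reformat_version_sequence` helper can be exercised on its docstring's own examples; a standalone sketch (indentation reconstructed from the diff):

```python
def reformat_version_sequence(version_str, sequence_count):
  """Pad or truncate a dotted version string to sequence_count components."""
  v = version_str.split('.')
  if len(v) < sequence_count:
    # Pad short versions with '0' components, e.g. '7' -> ['7', '0'] for count 2.
    v = v + (['0'] * (sequence_count - len(v)))
  return '.'.join(v[:sequence_count])

# The docstring's examples:
print(reformat_version_sequence('7', 2))        # 7.0
print(reformat_version_sequence('7.0.1', 2))    # 7.0
print(reformat_version_sequence('5', 1))        # 5
print(reformat_version_sequence('5.0.3.2', 1))  # 5
```

This is why the configure script can pass `reformat_version_sequence(str(tf_cuda_version), 2)` and always hand a `major.minor` string to the code that follows, regardless of how the user typed the version.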
@@ -837,6 +859,7 @@ def set_tf_cuda_version(environ_cp):
   # Configure the Cuda SDK version to use.
   tf_cuda_version = get_from_env_or_user_or_default(
       environ_cp, 'TF_CUDA_VERSION', ask_cuda_version, _DEFAULT_CUDA_VERSION)
+  tf_cuda_version = reformat_version_sequence(str(tf_cuda_version), 2)

   # Find out where the CUDA toolkit is installed
   default_cuda_path = _DEFAULT_CUDA_PATH

@@ -893,6 +916,7 @@ def set_tf_cudnn_version(environ_cp):
   tf_cudnn_version = get_from_env_or_user_or_default(
       environ_cp, 'TF_CUDNN_VERSION', ask_cudnn_version,
       _DEFAULT_CUDNN_VERSION)
+  tf_cudnn_version = reformat_version_sequence(str(tf_cudnn_version), 1)

   default_cudnn_path = environ_cp.get('CUDA_TOOLKIT_PATH')
   ask_cudnn_path = (r'Please specify the location where cuDNN %s library is '

@@ -1400,6 +1424,10 @@ def main():
   if is_linux():
     set_tf_tensorrt_install_path(environ_cp)
     set_tf_cuda_compute_capabilities(environ_cp)
+    if 'LD_LIBRARY_PATH' in environ_cp and environ_cp.get(
+        'LD_LIBRARY_PATH') != '1':
+      write_action_env_to_bazelrc('LD_LIBRARY_PATH',
+                                  environ_cp.get('LD_LIBRARY_PATH'))

   set_tf_cuda_clang(environ_cp)
   if environ_cp.get('TF_CUDA_CLANG') == '1':
@@ -70,7 +70,7 @@ class GraphCompiler {

  private:
  // Partially sets params. This partially set params can be reused
-  // across multple nodes visit.
+  // across multiple nodes visit.
  void PartiallySetupParams(OpKernelContext::Params* params);

  // Tests if a node is a functional node. A functional node represents a
@@ -37,7 +37,7 @@ class IndexUtil {
  static int64 MultidimensionalIndexToLinearIndex(
      const Shape& shape, tensorflow::gtl::ArraySlice<int64> multi_index);

-  // Coverts a linear index into multidimensional index (eg {x, y, z}) based on
+  // Converts a linear index into multidimensional index (eg {x, y, z}) based on
  // the shape and its layout. The first index in the returned multidimensional
  // index is dimension 0.
  static std::vector<int64> LinearIndexToMultidimensionalIndex(
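The two declarations in this hunk are inverse index maps. A hedged Python sketch of the row-major case (function names are illustrative; the real XLA implementation also accounts for the shape's layout):

```python
def multi_index_to_linear(dims, multi_index):
    """Row-major linearization: ((i0 * d1 + i1) * d2 + i2) ..."""
    linear = 0
    for dim, idx in zip(dims, multi_index):
        linear = linear * dim + idx
    return linear

def linear_to_multi_index(dims, linear):
    """Inverse of multi_index_to_linear; dimension 0 comes first in the result."""
    multi = []
    for dim in reversed(dims):
        multi.append(linear % dim)
        linear //= dim
    return list(reversed(multi))

dims = [2, 3, 4]
print(multi_index_to_linear(dims, [1, 2, 3]))  # 23
print(linear_to_multi_index(dims, 23))         # [1, 2, 3]
```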
@@ -1128,7 +1128,7 @@ def initialize_replica_count(replica_count):

   Args:
     replica_count: number of replicas that are desired for set up during XLA
-      initalization.
+      initialization.

   Raises:
     A runtime exception if the XLA service has already been initialized.
@@ -171,8 +171,6 @@ cc_library(
        ":shape_inference",
        "//tensorflow/compiler/xla:literal_util",
        "//tensorflow/compiler/xla:shape_util",
-        "//tensorflow/compiler/xla:status",
-        "//tensorflow/compiler/xla:status_macros",
        "//tensorflow/compiler/xla:statusor",
        "//tensorflow/compiler/xla:types",
        "//tensorflow/compiler/xla:util",
@@ -614,7 +614,7 @@ TEST_F(BufferAssignmentTest, TrivialMap) {
  BufferAllocation map_buffer = GetAssignedOutputAllocation(*buffers, map);
  EXPECT_NE(param0_buffer.index(), map_buffer.index());

-  // The final computation node of the map is an add of an f32 parm and a
+  // The final computation node of the map is an add of an f32 param and a
  // constant.
  EXPECT_EQ(HloOpcode::kAdd, inner_last->opcode());
  const BufferAllocation& inner_add_buffer =
@@ -1337,7 +1337,7 @@ IrEmitter::ReductionGenerator IrEmitter::MatchReductionGenerator(
  if (ShapeUtil::ElementIsComplex(root_shape)) {
    // TODO(b/65408531): Complex add could by done via bitcast to <float x [2N]>
    // Complex multiply would be more challenging. We could perhaps use a
-    // strided load to get all reals in a vector, all imags in a vector, or use
+    // strided load to get all reals in a vector, all images in a vector, or use
    // CreateShuffleVector on a bitcast to float x [2N].
    *failure_reason = "complex values not supported";
    return nullptr;
@@ -209,9 +209,9 @@ std::vector<llvm::Value*> GetArrayFunctionCallArguments(
        parameter_addresses[i], ir_builder->getInt8PtrTy(),
        AsStringRef(tensorflow::strings::StrCat(name, "_parameter_", i,
                                                "_address_as_i8ptr")));
-    llvm::Value* slot_in_param_adresses = ir_builder->CreateInBoundsGEP(
+    llvm::Value* slot_in_param_addresses = ir_builder->CreateInBoundsGEP(
        parameter_addresses_buffer, {ir_builder->getInt64(i)});
-    ir_builder->CreateStore(parameter_as_i8ptr, slot_in_param_adresses);
+    ir_builder->CreateStore(parameter_as_i8ptr, slot_in_param_addresses);
  }

  const auto to_int8_ptr = [=](llvm::Value* ptr) {
@@ -34,8 +34,6 @@ limitations under the License.
 #include "tensorflow/compiler/xla/service/hlo_query.h"
 #include "tensorflow/compiler/xla/service/shape_inference.h"
 #include "tensorflow/compiler/xla/shape_util.h"
-#include "tensorflow/compiler/xla/status.h"
-#include "tensorflow/compiler/xla/status_macros.h"
 #include "tensorflow/compiler/xla/types.h"
 #include "tensorflow/compiler/xla/util.h"
 #include "tensorflow/compiler/xla/window_util.h"
@@ -758,7 +758,7 @@ def _build_nccl_hybrid(input_tensors, red_op, upper_level_f):


 def _reduce_non_singleton(input_tensors, red_f, un_op):
-  """If input_tenors has more than one element apply red_f, else apply un_op."""
+  """If input_tensors has more than one element apply red_f, else apply un_op."""
   if len(input_tensors) > 1:
     return red_f(input_tensors)
   else:
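The fixed docstring describes a simple dispatch pattern; a hedged sketch with stand-in callables (`sum` and the negation lambda are illustrative stand-ins, and the singleton branch is an assumption, since the real all-reduce internals are outside this hunk):

```python
def reduce_non_singleton(input_tensors, red_f, un_op):
    """If input_tensors has more than one element apply red_f, else apply un_op."""
    if len(input_tensors) > 1:
        return red_f(input_tensors)
    # Assumed singleton behavior: apply the unary op to each (single) input.
    return [un_op(t) for t in input_tensors]

print(reduce_non_singleton([1, 2, 3], sum, lambda x: -x))  # 6
print(reduce_non_singleton([5], sum, lambda x: -x))        # [-5]
```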
@@ -286,7 +286,21 @@ if (tensorflow_ENABLE_GPU)
    list(APPEND CMAKE_LIBRARY_PATH "${tensorflow_CUDA_LIBRARY_PATH}/stubs")
  endif (NOT WIN32)

-  find_package(CUDA ${tensorflow_CUDA_VERSION} REQUIRED)
+  # later command will make use of the value in tensorflow_CUDA_VERSION
+  find_package(CUDA ${tensorflow_CUDA_VERSION} REQUIRED EXACT)
+
+  # Test compatibility of compiler on CUDA
+  try_compile(CUDA_TEST_COMPILE_C
+    ${CMAKE_CURRENT_BINARY_DIR}/tests/cuda
+    ${CMAKE_CURRENT_SOURCE_DIR}/tests/cuda/compatibility_test.c
+    CMAKE_FLAGS -DINCLUDE_DIRECTORIES=${CUDA_INCLUDE_DIRS})
+  try_compile(CUDA_TEST_COMPILE_CXX
+    ${CMAKE_CURRENT_BINARY_DIR}/tests/cuda
+    ${CMAKE_CURRENT_SOURCE_DIR}/tests/cuda/compatibility_test.cc
+    CMAKE_FLAGS -DINCLUDE_DIRECTORIES=${CUDA_INCLUDE_DIRS})
+  if(NOT (CUDA_TEST_COMPILE_C AND CUDA_TEST_COMPILE_CXX))
+    message(FATAL_ERROR "Selected compiler (or version) is not supported for CUDA")
+  endif()

  # by default we assume compute cabability 3.5 and 5.2. If you change this change it in
  # CUDA_NVCC_FLAGS and cuda_config.h below
@@ -37,6 +37,7 @@ ExternalProject_Add(boringssl
    GIT_TAG ${boringssl_TAG}
    DOWNLOAD_DIR "${DOWNLOAD_LOCATION}"
    # BUILD_IN_SOURCE 1
+    BUILD_BYPRODUCTS ${boringssl_STATIC_LIBRARIES}
    INSTALL_COMMAND ""
    CMAKE_CACHE_ARGS
        -DCMAKE_POSITION_INDEPENDENT_CODE:BOOL=${tensorflow_ENABLE_POSITION_INDEPENDENT_CODE}
@@ -33,6 +33,7 @@ if(WIN32)
      URL_HASH ${farmhash_HASH}
      DOWNLOAD_DIR "${DOWNLOAD_LOCATION}"
      BUILD_IN_SOURCE 1
+      BUILD_BYPRODUCTS ${farmhash_STATIC_LIBRARIES}
      PATCH_COMMAND ${CMAKE_COMMAND} -E copy_if_different ${CMAKE_CURRENT_SOURCE_DIR}/patches/farmhash/CMakeLists.txt ${farmhash_BUILD}
      INSTALL_DIR ${farmhash_INSTALL}
      CMAKE_CACHE_ARGS
@@ -29,6 +29,7 @@ if(WIN32)
      URL_HASH ${fft2d_HASH}
      DOWNLOAD_DIR "${DOWNLOAD_LOCATION}"
      BUILD_IN_SOURCE 1
+      BUILD_BYPRODUCTS ${fft2d_STATIC_LIBRARIES}
      PATCH_COMMAND ${CMAKE_COMMAND} -E copy_if_different ${CMAKE_CURRENT_SOURCE_DIR}/patches/fft2d/CMakeLists.txt ${fft2d_BUILD}/src/fft2d/CMakeLists.txt
      INSTALL_DIR ${fft2d_INSTALL}
      CMAKE_CACHE_ARGS
tensorflow/contrib/cmake/external/gif.cmake (1 change)

@@ -33,6 +33,7 @@ if(WIN32)
      PREFIX gif
      URL ${gif_URL}
      URL_HASH ${gif_HASH}
+      BUILD_BYPRODUCTS ${gif_STATIC_LIBRARIES}
      PATCH_COMMAND ${CMAKE_COMMAND} -E copy_if_different ${CMAKE_SOURCE_DIR}/patches/gif/CMakeLists.txt ${gif_BUILD}
      INSTALL_DIR ${gif_INSTALL}
      DOWNLOAD_DIR "${DOWNLOAD_LOCATION}"
@@ -20,8 +20,13 @@ set(googletest_BUILD ${CMAKE_CURRENT_BINARY_DIR}/googletest/)
 set(googletest_TAG ec44c6c1675c25b9827aacd08c02433cccde7780)

 if(WIN32)
-  set(googletest_STATIC_LIBRARIES
-      ${CMAKE_CURRENT_BINARY_DIR}/googletest/src/googletest/googletest/$(Configuration)/gtest.lib)
+  if(${CMAKE_GENERATOR} MATCHES "Visual Studio.*")
+    set(googletest_STATIC_LIBRARIES
+        ${CMAKE_CURRENT_BINARY_DIR}/googletest/src/googletest/googletest/$(Configuration)/gtest.lib)
+  else()
+    set(googletest_STATIC_LIBRARIES
+        ${CMAKE_CURRENT_BINARY_DIR}/googletest/src/googletest/googletest/gtest.lib)
+  endif()
 else()
   set(googletest_STATIC_LIBRARIES
       ${CMAKE_CURRENT_BINARY_DIR}/googletest/src/googletest/googletest/${CMAKE_BUILD_TYPE}/gtest.a)

@@ -33,6 +38,7 @@ ExternalProject_Add(googletest
  GIT_TAG ${googletest_TAG}
  DOWNLOAD_DIR "${DOWNLOAD_LOCATION}"
  BUILD_IN_SOURCE 1
+  BUILD_BYPRODUCTS ${googletest_STATIC_LIBRARIES}
  #PATCH_COMMAND ${CMAKE_COMMAND} -E copy_if_different ${CMAKE_SOURCE_DIR}/patches/grpc/CMakeLists.txt ${GRPC_BUILD}
  INSTALL_COMMAND ""
  CMAKE_CACHE_ARGS
tensorflow/contrib/cmake/external/grpc.cmake (16 changes)

@@ -20,10 +20,17 @@ set(GRPC_BUILD ${CMAKE_CURRENT_BINARY_DIR}/grpc/src/grpc)
 set(GRPC_TAG 730b778632e79cc3c96ad237f282d687ee325ce7)

 if(WIN32)
-  set(grpc_STATIC_LIBRARIES
-      ${CMAKE_CURRENT_BINARY_DIR}/grpc/src/grpc/Release/grpc++_unsecure.lib
-      ${CMAKE_CURRENT_BINARY_DIR}/grpc/src/grpc/Release/grpc_unsecure.lib
-      ${CMAKE_CURRENT_BINARY_DIR}/grpc/src/grpc/Release/gpr.lib)
+  if(${CMAKE_GENERATOR} MATCHES "Visual Studio.*")
+    set(grpc_STATIC_LIBRARIES
+        ${CMAKE_CURRENT_BINARY_DIR}/grpc/src/grpc/Release/grpc++_unsecure.lib
+        ${CMAKE_CURRENT_BINARY_DIR}/grpc/src/grpc/Release/grpc_unsecure.lib
+        ${CMAKE_CURRENT_BINARY_DIR}/grpc/src/grpc/Release/gpr.lib)
+  else()
+    set(grpc_STATIC_LIBRARIES
+        ${CMAKE_CURRENT_BINARY_DIR}/grpc/src/grpc/grpc++_unsecure.lib
+        ${CMAKE_CURRENT_BINARY_DIR}/grpc/src/grpc/grpc_unsecure.lib
+        ${CMAKE_CURRENT_BINARY_DIR}/grpc/src/grpc/gpr.lib)
+  endif()
 else()
   set(grpc_STATIC_LIBRARIES
       ${CMAKE_CURRENT_BINARY_DIR}/grpc/src/grpc/libgrpc++_unsecure.a

@@ -40,6 +47,7 @@ ExternalProject_Add(grpc
  GIT_TAG ${GRPC_TAG}
  DOWNLOAD_DIR "${DOWNLOAD_LOCATION}"
  BUILD_IN_SOURCE 1
+  BUILD_BYPRODUCTS ${grpc_STATIC_LIBRARIES}
  BUILD_COMMAND ${CMAKE_COMMAND} --build . --config Release --target grpc++_unsecure
  COMMAND ${CMAKE_COMMAND} --build . --config Release --target grpc_cpp_plugin
  INSTALL_COMMAND ""
@@ -42,6 +42,7 @@ ExternalProject_Add(highwayhash
    GIT_TAG ${highwayhash_TAG}
    DOWNLOAD_DIR "${DOWNLOAD_LOCATION}"
    BUILD_IN_SOURCE 1
+    BUILD_BYPRODUCTS ${highwayhash_STATIC_LIBRARIES}
    PATCH_COMMAND ${CMAKE_COMMAND} -E copy_if_different ${CMAKE_CURRENT_SOURCE_DIR}/patches/highwayhash/CMakeLists.txt ${highwayhash_BUILD}
    INSTALL_DIR ${highwayhash_INSTALL}
    CMAKE_CACHE_ARGS
15
tensorflow/contrib/cmake/external/jemalloc.cmake
vendored
15
tensorflow/contrib/cmake/external/jemalloc.cmake
vendored
@@ -24,8 +24,11 @@ if (WIN32)
       ${jemalloc_INCLUDE_DIRS}
       ${CMAKE_CURRENT_BINARY_DIR}/jemalloc/src/jemalloc/include/msvc_compat
   )
-  set(jemalloc_ADDITIONAL_CMAKE_OPTIONS -A x64)
+  if(${CMAKE_GENERATOR} MATCHES "Visual Studio.*")
     set(jemalloc_STATIC_LIBRARIES ${jemalloc_BUILD}/Release/jemalloc.lib)
+  else()
+    set(jemalloc_STATIC_LIBRARIES ${jemalloc_BUILD}/jemalloc.lib)
+  endif()
 else()
   set(jemalloc_STATIC_LIBRARIES ${jemalloc_BUILD}/Release/jemalloc.a)
 endif()
@@ -36,12 +39,12 @@ ExternalProject_Add(jemalloc
   URL_HASH ${jemalloc_HASH}
   DOWNLOAD_DIR "${DOWNLOAD_LOCATION}"
   BUILD_IN_SOURCE 1
-  CONFIGURE_COMMAND ${CMAKE_COMMAND}
+  BUILD_BYPRODUCTS ${jemalloc_STATIC_LIBRARIES}
+  BUILD_COMMAND ${CMAKE_COMMAND} --build . --config Release --target jemalloc
+  INSTALL_COMMAND ${CMAKE_COMMAND} -E echo "Skipping install step."
+  CMAKE_CACHE_ARGS
       -DCMAKE_BUILD_TYPE:STRING=Release
       -DCMAKE_VERBOSE_MAKEFILE:BOOL=OFF
       -Dwith-jemalloc-prefix:STRING=jemalloc_
       -Dwithout-export:BOOL=ON
-      ${jemalloc_ADDITIONAL_CMAKE_OPTIONS}
-  BUILD_COMMAND ${CMAKE_COMMAND} --build . --config Release --target jemalloc
-  INSTALL_COMMAND ${CMAKE_COMMAND} -E echo "Skipping install step."
 )
tensorflow/contrib/cmake/external/jpeg.cmake (vendored, 1 line changed)
@@ -46,6 +46,7 @@ if (WIN32)
       PREFIX jpeg
       URL ${jpeg_URL}
       URL_HASH ${jpeg_HASH}
+      BUILD_BYPRODUCTS ${jpeg_STATIC_LIBRARIES}
       PATCH_COMMAND ${CMAKE_COMMAND} -E copy_if_different ${CMAKE_CURRENT_SOURCE_DIR}/patches/jpeg/CMakeLists.txt ${jpeg_BUILD}
       INSTALL_DIR ${jpeg_INSTALL}
       DOWNLOAD_DIR "${DOWNLOAD_LOCATION}"
@@ -23,7 +23,11 @@ set(jsoncpp_LIBRARIES ${jsoncpp_BUILD}/obj/so/libjsoncpp.so)
 set(jsoncpp_INCLUDES ${jsoncpp_BUILD})

 if(WIN32)
-  set(jsoncpp_STATIC_LIBRARIES ${jsoncpp_BUILD}/$(Configuration)/jsoncpp.lib)
+  if(${CMAKE_GENERATOR} MATCHES "Visual Studio.*")
+    set(jsoncpp_STATIC_LIBRARIES ${jsoncpp_BUILD}/$(Configuration)/jsoncpp.lib)
+  else()
+    set(jsoncpp_STATIC_LIBRARIES ${jsoncpp_BUILD}/jsoncpp.lib)
+  endif()
 else()
   set(jsoncpp_STATIC_LIBRARIES ${jsoncpp_BUILD}/libjsoncpp.a)
 endif()
@@ -40,6 +44,7 @@ ExternalProject_Add(jsoncpp
   GIT_TAG ${jsoncpp_TAG}
   DOWNLOAD_DIR "${DOWNLOAD_LOCATION}"
   BUILD_IN_SOURCE 1
+  BUILD_BYPRODUCTS ${jsoncpp_STATIC_LIBRARIES}
   INSTALL_COMMAND ""
   CMAKE_CACHE_ARGS
       -DCMAKE_POSITION_INDEPENDENT_CODE:BOOL=${tensorflow_ENABLE_POSITION_INDEPENDENT_CODE}
tensorflow/contrib/cmake/external/lmdb.cmake (vendored, 13 lines changed)
@@ -20,10 +20,17 @@ set(lmdb_HASH SHA256=108532fb94c6f227558d45be3f3347b52539f0f58290a7bb31ec06c462d05326)
 set(lmdb_BUILD ${CMAKE_BINARY_DIR}/lmdb/src/lmdb)
 set(lmdb_INSTALL ${CMAKE_BINARY_DIR}/lmdb/install)

+if(WIN32)
+  set(lmdb_STATIC_LIBRARIES ${lmdb_INSTALL}/lib/lmdb.lib)
+else()
+  set(lmdb_STATIC_LIBRARIES ${lmdb_INSTALL}/lib/liblmdb.a)
+endif()
+
 ExternalProject_Add(lmdb
     PREFIX lmdb
     URL ${lmdb_URL}
     URL_HASH ${lmdb_HASH}
+    BUILD_BYPRODUCTS ${lmdb_STATIC_LIBRARIES}
     PATCH_COMMAND ${CMAKE_COMMAND} -E copy_if_different
                   ${CMAKE_CURRENT_SOURCE_DIR}/patches/lmdb/CMakeLists.txt ${lmdb_BUILD}
     INSTALL_DIR ${lmdb_INSTALL}
@@ -35,12 +42,6 @@ ExternalProject_Add(lmdb
         -DCMAKE_INSTALL_PREFIX:STRING=${lmdb_INSTALL}
 )
-
-if(WIN32)
-  set(lmdb_STATIC_LIBRARIES ${lmdb_INSTALL}/lib/lmdb.lib)
-else()
-  set(lmdb_STATIC_LIBRARIES ${lmdb_INSTALL}/lib/liblmdb.a)
-endif()

 set(lmdb_HEADERS
     "${lmdb_INSTALL}/include/lmdb.h"
     "${lmdb_INSTALL}/include/midl.h"
|
@ -42,6 +42,7 @@ ExternalProject_Add(nsync
|
|||||||
GIT_TAG ${nsync_TAG}
|
GIT_TAG ${nsync_TAG}
|
||||||
DOWNLOAD_DIR "${DOWNLOAD_LOCATION}"
|
DOWNLOAD_DIR "${DOWNLOAD_LOCATION}"
|
||||||
BUILD_IN_SOURCE 1
|
BUILD_IN_SOURCE 1
|
||||||
|
BUILD_BYPRODUCTS ${nsync_STATIC_LIBRARIES}
|
||||||
PATCH_COMMAND ${CMAKE_COMMAND} -E copy_if_different ${CMAKE_CURRENT_SOURCE_DIR}/patches/nsync/CMakeLists.txt ${nsync_BUILD}
|
PATCH_COMMAND ${CMAKE_COMMAND} -E copy_if_different ${CMAKE_CURRENT_SOURCE_DIR}/patches/nsync/CMakeLists.txt ${nsync_BUILD}
|
||||||
INSTALL_DIR ${nsync_INSTALL}
|
INSTALL_DIR ${nsync_INSTALL}
|
||||||
CMAKE_CACHE_ARGS
|
CMAKE_CACHE_ARGS
|
||||||
|
tensorflow/contrib/cmake/external/png.cmake (vendored, 17 lines changed)
@@ -21,9 +21,19 @@ set(png_BUILD ${CMAKE_BINARY_DIR}/png/src/png)
 set(png_INSTALL ${CMAKE_BINARY_DIR}/png/install)

 if(WIN32)
-  set(png_STATIC_LIBRARIES
-      debug ${CMAKE_BINARY_DIR}/png/install/lib/libpng12_staticd.lib
-      optimized ${CMAKE_BINARY_DIR}/png/install/lib/libpng12_static.lib)
+  if(${CMAKE_GENERATOR} MATCHES "Visual Studio.*")
+    set(png_STATIC_LIBRARIES
+        debug ${CMAKE_BINARY_DIR}/png/install/lib/libpng12_staticd.lib
+        optimized ${CMAKE_BINARY_DIR}/png/install/lib/libpng12_static.lib)
+  else()
+    if(CMAKE_BUILD_TYPE EQUAL Debug)
+      set(png_STATIC_LIBRARIES
+          ${CMAKE_BINARY_DIR}/png/install/lib/libpng12_staticd.lib)
+    else()
+      set(png_STATIC_LIBRARIES
+          ${CMAKE_BINARY_DIR}/png/install/lib/libpng12_static.lib)
+    endif()
+  endif()
 else()
   set(png_STATIC_LIBRARIES ${CMAKE_BINARY_DIR}/png/install/lib/libpng12.a)
 endif()
@@ -38,6 +48,7 @@ ExternalProject_Add(png
   DEPENDS zlib
   URL ${png_URL}
   URL_HASH ${png_HASH}
+  BUILD_BYPRODUCTS ${png_STATIC_LIBRARIES}
   INSTALL_DIR ${png_INSTALL}
   DOWNLOAD_DIR "${DOWNLOAD_LOCATION}"
   CMAKE_CACHE_ARGS
tensorflow/contrib/cmake/external/protobuf.cmake (vendored, 44 lines changed)
@@ -19,11 +19,34 @@ set(PROTOBUF_URL https://github.com/google/protobuf.git)
 set(PROTOBUF_TAG 396336eb961b75f03b25824fe86cf6490fb75e3a)

 if(WIN32)
-  set(protobuf_STATIC_LIBRARIES
-      debug ${CMAKE_CURRENT_BINARY_DIR}/protobuf/src/protobuf/$(Configuration)/libprotobufd.lib
-      optimized ${CMAKE_CURRENT_BINARY_DIR}/protobuf/src/protobuf/$(Configuration)/libprotobuf.lib)
-  set(PROTOBUF_PROTOC_EXECUTABLE ${CMAKE_CURRENT_BINARY_DIR}/protobuf/src/protobuf/$(Configuration)/protoc.exe)
-  set(PROTOBUF_ADDITIONAL_CMAKE_OPTIONS -Dprotobuf_MSVC_STATIC_RUNTIME:BOOL=OFF -A x64)
+  if(${CMAKE_GENERATOR} MATCHES "Visual Studio.*")
+    set(protobuf_STATIC_LIBRARIES
+        debug ${CMAKE_CURRENT_BINARY_DIR}/protobuf/src/protobuf/$(Configuration)/libprotobufd.lib
+        optimized ${CMAKE_CURRENT_BINARY_DIR}/protobuf/src/protobuf/$(Configuration)/libprotobuf.lib)
+    set(PROTOBUF_PROTOC_EXECUTABLE ${CMAKE_CURRENT_BINARY_DIR}/protobuf/src/protobuf/$(Configuration)/protoc.exe)
+  else()
+    if(CMAKE_BUILD_TYPE EQUAL Debug)
+      set(protobuf_STATIC_LIBRARIES
+          ${CMAKE_CURRENT_BINARY_DIR}/protobuf/src/protobuf/libprotobufd.lib)
+    else()
+      set(protobuf_STATIC_LIBRARIES
+          ${CMAKE_CURRENT_BINARY_DIR}/protobuf/src/protobuf/libprotobuf.lib)
+    endif()
+    set(PROTOBUF_PROTOC_EXECUTABLE ${CMAKE_CURRENT_BINARY_DIR}/protobuf/src/protobuf/protoc.exe)
+  endif()
+
+  # This section is to make sure CONFIGURE_COMMAND use the same generator settings
+  set(PROTOBUF_GENERATOR_PLATFORM)
+  if (CMAKE_GENERATOR_PLATFORM)
+    set(PROTOBUF_GENERATOR_PLATFORM -A ${CMAKE_GENERATOR_PLATFORM})
+  endif()
+  set(PROTOBUF_GENERATOR_TOOLSET)
+  if (CMAKE_GENERATOR_TOOLSET)
+    set(PROTOBUF_GENERATOR_TOOLSET -T ${CMAKE_GENERATOR_TOOLSET})
+  endif()
+  set(PROTOBUF_ADDITIONAL_CMAKE_OPTIONS -Dprotobuf_MSVC_STATIC_RUNTIME:BOOL=OFF
+      -G${CMAKE_GENERATOR} ${PROTOBUF_GENERATOR_PLATFORM} ${PROTOBUF_GENERATOR_TOOLSET})
+  # End of section
 else()
   set(protobuf_STATIC_LIBRARIES ${CMAKE_CURRENT_BINARY_DIR}/protobuf/src/protobuf/libprotobuf.a)
   set(PROTOBUF_PROTOC_EXECUTABLE ${CMAKE_CURRENT_BINARY_DIR}/protobuf/src/protobuf/protoc)
@@ -36,10 +59,15 @@ ExternalProject_Add(protobuf
   GIT_TAG ${PROTOBUF_TAG}
   DOWNLOAD_DIR "${DOWNLOAD_LOCATION}"
   BUILD_IN_SOURCE 1
+  BUILD_BYPRODUCTS ${PROTOBUF_PROTOC_EXECUTABLE} ${protobuf_STATIC_LIBRARIES}
   SOURCE_DIR ${CMAKE_CURRENT_BINARY_DIR}/protobuf/src/protobuf
+  # SOURCE_SUBDIR cmake/ # Requires CMake 3.7, this will allow removal of CONFIGURE_COMMAND
+  # CONFIGURE_COMMAND resets some settings made in CMAKE_CACHE_ARGS and the generator used
   CONFIGURE_COMMAND ${CMAKE_COMMAND} cmake/
-      -Dprotobuf_BUILD_TESTS=OFF
-      -DCMAKE_POSITION_INDEPENDENT_CODE=ON
+      -DCMAKE_POSITION_INDEPENDENT_CODE:BOOL=${tensorflow_ENABLE_POSITION_INDEPENDENT_CODE}
+      -DCMAKE_BUILD_TYPE:STRING=Release
+      -DCMAKE_VERBOSE_MAKEFILE:BOOL=OFF
+      -Dprotobuf_BUILD_TESTS:BOOL=OFF
       -DZLIB_ROOT=${ZLIB_INSTALL}
       ${PROTOBUF_ADDITIONAL_CMAKE_OPTIONS}
   INSTALL_COMMAND ""
@@ -47,5 +75,7 @@ ExternalProject_Add(protobuf
       -DCMAKE_POSITION_INDEPENDENT_CODE:BOOL=${tensorflow_ENABLE_POSITION_INDEPENDENT_CODE}
       -DCMAKE_BUILD_TYPE:STRING=Release
       -DCMAKE_VERBOSE_MAKEFILE:BOOL=OFF
+      -Dprotobuf_BUILD_TESTS:BOOL=OFF
+      -Dprotobuf_MSVC_STATIC_RUNTIME:BOOL=OFF
       -DZLIB_ROOT:STRING=${ZLIB_INSTALL}
 )
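The generator-forwarding section in the protobuf diff above can be paraphrased in a few lines: forward the parent build's generator (`-G`), platform (`-A`), and toolset (`-T`) to the child cmake invocation so CONFIGURE_COMMAND does not silently fall back to the default generator. This is a hedged sketch, not TensorFlow code; `generator_args` is a hypothetical helper:

```python
def generator_args(generator, platform=None, toolset=None):
    # Mirror of the CMake section above: build the -G/-A/-T argument list
    # that gets appended to the child cmake command line.
    args = ["-G" + generator]
    if platform:
        args += ["-A", platform]  # e.g. x64, only meaningful for VS generators
    if toolset:
        args += ["-T", toolset]   # e.g. v140
    return args

print(generator_args("Ninja"))
print(generator_args("Visual Studio 14 2015", platform="x64", toolset="v140"))
```

The point of the indirection is that `CMAKE_GENERATOR_PLATFORM` and `CMAKE_GENERATOR_TOOLSET` may be unset, in which case no flag at all should be emitted.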
tensorflow/contrib/cmake/external/re2.cmake (vendored, 7 lines changed)
@@ -21,7 +21,11 @@ set(re2_INSTALL ${CMAKE_CURRENT_BINARY_DIR}/re2/install)
 set(re2_TAG e7efc48)

 if(WIN32)
-  set(re2_STATIC_LIBRARIES ${re2_BUILD}/$(Configuration)/re2.lib)
+  if(${CMAKE_GENERATOR} MATCHES "Visual Studio.*")
+    set(re2_STATIC_LIBRARIES ${re2_BUILD}/$(Configuration)/re2.lib)
+  else()
+    set(re2_STATIC_LIBRARIES ${re2_BUILD}/re2.lib)
+  endif()
 else()
   set(re2_STATIC_LIBRARIES ${re2_BUILD}/libre2.a)
 endif()
@@ -36,6 +40,7 @@ ExternalProject_Add(re2
   GIT_TAG ${re2_TAG}
   INSTALL_DIR ${re2_INSTALL}
   BUILD_IN_SOURCE 1
+  BUILD_BYPRODUCTS ${re2_STATIC_LIBRARIES}
   DOWNLOAD_DIR "${DOWNLOAD_LOCATION}"
   CMAKE_CACHE_ARGS
       -DCMAKE_POSITION_INDEPENDENT_CODE:BOOL=${tensorflow_ENABLE_POSITION_INDEPENDENT_CODE}
@@ -20,7 +20,11 @@ set(snappy_BUILD ${CMAKE_CURRENT_BINARY_DIR}/snappy/src/snappy)
 set(snappy_INCLUDE_DIR ${CMAKE_CURRENT_BINARY_DIR}/snappy/src/snappy)

 if(WIN32)
-  set(snappy_STATIC_LIBRARIES ${snappy_BUILD}/$(Configuration)/snappy.lib)
+  if(${CMAKE_GENERATOR} MATCHES "Visual Studio.*")
+    set(snappy_STATIC_LIBRARIES ${snappy_BUILD}/$(Configuration)/snappy.lib)
+  else()
+    set(snappy_STATIC_LIBRARIES ${snappy_BUILD}/snappy.lib)
+  endif()
 else()
   set(snappy_STATIC_LIBRARIES ${snappy_BUILD}/libsnappy.a)
 endif()
@@ -35,6 +39,7 @@ ExternalProject_Add(snappy
   GIT_TAG ${snappy_TAG}
   DOWNLOAD_DIR "${DOWNLOAD_LOCATION}"
   BUILD_IN_SOURCE 1
+  BUILD_BYPRODUCTS ${snappy_STATIC_LIBRARIES}
   INSTALL_COMMAND ""
   LOG_DOWNLOAD ON
   LOG_CONFIGURE ON
@@ -36,6 +36,7 @@ if (WIN32)
       PREFIX sqlite
       URL ${sqlite_URL}
       URL_HASH ${sqlite_HASH}
+      BUILD_BYPRODUCTS ${sqlite_STATIC_LIBRARIES}
       PATCH_COMMAND ${CMAKE_COMMAND} -E copy_if_different ${CMAKE_CURRENT_SOURCE_DIR}/patches/sqlite/CMakeLists.txt ${sqlite_BUILD}
       INSTALL_DIR ${sqlite_INSTALL}
       DOWNLOAD_DIR "${DOWNLOAD_LOCATION}"
tensorflow/contrib/cmake/external/zlib.cmake (vendored, 17 lines changed)
@@ -21,9 +21,19 @@ set(ZLIB_INSTALL ${CMAKE_CURRENT_BINARY_DIR}/zlib/install)
 set(ZLIB_TAG 50893291621658f355bc5b4d450a8d06a563053d)

 if(WIN32)
-  set(zlib_STATIC_LIBRARIES
-      debug ${CMAKE_CURRENT_BINARY_DIR}/zlib/install/lib/zlibstaticd.lib
-      optimized ${CMAKE_CURRENT_BINARY_DIR}/zlib/install/lib/zlibstatic.lib)
+  if(${CMAKE_GENERATOR} MATCHES "Visual Studio.*")
+    set(zlib_STATIC_LIBRARIES
+        debug ${CMAKE_CURRENT_BINARY_DIR}/zlib/install/lib/zlibstaticd.lib
+        optimized ${CMAKE_CURRENT_BINARY_DIR}/zlib/install/lib/zlibstatic.lib)
+  else()
+    if(CMAKE_BUILD_TYPE EQUAL Debug)
+      set(zlib_STATIC_LIBRARIES
+          ${CMAKE_CURRENT_BINARY_DIR}/zlib/install/lib/zlibstaticd.lib)
+    else()
+      set(zlib_STATIC_LIBRARIES
+          ${CMAKE_CURRENT_BINARY_DIR}/zlib/install/lib/zlibstatic.lib)
+    endif()
+  endif()
 else()
   set(zlib_STATIC_LIBRARIES
       ${CMAKE_CURRENT_BINARY_DIR}/zlib/install/lib/libz.a)
@@ -40,6 +50,7 @@ ExternalProject_Add(zlib
   GIT_TAG ${ZLIB_TAG}
   INSTALL_DIR ${ZLIB_INSTALL}
   BUILD_IN_SOURCE 1
+  BUILD_BYPRODUCTS ${zlib_STATIC_LIBRARIES}
   DOWNLOAD_DIR "${DOWNLOAD_LOCATION}"
   CMAKE_CACHE_ARGS
       -DCMAKE_POSITION_INDEPENDENT_CODE:BOOL=${tensorflow_ENABLE_POSITION_INDEPENDENT_CODE}
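The same library-path logic recurs in every dependency file above: Visual Studio generators are multi-config and place artifacts in a per-configuration subdirectory, while single-config generators such as Ninja or Makefiles write them directly into the build directory. A hedged sketch of that selection logic; `static_lib_path` and the library name are hypothetical illustrations, not TensorFlow code:

```python
def static_lib_path(build_dir, generator, lib="zlibstatic.lib", config="Release"):
    # Multi-config generators (Visual Studio) nest the artifact under a
    # configuration directory; single-config generators do not.
    if generator.startswith("Visual Studio"):
        return "{}/{}/{}".format(build_dir, config, lib)
    return "{}/{}".format(build_dir, lib)

print(static_lib_path("build/zlib", "Visual Studio 14 2015"))  # build/zlib/Release/zlibstatic.lib
print(static_lib_path("build/zlib", "Ninja"))                  # build/zlib/zlibstatic.lib
```

Declaring the resolved path as `BUILD_BYPRODUCTS` is what lets Ninja know an `ExternalProject_Add` step produces the file, instead of erroring on an unknown dependency.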
@@ -299,7 +299,9 @@ tensorflow/contrib/linear_optimizer/kernels/g3doc
 tensorflow/contrib/linear_optimizer/python
 tensorflow/contrib/linear_optimizer/python/ops
 # TODO(drpngx): Fix failing imports
+# tensorflow/contrib/lite
 # tensorflow/contrib/lite/python
+# tensorflow/contrib/lite/toco
 # tensorflow/contrib/lite/toco/python
 tensorflow/contrib/lookup
 tensorflow/contrib/losses
@@ -360,6 +362,7 @@ tensorflow/contrib/reduce_slice_ops/kernels
 tensorflow/contrib/reduce_slice_ops/ops
 tensorflow/contrib/reduce_slice_ops/python
 tensorflow/contrib/reduce_slice_ops/python/ops
+tensorflow/contrib/remote_fused_graph
 tensorflow/contrib/remote_fused_graph/pylib
 tensorflow/contrib/remote_fused_graph/pylib/python
 tensorflow/contrib/remote_fused_graph/pylib/python/ops
@@ -409,6 +412,7 @@ tensorflow/contrib/summary
 tensorflow/contrib/tensorboard
 tensorflow/contrib/tensorboard/plugins
 tensorflow/contrib/tensorboard/plugins/projector
+tensorflow/contrib/tensorboard/plugins/trace
 tensorflow/contrib/tensor_forest
 tensorflow/contrib/tensor_forest/client
 tensorflow/contrib/tensor_forest/hybrid
@@ -419,6 +423,7 @@ tensorflow/contrib/tensor_forest/hybrid/python/layers
 tensorflow/contrib/tensor_forest/hybrid/python/models
 tensorflow/contrib/tensor_forest/hybrid/python/ops
 tensorflow/contrib/tensor_forest/kernels
+tensorflow/contrib/tensor_forest/proto
 tensorflow/contrib/tensor_forest/python
 tensorflow/contrib/tensor_forest/python/ops
 tensorflow/contrib/testing
@@ -439,6 +444,7 @@ tensorflow/contrib/timeseries/python/timeseries/state_space_models
 tensorflow/contrib/tpu
 tensorflow/contrib/tpu/ops
 tensorflow/contrib/tpu/profiler
+tensorflow/contrib/tpu/proto
 tensorflow/contrib/tpu/python
 tensorflow/contrib/tpu/python/ops
 tensorflow/contrib/tpu/python/profiler
tensorflow/contrib/cmake/tests/cuda/compatibility_test.c (new file, 22 lines)
@@ -0,0 +1,22 @@
+/* Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+==============================================================================*/
+
+// This is a program to test if compiler is compatible with CUDA.
+#define __CUDACC__
+#include "crt/host_config.h"
+
+int main(void) {
+  return 0;
+}
tensorflow/contrib/cmake/tests/cuda/compatibility_test.cc (new file, 20 lines)
@@ -0,0 +1,20 @@
+/* Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+============================================================================*/
+
+// This is a program to test if compiler is compatible with CUDA.
+#define __CUDACC__
+#include "crt/host_config.h"
+
+int main(void) { return 0; }
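The two probe files above exploit the fact that CUDA's `crt/host_config.h` emits an `#error` when the host compiler is newer than the toolkit supports, so a successful compile means the compiler is acceptable. A hedged sketch of how a build script might drive such a probe; `probe_command`, `host_compiler_ok`, and `CUDA_PROBE` are hypothetical illustrations, not the actual CMake check:

```python
import os
import subprocess
import tempfile

# Same shape as compatibility_test.c above.
CUDA_PROBE = (
    "#define __CUDACC__\n"
    '#include "crt/host_config.h"\n'
    "int main(void) { return 0; }\n"
)

def probe_command(cc, cuda_include_dir, src, out):
    # Build the compile line; kept separate so it can be inspected or tested.
    return [cc, "-I", cuda_include_dir, "-c", src, "-o", out]

def host_compiler_ok(cc, cuda_include_dir):
    # Compile the probe; exit code 0 means host_config.h accepted the compiler.
    with tempfile.NamedTemporaryFile("w", suffix=".c", delete=False) as f:
        f.write(CUDA_PROBE)
        src = f.name
    try:
        return subprocess.call(probe_command(cc, cuda_include_dir, src, os.devnull)) == 0
    finally:
        os.unlink(src)
```

Running `host_compiler_ok("gcc", "/usr/local/cuda/include")` requires a real compiler and CUDA install, which is why the build does this as a `try_compile`-style step rather than at configure-script authoring time.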
@@ -149,7 +149,11 @@ add_library(tf_cc OBJECT ${tf_cc_srcs})
 add_dependencies(tf_cc tf_cc_framework tf_cc_ops)

 if (WIN32)
-  set (pywrap_tensorflow_lib "${CMAKE_CURRENT_BINARY_DIR}/${CMAKE_BUILD_TYPE}/pywrap_tensorflow_internal.lib")
+  if(${CMAKE_GENERATOR} MATCHES "Visual Studio.*")
+    set (pywrap_tensorflow_lib "${CMAKE_CURRENT_BINARY_DIR}/${CMAKE_BUILD_TYPE}/pywrap_tensorflow_internal.lib")
+  else()
+    set (pywrap_tensorflow_lib "${CMAKE_CURRENT_BINARY_DIR}/pywrap_tensorflow_internal.lib")
+  endif()
 else (WIN32)
   set (pywrap_tensorflow_lib "${CMAKE_CURRENT_BINARY_DIR}/libpywrap_tensorflow_internal.so")
 endif (WIN32)
@@ -541,7 +541,11 @@ if(WIN32)
       ${nsync_STATIC_LIBRARIES}
   )

-  set(pywrap_tensorflow_deffile "${CMAKE_CURRENT_BINARY_DIR}/${CMAKE_BUILD_TYPE}/pywrap_tensorflow.def")
+  if(${CMAKE_GENERATOR} MATCHES "Visual Studio.*")
+    set(pywrap_tensorflow_deffile "${CMAKE_CURRENT_BINARY_DIR}/${CMAKE_BUILD_TYPE}/pywrap_tensorflow.def")
+  else()
+    set(pywrap_tensorflow_deffile "${CMAKE_CURRENT_BINARY_DIR}/pywrap_tensorflow.def")
+  endif()
   set_source_files_properties(${pywrap_tensorflow_deffile} PROPERTIES GENERATED TRUE)

   add_custom_command(TARGET pywrap_tensorflow_internal_static POST_BUILD
@@ -549,6 +553,7 @@ if(WIN32)
       --input "${pywrap_tensorflow_internal_static_dependencies}"
       --output "${pywrap_tensorflow_deffile}"
       --target _pywrap_tensorflow_internal.pyd
+      BYPRODUCTS ${pywrap_tensorflow_deffile} # Required for Ninja
   )
 endif(WIN32)

@@ -702,11 +707,19 @@ add_custom_command(TARGET tf_python_copy_scripts_to_destination PRE_BUILD
         ${CMAKE_CURRENT_BINARY_DIR}/tf_python/tensorflow/contrib/testing/python/framework/)

 if(WIN32)
-  add_custom_command(TARGET tf_python_build_pip_package POST_BUILD
-    COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_BINARY_DIR}/$(Configuration)/pywrap_tensorflow_internal.dll
-            ${CMAKE_CURRENT_BINARY_DIR}/tf_python/tensorflow/python/_pywrap_tensorflow_internal.pyd
-    COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_BINARY_DIR}/$(Configuration)/pywrap_tensorflow_internal.lib
-            ${CMAKE_CURRENT_BINARY_DIR}/tf_python/tensorflow/python/)
+  if(${CMAKE_GENERATOR} MATCHES "Visual Studio.*")
+    add_custom_command(TARGET tf_python_build_pip_package POST_BUILD
+      COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_BINARY_DIR}/$(Configuration)/pywrap_tensorflow_internal.dll
+              ${CMAKE_CURRENT_BINARY_DIR}/tf_python/tensorflow/python/_pywrap_tensorflow_internal.pyd
+      COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_BINARY_DIR}/$(Configuration)/pywrap_tensorflow_internal.lib
+              ${CMAKE_CURRENT_BINARY_DIR}/tf_python/tensorflow/python/)
+  else()
+    add_custom_command(TARGET tf_python_build_pip_package POST_BUILD
+      COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_BINARY_DIR}/pywrap_tensorflow_internal.dll
+              ${CMAKE_CURRENT_BINARY_DIR}/tf_python/tensorflow/python/_pywrap_tensorflow_internal.pyd
+      COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_BINARY_DIR}/pywrap_tensorflow_internal.lib
+              ${CMAKE_CURRENT_BINARY_DIR}/tf_python/tensorflow/python/)
+  endif()
 else()
   add_custom_command(TARGET tf_python_build_pip_package POST_BUILD
     COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_BINARY_DIR}/libpywrap_tensorflow_internal.so
@@ -46,7 +46,11 @@ if(WIN32)
       $<TARGET_FILE:tf_protos_cc>
   )

-  set(tensorflow_deffile "${CMAKE_CURRENT_BINARY_DIR}/${CMAKE_BUILD_TYPE}/tensorflow.def")
+  if(${CMAKE_GENERATOR} MATCHES "Visual Studio.*")
+    set(tensorflow_deffile "${CMAKE_CURRENT_BINARY_DIR}/${CMAKE_BUILD_TYPE}/tensorflow.def")
+  else()
+    set(tensorflow_deffile "${CMAKE_CURRENT_BINARY_DIR}/tensorflow.def")
+  endif()
   set_source_files_properties(${tensorflow_deffile} PROPERTIES GENERATED TRUE)

   add_custom_command(TARGET tensorflow_static POST_BUILD
@@ -310,6 +310,8 @@ if (tensorflow_BUILD_PYTHON_TESTS)
       "${tensorflow_source_dir}/tensorflow/python/kernel_tests/control_flow_util_test.py"
       # Flaky replicate_model_fn_test
       "${tensorflow_source_dir}/tensorflow/contrib/estimator/python/estimator/replicate_model_fn_test.py" # b/71901810
+      # Broken io_utils_test
+      "${tensorflow_source_dir}/tensorflow/python/keras/_impl/keras/utils/io_utils_test.py" # b/72894325
   )
 endif()
 list(REMOVE_ITEM tf_test_src_py ${tf_test_src_py_exclude})
@@ -48,9 +48,6 @@ file(GLOB_RECURSE tf_tools_transform_graph_lib_exclude_srcs
     "${tensorflow_source_dir}/tensorflow/tools/graph_transforms/compare_graphs.cc"
     "${tensorflow_source_dir}/tensorflow/tools/graph_transforms/summarize_graph_main.cc"
     "${tensorflow_source_dir}/tensorflow/tools/graph_transforms/transform_graph_main.cc"
-    "${tensorflow_source_dir}/tensorflow/tools/graph_transforms/quantize_nodes.cc"
-    "${tensorflow_source_dir}/tensorflow/tools/graph_transforms/quantize_weights.cc"
-    "${tensorflow_source_dir}/tensorflow/tools/graph_transforms/round_weights.cc"
 )
 list(REMOVE_ITEM tf_tools_transform_graph_lib_srcs ${tf_tools_transform_graph_lib_exclude_srcs})

tensorflow/contrib/data/python/ops/dataset_ops.py (new file, 691 lines)
@ -0,0 +1,691 @@
|
|||||||
|
# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Python wrappers for Datasets and Iterators."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

from tensorflow.contrib.data.python.ops import batching
from tensorflow.contrib.data.python.ops import enumerate_ops
from tensorflow.contrib.data.python.ops import error_ops
from tensorflow.contrib.data.python.ops import grouping
from tensorflow.python.data.ops import dataset_ops
from tensorflow.python.data.util import nest
from tensorflow.python.ops import gen_dataset_ops
from tensorflow.python.ops import gen_io_ops
from tensorflow.python.util import deprecation


class Dataset(dataset_ops.Dataset):
  """Represents a potentially large set of elements.

  A `Dataset` can be used to represent an input pipeline as a
  collection of elements (nested structures of tensors) and a "logical
  plan" of transformations that act on those elements.
  """

  def __init__(self, dataset):
    super(Dataset, self).__init__()
    self._dataset = dataset

  @deprecation.deprecated(None, "Use `ds._as_variant_tensor()`.")
  def make_dataset_resource(self):
    return self._as_variant_tensor()

  def _as_variant_tensor(self):
    return self._dataset._as_variant_tensor()  # pylint: disable=protected-access

  @property
  def output_classes(self):
    return self._dataset.output_classes

  @property
  def output_shapes(self):
    return self._dataset.output_shapes

  @property
  def output_types(self):
    return self._dataset.output_types

  @staticmethod
  @deprecation.deprecated(None, "Use `tf.data.Dataset.from_tensors()`.")
  def from_tensors(tensors):
    """Creates a `Dataset` with a single element, comprising the given tensors.

    Args:
      tensors: A nested structure of tensors.

    Returns:
      A `Dataset`.
    """
    return Dataset(dataset_ops.TensorDataset(tensors))

  @staticmethod
  @deprecation.deprecated(None, "Use `tf.data.Dataset.from_tensor_slices()`.")
  def from_tensor_slices(tensors):
    """Creates a `Dataset` whose elements are slices of the given tensors.

    Args:
      tensors: A nested structure of tensors, each having the same size in the
        0th dimension.

    Returns:
      A `Dataset`.
    """
    return Dataset(dataset_ops.TensorSliceDataset(tensors))

  @staticmethod
  @deprecation.deprecated(None,
                          "Use `tf.data.Dataset.from_sparse_tensor_slices()`.")
  def from_sparse_tensor_slices(sparse_tensor):
    """Splits each rank-N `tf.SparseTensor` in this dataset row-wise.

    Args:
      sparse_tensor: A `tf.SparseTensor`.

    Returns:
      A `Dataset` of rank-(N-1) sparse tensors.
    """
    return Dataset(dataset_ops.SparseTensorSliceDataset(sparse_tensor))

  @staticmethod
  @deprecation.deprecated(None, "Use `tf.data.Dataset.from_generator()`.")
  def from_generator(generator, output_types, output_shapes=None):
    """Creates a `Dataset` whose elements are generated by `generator`.

    The `generator` argument must be a callable object that returns
    an object that supports the `iter()` protocol (e.g. a generator function).
    The elements generated by `generator` must be compatible with the given
    `output_types` and (optional) `output_shapes` arguments.

    For example:

    ```python
    import itertools

    def gen():
      for i in itertools.count(1):
        yield (i, [1] * i)

    ds = Dataset.from_generator(
        gen, (tf.int64, tf.int64), (tf.TensorShape([]), tf.TensorShape([None])))
    value = ds.make_one_shot_iterator().get_next()

    sess.run(value)  # (1, array([1]))
    sess.run(value)  # (2, array([1, 1]))
    ```

    Args:
      generator: A callable object that takes no arguments and returns an
        object that supports the `iter()` protocol.
      output_types: A nested structure of `tf.DType` objects corresponding to
        each component of an element yielded by `generator`.
      output_shapes: (Optional.) A nested structure of `tf.TensorShape`
        objects corresponding to each component of an element yielded by
        `generator`.

    Returns:
      A `Dataset`.
    """
    return Dataset(
        dataset_ops.Dataset.from_generator(generator, output_types,
                                           output_shapes))

  @staticmethod
  @deprecation.deprecated(None, "Use `tf.data.Dataset.range()`.")
  def range(*args):
    """Creates a `Dataset` of a step-separated range of values.

    For example:

    ```python
    Dataset.range(5) == [0, 1, 2, 3, 4]
    Dataset.range(2, 5) == [2, 3, 4]
    Dataset.range(1, 5, 2) == [1, 3]
    Dataset.range(1, 5, -2) == []
    Dataset.range(5, 1) == []
    Dataset.range(5, 1, -2) == [5, 3]
    ```

    Args:
      *args: follows the same semantics as python's xrange.
        len(args) == 1 -> start = 0, stop = args[0], step = 1
        len(args) == 2 -> start = args[0], stop = args[1], step = 1
        len(args) == 3 -> start = args[0], stop = args[1], step = args[2]

    Returns:
      A `RangeDataset`.

    Raises:
      ValueError: if len(args) == 0.
    """
    return Dataset(dataset_ops.RangeDataset(*args))

  @staticmethod
  @deprecation.deprecated(None, "Use `tf.data.Dataset.zip()`.")
  def zip(datasets):
    """Creates a `Dataset` by zipping together the given datasets.

    This method has similar semantics to the built-in `zip()` function
    in Python, with the main difference being that the `datasets`
    argument can be an arbitrary nested structure of `Dataset` objects.
    For example:

    ```python
    # NOTE: The following examples use `{ ... }` to represent the
    # contents of a dataset.
    a = { 1, 2, 3 }
    b = { 4, 5, 6 }
    c = { (7, 8), (9, 10), (11, 12) }
    d = { 13, 14 }

    # The nested structure of the `datasets` argument determines the
    # structure of elements in the resulting dataset.
    Dataset.zip((a, b)) == { (1, 4), (2, 5), (3, 6) }
    Dataset.zip((b, a)) == { (4, 1), (5, 2), (6, 3) }

    # The `datasets` argument may contain an arbitrary number of
    # datasets.
    Dataset.zip((a, b, c)) == { (1, 4, (7, 8)),
                                (2, 5, (9, 10)),
                                (3, 6, (11, 12)) }

    # The number of elements in the resulting dataset is the same as
    # the size of the smallest dataset in `datasets`.
    Dataset.zip((a, d)) == { (1, 13), (2, 14) }
    ```

    Args:
      datasets: A nested structure of datasets.

    Returns:
      A `Dataset`.
    """
    return Dataset(dataset_ops.ZipDataset(datasets))

  def concatenate(self, dataset):
    """Creates a `Dataset` by concatenating the given dataset with this dataset.

    ```python
    # NOTE: The following examples use `{ ... }` to represent the
    # contents of a dataset.
    a = { 1, 2, 3 }
    b = { 4, 5, 6, 7 }

    # The input dataset and the dataset to be concatenated should have the
    # same nested structures and output types.
    # c = { (8, 9), (10, 11), (12, 13) }
    # d = { 14.0, 15.0, 16.0 }
    # a.concatenate(c) and a.concatenate(d) would result in error.

    a.concatenate(b) == { 1, 2, 3, 4, 5, 6, 7 }
    ```

    Args:
      dataset: `Dataset` to be concatenated.

    Returns:
      A `Dataset`.
    """
    return Dataset(dataset_ops.ConcatenateDataset(self._dataset, dataset))

  def prefetch(self, buffer_size):
    """Creates a `Dataset` that prefetches elements from this dataset.

    Args:
      buffer_size: A `tf.int64` scalar `tf.Tensor`, representing the
        maximum number of elements that will be buffered when prefetching.

    Returns:
      A `Dataset`.
    """
    return Dataset(dataset_ops.PrefetchDataset(self._dataset, buffer_size))

  @staticmethod
  @deprecation.deprecated(None, "Use `tf.data.Dataset.list_files()`.")
  def list_files(file_pattern):
    """A dataset of all files matching a pattern.

    Example:
      If we had the following files on our filesystem:
        - /path/to/dir/a.txt
        - /path/to/dir/b.py
        - /path/to/dir/c.py
      If we pass "/path/to/dir/*.py" as the pattern, the dataset would
      produce:
        - /path/to/dir/b.py
        - /path/to/dir/c.py

    Args:
      file_pattern: A string or scalar string `tf.Tensor`, representing
        the filename pattern that will be matched.

    Returns:
      A `Dataset` of strings corresponding to file names.
    """
    return Dataset.from_tensor_slices(gen_io_ops.matching_files(file_pattern))

  def repeat(self, count=None):
    """Repeats this dataset `count` times.

    Args:
      count: (Optional.) A `tf.int64` scalar `tf.Tensor`, representing the
        number of times the elements of this dataset should be repeated. The
        default behavior (if `count` is `None` or `-1`) is for the elements to
        be repeated indefinitely.

    Returns:
      A `Dataset`.
    """
    return Dataset(dataset_ops.RepeatDataset(self._dataset, count))

  @deprecation.deprecated(
      None, "Use `ds.apply(tf.contrib.data.enumerate_dataset())`.")
  def enumerate(self, start=0):
    """Deprecated: Use `Dataset.apply(tf.contrib.data.enumerate_dataset(..))`."""

    return self.apply(enumerate_ops.enumerate_dataset(start))

  def shuffle(self, buffer_size, seed=None):
    """Randomly shuffles the elements of this dataset.

    Args:
      buffer_size: A `tf.int64` scalar `tf.Tensor`, representing the
        number of elements from this dataset from which the new
        dataset will sample.
      seed: (Optional.) A `tf.int64` scalar `tf.Tensor`, representing the
        random seed that will be used to create the distribution. See
        @{tf.set_random_seed} for behavior.

    Returns:
      A `Dataset`.
    """
    return Dataset(dataset_ops.ShuffleDataset(self._dataset, buffer_size, seed))

  def cache(self, filename=""):
    """Caches the elements in this dataset.

    Args:
      filename: A `tf.string` scalar `tf.Tensor`, representing the name of a
        directory on the filesystem to use for caching tensors in this Dataset.
        If a filename is not provided, the dataset will be cached in memory.

    Returns:
      A `Dataset`.
    """
    return Dataset(dataset_ops.CacheDataset(self._dataset, filename))

  def take(self, count):
    """Creates a `Dataset` with at most `count` elements from this dataset.

    Args:
      count: A `tf.int64` scalar `tf.Tensor`, representing the number of
        elements of this dataset that should be taken to form the new dataset.
        If `count` is -1, or if `count` is greater than the size of this
        dataset, the new dataset will contain all elements of this dataset.

    Returns:
      A `Dataset`.
    """
    return Dataset(dataset_ops.TakeDataset(self._dataset, count))

  def skip(self, count):
    """Creates a `Dataset` that skips `count` elements from this dataset.

    Args:
      count: A `tf.int64` scalar `tf.Tensor`, representing the number
        of elements of this dataset that should be skipped to form the
        new dataset. If `count` is greater than the size of this
        dataset, the new dataset will contain no elements. If `count`
        is -1, skips the entire dataset.

    Returns:
      A `Dataset`.
    """
    return Dataset(dataset_ops.SkipDataset(self._dataset, count))

  def shard(self, num_shards, index):
    """Creates a `Dataset` that includes only 1/`num_shards` of this dataset.

    This dataset operator is very useful when running distributed training, as
    it allows each worker to read a unique subset.

    When reading a single input file, you can skip elements as follows:

    ```python
    d = tf.data.TFRecordDataset(FLAGS.input_file)
    d = d.shard(FLAGS.num_workers, FLAGS.worker_index)
    d = d.repeat(FLAGS.num_epochs)
    d = d.shuffle(FLAGS.shuffle_buffer_size)
    d = d.map(parser_fn, num_parallel_calls=FLAGS.num_map_threads)
    ```

    Important caveats:

    - Be sure to shard before you use any randomizing operator (such as
      shuffle).
    - Generally it is best if the shard operator is used early in the dataset
      pipeline. For example, when reading from a set of TFRecord files, shard
      before converting the dataset to input samples. This avoids reading every
      file on every worker. The following is an example of an efficient
      sharding strategy within a complete pipeline:

    ```python
    d = tf.data.Dataset.list_files(FLAGS.pattern)
    d = d.shard(FLAGS.num_workers, FLAGS.worker_index)
    d = d.repeat(FLAGS.num_epochs)
    d = d.shuffle(FLAGS.shuffle_buffer_size)
    d = d.interleave(tf.data.TFRecordDataset,
                     cycle_length=FLAGS.num_readers, block_length=1)
    d = d.map(parser_fn, num_parallel_calls=FLAGS.num_map_threads)
    ```

    Args:
      num_shards: A `tf.int64` scalar `tf.Tensor`, representing the number of
        shards operating in parallel.
      index: A `tf.int64` scalar `tf.Tensor`, representing the worker index.

    Returns:
      A `Dataset`.

    Raises:
      ValueError: if `num_shards` or `index` are illegal values. Note: error
        checking is done on a best-effort basis, and errors aren't guaranteed
        to be caught upon dataset creation. (e.g. providing a placeholder
        tensor bypasses the early checking, and will instead result in an
        error during a session.run call.)
    """
    return Dataset(self._dataset.shard(num_shards, index))

  @deprecation.deprecated(None,
                          "Use `ds.apply(tf.contrib.data.ignore_errors())`.")
  def ignore_errors(self):
    """Deprecated: Use `Dataset.apply(tf.contrib.data.ignore_errors())`."""

    return self.apply(error_ops.ignore_errors())

  def batch(self, batch_size):
    """Combines consecutive elements of this dataset into batches.

    Args:
      batch_size: A `tf.int64` scalar `tf.Tensor`, representing the number of
        consecutive elements of this dataset to combine in a single batch.

    Returns:
      A `Dataset`.
    """
    return Dataset(dataset_ops.BatchDataset(self._dataset, batch_size))

  def padded_batch(self, batch_size, padded_shapes, padding_values=None):
    """Combines consecutive elements of this dataset into padded batches.

    Like `Dataset.dense_to_sparse_batch()`, this method combines
    multiple consecutive elements of this dataset, which might have
    different shapes, into a single element. The tensors in the
    resulting element have an additional outer dimension, and are
    padded to the respective shape in `padded_shapes`.

    Args:
      batch_size: A `tf.int64` scalar `tf.Tensor`, representing the number of
        consecutive elements of this dataset to combine in a single batch.
      padded_shapes: A nested structure of `tf.TensorShape` or
        `tf.int64` vector tensor-like objects representing the shape
        to which the respective component of each input element should
        be padded prior to batching. Any unknown dimensions
        (e.g. `tf.Dimension(None)` in a `tf.TensorShape` or `-1` in a
        tensor-like object) will be padded to the maximum size of that
        dimension in each batch.
      padding_values: (Optional.) A nested structure of scalar-shaped
        `tf.Tensor`, representing the padding values to use for the
        respective components. Defaults are `0` for numeric types and
        the empty string for string types.

    Returns:
      A `Dataset`.
    """
    return Dataset(
        dataset_ops.PaddedBatchDataset(self._dataset, batch_size, padded_shapes,
                                       padding_values))

  @deprecation.deprecated(
      None, "Use `ds.apply(tf.contrib.data.dense_to_sparse_batch())`.")
  def dense_to_sparse_batch(self, batch_size, row_shape):
    """Use: `Dataset.apply(tf.contrib.data.dense_to_sparse_batch(...))`."""

    return self.apply(batching.dense_to_sparse_batch(batch_size, row_shape))

  @deprecation.deprecated(None,
                          "Use `ds.apply(tf.contrib.data.group_by_window())`.")
  def group_by_window(self, key_func, reduce_func, window_size):
    """Deprecated: Use `Dataset.apply(tf.contrib.data.group_by_window(...))`."""

    return self.apply(
        grouping.group_by_window(key_func, reduce_func, window_size))

  @deprecation.deprecated_args(
      None, "Replace `num_threads=T` with `num_parallel_calls=T`. Replace "
      "`output_buffer_size=N` with `ds.prefetch(N)` on the returned dataset.",
      "num_threads", "output_buffer_size")
  def map(self,
          map_func,
          num_threads=None,
          output_buffer_size=None,
          num_parallel_calls=None):
    """Maps `map_func` across this dataset.

    Args:
      map_func: A function mapping a nested structure of tensors (having
        shapes and types defined by `self.output_shapes` and
        `self.output_types`) to another nested structure of tensors.
      num_threads: (Optional.) Deprecated, use `num_parallel_calls` instead.
      output_buffer_size: (Optional.) A `tf.int64` scalar `tf.Tensor`,
        representing the maximum number of processed elements that will be
        buffered.
      num_parallel_calls: (Optional.) A `tf.int32` scalar `tf.Tensor`,
        representing the number of elements to process in parallel. If not
        specified, elements will be processed sequentially.

    Returns:
      A `Dataset`.
    """
    if num_threads is None and num_parallel_calls is None:
      ret = Dataset(dataset_ops.MapDataset(self._dataset, map_func))
    else:
      if num_threads is None:
        ret = Dataset(
            dataset_ops.ParallelMapDataset(self._dataset, map_func,
                                           num_parallel_calls))
      else:
        ret = Dataset(
            dataset_ops.ParallelMapDataset(self._dataset, map_func,
                                           num_threads))
    if output_buffer_size is not None:
      ret = ret.prefetch(output_buffer_size)
    return ret

  def flat_map(self, map_func):
    """Maps `map_func` across this dataset and flattens the result.

    Args:
      map_func: A function mapping a nested structure of tensors (having shapes
        and types defined by `self.output_shapes` and `self.output_types`) to a
        `Dataset`.

    Returns:
      A `Dataset`.
    """
    return Dataset(dataset_ops.FlatMapDataset(self._dataset, map_func))

  def interleave(self, map_func, cycle_length, block_length=1):
    """Maps `map_func` across this dataset, and interleaves the results.

    For example, you can use `Dataset.interleave()` to process many input files
    concurrently:

    ```python
    # Preprocess 4 files concurrently, and interleave blocks of 16 records from
    # each file.
    filenames = ["/var/data/file1.txt", "/var/data/file2.txt", ...]
    dataset = (Dataset.from_tensor_slices(filenames)
               .interleave(lambda x:
                   TextLineDataset(x).map(parse_fn, num_parallel_calls=1),
                   cycle_length=4, block_length=16))
    ```

    The `cycle_length` and `block_length` arguments control the order in which
    elements are produced. `cycle_length` controls the number of input elements
    that are processed concurrently. If you set `cycle_length` to 1, this
    transformation will handle one input element at a time, and will produce
    identical results to @{tf.data.Dataset.flat_map}. In general,
    this transformation will apply `map_func` to `cycle_length` input elements,
    open iterators on the returned `Dataset` objects, and cycle through them
    producing `block_length` consecutive elements from each iterator, and
    consuming the next input element each time it reaches the end of an
    iterator.

    For example:

    ```python
    # NOTE: The following examples use `{ ... }` to represent the
    # contents of a dataset.
    a = { 1, 2, 3, 4, 5 }

    # NOTE: New lines indicate "block" boundaries.
    a.interleave(lambda x: Dataset.from_tensors(x).repeat(6),
                 cycle_length=2, block_length=4) == {
        1, 1, 1, 1,
        2, 2, 2, 2,
        1, 1,
        2, 2,
        3, 3, 3, 3,
        4, 4, 4, 4,
        3, 3,
        4, 4,
        5, 5, 5, 5,
        5, 5,
    }
    ```

    NOTE: The order of elements yielded by this transformation is
    deterministic, as long as `map_func` is a pure function. If
    `map_func` contains any stateful operations, the order in which
    that state is accessed is undefined.

    Args:
      map_func: A function mapping a nested structure of tensors (having shapes
        and types defined by `self.output_shapes` and `self.output_types`) to a
        `Dataset`.
      cycle_length: The number of elements from this dataset that will be
        processed concurrently.
      block_length: The number of consecutive elements to produce from each
        input element before cycling to another input element.

    Returns:
      A `Dataset`.
    """
    return Dataset(
        dataset_ops.InterleaveDataset(self._dataset, map_func, cycle_length,
                                      block_length))
@deprecation.deprecated(None, "Use `ds.apply(tf.contrib.data.unbatch())`.")
|
||||||
|
def unbatch(self):
|
||||||
|
"""Deprecated: Use `Dataset.apply(tf.contrib.data.unbatch()`."""
|
||||||
|
|
||||||
|
return self.apply(batching.unbatch())
|
||||||
|
|
||||||
|
def filter(self, predicate):
|
||||||
|
"""Filters this dataset according to `predicate`.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
predicate: A function mapping a nested structure of tensors (having shapes
|
||||||
|
and types defined by `self.output_shapes` and `self.output_types`) to a
|
||||||
|
scalar `tf.bool` tensor.
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
A `Dataset`.
|
||||||
|
"""
|
||||||
|
return Dataset(dataset_ops.FilterDataset(self._dataset, predicate))
|
||||||
|
|
||||||
|
def apply(self, transformation_func):
|
||||||
|
"""Apply a transformation function to this dataset.
|
||||||
|
|
||||||
|
`apply` enables chaining of custom `Dataset` transformations, which are
|
||||||
|
represented as functions that take one `Dataset` argument and return a
|
||||||
|
transformed `Dataset`.
|
||||||
|
|
||||||
|
For example:
|
||||||
|
|
||||||
|
```
|
||||||
|
dataset = (dataset.map(lambda x: x ** 2)
|
||||||
|
.(group_by_window(key_func, reduce_func, window_size))
|
||||||
|
.map(lambda x: x ** 3))
|
||||||
|
```
|
||||||
|
|
||||||
|
Args:
|
||||||
|
transformation_func: A function that takes one `Dataset` argument and
|
||||||
|
returns a `Dataset`.
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
The `Dataset` returned by applying `transformation_func` to this dataset.
|
||||||
|
"""
|
||||||
|
dataset = transformation_func(self)
|
||||||
|
if not isinstance(dataset, dataset_ops.Dataset):
|
||||||
|
raise TypeError("`transformation_func` must return a Dataset.")
|
||||||
|
return Dataset(dataset)
|
||||||
|
|
||||||
|
|
||||||
|


def get_single_element(dataset):
  """Returns the single element in `dataset` as a nested structure of tensors.

  This function enables you to use a @{tf.data.Dataset} in a stateless
  "tensor-in tensor-out" expression, without creating a @{tf.data.Iterator}.
  This can be useful when your preprocessing transformations are expressed
  as a `Dataset`, and you want to use the transformation at serving time.
  For example:

  ```python
  input_batch = tf.placeholder(tf.string, shape=[BATCH_SIZE])

  def preprocessing_fn(input_str):
    # ...
    return image, label

  dataset = (tf.data.Dataset.from_tensor_slices(input_batch)
             .map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)
             .batch(BATCH_SIZE))

  image_batch, label_batch = tf.contrib.data.get_single_element(dataset)
  ```

  Args:
    dataset: A @{tf.data.Dataset} object containing a single element.

  Returns:
    A nested structure of @{tf.Tensor} objects, corresponding to the single
    element of `dataset`.

  Raises:
    TypeError: if `dataset` is not a `tf.data.Dataset` object.
    InvalidArgumentError (at runtime): if `dataset` does not contain exactly
      one element.
  """
  if not isinstance(dataset, dataset_ops.Dataset):
    raise TypeError("`dataset` must be a `tf.data.Dataset` object.")
  return nest.pack_sequence_as(
      dataset.output_types,
      gen_dataset_ops.dataset_to_single_element(
          dataset._as_variant_tensor(),  # pylint: disable=protected-access
          output_types=nest.flatten(dataset.output_types),
          output_shapes=nest.flatten(dataset.output_shapes)))
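The `interleave()` docstring above specifies exactly how `cycle_length` and `block_length` order the output. As an illustration only (this helper is not part of the commit), a pure-Python sketch of that ordering over plain iterables:

```python
import itertools


def interleave(inputs, map_func, cycle_length, block_length):
    """Pure-Python sketch of the Dataset.interleave() ordering.

    Keeps up to `cycle_length` iterators open, takes up to `block_length`
    elements from each in turn, and replaces an exhausted iterator with
    one over the next mapped input element.
    """
    inputs = iter(inputs)

    def next_input_iter():
        # Open an iterator over the next mapped input element, if any.
        try:
            return iter(map_func(next(inputs)))
        except StopIteration:
            return None

    open_iters = []
    for _ in range(cycle_length):
        it = next_input_iter()
        if it is None:
            break
        open_iters.append(it)

    out = []
    i = 0
    while open_iters:
        i %= len(open_iters)
        block = list(itertools.islice(open_iters[i], block_length))
        out.extend(block)
        if len(block) < block_length:
            # Iterator exhausted: consume the next input element, or drop
            # the slot entirely if no inputs remain.
            replacement = next_input_iter()
            if replacement is None:
                open_iters.pop(i)
                continue  # the following iterator shifts into slot i
            open_iters[i] = replacement
        i += 1
    return out
```

Running it on the docstring's example reproduces the documented block boundaries:

```python
interleave([1, 2, 3, 4, 5], lambda x: [x] * 6, cycle_length=2, block_length=4)
# [1, 1, 1, 1, 2, 2, 2, 2, 1, 1, 2, 2,
#  3, 3, 3, 3, 4, 4, 4, 4, 3, 3, 4, 4,
#  5, 5, 5, 5, 5, 5]
```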
@ -45,7 +45,7 @@ def group_by_window(key_func,
    key_func: A function mapping a nested structure of tensors
      (having shapes and types defined by `self.output_shapes` and
      `self.output_types`) to a scalar `tf.int64` tensor.
-    reduce_func: A function mapping a key and a dataset of up to `batch_size`
+    reduce_func: A function mapping a key and a dataset of up to `window_size`
      consecutive elements matching that key to another dataset.
    window_size: A `tf.int64` scalar `tf.Tensor`, representing the number of
      consecutive elements matching the same key to combine in a single
@ -98,7 +98,6 @@ _allowed_symbols = [
    'Autoregressive',
    'Binomial',
    'Bernoulli',
-    'BernoulliWithSigmoidProbs',
    'Beta',
    'BetaWithSoftplusConcentration',
    'Categorical',
@ -11,7 +11,7 @@ Other eager execution examples can be found under the parent directory.
- `mnist.py`: Model definitions and training routines.
- `mnist_test.py`: Benchmarks for training and using the models using eager
  execution.
-- `mnist_graph_test.py`: Benchmarks for trainig and using the models using
+- `mnist_graph_test.py`: Benchmarks for training and using the models using
  graph execution. The same model definitions and loss functions are used in
  all benchmarks.
@ -39,7 +39,7 @@ class MNISTModel(tf.keras.Model):
  """MNIST Network.

  Network structure is equivalent to:
-  https://github.com/tensorflow/tensorflow/blob/r1.5/tensorflow/examples/tutorials/mnist/mnist_deep.py
+  https://github.com/tensorflow/tensorflow/blob/r1.6/tensorflow/examples/tutorials/mnist/mnist_deep.py
  and
  https://github.com/tensorflow/models/blob/master/tutorials/image/mnist/convolutional.py
@ -22,6 +22,7 @@ from __future__ import print_function
from tensorflow.python.eager import context
from tensorflow.python.framework import ops
from tensorflow.python.framework import tensor_shape
+from tensorflow.python.ops import array_ops
from tensorflow.python.ops import gen_math_ops
from tensorflow.python.ops import math_ops

@ -108,4 +109,3 @@ def _AddNGrad(op, grad):
  """Same as gradient for AddN. Copies the gradient to all inputs."""
  # Not broadcasting.
  return [grad] * len(op.inputs)
@@ -39,12 +39,13 @@ def _assert_is_image(data):
   data.shape[1:].assert_is_fully_defined()
 
 
-def add_gan_model_image_summaries(gan_model, grid_size=4):
+def add_gan_model_image_summaries(gan_model, grid_size=4, model_summaries=True):
   """Adds image summaries for real and fake images.
 
   Args:
     gan_model: A GANModel tuple.
     grid_size: The size of an image grid.
+    model_summaries: Also add summaries of the model.
 
   Raises:
     ValueError: If real and generated data aren't images.
@@ -83,7 +84,9 @@ def add_gan_model_image_summaries(gan_model, grid_size=4):
           image_shape=generated_image_shape,
           num_channels=generated_channels),
       max_outputs=1)
-  add_gan_model_summaries(gan_model)
+
+  if model_summaries:
+    add_gan_model_summaries(gan_model)
 
 
 def add_image_comparison_summaries(gan_model, num_comparisons=2,
@@ -71,9 +71,10 @@ def get_cyclegan_model():
 
 class SummariesTest(test.TestCase):
 
-  def _test_add_gan_model_image_summaries_impl(self, get_model_fn,
-                                               expected_num_summary_ops):
-    summaries.add_gan_model_image_summaries(get_model_fn(), grid_size=2)
+  def _test_add_gan_model_image_summaries_impl(
+      self, get_model_fn, expected_num_summary_ops, model_summaries):
+    summaries.add_gan_model_image_summaries(
+        get_model_fn(), grid_size=2, model_summaries=model_summaries)
 
     self.assertEquals(expected_num_summary_ops,
                       len(ops.get_collection(ops.GraphKeys.SUMMARIES)))
@@ -82,10 +83,13 @@ class SummariesTest(test.TestCase):
       summary.merge_all().eval()
 
   def test_add_gan_model_image_summaries(self):
-    self._test_add_gan_model_image_summaries_impl(get_gan_model, 5)
+    self._test_add_gan_model_image_summaries_impl(get_gan_model, 5, True)
+
+  def test_add_gan_model_image_summaries_no_model(self):
+    self._test_add_gan_model_image_summaries_impl(get_gan_model, 2, False)
 
   def test_add_gan_model_image_summaries_for_cyclegan(self):
-    self._test_add_gan_model_image_summaries_impl(get_cyclegan_model, 10)
+    self._test_add_gan_model_image_summaries_impl(get_cyclegan_model, 10, True)
 
   def _test_add_gan_model_summaries_impl(self, get_model_fn,
                                          expected_num_summary_ops):
@@ -119,4 +119,4 @@ In the original design (as in the reference), tensor buffers are only registered
 Reference
 ===
 
-Bairen Yi, Jiacheng Xia, Li Chen, and Kai Chen. 2017. Towards Zero Copy Dataflows using RDMA. In Proceedings of SIGCOMM Posters and Demos'17, Los Angeles, CA, USA, August 22-24, 2017, 3 pages. https://doi.org/10.1145/3123878.3123907
+Bairen Yi, Jiacheng Xia, Li Chen, and Kai Chen. 2017. Towards Zero Copy Dataflows using RDMA. In Proceedings of SIGCOMM Posters and Demos'17, Los Angeles, CA, USA, August 22-24, 2017, 3 pages. https://doi.org/10.1145/3123878.3131975
@@ -35,6 +35,7 @@ See the @{$python/contrib.layers} guide.
 @@fully_connected
 @@GDN
 @@gdn
+@@images_to_sequence
 @@layer_norm
 @@linear
 @@max_pool2d
@@ -50,6 +51,7 @@ See the @{$python/contrib.layers} guide.
 @@scale_gradient
 @@separable_conv2d
 @@separable_convolution2d
+@@sequence_to_images
 @@softmax
 @@spatial_softmax
 @@stack
@@ -60,9 +60,10 @@ __all__ = [
     'conv2d_in_plane', 'conv2d_transpose', 'conv3d_transpose', 'convolution',
     'convolution2d', 'convolution2d_in_plane', 'convolution2d_transpose',
     'convolution3d', 'convolution3d_transpose', 'dense_to_sparse', 'dropout',
-    'elu', 'flatten', 'fully_connected', 'GDN', 'gdn', 'layer_norm', 'linear',
-    'pool', 'max_pool2d', 'max_pool3d', 'one_hot_encoding', 'relu', 'relu6',
-    'repeat', 'scale_gradient', 'separable_conv2d', 'separable_convolution2d',
+    'elu', 'flatten', 'fully_connected', 'GDN', 'gdn', 'images_to_sequence',
+    'layer_norm', 'linear', 'pool', 'max_pool2d', 'max_pool3d',
+    'one_hot_encoding', 'relu', 'relu6', 'repeat', 'scale_gradient',
+    'separable_conv2d', 'separable_convolution2d', 'sequence_to_images',
     'softmax', 'spatial_softmax', 'stack', 'unit_norm',
     'legacy_fully_connected', 'legacy_linear', 'legacy_relu', 'maxout'
 ]
@@ -2185,6 +2186,36 @@ def layer_norm(inputs,
     return utils.collect_named_outputs(outputs_collections, sc.name, outputs)
 
 
+@add_arg_scope
+def images_to_sequence(inputs,
+                       data_format=DATA_FORMAT_NHWC,
+                       outputs_collections=None,
+                       scope=None):
+  """Convert a batch of images into a batch of sequences.
+
+  Args:
+    inputs: a (num_images, height, width, depth) tensor
+    data_format: A string. `NHWC` (default) and `NCHW` are supported.
+    outputs_collections: The collections to which the outputs are added.
+    scope: Optional scope for name_scope.
+
+  Returns:
+    (width, num_images*height, depth) sequence tensor
+  """
+  if data_format not in (DATA_FORMAT_NCHW, DATA_FORMAT_NHWC):
+    raise ValueError('data_format has to be either NCHW or NHWC.')
+  with ops.name_scope(scope, 'ImagesToSequence', [inputs]) as sc:
+    inputs = ops.convert_to_tensor(inputs)
+    df = ('channels_first'
+          if data_format and data_format.startswith('NC') else 'channels_last')
+    if df == 'channels_first':
+      inputs = array_ops.transpose(inputs, [0, 2, 3, 1])
+    _, _, width, depth = inputs.get_shape().as_list()
+    s = array_ops.shape(inputs)
+    batch_size, height = s[0], s[1]
+    transposed = array_ops.transpose(inputs, [2, 0, 1, 3])
+    outputs = array_ops.reshape(transposed, [width, batch_size * height, depth])
+    return utils.collect_named_outputs(outputs_collections, sc, outputs)
+
+
 @add_arg_scope
 def max_pool2d(inputs,
                kernel_size,
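The axis shuffle added in this hunk — `(num_images, height, width, depth)` to `(width, num_images * height, depth)` — can be checked with a small NumPy sketch (`images_to_sequence_np` is a hypothetical stand-in for the new layer, not part of the change):

```python
import numpy as np

def images_to_sequence_np(images):
    # Mirror of the layer's math: move width to the front
    # (transpose [2, 0, 1, 3]), then fold batch and height together.
    num_images, height, width, depth = images.shape
    transposed = images.transpose(2, 0, 1, 3)
    return transposed.reshape(width, num_images * height, depth)

images = np.zeros((2, 7, 11, 5), dtype=np.float32)
print(images_to_sequence_np(images).shape)  # (11, 14, 5)
```

The result matches the `[11, 14, 5]` shape asserted by `testImagesToSequenceDims` below.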
@@ -2664,6 +2695,38 @@ def separable_convolution2d(
     return utils.collect_named_outputs(outputs_collections, sc.name, outputs)
 
 
+@add_arg_scope
+def sequence_to_images(inputs,
+                       height,
+                       output_data_format='channels_last',
+                       outputs_collections=None,
+                       scope=None):
+  """Convert a batch of sequences into a batch of images.
+
+  Args:
+    inputs: (num_steps, num_batches, depth) sequence tensor
+    height: the height of the images
+    output_data_format: Format of output tensor.
+      Currently supports `'channels_first'` and `'channels_last'`.
+    outputs_collections: The collections to which the outputs are added.
+    scope: Optional scope for name_scope.
+
+  Returns:
+    A tensor representing the output of the operation.
+  """
+  with ops.name_scope(scope, 'SequenceToImages', [inputs]) as sc:
+    inputs = ops.convert_to_tensor(inputs)
+    width, num_batches, depth = inputs.get_shape().as_list()
+    if num_batches is None:
+      num_batches = -1
+    else:
+      num_batches = num_batches // height
+    reshaped = array_ops.reshape(inputs, [width, num_batches, height, depth])
+    if output_data_format == 'channels_first':
+      outputs = array_ops.transpose(reshaped, [1, 3, 2, 0])
+    else:
+      outputs = array_ops.transpose(reshaped, [1, 2, 0, 3])
+    return utils.collect_named_outputs(outputs_collections, sc, outputs)
+
+
 @add_arg_scope
 def softmax(logits, scope=None):
   """Performs softmax on Nth dimension of N-dimensional logit tensor.
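The inverse transform added here can be checked the same way (`sequence_to_images_np` is a hypothetical NumPy stand-in for the new layer): the folded batch axis is split back into `(batch, height)` and the axes are moved into image order.

```python
import numpy as np

def sequence_to_images_np(sequence, height, channels_first=False):
    # Mirror of the layer's math: unfold the folded batch axis, then
    # permute back to image layout.
    width, folded_batch, depth = sequence.shape
    num_batches = folded_batch // height
    reshaped = sequence.reshape(width, num_batches, height, depth)
    if channels_first:
        return reshaped.transpose(1, 3, 2, 0)  # (batch, depth, height, width)
    return reshaped.transpose(1, 2, 0, 3)      # (batch, height, width, depth)

seq = np.zeros((11, 14, 5), dtype=np.float32)
print(sequence_to_images_np(seq, 7).shape)                       # (2, 7, 11, 5)
print(sequence_to_images_np(seq, 7, channels_first=True).shape)  # (2, 5, 7, 11)
```

These shapes match the `[2, 7, 11, 5]` and `[2, 5, 7, 11]` expectations in `SequenceToImagesTest` below.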
@@ -708,7 +708,7 @@ class Convolution2dTransposeTests(test.TestCase):
       _layers.convolution2d_transpose(images, 32, 3, data_format='CHWN')
 
   def testOutputSizeWithStrideOneSamePaddingNCHW(self):
-    # `NCHW` data fomat is only supported for `GPU` device.
+    # `NCHW` data format is only supported for `GPU` device.
     if test.is_gpu_available(cuda_only=True):
       with self.test_session(use_gpu=True) as sess:
         num_filters = 32
@@ -2196,7 +2196,7 @@ class BatchNormTest(test.TestCase):
       # After initialization moving_mean == 0 and moving_variance == 1.
       self.assertAllClose(mean, [0] * 3)
       self.assertAllClose(variance, [1] * 3)
-      # Simulate assigment from saver restore.
+      # Simulate assignment from saver restore.
       init_assigns = [
           state_ops.assign(moving_mean, expected_mean),
          state_ops.assign(moving_variance, expected_var)
@@ -2950,6 +2950,28 @@ class GDNTest(test.TestCase):
       self.assertAllClose(y, x * np.sqrt(1 + .1 * (x**2)), rtol=0, atol=1e-6)
 
 
+class ImagesToSequenceTest(test.TestCase):
+
+  def testInvalidDataFormat(self):
+    height, width = 7, 11
+    images = np.random.uniform(size=(5, height, width, 2))
+    with self.assertRaisesRegexp(ValueError,
+                                 'data_format has to be either NCHW or NHWC.'):
+      _layers.images_to_sequence(images, data_format='CHWN')
+
+  def testImagesToSequenceDims(self):
+    height, width = 7, 11
+    images = np.random.uniform(size=(2, height, width, 5)).astype(np.float32)
+    output = _layers.images_to_sequence(images)
+    self.assertListEqual(output.get_shape().as_list(), [11, 14, 5])
+
+  def testImagesToSequenceNCHW(self):
+    height, width = 7, 11
+    images = np.random.uniform(size=(2, 5, height, width)).astype(np.float32)
+    output = _layers.images_to_sequence(images, data_format='NCHW')
+    self.assertListEqual(output.get_shape().as_list(), [11, 14, 5])
+
+
 class MaxPool2DTest(test.TestCase):
 
   def testInvalidDataFormat(self):
@@ -3418,6 +3440,30 @@ class ScaleGradientTests(test.TestCase):
       np.testing.assert_array_equal([3 * 2], g_x.eval())
 
 
+class SequenceToImagesTest(test.TestCase):
+
+  def testImagesToSequenceDims(self):
+    num_batches = 14
+    num_time_steps = 11
+    num_channels = 5
+    desired_height = 7
+    sequence = np.random.uniform(
+        size=(num_time_steps, num_batches, num_channels)).astype(np.float32)
+    output = _layers.sequence_to_images(sequence, desired_height)
+    self.assertListEqual(output.get_shape().as_list(), [2, 7, 11, 5])
+
+  def testImagesToSequenceNCHW(self):
+    num_batches = 14
+    num_time_steps = 11
+    num_channels = 5
+    desired_height = 7
+    sequence = np.random.uniform(
+        size=(num_time_steps, num_batches, num_channels)).astype(np.float32)
+    output = _layers.sequence_to_images(
+        sequence, desired_height, output_data_format='channels_first')
+    self.assertListEqual(output.get_shape().as_list(), [2, 5, 7, 11])
+
+
 class SoftmaxTests(test.TestCase):
 
   def setUp(self):
@@ -879,7 +879,7 @@ class GraphDump(BaseMonitor):
     this_output = self.data[step] if step in self.data else {}
     other_output = other_dump.data[step] if step in other_dump.data else {}
     for key in this_output:
-      if not isinstance(key, str) and not isinstance(key, unicode):
+      if not isinstance(key, six.string_types):
         continue
       if key not in other_output:
         raise ValueError("%s missing at step %s.", (key, step))
@@ -172,7 +172,7 @@ Here is a sample command line to convert the frozen Graphdef to '.tflite' format
 ```
 bazel build tensorflow/contrib/lite/toco:toco
 
-bazel-bin/tensorflow/contrib/lite/toco/toco -- \
+bazel-bin/tensorflow/contrib/lite/toco/toco \
   --input_file=$(pwd)/mobilenet_v1_1.0_224/frozen_graph.pb \
   --input_format=TENSORFLOW_GRAPHDEF --output_format=TFLITE \
   --output_file=/tmp/mobilenet_v1_1.0_224.tflite --inference_type=FLOAT \
@@ -22,6 +22,12 @@ limitations under the License.
 #include "tensorflow/contrib/lite/string_util.h"
 #include "tensorflow/contrib/lite/version.h"
 
+#include "tensorflow/contrib/lite/builtin_op_data.h"
+#include "tensorflow/contrib/lite/interpreter.h"
+#include "tensorflow/contrib/lite/kernels/register.h"
+#include "tensorflow/contrib/lite/string_util.h"
+#include "tensorflow/contrib/lite/version.h"
+
 #include "tensorflow/contrib/lite/examples/label_image/label_image.h"
 
 namespace tflite {
@@ -73,7 +73,7 @@ TfLiteStatus SinEval(TfLiteContext* context, TfLiteNode* node) {
 }
 
 TfLiteRegistration* Register_SIN() {
-  static TfLiteRegistration r = {nullptr, nullptr, SinResize, SinEval};
+  static TfLiteRegistration r = {nullptr, nullptr, SinPrepare, SinEval};
   return &r;
 }
 ```
|
@@ -1,4 +1,4 @@
-/* Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+/* Copyright 2018 The TensorFlow Authors. All Rights Reserved.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
@@ -15,7 +15,7 @@ limitations under the License.
 #ifndef TF_LITE_KERNELS_INTERNAL_OPTIMIZED_TENSOR_UTILS_IMPL_H_
 #define TF_LITE_KERNELS_INTERNAL_OPTIMIZED_TENSOR_UTILS_IMPL_H_
 
-// TDOD(ghodrat): Remove this header file and the dependency to internal data
+// TODO(ghodrat): Remove this header file and the dependency to internal data
 // structure.
 #include "tensorflow/contrib/lite/builtin_op_data.h"
 
@@ -15,7 +15,7 @@ limitations under the License.
 #ifndef TENSORFLOW_CONTRIB_LITE_KERNELS_INTERNAL_REFERENCE_PORTABLE_TENSOR_UTILS_H_
 #define TENSORFLOW_CONTRIB_LITE_KERNELS_INTERNAL_REFERENCE_PORTABLE_TENSOR_UTILS_H_
 
-// TDOD(ghodrat): Remove this header file and the dependency to internal data
+// TODO(ghodrat): Remove this header file and the dependency to internal data
 // structure.
 #include "tensorflow/contrib/lite/builtin_op_data.h"
 
@@ -36,9 +36,9 @@ struct ArenaAlloc {
   }
 };
 
-// This small class is responsible for allocating, dealocating and reusing
+// This small class is responsible for allocating, deallocating and reusing
 // dynamic memory from a common underlying buffer. The arena can be used in
-// scenarios when the pattern of memory allocations and dealocations is
+// scenarios when the pattern of memory allocations and deallocations is
 // repetitive, e.g. running NN inference in multiple iterations.
 class SimpleMemoryArena {
  public:
@@ -629,7 +629,7 @@ def make_constant_tests(zip_path):
 
   def build_graph(parameters):
     # Since Toco & Tflite can't have a single constant op in the entire graph,
-    # this test adds a zero tesnor with a constant op tensor.
+    # this test adds a zero tensor with a constant op tensor.
     input1 = tf.placeholder(dtype=parameters["dtype"], name="input1",
                             shape=parameters["input_shape"])
     out = tf.ones(parameters["input_shape"], dtype=parameters["dtype"]) + input1
@@ -73,7 +73,7 @@ void CopyTensorSegments(const std::vector<Array*>& input_arrays,
 
 // Receives a series of input arrays of type Array and an integer showing the
 // axis on which those arrays will be concatenated. It returns the concatenated
-// arrray.
+// array.
 template <ArrayDataType A>
 void ConcatenateTensorBuffers(const std::vector<Array*>& input_arrays,
                               int concatenation_axis,
@@ -103,11 +103,11 @@ class ResolveSvdfTest : public ::testing::Test {
     // Add the float vector as an attribute to the node.
     (*node->mutable_attr())["dtype"].set_type(tensorflow::DT_FLOAT);
     tensorflow::TensorProto* allocated_tensor = new tensorflow::TensorProto;
-    tensorflow::TensorShapeProto* allocated_tesnor_shape =
+    tensorflow::TensorShapeProto* allocated_tensor_shape =
         new tensorflow::TensorShapeProto;
-    auto tensor_shape_dim0 = allocated_tesnor_shape->add_dim();
+    auto tensor_shape_dim0 = allocated_tensor_shape->add_dim();
     tensor_shape_dim0->set_size(values.size());
-    allocated_tensor->set_allocated_tensor_shape(allocated_tesnor_shape);
+    allocated_tensor->set_allocated_tensor_shape(allocated_tensor_shape);
     allocated_tensor->set_tensor_content(
         string(reinterpret_cast<const char*>(values.data()),
                values.size() * sizeof(float)));
@@ -122,11 +122,11 @@ class ResolveSvdfTest : public ::testing::Test {
     // Add the float vector as an attribute to the node.
     (*node->mutable_attr())["dtype"].set_type(tensorflow::DT_INT32);
     tensorflow::TensorProto* allocated_tensor = new tensorflow::TensorProto;
-    tensorflow::TensorShapeProto* allocated_tesnor_shape =
+    tensorflow::TensorShapeProto* allocated_tensor_shape =
         new tensorflow::TensorShapeProto;
-    auto tensor_shape_dim0 = allocated_tesnor_shape->add_dim();
+    auto tensor_shape_dim0 = allocated_tensor_shape->add_dim();
     tensor_shape_dim0->set_size(values.size());
-    allocated_tensor->set_allocated_tensor_shape(allocated_tesnor_shape);
+    allocated_tensor->set_allocated_tensor_shape(allocated_tensor_shape);
     allocated_tensor->set_tensor_content(
         string(reinterpret_cast<const char*>(values.data()),
                values.size() * sizeof(int)));
|
@ -268,7 +268,7 @@ selectively register only for the operators used in your graph.
|
|||||||
```bash
|
```bash
|
||||||
tensorflow/contrib/makefile/build_all_ios.sh -a arm64 -g $HOME/graphs/inception/tensorflow_inception_graph.pb
|
tensorflow/contrib/makefile/build_all_ios.sh -a arm64 -g $HOME/graphs/inception/tensorflow_inception_graph.pb
|
||||||
```
|
```
|
||||||
Please note this is an aggresive optimization of the operators and the resulting library may not work with other graphs but will reduce the size of the final library.
|
Please note this is an aggressive optimization of the operators and the resulting library may not work with other graphs but will reduce the size of the final library.
|
||||||
|
|
||||||
The `compile_ios_tensorflow.sh` script can take optional command-line arguments.
|
The `compile_ios_tensorflow.sh` script can take optional command-line arguments.
|
||||||
The first argument will be passed as a C++ optimization flag and defaults to
|
The first argument will be passed as a C++ optimization flag and defaults to
|
||||||
|
@@ -52,7 +52,7 @@ shift $((OPTIND - 1))
 
 if [ "$ARCH" == "tegra" ]; then
   if [[ -z "${JETPACK}" ]]; then
-    export JETPACK="$HOME/JetPack_Android_3.0"
+    export JETPACK="$HOME/JetPack_Android_3.2"
   fi
   if [ ! -d ${JETPACK} ]; then
     echo "Can't find Jetpack at ${JETPACK}"
@@ -740,7 +740,7 @@ def _streaming_confusion_matrix_at_thresholds(predictions,
   else:
     for include in includes:
       if include not in all_includes:
-        raise ValueError('Invaild key: %s.' % include)
+        raise ValueError('Invalid key: %s.' % include)
 
   predictions, labels, weights = metrics_impl._remove_squeezable_dimensions(  # pylint: disable=protected-access
       predictions, labels, weights)
@@ -1516,7 +1516,7 @@ def precision_recall_at_equal_thresholds(labels,
     predictions: A floating point `Tensor` of arbitrary shape and whose values
       are in the range `[0, 1]`.
     weights: Optional; If provided, a `Tensor` that has the same dtype as,
-      and broadcastable to, `predictions`. This tensor is multplied by counts.
+      and broadcastable to, `predictions`. This tensor is multiplied by counts.
     num_thresholds: Optional; Number of thresholds, evenly distributed in
       `[0, 1]`. Should be `>= 2`. Defaults to 201. Note that the number of bins
       is 1 less than `num_thresholds`. Using an even `num_thresholds` value
@@ -56,7 +56,7 @@ class HeapBase {
 
   // This method adds an element at the end of the internal array without
   // "heapifying" the array afterwards. This is useful for setting up a heap
-  // where a single call to heapify at the end of the inital insertion
+  // where a single call to heapify at the end of the initial insertion
   // operations suffices.
   void InsertUnsorted(const KeyType& key, const DataType& data) {
     if (v_.size() == static_cast<size_t>(num_elements_)) {
@@ -150,7 +150,7 @@ class ElasticAverageOptimizer(optimizer.Optimizer):
     self._global_map = ea_custom_getter._global_map
 
     if moving_rate is None:
-      self._moving_rate = BETA / communication_period / num_worker
+      self._moving_rate = self.BETA / communication_period / num_worker
     else:
       self._moving_rate = moving_rate
     if rho is None:
@@ -110,7 +110,6 @@ def _get_workers(num_workers, steps, workers):
 
-
 class ModelAverageOptimizerTest(test.TestCase):
 
   def _run(self, train_op, sess):
     sess.run(train_op)
 
@@ -132,6 +132,7 @@ py_test(
 py_test(
     name = "for_loops_test",
     srcs = ["for_loops_test.py"],
+    srcs_version = "PY2AND3",
     deps = [
         ":test_lib",
         "//tensorflow/contrib/py2tf/pyct",
@@ -1605,6 +1605,7 @@ class WeightNormLSTMCellTest(test.TestCase):
     self.assertAllClose(expected_c, actual_c, 1e-5)
     self.assertAllClose(expected_h, actual_h, 1e-5)
 
+
   def testBasicCellWithNorm(self):
     """Tests cell w/o peepholes and with normalisation"""
 
@@ -331,7 +331,7 @@ def _luong_score(query, keys, scale):
   # batched matmul on:
   #   [batch_size, 1, depth] . [batch_size, depth, max_time]
   # resulting in an output shape of:
-  #   [batch_time, 1, max_time].
+  #   [batch_size, 1, max_time].
   # we then squeeze out the center singleton dimension.
   score = math_ops.matmul(query, keys, transpose_b=True)
   score = array_ops.squeeze(score, [1])
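The comment corrected in this hunk describes a batched matmul; its shapes can be verified with a plain NumPy sketch (illustrative sizes, not from the change):

```python
import numpy as np

batch_size, depth, max_time = 3, 4, 6
query = np.ones((batch_size, 1, depth))
keys = np.ones((batch_size, max_time, depth))

# Equivalent of matmul(query, keys, transpose_b=True):
# (batch_size, 1, depth) . (batch_size, depth, max_time)
score = query @ keys.transpose(0, 2, 1)  # (batch_size, 1, max_time)
score = np.squeeze(score, axis=1)        # (batch_size, max_time)
print(score.shape)  # (3, 6)
```

This confirms the intermediate shape is `[batch_size, 1, max_time]`, as the fixed comment states, not `[batch_time, 1, max_time]`.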
@@ -145,7 +145,7 @@ regular_variables_and_model_variables = slim.get_variables()
 
 How does this work? When you create a model variable via TF-Slim's layers or
 directly via the `slim.model_variable` function, TF-Slim adds the variable to
-a the `tf.GraphKeys.MODEL_VARIABLES` collection. What if you have your own
+the `tf.GraphKeys.MODEL_VARIABLES` collection. What if you have your own
 custom layers or variable creation routine but still want TF-Slim to manage or
 be aware of your model variables? TF-Slim provides a convenience function for
 adding the model variable to its collection:
@@ -28,6 +28,7 @@ from tensorflow.python.ops import array_ops
 from tensorflow.python.ops import control_flow_ops
 from tensorflow.python.ops import linalg_ops
 from tensorflow.python.ops import math_ops
+from tensorflow.python.ops import linalg_ops
 
 
 def conjugate_gradient(operator,
|
@ -20,7 +20,7 @@ from __future__ import print_function
|
|||||||
|
|
||||||
from setuptools import setup
|
from setuptools import setup
|
||||||
|
|
||||||
_VERSION = '1.5.0-rc1'
|
_VERSION = '1.6.0-rc0'
|
||||||
|
|
||||||
CONSOLE_SCRIPTS = [
|
CONSOLE_SCRIPTS = [
|
||||||
'capture_tpu_profile=cloud_tpu_profiler.main:run_main',
|
'capture_tpu_profile=cloud_tpu_profiler.main:run_main',
|
||||||
|
@@ -231,7 +231,7 @@ Refer to this link for all [Cloud TPU documentation](https://cloud.google.com/tp
 
 
 ### Profiling
-You can profile the `worker` by using instructions as spcified in the [Cloud TPU Tools](https://cloud.google.com/tpu/docs/cloud-tpu-tools).
+You can profile the `worker` by using instructions as specified in the [Cloud TPU Tools](https://cloud.google.com/tpu/docs/cloud-tpu-tools).
 
 
 ### Is `int64` supported?
@@ -25,9 +25,9 @@ The design is based on TensorFlow r1.0. An RDMA path is added between servers fo
 During the server setup, an RDMA manager is created to manage low-level RDMA components such as RDMA channel and RDMA adapter, an RDMA rendezvous manager is created to oversee send/recv operations between servers. Following the distributed TensorFlow design philosophy, the send operation is passive, i.e. merely placing a tensor in the local out-going table. It is the receive operation that actually initiates the tensor transfer.
 
 TensorFlow dynamically allocates memory for tensors that are to be sent or received. This causes difficulty for RDMA operations where pinned memory is required. Few remedies are possible:
-1. The memory is pinned, transfered, then unpinned for each and every tensor to be transferred. This incurs significant operation overhead since pinning and unpinning memory for each dynamically generated tensor is slow.
+1. The memory is pinned, transferred, then unpinned for each and every tensor to be transferred. This incurs significant operation overhead since pinning and unpinning memory for each dynamically generated tensor is slow.
 2. Buffer is pre-allocated and pinned for each tensor. This incurs large memory overhead and extra copying from the tensor to its pinned buffer, but may still be faster than the former.
-3. Following HKUST research on the use of GPU direct, and their [GDR implementation](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/gdr/README.md), there is a smart way to benefit from the TensorFlow allocation theme which is mostly pool based, i.e allocators pre-allocate a large memory block, and allocate the tensors from there. By attaching a custom Visitor to relevant alloactors, we can do a single registration of the entire memory block, which zeros the registration overhead. Once the block is registered, each new tensor allocated will be at a registred address, which will allow us to do direct RDMA writes to it.
+3. Following HKUST research on the use of GPU direct, and their [GDR implementation](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/gdr/README.md), there is a smart way to benefit from the TensorFlow allocation theme which is mostly pool based, i.e allocators pre-allocate a large memory block, and allocate the tensors from there. By attaching a custom Visitor to relevant allocators, we can do a single registration of the entire memory block, which zeros the registration overhead. Once the block is registered, each new tensor allocated will be at a registered address, which will allow us to do direct RDMA writes to it.
 
 For best performance, we will adopt HKUST 0 copies approach in our solution. This means:
 
@@ -77,7 +77,7 @@ When the receiver receives the **RDMA_MESSAGE_META_DATA_RESPONSE**, it will loca
 
 1. Update the local meta-data cache.
 2. Reallocate the result/proxy tensors.
-3. Re-send the tensor request. For tracability, the new message has a different name: **RDMA_MESSAGE_TENSOR_RE_REQUEST**.
+3. Re-send the tensor request. For traceability, the new message has a different name: **RDMA_MESSAGE_TENSOR_RE_REQUEST**.
 
 When the sender receives a **RDMA_MESSAGE_TENSOR_RE_REQUEST**, it will locate the relevant **RdmaTensorResponse** using the request index specified in the message, and invoke its **Resume()** method, which will RDMA write the contents of the tensor that was cloned earlier, to the new remote address specified in the re-request.
 
@@ -93,7 +93,7 @@ When the receiver receives the RDMA write, it will locate the relevant **RdmaTen
 
 1. When the sender receives a tensor request, the source tensor may or may not be ready yet. The situation is handled through a process of tag matching:
    * If the request arrives before the tensor is ready, then a callback is put in a local table, and will be invoked once the tensor arrives.
-   * If the tensor is ready before the request arives, than the tensor is put in a local table. When the request arrives, it will invoke the callback immediatly.
+   * If the tensor is ready before the request arrives, then the tensor is put in a local table. When the request arrives, it will invoke the callback immediately.
   In code it is done by calling **RecvLocalAsync()**, which receives the tensor's key, step-id, and the callback.
 2. When the callback is invoked, the relevant tensor is removed from the tag matching table. In the case where we need to send the tensor's meta-data, the **RdmaTensorResponse** will store a copy of the tensor until the re-request arrives.
 3. The sending of protocol messages (**RDMA_MESSAGE_TENSOR_REQUEST**, **RDMA_MESSAGE_META_DATA_RESPONSE** and **RDMA_MESSAGE_TENSOR_RE_REQUEST**) is done by the class **RdmaMessageBuffer**. All messages are sent using RDMA writes from/to fixed message buffers. This implies that we cannot send more than one message at a time on a specific channel. To synchronize the messages, the **RdmaMessageBuffer** holds the local and remote buffer statuses, which can be either busy or idle. When a write is issued, both statuses are changed to busy. When the write-complete event is received, the local status is changed to idle. When the write is received on the remote side, the remote side parses the message and returns an ACK, upon which the sending side updates the remote status to idle. When both the local and remote statuses are idle, the next message can be sent.
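The tag-matching step in item 1 above can be sketched as a table keyed by tensor name: whichever side arrives first (request callback or tensor) is parked in the table, and the later arrival completes the pair. Class and method names here are illustrative analogues, not the actual RDMA classes:

```python
class TagMatchTable:
    """Hypothetical sketch of the send-side tag matching described above."""
    def __init__(self):
        self.pending_callbacks = {}   # key -> callback waiting for a tensor
        self.ready_tensors = {}       # key -> tensor waiting for a request

    def recv_local_async(self, key, callback):
        # Request side: invoke immediately if the tensor already arrived.
        if key in self.ready_tensors:
            callback(self.ready_tensors.pop(key))
        else:
            self.pending_callbacks[key] = callback

    def tensor_ready(self, key, tensor):
        # Tensor side: invoke the parked callback if the request came first.
        if key in self.pending_callbacks:
            self.pending_callbacks.pop(key)(tensor)
        else:
            self.ready_tensors[key] = tensor

table = TagMatchTable()
results = []
table.recv_local_async("t1", results.append)   # request before tensor
table.tensor_ready("t1", "tensor-1")
table.tensor_ready("t2", "tensor-2")           # tensor before request
table.recv_local_async("t2", results.append)
```

Either arrival order leaves `results` holding both tensors, which is the property the real **RecvLocalAsync()** path relies on.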
@@ -115,7 +115,7 @@ When the receiver receives the RDMA write, it will locate the relevant **RdmaTen
   * Reallocate the result tensor (and proxy tensor if required).
   * Re-send the request to the remote side.
 * **RecvTensorContent()** - Receive tensor content from the remote side (RDMA write was completed).
-  * Decode proto if required and/or move to GPU if the content was not written to it directly (GPU direct is not avaliable).
+  * Decode proto if required and/or move to GPU if the content was not written to it directly (GPU direct is not available).
   * Invoke the done callback.
 * **class RdmaTensorResponse** - Holds and manages information for a single tensor response throughout the entire send cycle. API:
   * **Start()** - Start the response sequence.
@@ -153,7 +153,7 @@ When the receiver receives the RDMA write, it will locate the relevant **RdmaTen
   * request_index - Request index.
   * is_dead/data_type/tensor_shape/tensor_bytes - The up-to-date meta-data.
   * checksum - In data validation mode, this will hold the checksum of the source tensor.
-* **RDMA_MESSAGE_TENSOR_RE_REQUEST** - (receiver ==> sender) Tensor re-requset after meta-data update and reallocation of result/proxy tensors.
+* **RDMA_MESSAGE_TENSOR_RE_REQUEST** - (receiver ==> sender) Tensor re-request after meta-data update and reallocation of result/proxy tensors.
   * type - The message type.
   * name (name_size) - Name of the requested tensor.
   * step_id - Step ID.
@@ -66,7 +66,7 @@ class GrpcBufferWriter final
     }
     // It's dangerous to keep an inlined grpc_slice as the backup slice, since
     // on a following Next() call, a reference will be returned to this slice
-    // via GRPC_SLICE_START_PTR, which will not be an adddress held by
+    // via GRPC_SLICE_START_PTR, which will not be an address held by
     // slice_buffer_.
     have_backup_ = backup_slice_.refcount != NULL;
     byte_count_ -= count;
@@ -82,7 +82,7 @@ bool AttrDefEqual(const OpDef::AttrDef& a1, const OpDef::AttrDef& a2);
 uint64 AttrDefHash(const OpDef::AttrDef& a);
 
 // Returns true if all AttrDefs in `a1` equal corresponding AttrDefs in
-// `a2`. Corrspondence is established by name.
+// `a2`. Correspondence is established by name.
 bool RepeatedAttrDefEqual(const protobuf::RepeatedPtrField<OpDef::AttrDef>& a1,
                           const protobuf::RepeatedPtrField<OpDef::AttrDef>& a2);
 
@@ -474,7 +474,7 @@ std::unique_ptr<Tensor> OpKernelContext::forward_input(
     return nullptr;
   }
   // Check that input and output memory types match, i.e.
-  // that they either both live in host or both live in device memmory.
+  // that they either both live in host or both live in device memory.
   if (input_memory_type(input_index) != output_memory_type) {
     return nullptr;
   }
@@ -77,7 +77,7 @@ void ConstantOpTest::PersistentMemoryTrackingTest(bool on_gpu) {
     EXPECT_EQ(ctx.persistent_memory_allocated(), 480);
   }
 
-  // Remove memry leak errors.
+  // Remove memory leak errors.
   for (auto allocator_pair : ctx.wrapped_allocators()) {
     allocator_pair.second->GetRecordsAndUnRef();
   }
@@ -19,7 +19,7 @@ limitations under the License.
 
 namespace tensorflow {
 namespace functor {
-DEFINE_UNARY6(invert, int8, int16, int32, int64, uint8, uint16);
+DEFINE_UNARY8(invert, int8, int16, int32, int64, uint8, uint16, uint32, uint64);
 }  // namespace functor
 }  // namespace tensorflow
 
@@ -16,17 +16,17 @@ limitations under the License.
 #include "tensorflow/core/kernels/cwise_ops_common.h"
 
 namespace tensorflow {
-REGISTER6(UnaryOp, CPU, "Invert", functor::invert, int8, int16, int32, int64,
-          uint8, uint16);
+REGISTER8(UnaryOp, CPU, "Invert", functor::invert, int8, int16, int32, int64,
+          uint8, uint16, uint32, uint64);
 
 #ifdef TENSORFLOW_USE_SYCL
-REGISTER6(UnaryOp, SYCL, "Invert", functor::invert, int8, int16, int32, int64,
-          uint8, uint16);
+REGISTER8(UnaryOp, SYCL, "Invert", functor::invert, int8, int16, int32, int64,
+          uint8, uint16, uint32, uint64);
 #endif  // TENSORFLOW_USE_SYCL
 
 #if GOOGLE_CUDA
-REGISTER6(UnaryOp, GPU, "Invert", functor::invert, int8, int16, int32, int64,
-          uint8, uint16);
+REGISTER8(UnaryOp, GPU, "Invert", functor::invert, int8, int16, int32, int64,
+          uint8, uint16, uint32, uint64);
 #endif  // GOOGLE_CUDA
 
 }  // namespace tensorflow
@@ -136,6 +136,9 @@ struct ApproximateEqual<GPUDevice, T> {
 #define DEFINE_UNARY7(F, T0, T1, T2, T3, T4, T5, T6) \
   DEFINE_UNARY2(F, T0, T1);                          \
   DEFINE_UNARY5(F, T2, T3, T4, T5, T6)
+#define DEFINE_UNARY8(F, T0, T1, T2, T3, T4, T5, T6, T7) \
+  DEFINE_UNARY4(F, T0, T1, T2, T3);                      \
+  DEFINE_UNARY4(F, T4, T5, T6, T7)
 
 // Macros to explicitly instantiate kernels on GPU for multiple types
 // (T0, T1, etc.) for BinaryFunctor.
@@ -16,6 +16,7 @@ limitations under the License.
 // Functions to read images in GIF format.
 
 #include "tensorflow/core/lib/gif/gif_io.h"
+#include <algorithm>
 #include "tensorflow/core/lib/gtl/cleanup.h"
 #include "tensorflow/core/lib/strings/strcat.h"
 #include "tensorflow/core/platform/gif.h"
@@ -89,23 +90,52 @@ uint8* Decode(const void* srcdata, int datasize,
   uint8* const dstdata = allocate_output(num_frames, width, height, channel);
   if (!dstdata) return nullptr;
   for (int k = 0; k < num_frames; k++) {
+    uint8* this_dst = dstdata + k * width * channel * height;
+
     SavedImage* this_image = &gif_file->SavedImages[k];
     GifImageDesc* img_desc = &this_image->ImageDesc;
 
+    int imgLeft = img_desc->Left;
+    int imgTop = img_desc->Top;
+    int imgRight = img_desc->Left + img_desc->Width;
+    int imgBottom = img_desc->Top + img_desc->Height;
+
     if (img_desc->Left != 0 || img_desc->Top != 0 || img_desc->Width != width ||
         img_desc->Height != height) {
-      *error_string = strings::StrCat("can't process optimized gif");
-      return nullptr;
+      // If the first frame does not fill the entire canvas then return error.
+      if (k == 0) {
+        *error_string =
+            strings::StrCat("the first frame does not fill the canvas");
+        return nullptr;
+      }
+      // Otherwise previous frame will be reused to fill the unoccupied canvas.
+      imgLeft = std::max(imgLeft, 0);
+      imgTop = std::max(imgTop, 0);
+      imgRight = std::min(imgRight, width);
+      imgBottom = std::min(imgBottom, height);
+
+      uint8* last_dst = dstdata + (k - 1) * width * channel * height;
+      for (int i = 0; i < height; ++i) {
+        uint8* p_dst = this_dst + i * width * channel;
+        uint8* l_dst = last_dst + i * width * channel;
+        for (int j = 0; j < width; ++j) {
+          p_dst[j * channel + 0] = l_dst[j * channel + 0];
+          p_dst[j * channel + 1] = l_dst[j * channel + 1];
+          p_dst[j * channel + 2] = l_dst[j * channel + 2];
+        }
+      }
     }
 
     ColorMapObject* color_map = this_image->ImageDesc.ColorMap
                                     ? this_image->ImageDesc.ColorMap
                                     : gif_file->SColorMap;
 
-    uint8* this_dst = dstdata + k * width * channel * height;
-    for (int i = 0; i < height; ++i) {
+    for (int i = imgTop; i < imgBottom; ++i) {
       uint8* p_dst = this_dst + i * width * channel;
-      for (int j = 0; j < width; ++j) {
-        GifByteType color_index = this_image->RasterBits[i * width + j];
+      for (int j = imgLeft; j < imgRight; ++j) {
+        GifByteType color_index =
+            this_image->RasterBits[(i - img_desc->Top) * (img_desc->Width) +
+                                   (j - img_desc->Left)];
         const GifColorType& gif_color = color_map->Colors[color_index];
         p_dst[j * channel + 0] = gif_color.Red;
         p_dst[j * channel + 1] = gif_color.Green;
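The new decode path above composites an optimized (partial) GIF frame over the previous one: first copy the prior canvas, then blit only the sub-rectangle the frame covers. The same two steps with 2-D lists standing in for the pixel buffers (a sketch of the technique, not the C++ code):

```python
def composite(prev_canvas, frame, top, left):
    """Reuse the previous frame, then overwrite the frame's sub-rectangle."""
    canvas = [row[:] for row in prev_canvas]   # step 1: copy prior canvas
    for i, row in enumerate(frame):            # step 2: blit the sub-image
        for j, px in enumerate(row):
            canvas[top + i][left + j] = px
    return canvas

prev = [[0] * 4 for _ in range(3)]   # 3x4 canvas from frame k-1
frame = [[7, 7]]                     # 1x2 partial frame placed at (1, 1)
out = composite(prev, frame, 1, 1)
```

Pixels outside the blitted rectangle keep their previous-frame values, which is exactly why the first frame must fill the whole canvas: it has no predecessor to fall back on.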
@@ -478,7 +478,7 @@ class optional : private internal_optional::optional_data<T>,
     return *this;
   }
 
-  // Copy assigment, standard semantics.
+  // Copy assignment, standard semantics.
   optional& operator=(const optional& src) = default;
 
   // Move assignment, standard semantics.
@@ -49,7 +49,7 @@ RecordReaderOptions RecordReaderOptions::CreateRecordReaderOptions(
 #endif  // IS_SLIM_BUILD
   } else if (compression_type != compression::kNone) {
     LOG(ERROR) << "Unsupported compression_type:" << compression_type
-               << ". No comprression will be used.";
+               << ". No compression will be used.";
   }
   return options;
 }
@@ -455,6 +455,17 @@ REGISTER_OP("SampleDistortedBoundingBox")
     .Attr("use_image_if_no_bounding_boxes: bool = false")
     .SetIsStateful()
     .SetShapeFn([](InferenceContext* c) {
+      // Get inputs and validate ranks.
+      ShapeHandle image_size;
+      TF_RETURN_IF_ERROR(c->WithRank(c->input(0), 1, &image_size));
+      ShapeHandle bounding_boxes;
+      TF_RETURN_IF_ERROR(c->WithRank(c->input(1), 3, &bounding_boxes));
+      // image_size: 1-D with [height, width, channels]
+      // bounding_boxes: 3-D with shape [batch, N, 4]
+      DimensionHandle unused;
+      TF_RETURN_IF_ERROR(c->WithValue(c->Dim(image_size, 0), 3, &unused));
+      TF_RETURN_IF_ERROR(c->WithValue(c->Dim(bounding_boxes, 2), 4, &unused));
+
       c->set_output(0, c->Vector(3));
       c->set_output(1, c->Vector(3));
       c->set_output(2, c->MakeShape({1, 1, 4}));
@@ -477,6 +488,19 @@ REGISTER_OP("SampleDistortedBoundingBoxV2")
     .Attr("use_image_if_no_bounding_boxes: bool = false")
     .SetIsStateful()
     .SetShapeFn([](InferenceContext* c) {
+      // Get inputs and validate ranks.
+      ShapeHandle image_size;
+      TF_RETURN_IF_ERROR(c->WithRank(c->input(0), 1, &image_size));
+      ShapeHandle bounding_boxes;
+      TF_RETURN_IF_ERROR(c->WithRank(c->input(1), 3, &bounding_boxes));
+      ShapeHandle min_object_covered;
+      TF_RETURN_IF_ERROR(c->WithRank(c->input(2), 0, &min_object_covered));
+      // image_size: 1-D with [height, width, channels]
+      // bounding_boxes: 3-D with shape [batch, N, 4]
+      DimensionHandle unused;
+      TF_RETURN_IF_ERROR(c->WithValue(c->Dim(image_size, 0), 3, &unused));
+      TF_RETURN_IF_ERROR(c->WithValue(c->Dim(bounding_boxes, 2), 4, &unused));
+
       c->set_output(0, c->Vector(3));
       c->set_output(1, c->Vector(3));
       c->set_output(2, c->MakeShape({1, 1, 4}));
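The shape functions added above validate input ranks and fixed dimension sizes before declaring the output shapes. The same checks in a stand-alone sketch, where `validate_rank` and `validate_dim` are hypothetical stand-ins for `WithRank` and `WithValue` (shapes are plain lists of dimension sizes):

```python
def validate_rank(shape, rank):
    if len(shape) != rank:
        raise ValueError(f"expected rank {rank}, got {len(shape)}")
    return shape

def validate_dim(shape, index, value):
    if shape[index] != value:
        raise ValueError(f"expected dim {index} == {value}, got {shape[index]}")

def sample_distorted_bbox_shape_fn(image_size, bounding_boxes):
    validate_rank(image_size, 1)        # image_size: [height, width, channels]
    validate_rank(bounding_boxes, 3)    # bounding_boxes: [batch, N, 4]
    validate_dim(image_size, 0, 3)      # the size vector must have 3 entries
    validate_dim(bounding_boxes, 2, 4)  # each box is [y_min, x_min, y_max, x_max]
    return [3], [3], [1, 1, 4]          # begin, size, bboxes output shapes

out = sample_distorted_bbox_shape_fn([3], [2, 5, 4])
```

Failing early in the shape function turns a malformed graph into a construction-time error instead of a runtime kernel failure.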
@@ -44,6 +44,9 @@ limitations under the License.
 
 namespace tensorflow {
 
+// 128KB copy buffer
+constexpr size_t kCopyFileBufferSize = 128 * 1024;
+
 class FileSystemRegistryImpl : public FileSystemRegistry {
  public:
   Status Register(const string& scheme, Factory factory) override;
@@ -278,6 +281,17 @@ Status Env::RenameFile(const string& src, const string& target) {
   return src_fs->RenameFile(src, target);
 }
 
+Status Env::CopyFile(const string& src, const string& target) {
+  FileSystem* src_fs;
+  FileSystem* target_fs;
+  TF_RETURN_IF_ERROR(GetFileSystemForFile(src, &src_fs));
+  TF_RETURN_IF_ERROR(GetFileSystemForFile(target, &target_fs));
+  if (src_fs == target_fs) {
+    return src_fs->CopyFile(src, target);
+  }
+  return FileSystemCopyFile(src_fs, src, target_fs, target);
+}
+
 string Env::GetExecutablePath() {
   char exe_path[PATH_MAX] = {0};
 #ifdef __APPLE__
@@ -406,6 +420,29 @@ Status WriteStringToFile(Env* env, const string& fname,
   return s;
 }
 
+Status FileSystemCopyFile(FileSystem* src_fs, const string& src,
+                          FileSystem* target_fs, const string& target) {
+  std::unique_ptr<RandomAccessFile> src_file;
+  TF_RETURN_IF_ERROR(src_fs->NewRandomAccessFile(src, &src_file));
+
+  std::unique_ptr<WritableFile> target_file;
+  TF_RETURN_IF_ERROR(target_fs->NewWritableFile(target, &target_file));
+
+  uint64 offset = 0;
+  std::unique_ptr<char[]> scratch(new char[kCopyFileBufferSize]);
+  Status s = Status::OK();
+  while (s.ok()) {
+    StringPiece result;
+    s = src_file->Read(offset, kCopyFileBufferSize, &result, scratch.get());
+    if (!(s.ok() || s.code() == error::OUT_OF_RANGE)) {
+      return s;
+    }
+    TF_RETURN_IF_ERROR(target_file->Append(result));
+    offset += result.size();
+  }
+  return target_file->Close();
+}
+
 // A ZeroCopyInputStream on a RandomAccessFile.
 namespace {
 class FileStream : public ::tensorflow::protobuf::io::ZeroCopyInputStream {
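The `FileSystemCopyFile` added above streams the source in fixed-size chunks until an out-of-range (end-of-file) read, appending each chunk to the target. The same loop over Python file objects, with the buffer size mirroring `kCopyFileBufferSize`:

```python
import io

COPY_BUFFER_SIZE = 128 * 1024  # mirrors kCopyFileBufferSize (128KB)

def copy_stream(src, dst):
    """Copy src to dst in fixed-size chunks until EOF."""
    while True:
        chunk = src.read(COPY_BUFFER_SIZE)
        if not chunk:          # EOF plays the role of error::OUT_OF_RANGE
            break
        dst.write(chunk)

src = io.BytesIO(b"x" * 300000)   # larger than one buffer, so multiple reads
dst = io.BytesIO()
copy_stream(src, dst)
```

Keeping the buffer fixed bounds memory use regardless of file size, which is why both file systems in the diff share this cross-filesystem fallback.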
@@ -214,6 +214,9 @@ class Env {
   /// replaced.
   Status RenameFile(const string& src, const string& target);
 
+  /// \brief Copy the src to target.
+  Status CopyFile(const string& src, const string& target);
+
   /// \brief Returns the absolute path of the current executable. It resolves
   /// symlinks if there is any.
   string GetExecutablePath();
@@ -381,6 +384,11 @@ struct ThreadOptions {
   size_t guard_size = 0;  // 0: use system default value
 };
 
+/// A utility routine: copy contents of `src` in file system `src_fs`
+/// to `target` in file system `target_fs`.
+Status FileSystemCopyFile(FileSystem* src_fs, const string& src,
+                          FileSystem* target_fs, const string& target);
+
 /// A utility routine: reads contents of named file into `*data`
 Status ReadFileToString(Env* env, const string& fname, string* data);
 
@@ -265,4 +265,8 @@ Status FileSystem::RecursivelyCreateDir(const string& dirname) {
   return Status::OK();
 }
 
+Status FileSystem::CopyFile(const string& src, const string& target) {
+  return FileSystemCopyFile(this, src, this, target);
+}
+
 }  // namespace tensorflow
@@ -189,6 +189,9 @@ class FileSystem {
   /// \brief Overwrites the target if it exists.
   virtual Status RenameFile(const string& src, const string& target) = 0;
 
+  /// \brief Copy the src to target.
+  virtual Status CopyFile(const string& src, const string& target);
+
   /// \brief Translate an URI to a filename for the FileSystem implementation.
   ///
   /// The implementation in this class cleans up the path, removing
@@ -18,6 +18,9 @@ limitations under the License.
 #include <fcntl.h>
 #include <stdio.h>
 #include <sys/mman.h>
+#if !defined(__APPLE__)
+#include <sys/sendfile.h>
+#endif
 #include <sys/stat.h>
 #include <sys/time.h>
 #include <sys/types.h>
@@ -34,6 +37,9 @@ limitations under the License.
 
 namespace tensorflow {
 
+// 128KB of copy buffer
+constexpr size_t kPosixCopyFileBufferSize = 128 * 1024;
+
 // pread() based random-access
 class PosixRandomAccessFile : public RandomAccessFile {
  private:
@@ -276,4 +282,70 @@ Status PosixFileSystem::RenameFile(const string& src, const string& target) {
   return result;
 }
 
+Status PosixFileSystem::CopyFile(const string& src, const string& target) {
+  string translated_src = TranslateName(src);
+  struct stat sbuf;
+  if (stat(translated_src.c_str(), &sbuf) != 0) {
+    return IOError(src, errno);
+  }
+  int src_fd = open(translated_src.c_str(), O_RDONLY);
+  if (src_fd < 0) {
+    return IOError(src, errno);
+  }
+  string translated_target = TranslateName(target);
+  // O_WRONLY | O_CREAT:
+  //   Open file for write and if file does not exist, create the file.
+  // S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH:
+  //   Create the file with permission of 0644
+  int target_fd = open(translated_target.c_str(), O_WRONLY | O_CREAT,
+                       S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH);
+  if (target_fd < 0) {
+    close(src_fd);
+    return IOError(target, errno);
+  }
+  int rc = 0;
+  off_t offset = 0;
+  std::unique_ptr<char[]> buffer(new char[kPosixCopyFileBufferSize]);
+  while (offset < sbuf.st_size) {
+    // Use uint64 for safe compare SSIZE_MAX
+    uint64 chunk = sbuf.st_size - offset;
+    if (chunk > SSIZE_MAX) {
+      chunk = SSIZE_MAX;
+    }
+#if defined(__linux__) && !defined(__ANDROID__)
+    rc = sendfile(target_fd, src_fd, &offset, static_cast<size_t>(chunk));
+#else
+    if (chunk > kPosixCopyFileBufferSize) {
+      chunk = kPosixCopyFileBufferSize;
+    }
+    rc = read(src_fd, buffer.get(), static_cast<size_t>(chunk));
+    if (rc <= 0) {
+      break;
+    }
+    rc = write(target_fd, buffer.get(), static_cast<size_t>(chunk));
+    offset += chunk;
+#endif
+    if (rc <= 0) {
+      break;
+    }
+  }
+
+  Status result = Status::OK();
+  if (rc < 0) {
+    result = IOError(target, errno);
+  }
+
+  // Keep the error code
+  rc = close(target_fd);
+  if (rc < 0 && result == Status::OK()) {
+    result = IOError(target, errno);
+  }
+  rc = close(src_fd);
+  if (rc < 0 && result == Status::OK()) {
+    result = IOError(target, errno);
+  }
+
+  return result;
+}
+
 }  // namespace tensorflow
@ -56,6 +56,8 @@ class PosixFileSystem : public FileSystem {
   Status GetFileSize(const string& fname, uint64* size) override;

   Status RenameFile(const string& src, const string& target) override;
+
+  Status CopyFile(const string& src, const string& target) override;
 };

 Status IOError(const string& context, int err_number);

@ -19,12 +19,12 @@ limitations under the License.
 // TensorFlow uses semantic versioning, see http://semver.org/.

 #define TF_MAJOR_VERSION 1
-#define TF_MINOR_VERSION 5
+#define TF_MINOR_VERSION 6
 #define TF_PATCH_VERSION 0

 // TF_VERSION_SUFFIX is non-empty for pre-releases (e.g. "-alpha", "-alpha.1",
 // "-beta", "-rc", "-rc.1")
-#define TF_VERSION_SUFFIX ""
+#define TF_VERSION_SUFFIX "-rc0"

 #define TF_STR_HELPER(x) #x
 #define TF_STR(x) TF_STR_HELPER(x)

@ -17,7 +17,6 @@ initialized with parameters that define the distributions.

 * @{tf.contrib.distributions.Binomial}
 * @{tf.contrib.distributions.Bernoulli}
-* @{tf.contrib.distributions.BernoulliWithSigmoidProbs}
 * @{tf.contrib.distributions.Beta}
 * @{tf.contrib.distributions.Categorical}
 * @{tf.contrib.distributions.Chi2}

@ -38,7 +38,7 @@ The preceding examples rely on the following data set utility:
 <tr> <th>Utility</th> <th>Description</th></tr>

 <tr>
-  <td><a href="../../examples/get_started/regression/imports85.py">imports85.py</a></td>
+  <td><a href="https://www.tensorflow.org/code/tensorflow/examples/get_started/regression/imports85.py">imports85.py</a></td>
   <td>This program provides utility functions that load the
   <tt>imports85</tt> data set into formats that other TensorFlow
   programs (for example, <tt>linear_regression.py</tt> and

@ -65,5 +65,5 @@ please read the following list carefully:
   on GitHub. For example, use the issue tracker to request a
   new operation in TensorFlow.
 * To report vulnerabilities, please follow our
-  [vulnerability disclosure guidelines](https://github.com/tensorflow/tensorflow/blob/master/SECURITY.md).
+  [vulnerability disclosure guidelines](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/SECURITY.md).

@ -705,7 +705,7 @@ for pred_dict, expec in zip(predictions, expected):

     class_id = pred_dict['class_ids'][0]
     probability = pred_dict['probabilities'][class_id]
-    print(template.format(SPECIES[class_id], 100 * probability, expec))
+    print(template.format(iris_data.SPECIES[class_id], 100 * probability, expec))
 ```

 Running the program yields the following output:

@ -38,7 +38,7 @@ enable TensorFlow for C:
          OS="linux" # Change to "darwin" for macOS
          TARGET_DIRECTORY="/usr/local"
          curl -L \
-           "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-${OS}-x86_64-1.5.0.tar.gz" |
+           "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-${OS}-x86_64-1.6.0-rc0.tar.gz" |
          sudo tar -C $TARGET_DIRECTORY -xz

 The `tar` command extracts the TensorFlow C library into the `lib`

@ -38,7 +38,7 @@ steps to install this library and enable TensorFlow for Go:
          TF_TYPE="cpu" # Change to "gpu" for GPU support
          TARGET_DIRECTORY='/usr/local'
          curl -L \
-           "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-$(go env GOOS)-x86_64-1.5.0.tar.gz" |
+           "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-$(go env GOOS)-x86_64-1.6.0-rc0.tar.gz" |
          sudo tar -C $TARGET_DIRECTORY -xz

 The `tar` command extracts the TensorFlow C library into the `lib`

|
Some files were not shown because too many files have changed in this diff Show More
Loading…
Reference in New Issue
Block a user