Compare commits
13 Commits
v2.3.2...rei/fork-r
Author | SHA1 | Date
---|---|---
 | 27a1657c4f |
 | 4bdd395511 |
 | 9173bdff3d |
 | 182e869fb8 |
 | dc74c09b8d |
 | 90f5b25508 |
 | 9b67f161e5 |
 | 811608d4e4 |
 | ca5d9fdf5c |
 | 23ad988fcd |
 | 6dc2a1becf |
 | 4336a5b49f |
 | 6fad14b203 |
26 .bazelrc
@@ -94,6 +94,9 @@ build:libc++ --linkopt -fuse-ld=lld
# https://docs.bazel.build/versions/master/user-manual.html#flag--fat_apk_cpu
build:android --crosstool_top=//external:android/crosstool
build:android --host_crosstool_top=@bazel_tools//tools/cpp:toolchain
build:android --copt=-D_GLIBCXX_USE_C99
build:android --cxxopt=-std=c++14
build:android --action_env ANDROID_NDK_API_LEVEL=21
build:android_arm --config=android
build:android_arm --cpu=armeabi-v7a
build:android_arm --fat_apk_cpu=armeabi-v7a
@@ -202,6 +205,29 @@ build:sycl_asan --copt -fno-omit-frame-pointer --copt -fsanitize-coverage=3 --co
build:sycl_nodouble --config=sycl
build:sycl_trisycl --define=using_trisycl=true

build --copt=-DTFLITE_WITH_RUY_GEMV

build:rpi3 --host_crosstool_top=@bazel_tools//tools/cpp:toolchain
build:rpi3 --crosstool_top=//third_party/toolchains/embedded/linaro-gcc72-armeabi:toolchain
build:rpi3 --cpu=armv7a --define=target_system=rpi3
build:rpi3 --copt=-march=armv7-a --copt=-mtune=cortex-a53 --copt=-mfloat-abi=hard --copt=-mfpu=neon-fp-armv8 --copt=-DRASPBERRY_PI --copt=-D_GLIBCXX_USE_CXX11_ABI=0 --copt=-std=gnu99 --copt=-mno-unaligned-access
build:rpi3 --define=tensorflow_mkldnn_contraction_kernel=0
build:rpi3_opt -c opt --config=rpi3 --copt=-funsafe-math-optimizations --copt=-ftree-vectorize --copt=-pipe

build:rpi3-armv8 --host_crosstool_top=@bazel_tools//tools/cpp:toolchain
build:rpi3-armv8 --crosstool_top=//third_party/toolchains/embedded/linaro-gcc72-aarch64:toolchain
build:rpi3-armv8 --cpu=aarch64 --define=target_system=rpi3-armv8
build:rpi3-armv8 --copt=-march=armv8-a --copt=-mtune=cortex-a53 --copt=-DRASPBERRY_PI --copt=-D_GLIBCXX_USE_CXX11_ABI=0 --copt=-std=gnu99
build:rpi3-armv8 --define=tensorflow_mkldnn_contraction_kernel=0
build:rpi3-armv8_opt -c opt --config=rpi3-armv8 --copt=-funsafe-math-optimizations --copt=-ftree-vectorize --copt=-pipe

build:rpi4ub-armv8 --host_crosstool_top=@bazel_tools//tools/cpp:toolchain
build:rpi4ub-armv8 --crosstool_top=//third_party/toolchains/embedded/linaro-gcc72-aarch64:toolchain
build:rpi4ub-armv8 --cpu=aarch64 --define=target_system=rpi4ub-armv8
build:rpi4ub-armv8 --copt=-march=armv8-a --copt=-mtune=cortex-a72 --copt=-DRASPBERRY_PI --copt=-D_GLIBCXX_USE_CXX11_ABI=0 --copt=-std=gnu99
build:rpi4ub-armv8 --define=tensorflow_mkldnn_contraction_kernel=0
build:rpi4ub-armv8_opt -c opt --config=rpi4ub-armv8 --copt=-funsafe-math-optimizations --copt=-ftree-vectorize --copt=-pipe

# Options extracted from configure script
build:ngraph --define=with_ngraph_support=true
build:numa --define=with_numa_support=true
15 .github/pull_request_template.md (vendored, new file)
@@ -0,0 +1,15 @@
# Pull request guidelines

Welcome to the 🐸tensorflow project! We are excited to see your interest, and appreciate your support!

This repository is governed by the Contributor Covenant Code of Conduct. For more details, see the [CODE_OF_CONDUCT.md](CODE_OF_CONDUCT.md) file.

In order to make a good pull request, please see our [CONTRIBUTING.md](CONTRIBUTING.md) file.

Before accepting your pull request, you will be asked to sign a [Contributor License Agreement](https://cla-assistant.io/coqui-ai/tensorflow).

This [Contributor License Agreement](https://cla-assistant.io/coqui-ai/tensorflow):

- Protects you, Coqui, and the users of the code.
- Does not change your rights to use your contributions for any purpose.
- Does not change the license of the 🐸tensorflow project. It just makes the terms of your contribution clearer and lets us know you are OK to contribute.
75 RELEASE.md
@@ -1,78 +1,3 @@
# Release 2.3.2

## Bug Fixes and Other Changes
* Fixes an access to unitialized memory in Eigen code
  ([CVE-2020-26266](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-26266))
* Fixes a security vulnerability caused by lack of validation in
  `tf.raw_ops.DataFormatVecPermute` and `tf.raw_ops.DataFormatDimMap`
  ([CVE-2020-26267](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-26267))
* Fixes a vulnerability caused by attempting to write to immutable memory region in
  `tf.raw_ops.ImmutableConst`
  ([CVE-2020-26268](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-26268)
* Fixes a `CHECK`-fail in LSTM with zero-length input
  ([CVE-2020-26270](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-26270))
* Fixes a security vulnerability caused by accessing heap data outside of bounds
  when loading a specially crafted `SavedModel`
  ([CVE-2020-26271](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-26271))
* Solves an OOM issue on TPUs when XLA contexts use fused average updates
* Updates `libjpeg-turbo` to `2.0.5` to handle
  [CVE-2020-13790](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13790).
* Updates `junit` to `4.13.1` to handle
  [CVE-2020-15250](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15250).
* Updates `PCRE` to `8.44` to handle
  [CVE-2019-20838](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20838)
  and
  [CVE-2020-14155](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14155).
* Updates `sqlite3` to `3.44.0` to keep in sync with master branch.

# Release 2.3.1

## Bug Fixes and Other Changes
* Fixes an undefined behavior causing a segfault in `tf.raw_ops.Switch`
  ([CVE-2020-15190](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15190))
* Fixes three vulnerabilities in conversion to DLPack format
  ([CVE-2020-15191](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15191),
  [CVE-2020-15192](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15192),
  [CVE-2020-15193](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15193))
* Fixes two vulnerabilities in `SparseFillEmptyRowsGrad`
  ([CVE-2020-15194](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15194),
  [CVE-2020-15195](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15195))
* Fixes several vulnerabilities in `RaggedCountSparseOutput` and
  `SparseCountSparseOutput` operations
  ([CVE-2020-15196](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15196),
  [CVE-2020-15197](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15197),
  [CVE-2020-15198](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15198),
  [CVE-2020-15199](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15199),
  [CVE-2020-15200](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15200),
  [CVE-2020-15201](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15201))
* Fixes an integer truncation vulnerability in code using the work sharder API
  ([CVE-2020-15202](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15202))
* Fixes a format string vulnerability in `tf.strings.as_string`
  ([CVE-2020-15203](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15203))
* Fixes segfault raised by calling session-only ops in eager mode
  ([CVE-2020-15204](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15204))
* Fixes data leak and potential ASLR violation from `tf.raw_ops.StringNGrams`
  ([CVE-2020-15205](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15205))
* Fixes segfaults caused by incomplete `SavedModel` validation
  ([CVE-2020-15206](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15206))
* Fixes a data corruption due to a bug in negative indexing support in TFLite
  ([CVE-2020-15207](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15207))
* Fixes a data corruption due to dimension mismatch in TFLite
  ([CVE-2020-15208](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15208))
* Fixes several vulnerabilities in TFLite saved model format
  ([CVE-2020-15209](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15209),
  [CVE-2020-15210](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15210),
  [CVE-2020-15211](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15211))
* Fixes several vulnerabilities in TFLite implementation of segment sum
  ([CVE-2020-15212](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15212),
  [CVE-2020-15213](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15213),
  [CVE-2020-15214](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15214))
* Updates `sqlite3` to `3.33.00` to handle
  [CVE-2020-15358](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15358).
* Fixes deprecated usage of `collections` API
* Removes `scipy` dependency from `setup.py` since TensorFlow does not need it
  to install the pip package

# Release 2.3.0

## Major Features and Improvements
12 WORKSPACE
@@ -18,6 +18,18 @@ load("//tensorflow:workspace.bzl", "tf_repositories")
# Please add all new TensorFlow dependencies in workspace.bzl.
tf_repositories()

load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")

git_repository(
    name = "com_github_nelhage_rules_boost",
    commit = "1e3a69bf2d5cd10c34b74f066054cd335d033d71",
    remote = "https://github.com/nelhage/rules_boost",
    shallow_since = "1591047380 -0700",
)

load("@com_github_nelhage_rules_boost//:boost/boost.bzl", "boost_deps")
boost_deps()

register_toolchains("@local_config_python//:py_toolchain")

load("@io_bazel_rules_closure//closure:defs.bzl", "closure_repositories")
1 native_client (Symbolic link)
@@ -0,0 +1 @@
../native_client
@@ -248,36 +248,21 @@ void TFE_CallDLManagedTensorDeleter(void* dlm_ptr) {
}

void* TFE_HandleToDLPack(TFE_TensorHandle* h, TF_Status* status) {
auto tf_dlm_context = GetDlContext(h, status);
if (!status->status.ok()) {
return nullptr;
}

auto* tf_dlm_data = TFE_TensorHandleDevicePointer(h, status);
if (!status->status.ok()) {
return nullptr;
}

const Tensor* tensor = GetTensorFromHandle(h, status);
TF_DataType data_type = static_cast<TF_DataType>(tensor->dtype());

auto tf_dlm_type = GetDlDataType(data_type, status);
if (!status->status.ok()) {
return nullptr;
}

TensorReference tensor_ref(*tensor);  // This will call buf_->Ref()

auto* tf_dlm_tensor_ctx = new TfDlManagedTensorCtx(tensor_ref);
tf_dlm_tensor_ctx->reference = tensor_ref;

DLManagedTensor* dlm_tensor = &tf_dlm_tensor_ctx->tensor;
dlm_tensor->manager_ctx = tf_dlm_tensor_ctx;
dlm_tensor->deleter = &DLManagedTensorDeleter;
dlm_tensor->dl_tensor.ctx = tf_dlm_context;
dlm_tensor->dl_tensor.ctx = GetDlContext(h, status);
int ndim = tensor->dims();
dlm_tensor->dl_tensor.ndim = ndim;
dlm_tensor->dl_tensor.data = tf_dlm_data;
dlm_tensor->dl_tensor.dtype = tf_dlm_type;
dlm_tensor->dl_tensor.data = TFE_TensorHandleDevicePointer(h, status);
dlm_tensor->dl_tensor.dtype = GetDlDataType(data_type, status);

std::vector<int64_t>* shape_arr = &tf_dlm_tensor_ctx->shape;
std::vector<int64_t>* stride_arr = &tf_dlm_tensor_ctx->strides;
@@ -290,14 +275,13 @@ void* TFE_HandleToDLPack(TFE_TensorHandle* h, TF_Status* status) {
(*stride_arr)[i] = (*shape_arr)[i + 1] * (*stride_arr)[i + 1];
}

dlm_tensor->dl_tensor.shape = shape_arr->data();
dlm_tensor->dl_tensor.shape = &(*shape_arr)[0];
// There are two ways to represent compact row-major data
// 1) nullptr indicates tensor is compact and row-majored.
// 2) fill in the strides array as the real case for compact row-major data.
// Here we choose option 2, since some frameworks didn't handle the strides
// argument properly.
dlm_tensor->dl_tensor.strides = stride_arr->data();

dlm_tensor->dl_tensor.strides = &(*stride_arr)[0];
dlm_tensor->dl_tensor.byte_offset =
0;  // TF doesn't handle the strides and byte_offsets here
return static_cast<void*>(dlm_tensor);
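Note: the comment in the hunk above chooses option 2, publishing an explicit strides array even though the tensor is compact and row-major. As an illustration only (plain C++, not part of the diff, assuming a simple int64_t shape vector), a minimal sketch of that stride computation:

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Compact row-major strides: the innermost dimension has stride 1 and each
// outer stride is the product of all inner dimensions, mirroring the loop
// `(*stride_arr)[i] = (*shape_arr)[i + 1] * (*stride_arr)[i + 1]` above.
std::vector<int64_t> RowMajorStrides(const std::vector<int64_t>& shape) {
  std::vector<int64_t> strides(shape.size(), 1);
  for (int i = static_cast<int>(shape.size()) - 2; i >= 0; --i) {
    strides[i] = shape[i + 1] * strides[i + 1];
  }
  return strides;
}

int main() {
  // A 2x3x4 row-major tensor has strides 12, 4, 1.
  for (int64_t s : RowMajorStrides({2, 3, 4})) std::cout << s << " ";
  std::cout << "\n";
}
```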
@@ -21,7 +21,6 @@ limitations under the License.
#include "tensorflow/cc/saved_model/loader_util.h"
#include "tensorflow/cc/saved_model/reader.h"
#include "tensorflow/core/framework/attr_value.pb.h"
#include "tensorflow/core/framework/function.pb.h"
#include "tensorflow/core/framework/node_def.pb.h"
#include "tensorflow/core/framework/tensor.pb.h"
#include "tensorflow/core/lib/io/path.h"
@@ -73,41 +72,26 @@ uint64 GetLatencyMicroseconds(const uint64 start_microseconds) {
// Ensure that constant tensors loaded from the saved model have valid shape.
// Also ensure that constant nodes have a value assigned to them.
// TODO(b/154763635): this is temporary and will be replaced with a better audit
static Status ValidateNode(const NodeDef& node) {
const auto node_iterator = node.attr().find("value");
if (node_iterator != node.attr().end()) {
AttrValue node_value = node_iterator->second;
if (node_value.has_tensor()) {
const PartialTensorShape node_shape(node_value.tensor().tensor_shape());
if (node_shape.num_elements() < 0) {
return errors::FailedPrecondition(
"Saved model contains node \"", node.name(), "\" (op \"", node.op(),
"\") which initializes from a tensor with ",
node_shape.num_elements(), " elements");
}
}
} else if (node.op() == "Const") {
return errors::FailedPrecondition(
"Saved model contains node \"", node.name(),
"\" which is a constant tensor but no value has been provided");
}
return Status::OK();
}

static Status ValidateSavedTensors(const GraphDef& graph_def) {
for (const auto& node : graph_def.node()) {
TF_RETURN_IF_ERROR(ValidateNode(node));
}

if (graph_def.has_library()) {
const FunctionDefLibrary& library = graph_def.library();
for (const auto& function : library.function()) {
for (const auto& node : function.node_def()) {
TF_RETURN_IF_ERROR(ValidateNode(node));
const auto node_iterator = node.attr().find("value");
if (node_iterator != node.attr().end()) {
AttrValue node_value = node_iterator->second;
if (node_value.has_tensor()) {
const PartialTensorShape node_shape(node_value.tensor().tensor_shape());
if (node_shape.num_elements() < 0) {
return errors::FailedPrecondition(
"Saved model contains node \"", node.name(), "\" (op \"",
node.op(), "\") which initializes from a tensor with ",
node_shape.num_elements(), " elements");
}
}
} else if (node.op() == "Const") {
return errors::FailedPrecondition(
"Saved model contains node \"", node.name(),
"\" which is a constant tensor but no value has been provided");
}
}

return Status::OK();
}
@@ -307,12 +307,7 @@ Status KernelAndDeviceOp::Run(
if (outputs != nullptr) {
outputs->clear();
for (int i = 0; i < context.num_outputs(); ++i) {
const auto* output_tensor = context.mutable_output(i);
if (output_tensor != nullptr) {
outputs->push_back(Tensor(*output_tensor));
} else {
outputs->push_back(Tensor());
}
outputs->push_back(Tensor(*context.mutable_output(i)));
}
}
return Status::OK();
@@ -44,7 +44,6 @@ limitations under the License.
#include "tensorflow/core/lib/gtl/inlined_vector.h"
#include "tensorflow/core/lib/strings/scanner.h"
#include "tensorflow/core/lib/strings/str_util.h"
#include "tensorflow/core/platform/errors.h"
#include "tensorflow/core/platform/logging.h"
#include "tensorflow/core/platform/macros.h"
#include "tensorflow/core/public/version.h"
@@ -1426,17 +1425,6 @@ void GraphConstructor::Undo() {

Status GraphConstructor::MakeEdge(Node* src, int output_index, Node* dst,
int input_index) {
if (output_index >= src->num_outputs()) {
return errors::InvalidArgument(
"Output ", output_index, " of node ", src->name(),
" does not exist. Node only has ", src->num_outputs(), " outputs.");
}
if (input_index >= dst->num_inputs()) {
return errors::InvalidArgument(
"Input ", input_index, " of node ", dst->name(),
" does not exist. Node only has ", dst->num_inputs(), " inputs.");
}

DataType src_out = src->output_type(output_index);
DataType dst_in = dst->input_type(input_index);
if (!TypesCompatible(dst_in, src_out)) {
@@ -5864,15 +5864,15 @@ cc_library(
":string_format_op",
":string_join_op",
":string_length_op",
":string_lower_op",
# ":string_lower_op",
":string_ngrams_op",
":string_split_op",
":string_strip_op",
":string_to_hash_bucket_op",
":string_upper_op",
# ":string_upper_op",
":substr_op",
":unicode_ops",
":unicode_script_op",
# ":unicode_ops",
# ":unicode_script_op",
":unsorted_segment_join_op",
],
)
@@ -5885,7 +5885,7 @@ cc_library(
"//tensorflow/core:framework",
"//tensorflow/core:lib",
"//tensorflow/core:protos_all_cc",
"@icu//:common",
# "@icu//:common",
],
)

@@ -6041,7 +6041,7 @@ tf_kernel_library(
prefix = "string_lower_op",
deps = STRING_DEPS + [
"@com_google_absl//absl/strings",
"@icu//:common",
# "@icu//:common",
],
)

@@ -6050,7 +6050,7 @@ tf_kernel_library(
prefix = "string_upper_op",
deps = STRING_DEPS + [
"@com_google_absl//absl/strings",
"@icu//:common",
# "@icu//:common",
],
)

@@ -6085,24 +6085,6 @@ tf_kernel_library(
deps = STRING_DEPS,
)

tf_cc_test(
name = "as_string_op_test",
size = "small",
srcs = ["as_string_op_test.cc"],
deps = [
":as_string_op",
":ops_testutil",
":ops_util",
"//tensorflow/core:core_cpu",
"//tensorflow/core:framework",
"//tensorflow/core:lib",
"//tensorflow/core:protos_all_cc",
"//tensorflow/core:test",
"//tensorflow/core:test_main",
"//tensorflow/core:testlib",
],
)

tf_kernel_library(
name = "unicode_ops",
prefix = "unicode_ops",
@@ -6114,7 +6096,7 @@ tf_kernel_library(
"//tensorflow/core:lib_internal",
"//third_party/eigen3",
"//third_party/icu/data:conversion_data",
"@icu//:common",
# "@icu//:common",
],
)

@@ -7143,10 +7125,10 @@ filegroup(
"mutex_ops.*",
"batch_kernels.*",
"regex_replace_op.cc",
"string_lower_op.cc",  # Requires ICU for unicode.
"string_upper_op.cc",  # Requires ICU for unicode.
# "string_lower_op.cc",  # Requires ICU for unicode.
# "string_upper_op.cc",  # Requires ICU for unicode.
"unicode_ops.cc",
"unicode_script_op.cc",
# "unicode_script_op.cc",
# Ops that are inherently incompatible with Android (e.g. tied to x86 platform).
"mkl_*",
"xsmm_*",
@@ -8638,7 +8620,7 @@ tf_kernel_library(
srcs = ["unicode_script_op.cc"],
deps = [
"//tensorflow/core:framework",
"@icu//:common",
# "@icu//:common",
],
)

@@ -8670,6 +8652,39 @@ cc_library(
],
)

tf_kernel_library(
name = "deepspeech_cwise_ops",
srcs = [
"cwise_op_add_1.cc",
"cwise_op_add_2.cc",
"cwise_op_less.cc",
"cwise_op_minimum.cc",
"cwise_op_mul_1.cc",
"cwise_op_rsqrt.cc",
"cwise_op_squared_difference.cc",
"cwise_op_sub.cc",
"cwise_op_sigmoid.cc",
"cwise_op_tanh.cc",
],
gpu_srcs = [
"cwise_op_gpu_add.cu.cc",
"cwise_op_gpu_less.cu.cc",
"cwise_op_gpu_minimum.cu.cc",
"cwise_op_gpu_mul.cu.cc",
"cwise_op_gpu_rsqrt.cu.cc",
"cwise_op_gpu_squared_difference.cu.cc",
"cwise_op_gpu_sub.cu.cc",
"cwise_op_gpu_sigmoid.cu.cc",
"cwise_op_gpu_tanh.cu.cc",
],
deps = [
":cwise_lib",
"//tensorflow/core:framework",
"//tensorflow/core:lib",
"//third_party/eigen3",
],
)

# Header-only version of cwise_lib for clients that want to use the cwise_ops
# functionality in their own custom ops.
cc_header_only_library(
@@ -65,26 +65,9 @@ class AsStringOp : public OpKernel {
OP_REQUIRES(ctx, !(scientific && shortest),
errors::InvalidArgument(
"Cannot select both scientific and shortest notation"));

format_ = "%";
if (!fill_string.empty()) {
switch (fill_string[0]) {
case ' ':
case '+':
case '-':
case '0':
case '#':
strings::Appendf(&format_, "%s", fill_string.c_str());
break;
default:
bool fill_not_supported = true;
OP_REQUIRES(ctx, !fill_not_supported,
errors::InvalidArgument("Fill argument not supported: \"",
fill_string, "\""));
}
}
if (width > -1) {
strings::Appendf(&format_, "%d", width);
strings::Appendf(&format_, "%s%d", fill_string.c_str(), width);
}
if (precision > -1) {
strings::Appendf(&format_, ".%d", precision);
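Note: the constructor above assembles a printf-style format string from the op's fill, width, and precision attributes. As a hedged illustration (plain C++ with snprintf, not the op's strings::Appendf helper), this is roughly how such a format behaves:

```cpp
#include <cstdio>
#include <string>

// Assemble "%" + optional fill flag + optional width + optional ".precision"
// plus a conversion character, similar in spirit to the constructor above.
std::string BuildFormat(char fill, int width, int precision) {
  std::string format = "%";
  if (fill != '\0') format += fill;                 // e.g. '0' or ' '
  if (width > -1) format += std::to_string(width);
  if (precision > -1) format += "." + std::to_string(precision);
  return format + "d";                              // integer conversion
}

int main() {
  char buf[32];
  // fill='0', width=4 turns 42 into "0042" (compare the FillWithZero test below).
  std::snprintf(buf, sizeof(buf), BuildFormat('0', 4, -1).c_str(), 42);
  std::printf("%s\n", buf);
}
```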
@@ -1,245 +0,0 @@
/* Copyright 2020 The TensorFlow Authors. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/

#include "tensorflow/core/framework/fake_input.h"
#include "tensorflow/core/framework/node_def_builder.h"
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/framework/tensor_testutil.h"
#include "tensorflow/core/framework/types.h"
#include "tensorflow/core/kernels/ops_testutil.h"
#include "tensorflow/core/kernels/ops_util.h"
#include "tensorflow/core/lib/core/status_test_util.h"

namespace tensorflow {
namespace {

class AsStringGraphTest : public OpsTestBase {
protected:
Status Init(DataType input_type, const string& fill = "", int width = -1,
int precision = -1, bool scientific = false,
bool shortest = false) {
TF_CHECK_OK(NodeDefBuilder("op", "AsString")
.Input(FakeInput(input_type))
.Attr("fill", fill)
.Attr("precision", precision)
.Attr("scientific", scientific)
.Attr("shortest", shortest)
.Attr("width", width)
.Finalize(node_def()));
return InitOp();
}
};

TEST_F(AsStringGraphTest, Int8) {
TF_ASSERT_OK(Init(DT_INT8));

AddInputFromArray<int8>(TensorShape({3}), {-42, 0, 42});
TF_ASSERT_OK(RunOpKernel());
Tensor expected(allocator(), DT_STRING, TensorShape({3}));
test::FillValues<tstring>(&expected, {"-42", "0", "42"});
test::ExpectTensorEqual<tstring>(expected, *GetOutput(0));
}

TEST_F(AsStringGraphTest, Int64) {
TF_ASSERT_OK(Init(DT_INT64));

AddInputFromArray<int64>(TensorShape({3}), {-42, 0, 42});
TF_ASSERT_OK(RunOpKernel());
Tensor expected(allocator(), DT_STRING, TensorShape({3}));
test::FillValues<tstring>(&expected, {"-42", "0", "42"});
test::ExpectTensorEqual<tstring>(expected, *GetOutput(0));
}

TEST_F(AsStringGraphTest, FloatDefault) {
TF_ASSERT_OK(Init(DT_FLOAT));

AddInputFromArray<float>(TensorShape({4}), {-42, 0, 3.14159, 42});
TF_ASSERT_OK(RunOpKernel());
Tensor expected(allocator(), DT_STRING, TensorShape({4}));
test::FillValues<tstring>(
&expected, {"-42.000000", "0.000000", "3.141590", "42.000000"});
test::ExpectTensorEqual<tstring>(expected, *GetOutput(0));
}

TEST_F(AsStringGraphTest, FloatScientific) {
TF_ASSERT_OK(Init(DT_FLOAT, /*fill=*/"", /*width=*/-1, /*precision=*/-1,
/*scientific=*/true));

AddInputFromArray<float>(TensorShape({4}), {-42, 0, 3.14159, 42});
TF_ASSERT_OK(RunOpKernel());
Tensor expected(allocator(), DT_STRING, TensorShape({4}));
test::FillValues<tstring>(&expected, {"-4.200000e+01", "0.000000e+00",
"3.141590e+00", "4.200000e+01"});
test::ExpectTensorEqual<tstring>(expected, *GetOutput(0));
}

TEST_F(AsStringGraphTest, FloatShortest) {
TF_ASSERT_OK(Init(DT_FLOAT, /*fill=*/"", /*width=*/-1, /*precision=*/-1,
/*scientific=*/false, /*shortest=*/true));

AddInputFromArray<float>(TensorShape({4}), {-42, 0, 3.14159, 42});
TF_ASSERT_OK(RunOpKernel());
Tensor expected(allocator(), DT_STRING, TensorShape({4}));
test::FillValues<tstring>(&expected, {"-42", "0", "3.14159", "42"});
test::ExpectTensorEqual<tstring>(expected, *GetOutput(0));
}

TEST_F(AsStringGraphTest, FloatPrecisionOnly) {
TF_ASSERT_OK(Init(DT_FLOAT, /*fill=*/"", /*width=*/-1, /*precision=*/2));

AddInputFromArray<float>(TensorShape({4}), {-42, 0, 3.14159, 42});
TF_ASSERT_OK(RunOpKernel());
Tensor expected(allocator(), DT_STRING, TensorShape({4}));
test::FillValues<tstring>(&expected, {"-42.00", "0.00", "3.14", "42.00"});
test::ExpectTensorEqual<tstring>(expected, *GetOutput(0));
}

TEST_F(AsStringGraphTest, FloatWidthOnly) {
TF_ASSERT_OK(Init(DT_FLOAT, /*fill=*/"", /*width=*/5));

AddInputFromArray<float>(TensorShape({4}), {-42, 0, 3.14159, 42});
TF_ASSERT_OK(RunOpKernel());
Tensor expected(allocator(), DT_STRING, TensorShape({4}));
test::FillValues<tstring>(
&expected, {"-42.000000", "0.000000", "3.141590", "42.000000"});
test::ExpectTensorEqual<tstring>(expected, *GetOutput(0));
}

TEST_F(AsStringGraphTest, Float_5_2_Format) {
TF_ASSERT_OK(Init(DT_FLOAT, /*fill=*/"", /*width=*/5, /*precision=*/2));

AddInputFromArray<float>(TensorShape({4}), {-42, 0, 3.14159, 42});
TF_ASSERT_OK(RunOpKernel());
Tensor expected(allocator(), DT_STRING, TensorShape({4}));
test::FillValues<tstring>(&expected, {"-42.00", " 0.00", " 3.14", "42.00"});
test::ExpectTensorEqual<tstring>(expected, *GetOutput(0));
}

TEST_F(AsStringGraphTest, Complex) {
TF_ASSERT_OK(Init(DT_COMPLEX64, /*fill=*/"", /*width=*/5, /*precision=*/2));

AddInputFromArray<complex64>(TensorShape({3}), {{-4, 2}, {0}, {3.14159, -1}});
TF_ASSERT_OK(RunOpKernel());
Tensor expected(allocator(), DT_STRING, TensorShape({3}));
test::FillValues<tstring>(
&expected, {"(-4.00, 2.00)", "( 0.00, 0.00)", "( 3.14,-1.00)"});
test::ExpectTensorEqual<tstring>(expected, *GetOutput(0));
}

TEST_F(AsStringGraphTest, Bool) {
TF_ASSERT_OK(Init(DT_BOOL));

AddInputFromArray<bool>(TensorShape({2}), {true, false});
TF_ASSERT_OK(RunOpKernel());
Tensor expected(allocator(), DT_STRING, TensorShape({2}));
test::FillValues<tstring>(&expected, {"true", "false"});
test::ExpectTensorEqual<tstring>(expected, *GetOutput(0));
}

TEST_F(AsStringGraphTest, String) {
Status s = Init(DT_STRING);
ASSERT_EQ(error::INVALID_ARGUMENT, s.code());
ASSERT_TRUE(absl::StrContains(
s.error_message(),
"Value for attr 'T' of string is not in the list of allowed values"));
}

TEST_F(AsStringGraphTest, OnlyOneOfScientificAndShortest) {
Status s = Init(DT_FLOAT, /*fill=*/"", /*width=*/-1, /*precision=*/-1,
/*scientific=*/true, /*shortest=*/true);
ASSERT_EQ(error::INVALID_ARGUMENT, s.code());
ASSERT_TRUE(
absl::StrContains(s.error_message(),
"Cannot select both scientific and shortest notation"));
}

TEST_F(AsStringGraphTest, NoShortestForNonFloat) {
Status s = Init(DT_INT32, /*fill=*/"", /*width=*/-1, /*precision=*/-1,
/*scientific=*/false, /*shortest=*/true);
ASSERT_EQ(error::INVALID_ARGUMENT, s.code());
ASSERT_TRUE(absl::StrContains(
s.error_message(),
"scientific and shortest format not supported for datatype"));
}

TEST_F(AsStringGraphTest, NoScientificForNonFloat) {
Status s = Init(DT_INT32, /*fill=*/"", /*width=*/-1, /*precision=*/-1,
/*scientific=*/true);
ASSERT_EQ(error::INVALID_ARGUMENT, s.code());
ASSERT_TRUE(absl::StrContains(
s.error_message(),
"scientific and shortest format not supported for datatype"));
}

TEST_F(AsStringGraphTest, NoPrecisionForNonFloat) {
Status s = Init(DT_INT32, /*fill=*/"", /*width=*/-1, /*precision=*/5);
ASSERT_EQ(error::INVALID_ARGUMENT, s.code());
ASSERT_TRUE(absl::StrContains(s.error_message(),
"precision not supported for datatype"));
}

TEST_F(AsStringGraphTest, LongFill) {
Status s = Init(DT_INT32, /*fill=*/"asdf");
ASSERT_EQ(error::INVALID_ARGUMENT, s.code());
ASSERT_TRUE(absl::StrContains(s.error_message(),
"Fill string must be one or fewer characters"));
}

TEST_F(AsStringGraphTest, FillWithZero) {
TF_ASSERT_OK(Init(DT_INT64, /*fill=*/"0", /*width=*/4));

AddInputFromArray<int64>(TensorShape({3}), {-42, 0, 42});
TF_ASSERT_OK(RunOpKernel());
Tensor expected(allocator(), DT_STRING, TensorShape({3}));
test::FillValues<tstring>(&expected, {"-042", "0000", "0042"});
test::ExpectTensorEqual<tstring>(expected, *GetOutput(0));
}

TEST_F(AsStringGraphTest, FillWithSpace) {
TF_ASSERT_OK(Init(DT_INT64, /*fill=*/" ", /*width=*/4));

AddInputFromArray<int64>(TensorShape({3}), {-42, 0, 42});
TF_ASSERT_OK(RunOpKernel());
Tensor expected(allocator(), DT_STRING, TensorShape({3}));
test::FillValues<tstring>(&expected, {" -42", " 0", " 42"});
test::ExpectTensorEqual<tstring>(expected, *GetOutput(0));
}

TEST_F(AsStringGraphTest, FillWithChar1) {
TF_ASSERT_OK(Init(DT_INT64, /*fill=*/"-", /*width=*/4));

AddInputFromArray<int64>(TensorShape({3}), {-42, 0, 42});
TF_ASSERT_OK(RunOpKernel());
Tensor expected(allocator(), DT_STRING, TensorShape({3}));
test::FillValues<tstring>(&expected, {"-42 ", "0 ", "42 "});
test::ExpectTensorEqual<tstring>(expected, *GetOutput(0));
}

TEST_F(AsStringGraphTest, FillWithChar3) {
Status s = Init(DT_INT32, /*fill=*/"s");
ASSERT_EQ(error::INVALID_ARGUMENT, s.code());
ASSERT_TRUE(
absl::StrContains(s.error_message(), "Fill argument not supported"));
}

TEST_F(AsStringGraphTest, FillWithChar4) {
Status s = Init(DT_INT32, /*fill=*/"n");
ASSERT_EQ(error::INVALID_ARGUMENT, s.code());
ASSERT_TRUE(
absl::StrContains(s.error_message(), "Fill argument not supported"));
}

} // end namespace
} // end namespace tensorflow
@@ -193,8 +193,7 @@ struct LaunchBatchBandedTriangularSolve {

Shard(worker_threads.num_threads, worker_threads.workers, batch_size,
cost_per_unit,
[&in_x, &in_y, adjoint, lower, &bcast, out](int64 start,
int64 limit) {
[&in_x, &in_y, adjoint, lower, &bcast, out](int start, int limit) {
SequentialBandedTriangularSolveKernel<Scalar>::Run(
in_x, in_y, lower, adjoint, bcast, out, start, limit);
});
@@ -121,7 +121,7 @@ class BoostedTreesTrainingPredictOp : public OpKernel {
auto do_work = [&resource, &bucketized_features, &cached_tree_ids,
&cached_node_ids, &output_partial_logits,
&output_node_ids, latest_tree,
this](int64 start, int64 end) {
this](int32 start, int32 end) {
for (int32 i = start; i < end; ++i) {
int32 tree_id = cached_tree_ids(i);
int32 node_id = cached_node_ids(i);
@@ -237,7 +237,7 @@ class BoostedTreesPredictOp : public OpKernel {

const int32 last_tree = resource->num_trees() - 1;
auto do_work = [&resource, &bucketized_features, &output_logits, last_tree,
this](int64 start, int64 end) {
this](int32 start, int32 end) {
for (int32 i = start; i < end; ++i) {
std::vector<float> tree_logits(logits_dimension_, 0.0);
int32 tree_id = 0;
@@ -340,7 +340,7 @@ class BoostedTreesExampleDebugOutputsOp : public OpKernel {
// path. Note: feature_ids has one less value than logits_path because the
// first value of each logit path will be the bias.
auto do_work = [&resource, &bucketized_features, &output_debug_info,
last_tree](int64 start, int64 end) {
last_tree](int32 start, int32 end) {
for (int32 i = start; i < end; ++i) {
// Proto to store debug outputs, per example.
boosted_trees::DebugOutput example_debug_info;
@@ -116,6 +116,7 @@ REGISTER_KERNEL(GPU, int16);
REGISTER_KERNEL(GPU, qint16);
REGISTER_KERNEL(GPU, quint16);
REGISTER_KERNEL(GPU, uint32);
REGISTER_KERNEL(GPU, int32);
REGISTER_KERNEL(GPU, qint32);
REGISTER_KERNEL(GPU, int64);
REGISTER_KERNEL(GPU, uint64);
@@ -178,30 +178,10 @@ class SparseCount : public OpKernel {
const Tensor& weights = context->input(3);
bool use_weights = weights.NumElements() > 0;

OP_REQUIRES(context, TensorShapeUtils::IsMatrix(indices.shape()),
errors::InvalidArgument(
"Input indices must be a 2-dimensional tensor. Got: ",
indices.shape().DebugString()));

if (use_weights) {
OP_REQUIRES(
context, weights.shape() == values.shape(),
errors::InvalidArgument(
"Weights and values must have the same shape. Weight shape: ",
weights.shape().DebugString(),
"; values shape: ", values.shape().DebugString()));
}

bool is_1d = shape.NumElements() == 1;
int num_batches = is_1d ? 1 : shape.flat<int64>()(0);
int num_values = values.NumElements();

OP_REQUIRES(context, num_values == indices.shape().dim_size(0),
errors::InvalidArgument(
"Number of values must match first dimension of indices.",
"Got ", num_values,
" values, indices shape: ", indices.shape().DebugString()));

const auto indices_values = indices.matrix<int64>();
const auto values_values = values.flat<T>();
const auto weight_values = weights.flat<W>();
@@ -255,33 +235,12 @@ class RaggedCount : public OpKernel {
bool use_weights = weights.NumElements() > 0;
bool is_1d = false;

if (use_weights) {
OP_REQUIRES(
context, weights.shape() == values.shape(),
errors::InvalidArgument(
"Weights and values must have the same shape. Weight shape: ",
weights.shape().DebugString(),
"; values shape: ", values.shape().DebugString()));
}

const auto splits_values = splits.flat<int64>();
const auto values_values = values.flat<T>();
const auto weight_values = weights.flat<W>();
int num_batches = splits.NumElements() - 1;
int num_values = values.NumElements();

OP_REQUIRES(
context, num_batches > 0,
errors::InvalidArgument(
"Must provide at least 2 elements for the splits argument"));
OP_REQUIRES(context, splits_values(0) == 0,
errors::InvalidArgument("Splits must start with 0, not with ",
splits_values(0)));
OP_REQUIRES(context, splits_values(num_batches) == num_values,
errors::InvalidArgument(
"Splits must end with the number of values, got ",
splits_values(num_batches), " instead of ", num_values));

auto per_batch_counts = BatchedMap<W>(num_batches);
T max_value = 0;
int batch_idx = 0;
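Note: the checks removed in this hunk encode the row-splits invariants of a ragged batch: splits must start at 0 and end at the number of values, with at least two entries. A minimal standalone sketch of those invariants (hypothetical helper, not the TensorFlow API):

```cpp
#include <cstdint>
#include <vector>

// True if `splits` is a plausible row-splits vector for `num_values` values:
// at least two entries, first entry 0, last entry equal to num_values.
// (The kernel above additionally validates weights and index shapes.)
bool ValidSplits(const std::vector<int64_t>& splits, int64_t num_values) {
  if (splits.size() < 2) return false;
  if (splits.front() != 0) return false;
  return splits.back() == num_values;
}

int main() {
  // Three values split into rows [v0, v1] and [v2].
  return ValidSplits({0, 2, 3}, 3) ? 0 : 1;
}
```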
@@ -223,7 +223,7 @@ struct CropAndResize<CPUDevice, T> {
const int depth = crops.dimension(3);

// Sharding across boxes.
auto CropAndResizePerBox = [&](int64 start_box, int64 limit_box) {
auto CropAndResizePerBox = [&](int start_box, int limit_box) {
for (int b = start_box; b < limit_box; ++b) {
const float y1 = boxes(b, 0);
const float x1 = boxes(b, 1);
@@ -449,7 +449,7 @@ struct CropAndResizeBackpropImage<CPUDevice, T> {

grads_image.setZero();

auto CropAndResizeBackImgPerBox = [&](int64 start_box, int64 limit_box) {
auto CropAndResizeBackImgPerBox = [&](int start_box, int limit_box) {
for (int b = start_box; b < limit_box; ++b) {
const float y1 = boxes(b, 0);
const float x1 = boxes(b, 1);
@@ -18,52 +18,16 @@ limitations under the License.
#define EIGEN_USE_THREADS

#include "tensorflow/core/kernels/data_format_ops.h"

#include <map>

#include "third_party/eigen3/unsupported/Eigen/CXX11/Tensor"
#include "tensorflow/core/framework/op_kernel.h"
#include "tensorflow/core/framework/register_types.h"
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/platform/errors.h"

namespace tensorflow {

typedef Eigen::ThreadPoolDevice CPUDevice;
typedef Eigen::GpuDevice GPUDevice;

// Ensure that `src` and `dst` define a valid permutation.
// Ops defined in this file assume that user specifies a permutation via two
// string attributes. This check validates that these attributes properly define
// it to prevent security vulnerabilities.
static bool IsValidPermutation(const std::string& src, const std::string& dst) {
if (src.size() != dst.size()) {
return false;
}

std::map<char, bool> characters;

// Every character in `src` must be present only once
for (const auto c : src) {
if (characters[c]) {
return false;
}
characters[c] = true;
}

// Every character in `dst` must show up in `src` exactly once
for (const auto c : dst) {
if (!characters[c]) {
return false;
}
characters[c] = false;
}

// At this point, characters[] has been switched to true and false exactly
// once for all character in `src` (and `dst`) so we have a valid permutation
return true;
}

template <typename Device, typename T>
class DataFormatDimMapOp : public OpKernel {
public:
@@ -73,20 +37,15 @@ class DataFormatDimMapOp : public OpKernel {
OP_REQUIRES_OK(context, context->GetAttr("src_format", &src_format));
string dst_format;
OP_REQUIRES_OK(context, context->GetAttr("dst_format", &dst_format));
OP_REQUIRES(context, src_format.size() == 4 || src_format.size() == 5,
errors::InvalidArgument(
"Source format must be of length 4 or 5, received "
"src_format = ",
src_format));
OP_REQUIRES(context, dst_format.size() == 4 || dst_format.size() == 5,
errors::InvalidArgument("Destination format must be of length "
"4 or 5, received dst_format = ",
dst_format));
OP_REQUIRES(context, src_format.size() == 4,
errors::InvalidArgument(strings::StrCat(
"Source format must of length 4, received src_format = ",
src_format)));
OP_REQUIRES(
context, IsValidPermutation(src_format, dst_format),
errors::InvalidArgument(
"Destination and source format must determine a permutation, got ",
src_format, " and ", dst_format));
context, dst_format.size() == 4,
errors::InvalidArgument(strings::StrCat(
"Destination format must of length 4, received dst_format = ",
dst_format)));
dst_idx_ = Tensor(DT_INT32, {static_cast<int64>(src_format.size())});
for (int i = 0; i < src_format.size(); ++i) {
for (int j = 0; j < dst_format.size(); ++j) {
@@ -118,22 +77,8 @@ class DataFormatVecPermuteOp : public OpKernel {
: OpKernel(context) {
string src_format;
OP_REQUIRES_OK(context, context->GetAttr("src_format", &src_format));
OP_REQUIRES(context, src_format.size() == 4 || src_format.size() == 5,
errors::InvalidArgument(
"Source format must be of length 4 or 5, received "
"src_format = ",
src_format));
string dst_format;
OP_REQUIRES_OK(context, context->GetAttr("dst_format", &dst_format));
OP_REQUIRES(context, dst_format.size() == 4 || dst_format.size() == 5,
errors::InvalidArgument("Destination format must be of length "
"4 or 5, received dst_format = ",
dst_format));
OP_REQUIRES(
context, IsValidPermutation(src_format, dst_format),
errors::InvalidArgument(
"Destination and source format must determine a permutation, got ",
src_format, " and ", dst_format));
src_format_ = src_format;
dst_format_ = dst_format;
}
@@ -179,10 +124,6 @@ class DataFormatVecPermuteOp : public OpKernel {
};
keep_only_spatial_dimensions(&src_format_str);
keep_only_spatial_dimensions(&dst_format_str);
OP_REQUIRES(context,
src_format_str.size() == 2 && dst_format_str.size() == 2,
errors::InvalidArgument(
"Format specifier must contain H and W for 2D case"));
}
ComputeDstIndex(src_format_str, dst_format_str, input.dims(), &dst_idx);
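Note: IsValidPermutation in the removed block checks that dst is a rearrangement of exactly the characters of src. A small standalone re-statement of that check for illustration (not TensorFlow code):

```cpp
#include <cassert>
#include <map>
#include <string>

// Every character of `src` must be unique, and `dst` must use exactly those
// characters once each, as in IsValidPermutation above.
static bool IsPermutation(const std::string& src, const std::string& dst) {
  if (src.size() != dst.size()) return false;
  std::map<char, bool> seen;
  for (char c : src) {
    if (seen[c]) return false;  // duplicate in src
    seen[c] = true;
  }
  for (char c : dst) {
    if (!seen[c]) return false;  // missing from src, or reused in dst
    seen[c] = false;
  }
  return true;
}

int main() {
  assert(IsPermutation("NHWC", "NCHW"));   // valid src/dst format pair
  assert(!IsPermutation("NHWC", "NHWH"));  // 'H' repeated, 'C' missing
  return 0;
}
```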
@@ -62,12 +62,6 @@ class MemmappedTensorAllocator : public Allocator {

void set_delete_on_deallocate() { delete_on_deallocate_ = true; }

// Make sure tensors or complex types (strings, variants, resources) don't get
// their constructor called via a placement new since that would require
// writing to immutable data.
// See also: tensorflow/core/framework/typed_allocator.h
bool AllocatesOpaqueHandle() const override { return true; }

private:
std::unique_ptr<ReadOnlyMemoryRegion> memory_region_;
// If there is an error during allocation we keep it in this status.
@@ -95,8 +95,7 @@ struct NthElementFunctor<CPUDevice, T> {
const int last_dim = input_tensor.dim_size(input_tensor.dims() - 1);

// Allocate each row to different shard.
auto SubNthElement = [&, input, output, last_dim, n](int64 start,
int64 limit) {
auto SubNthElement = [&, input, output, last_dim, n](int start, int limit) {
// std::nth_element would rearrange the array, so we need a new buffer.
std::vector<T> buf(last_dim);
@@ -70,8 +70,8 @@ struct TruncatedNormalFunctor<CPUDevice, T> {

auto do_work = [samples_per_batch, num_elements, &ctx, &means, &stddevs,
&minvals, &maxvals, &gen, &output,
kStdDevsInsideBoundsToUseRandnSampler](int64 start_batch,
int64 limit_batch) {
kStdDevsInsideBoundsToUseRandnSampler](int start_batch,
int limit_batch) {
// Capturing "gen" by-value would only make a copy for the _shared_
// lambda. Since we want to let each worker have its own copy, we pass
// "gen" by reference and explicitly do a copy assignment here.
@@ -333,8 +333,8 @@ struct TruncatedNormalFunctorV2<CPUDevice, T> {

auto do_work = [num_batches, samples_per_batch, &ctx, &bcast, &means,
&stddevs, &minvals, &maxvals, &gen, &output,
kStdDevsInsideBoundsToUseRandnSampler](int64 start_output,
int64 limit_output) {
kStdDevsInsideBoundsToUseRandnSampler](int start_output,
int limit_output) {
// Capturing "gen" by-value would only make a copy for the _shared_
// lambda. Since we want to let each worker have its own copy, we pass
// "gen" by reference and explicitly do a copy assignment here.
@@ -182,7 +182,7 @@ struct RandomBinomialFunctor<CPUDevice, T, U> {
// the sample shape and [H1, ... Hm] for the batch shape of the samples.
// We have B1 * ... * Bk samples per batch member we need.
auto DoWork = [num_batches, samples_per_batch, &bcast, &counts, &probs,
&gen, &output](int64 start_output, int64 limit_output) {
&gen, &output](int start_output, int limit_output) {
// Vectorized intermediate calculations for uniform rejection sampling.
// We always generate at most 4 samples.
Eigen::array<T, 4> z;
@@ -205,7 +205,7 @@ class RandomGammaOp : public OpKernel {
// avoid a couple flops which can be done on a per-alpha basis.

auto DoWork = [samples_per_alpha, num_alphas, &rng, samples_flat,
alpha_flat](int64 start_output, int64 limit_output) {
alpha_flat](int start_output, int limit_output) {
using Eigen::numext::exp;
using Eigen::numext::log;
using Eigen::numext::log1p;
@@ -97,7 +97,7 @@ struct PoissonFunctor<CPUDevice, T, U> {
typedef random::UniformDistribution<random::PhiloxRandom, CT> Uniform;

auto DoWork = [num_samples, num_rate, &rng, samples_flat, rate_flat](
int64 start_output, int64 limit_output) {
int start_output, int limit_output) {
// Capturing "rng" by value would only make a copy for the _shared_
// lambda. Since we want to let each worker have its own copy, we pass
// "rng" by reference and explicitly do a copy assignment.
@@ -16,7 +16,6 @@ limitations under the License.
// See docs in ../ops/data_flow_ops.cc.

#include <limits.h>

#include <vector>

#include "tensorflow/core/common_runtime/device.h"
@@ -28,7 +27,6 @@ limitations under the License.
#include "tensorflow/core/framework/types.h"
#include "tensorflow/core/lib/core/errors.h"
#include "tensorflow/core/lib/gtl/map_util.h"
#include "tensorflow/core/platform/errors.h"
#include "tensorflow/core/platform/logging.h"
#include "tensorflow/core/platform/macros.h"
#include "tensorflow/core/platform/mutex.h"
@@ -44,11 +42,7 @@ class GetSessionHandleOp : public OpKernel {

void Compute(OpKernelContext* ctx) override {
const Tensor& val = ctx->input(0);
auto session_state = ctx->session_state();
OP_REQUIRES(ctx, session_state != nullptr,
errors::FailedPrecondition(
"GetSessionHandle called on null session state"));
int64 id = session_state->GetNewId();
int64 id = ctx->session_state()->GetNewId();
TensorStore::TensorAndKey tk{val, id, requested_device()};
OP_REQUIRES_OK(ctx, ctx->tensor_store()->AddTensor(name(), tk));
@@ -232,9 +232,6 @@ class SparseFillEmptyRowsGradOp : public OpKernel {
context, TensorShapeUtils::IsVector(reverse_index_map_t->shape()),
errors::InvalidArgument("reverse_index_map must be a vector, saw: ",
reverse_index_map_t->shape().DebugString()));
OP_REQUIRES(context, TensorShapeUtils::IsVector(grad_values_t->shape()),
errors::InvalidArgument("grad_values must be a vector, saw: ",
grad_values_t->shape().DebugString()));

const auto reverse_index_map = reverse_index_map_t->vec<int64>();
const auto grad_values = grad_values_t->vec<T>();
@@ -263,13 +260,8 @@ class SparseFillEmptyRowsGradOp : public OpKernel {
// Locate the index of the output of the forward prop associated
// with this location in the input of the forward prop. Copy
// the gradient into it. Mark it as visited.
int64 reverse_index = reverse_index_map(i);
OP_REQUIRES(
context, 0 <= reverse_index && reverse_index < N_full,
errors::InvalidArgument("Elements in reverse index must be in [0, ",
N_full, ") but got ", reverse_index));
d_values(i) = grad_values(reverse_index);
visited(reverse_index) = true;
d_values(i) = grad_values(reverse_index_map(i));
visited(reverse_index_map(i)) = true;
}
for (int j = 0; j < N_full; ++j) {
// The default value gradient gets the accumulated remainder of
@@ -252,7 +252,7 @@ class StatelessRandomGammaOp : public StatelessRandomOpBase {
// avoid a couple flops which can be done on a per-alpha basis.

auto DoWork = [samples_per_alpha, num_alphas, &random, samples_flat,
alpha_flat](int64 start_output, int64 limit_output) {
alpha_flat](int start_output, int limit_output) {
// Capturing "random" by-value would only make a copy for the _shared_
// lambda. Since we want to let each worker have its own copy, we pass
// "random" by reference and explicitly do a copy assignment.
@@ -19,7 +19,6 @@ limitations under the License.
#include "absl/strings/ascii.h"
#include "absl/strings/str_cat.h"
#include "tensorflow/core/framework/op_kernel.h"
#include "tensorflow/core/platform/errors.h"

namespace tensorflow {
namespace text {
@@ -61,18 +60,6 @@ class StringNGramsOp : public tensorflow::OpKernel {
OP_REQUIRES_OK(context, context->input("data_splits", &splits));
const auto& splits_vec = splits->flat<SPLITS_TYPE>();

// Validate that the splits are valid indices into data
const int input_data_size = data->flat<tstring>().size();
const int splits_vec_size = splits_vec.size();
for (int i = 0; i < splits_vec_size; ++i) {
bool valid_splits = splits_vec(i) >= 0;
valid_splits = valid_splits && (splits_vec(i) <= input_data_size);
OP_REQUIRES(
context, valid_splits,
errors::InvalidArgument("Invalid split value ", splits_vec(i),
", must be in [0,", input_data_size, "]"));
}

int num_batch_items = splits_vec.size() - 1;
tensorflow::Tensor* ngrams_splits;
OP_REQUIRES_OK(
@@ -136,7 +136,7 @@ struct TopKFunctor<CPUDevice, T> {
return Status::OK();
}

auto SortIndices = [&](int64 start_batch, int64 limit_batch) {
auto SortIndices = [&](int start_batch, int limit_batch) {
for (int32 b = start_batch; b < limit_batch; ++b) {
const T* input_data = &input(b, 0);
const auto stable_comp = [input_data](const int32 a, const int32 b) {
@@ -22,7 +22,7 @@ limitations under the License.
// tensorflow/tools/pip_package/setup.py
#define TF_MAJOR_VERSION 2
#define TF_MINOR_VERSION 3
#define TF_PATCH_VERSION 2
#define TF_PATCH_VERSION 0

// TF_VERSION_SUFFIX is non-empty for pre-releases (e.g. "-alpha", "-alpha.1",
// "-beta", "-rc", "-rc.1")
@@ -35,7 +35,7 @@
<java.version>1.8</java.version>
<spark.version>2.4.5</spark.version>
<yarn.api.version>2.7.3</yarn.api.version>
<junit.version>4.13.1</junit.version>
<junit.version>4.11</junit.version>
</properties>

<build>
@@ -16,7 +16,7 @@
<maven.compiler.target>1.6</maven.compiler.target>
<hadoop.version>2.6.0</hadoop.version>
<protobuf.version>3.5.1</protobuf.version>
<junit.version>4.13.1</junit.version>
<junit.version>4.11</junit.version>
</properties>

<licenses>
@@ -18,7 +18,6 @@ limitations under the License.
#include <algorithm>

#include "tensorflow/lite/arena_planner.h"
#include "tensorflow/lite/builtin_ops.h"
#include "tensorflow/lite/c/common.h"
#include "tensorflow/lite/context_util.h"
#include "tensorflow/lite/core/api/tensor_utils.h"
@@ -568,33 +567,6 @@ TfLiteStatus Subgraph::CheckTensorIndices(const char* label, const int* indices,
return kTfLiteOk;
}

// We have two arrays and we need to check that elements from one array don't
// show up in the other. We could sort both arrays and then iterate with two
// pointers from start to finish always increasing the smaller one but since
// these arrays are usually short (<25 elements for inputs, usually <3 for
// outputs), this might be slower than the naive approach (if arrays have size n
// and m, with n >> m ~ O(1), first approach is O(nlogn) whereas the other is
// O(n)). Plus, sorting the input and output arrays might not be something we
// want as it destroys ordering of elements.
//
// If it turns out that this is an issue, we can switch to the other algorithm.
TfLiteStatus Subgraph::CheckInputAndOutputForOverlap(const int* input_indices,
int num_inputs,
const int* output_indices,
int num_outputs) {
for (int i = 0; i < num_inputs; i++) {
for (int j = 0; j < num_outputs; j++) {
if (input_indices[i] == output_indices[j]) {
ReportError("Tensor %d is both input %d and output %d\n",
input_indices[i], i, j);
consistent_ = false;
return kTfLiteError;
}
}
}
return kTfLiteOk;
}

namespace {
// Multiply two sizes and return true if overflow occurred;
// This is based off tensorflow/overflow.h but is simpler as we already
@@ -716,16 +688,6 @@ TfLiteStatus Subgraph::AddNodeWithParameters(
&context_,
CheckTensorIndices("node outputs", outputs.data(), outputs.size()));

// For builtin ops, inputs and outputs must not overlap. Custom ops must do
// this check by themselves if they don't support overlapping tensors. This
// distinction is to allow custom ops to just forward a tensor, reusing it as
// both input and output.
if (builtin_data != nullptr) {
TF_LITE_ENSURE_OK(&context_, CheckInputAndOutputForOverlap(
inputs.data(), inputs.size(),
outputs.data(), outputs.size()));
}

int new_node_index = nodes_and_registration_.size();
if (node_index) *node_index = new_node_index;
nodes_and_registration_.resize(nodes_and_registration_.size() + 1);
@@ -972,19 +934,6 @@ TfLiteStatus Subgraph::Invoke() {
tensor->data_is_stale) {
TF_LITE_ENSURE_STATUS(EnsureTensorDataIsReadable(tensor_index));
}
if (tensor->data.raw == nullptr && tensor->bytes > 0) {
if (registration.builtin_code == kTfLiteBuiltinReshape && i == 1) {
// In general, having a tensor here with no buffer will be an error.
// However, for the reshape operator, the second input tensor is only
// used for the shape, not for the data. Thus, null buffer is ok.
continue;
} else {
// In all other cases, we need to return an error as otherwise we will
// trigger a null pointer dereference (likely).
ReportError("Input tensor %d lacks data", tensor_index);
return kTfLiteError;
}
}
}

if (check_cancelled_func_ != nullptr &&
@ -433,15 +433,6 @@ class Subgraph {
  TfLiteStatus CheckTensorIndices(const char* label, const int* indices,
                                  int length);

  // Check that the input indices and the output indices don't overlap.
  // This is needed because same tensor must not be used both as input and
  // output for an operator.
  // NOTE: this changes consistent_ to be false if indices are out of bounds.
  TfLiteStatus CheckInputAndOutputForOverlap(const int* input_indices,
                                             int num_inputs,
                                             const int* output_indices,
                                             int num_outputs);

  // Compute the number of bytes required to represent a tensor with dimensions
  // specified by the array dims (of length dims_size). Returns the status code
  // and bytes.
@ -609,12 +609,7 @@ TfLiteStatus InterpreterBuilder::operator()(
  auto* buffers = model_->buffers();

  if (subgraphs->size() == 0) {
    TF_LITE_REPORT_ERROR(error_reporter_, "No subgraph in the model.\n");
    return cleanup_and_error();
  }

  if (!buffers) {
    TF_LITE_REPORT_ERROR(error_reporter_, "No buffers in the model.\n");
    error_reporter_->Report("No subgraph in the model.\n");
    return cleanup_and_error();
  }

@ -635,10 +630,10 @@ TfLiteStatus InterpreterBuilder::operator()(
        (*interpreter)->subgraph(subgraph_index);
    auto operators = subgraph->operators();
    auto tensors = subgraph->tensors();
    if (!operators || !tensors) {
      TF_LITE_REPORT_ERROR(error_reporter_,
                           "Did not get operators or tensors in subgraph %d.\n",
                           subgraph_index);
    if (!operators || !tensors || !buffers) {
      error_reporter_->Report(
          "Did not get operators, tensors, or buffers in subgraph %d.\n",
          subgraph_index);
      return cleanup_and_error();
    }
    if (modified_subgraph->AddTensors(tensors->size()) != kTfLiteOk) {
@ -70,9 +70,6 @@ inline bool ResolveAxis(const int num_dims, const int* axis,
    // eg: For num_dims=3, [0, 1, 2] is the same as [-3, -2, -1] */
    int current = axis[idx] < 0 ? (axis[idx] + num_dims) : axis[idx];
    TFLITE_DCHECK(current >= 0 && current < num_dims);
    if (current < 0 || current >= num_dims) {
      return false;
    }
    bool is_dup = false;
    for (int j = 0; j < *out_num_axis; ++j) {
      if (out_axis[j] == current) {
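
Context note (illustrative, not part of this comparison): the hunk above touches the out-of-range handling in ResolveAxis (a TFLITE_DCHECK versus an explicit early return); the comment itself describes how negative axis values wrap around. A small Python sketch of that wrapping rule, assuming nothing about the TFLite implementation:

# Resolve possibly-negative axes into [0, num_dims), dropping duplicates.
def resolve_axes(num_dims, axes):
    resolved = []
    for axis in axes:
        current = axis + num_dims if axis < 0 else axis
        if current < 0 or current >= num_dims:
            raise ValueError("axis %d out of range for %d dims" % (axis, num_dims))
        if current not in resolved:
            resolved.append(current)
    return resolved

assert resolve_axes(3, [-3, -2, -1]) == [0, 1, 2]
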
@ -432,7 +432,7 @@ int MatchingArraySize(const ArrayType1& array1, int index1,
inline int MatchingDim(const RuntimeShape& shape1, int index1,
                       const RuntimeShape& shape2, int index2) {
  TFLITE_DCHECK_EQ(shape1.Dims(index1), shape2.Dims(index2));
  return std::min(shape1.Dims(index1), shape2.Dims(index2));
  return shape1.Dims(index1);
}

template <typename... Args>
@ -30,48 +30,27 @@ inline int SizeOfDimension(const TfLiteTensor* t, int dim) {
}
inline const TfLiteTensor* GetInput(const TfLiteContext* context,
                                    const TfLiteNode* node, int index) {
  const int tensor_index = node->inputs->data[index];
  if (tensor_index < 0) {
    return nullptr;
  }
  return &context->tensors[tensor_index];
  return &context->tensors[node->inputs->data[index]];
}
// Note: You must check if result is not null:
// TfLiteTensor* my_tensor = GetVariableInput(context, node, kMyTensorIdx);
// TF_LITE_ENSURE(context, my_tensor != nullptr);
inline TfLiteTensor* GetVariableInput(TfLiteContext* context,
                                      const TfLiteNode* node, int index) {
  const int tensor_index = node->inputs->data[index];
  if (tensor_index < 0) {
    return nullptr;
  }
  TfLiteTensor* tensor = &context->tensors[tensor_index];
  TfLiteTensor* tensor = &context->tensors[node->inputs->data[index]];
  return (tensor->is_variable) ? tensor : nullptr;
}
inline TfLiteTensor* GetOutput(TfLiteContext* context, const TfLiteNode* node,
                               int index) {
  const int tensor_index = node->outputs->data[index];
  if (tensor_index < 0) {
    return nullptr;
  }
  return &context->tensors[tensor_index];
  return &context->tensors[node->outputs->data[index]];
}
inline TfLiteTensor* GetTemporary(TfLiteContext* context,
                                  const TfLiteNode* node, int index) {
  const int tensor_index = node->temporaries->data[index];
  if (tensor_index < 0) {
    return nullptr;
  }
  return &context->tensors[tensor_index];
  return &context->tensors[node->temporaries->data[index]];
}

inline const TfLiteTensor* GetIntermediates(TfLiteContext* context,
                                            const TfLiteNode* node, int index) {
  const int tensor_index = node->intermediates->data[index];
  if (tensor_index < 0) {
    return nullptr;
  }
  return &context->tensors[tensor_index];
  return &context->tensors[node->intermediates->data[index]];
}
inline int NumInputs(const TfLiteNode* node) { return node->inputs->size; }
inline int NumOutputs(const TfLiteNode* node) { return node->outputs->size; }

@ -94,7 +73,12 @@ inline int64_t NumElements(const TfLiteTensor* t) {
inline const TfLiteTensor* GetOptionalInputTensor(TfLiteContext* context,
                                                  const TfLiteNode* node,
                                                  int index) {
  return GetInput(context, node, index);
  const bool use_tensor = index < node->inputs->size &&
                          node->inputs->data[index] != kTfLiteOptionalTensor;
  if (use_tensor) {
    return &context->tensors[node->inputs->data[index]];
  }
  return nullptr;
}

// Determines whether tensor is constant.
@ -34,24 +34,11 @@ TfLiteStatus ResizeOutputTensor(TfLiteContext* context,
                                const TfLiteTensor* data,
                                const TfLiteTensor* segment_ids,
                                TfLiteTensor* output) {
  // Segment ids should be of same cardinality as first input dimension and they
  // should be increasing by at most 1, from 0 (e.g., [0, 0, 1, 2, 3] is valid)
  int max_index = -1;
  const int segment_id_size = segment_ids->dims->data[0];
  TF_LITE_ENSURE_EQ(context, segment_id_size, data->dims->data[0]);
  int previous_segment_id = -1;
  for (int i = 0; i < segment_id_size; i++) {
    const int current_segment_id = GetTensorData<int32_t>(segment_ids)[i];
    if (i == 0) {
      TF_LITE_ENSURE_EQ(context, current_segment_id, 0);
    } else {
      int delta = current_segment_id - previous_segment_id;
      TF_LITE_ENSURE(context, delta == 0 || delta == 1);
    }
    previous_segment_id = current_segment_id;
  if (segment_id_size > 0) {
    max_index = segment_ids->data.i32[segment_id_size - 1];
  }

  const int max_index = previous_segment_id;

  const int data_rank = NumDimensions(data);
  TfLiteIntArray* output_shape = TfLiteIntArrayCreate(NumDimensions(data));
  output_shape->data[0] = max_index + 1;
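
Context note (illustrative, not part of this comparison): the validation shown in this hunk enforces the rule stated in its comment, that segment ids start at 0 and increase by at most 1 between consecutive entries, so [0, 0, 1, 2, 3] is valid while [0, 3, 1] is not. A small Python sketch of that rule, independent of the TFLite kernel:

# Check the "starts at 0, increases by at most 1" rule for segment ids.
def valid_segment_ids(segment_ids):
    previous = -1
    for i, current in enumerate(segment_ids):
        if i == 0 and current != 0:
            return False
        if i > 0 and current - previous not in (0, 1):
            return False
        previous = current
    return True

assert valid_segment_ids([0, 0, 1, 2, 3])
assert not valid_segment_ids([0, 3, 1])
assert not valid_segment_ids([-1, 0, 1])
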
@ -110,37 +110,5 @@ TEST(SegmentSumOpModelTest, Float32Test_ThreeDimensions) {
  EXPECT_THAT(model.GetOutputShape(), ElementsAreArray({2, 2, 1}));
}

TEST(SegmentSumOpModelTest, TestFailIfSegmentsAreNotSorted) {
  SegmentSumOpModel<int32_t> model({TensorType_INT32, {3, 2}},
                                   {TensorType_INT32, {3}});
  model.PopulateTensor<int32_t>(model.data(), {1, 2, 3, 4, 5, 6});
  model.PopulateTensor<int32_t>(model.segment_ids(), {0, 3, 1});
  ASSERT_EQ(model.InvokeUnchecked(), kTfLiteError);
}

TEST(SegmentSumOpModelTest, TestFailIfSegmentsAreNotConsecutive) {
  SegmentSumOpModel<int32_t> model({TensorType_INT32, {3, 2}},
                                   {TensorType_INT32, {3}});
  model.PopulateTensor<int32_t>(model.data(), {1, 2, 3, 4, 5, 6});
  model.PopulateTensor<int32_t>(model.segment_ids(), {0, 3, 5});
  ASSERT_EQ(model.InvokeUnchecked(), kTfLiteError);
}

TEST(SegmentSumOpModelTest, TestFailIfSegmentsAreNegative) {
  SegmentSumOpModel<int32_t> model({TensorType_INT32, {3, 2}},
                                   {TensorType_INT32, {3}});
  model.PopulateTensor<int32_t>(model.data(), {1, 2, 3, 4, 5, 6});
  model.PopulateTensor<int32_t>(model.segment_ids(), {-1, 0, 1});
  ASSERT_EQ(model.InvokeUnchecked(), kTfLiteError);
}

TEST(SegmentSumOpModelTest, TestFailIfSegmentsAreNotTheRightCardinality) {
  SegmentSumOpModel<int32_t> model({TensorType_INT32, {3, 2}},
                                   {TensorType_INT32, {2}});
  model.PopulateTensor<int32_t>(model.data(), {1, 2, 3, 4, 5, 6});
  model.PopulateTensor<int32_t>(model.segment_ids(), {0, 1});
  ASSERT_EQ(model.InvokeUnchecked(), kTfLiteError);
}

} // namespace
} // namespace tflite
@ -57,7 +57,6 @@ cc_library(
|
||||
"//conditions:default": [],
|
||||
}) + select({
|
||||
"//tensorflow:fuchsia": [],
|
||||
"//tensorflow:windows": [],
|
||||
"//conditions:default": [
|
||||
"//tensorflow/lite/delegates/xnnpack:xnnpack_delegate",
|
||||
],
|
||||
|
@ -18,6 +18,7 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import collections
import contextlib

from six.moves import xrange # pylint: disable=redefined-builtin
@ -36,7 +37,6 @@ from tensorflow.python.platform import tf_logging as logging
from tensorflow.python.util import compat
from tensorflow.python.util import nest
from tensorflow.python.util import tf_inspect
from tensorflow.python.util.compat import collections_abc
from tensorflow.python.util.tf_export import tf_export

_XLA_COMPILE_ATTR = '_xla_compile_id'
@ -329,7 +329,7 @@ def _compile_internal(computation, inputs=None):
  if inputs is None:
    inputs = []

  if not isinstance(inputs, collections_abc.Sequence):
  if not isinstance(inputs, collections.Sequence):
    raise TypeError('inputs must be a list')

  # Flatten inputs.
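
Context note (illustrative, not part of this comparison): the hunks in this file and several that follow swap between the collections_abc compat alias and the plain collections module. In Python 3 the abstract base classes such as Sequence and Mapping live in collections.abc; the old aliases like collections.Sequence were deprecated in Python 3.3 and removed in Python 3.10, which is why a compat import is commonly used. A minimal sketch of the usual pattern (the same one that appears verbatim further down in this comparison):

try:
    from collections import abc as collections_abc  # Python 3
except ImportError:  # Python 2 fallback
    import collections as collections_abc

print(isinstance([1, 2, 3], collections_abc.Sequence))  # True on both
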
@ -428,15 +428,15 @@ def is_flat(outputs):
|
||||
"""
|
||||
# If outputs is a list or tuple, check if it has any nested structure. If
|
||||
# there is, then outputs is non-flat.
|
||||
if isinstance(outputs, collections_abc.Sequence):
|
||||
if isinstance(outputs, collections.Sequence):
|
||||
for o in outputs:
|
||||
if (isinstance(o, collections_abc.Sequence) or
|
||||
isinstance(o, collections_abc.Mapping) or
|
||||
if (isinstance(o, collections.Sequence) or
|
||||
isinstance(o, collections.Mapping) or
|
||||
hasattr(o.__class__, '__attrs_attrs__')):
|
||||
return False
|
||||
|
||||
# If outputs is a dict, it is non-flat.
|
||||
if isinstance(outputs, collections_abc.Mapping):
|
||||
if isinstance(outputs, collections.Mapping):
|
||||
return False
|
||||
|
||||
# If outputs is from the attrs library, it is non-flat.
|
||||
@ -467,7 +467,7 @@ def _postprocess_flat_outputs(outputs):
|
||||
if outputs is None:
|
||||
outputs = tuple()
|
||||
# If the computation only returned one value, make it a tuple.
|
||||
if not isinstance(outputs, collections_abc.Sequence):
|
||||
if not isinstance(outputs, collections.Sequence):
|
||||
outputs = (outputs,)
|
||||
|
||||
# Append `no_op` here so that return value of this function always contains
|
||||
|
@ -18,6 +18,7 @@ from __future__ import division
|
||||
from __future__ import print_function
|
||||
|
||||
import abc
|
||||
import collections
|
||||
import functools
|
||||
import sys
|
||||
import threading
|
||||
@ -71,7 +72,6 @@ from tensorflow.python.util import deprecation
|
||||
from tensorflow.python.util import function_utils
|
||||
from tensorflow.python.util import lazy_loader
|
||||
from tensorflow.python.util import nest as tf_nest
|
||||
from tensorflow.python.util.compat import collections_abc
|
||||
from tensorflow.python.util.tf_export import tf_export
|
||||
|
||||
# Loaded lazily due to a circular dependency (roughly
|
||||
@ -103,7 +103,7 @@ tf_export("data.UNKNOWN_CARDINALITY").export_constant(__name__, "UNKNOWN")
|
||||
|
||||
@tf_export("data.Dataset", v1=[])
|
||||
@six.add_metaclass(abc.ABCMeta)
|
||||
class DatasetV2(collections_abc.Iterable, tracking_base.Trackable,
|
||||
class DatasetV2(collections.Iterable, tracking_base.Trackable,
|
||||
composite_tensor.CompositeTensor):
|
||||
"""Represents a potentially large set of elements.
|
||||
|
||||
|
@ -18,6 +18,7 @@ from __future__ import division
|
||||
from __future__ import print_function
|
||||
|
||||
import abc
|
||||
import collections
|
||||
import threading
|
||||
import warnings
|
||||
|
||||
@ -40,7 +41,6 @@ from tensorflow.python.ops import gen_experimental_dataset_ops
|
||||
from tensorflow.python.training.saver import BaseSaverBuilder
|
||||
from tensorflow.python.training.tracking import base as trackable
|
||||
from tensorflow.python.util import deprecation
|
||||
from tensorflow.python.util.compat import collections_abc
|
||||
from tensorflow.python.util.tf_export import tf_export
|
||||
|
||||
|
||||
@ -543,7 +543,7 @@ class IteratorResourceDeleter(object):
|
||||
|
||||
@tf_export("data.Iterator", v1=[])
|
||||
@six.add_metaclass(abc.ABCMeta)
|
||||
class IteratorBase(collections_abc.Iterator, trackable.Trackable,
|
||||
class IteratorBase(collections.Iterator, trackable.Trackable,
|
||||
composite_tensor.CompositeTensor):
|
||||
"""Represents an iterator of a `tf.data.Dataset`.
|
||||
|
||||
|
@ -440,7 +440,7 @@ def type_spec_from_value(element, use_fallback=True):
|
||||
|
||||
if isinstance(element, tuple):
|
||||
if hasattr(element, "_fields") and isinstance(
|
||||
element._fields, collections_abc.Sequence) and all(
|
||||
element._fields, collections.Sequence) and all(
|
||||
isinstance(f, six.string_types) for f in element._fields):
|
||||
if isinstance(element, wrapt.ObjectProxy):
|
||||
element_type = type(element.__wrapped__)
|
||||
|
@ -99,6 +99,7 @@ from __future__ import division
|
||||
from __future__ import print_function
|
||||
|
||||
import abc
|
||||
import collections
|
||||
import re
|
||||
import threading
|
||||
|
||||
@ -112,7 +113,6 @@ from tensorflow.python.framework import ops
|
||||
from tensorflow.python.platform import tf_logging
|
||||
from tensorflow.python.training import monitored_session
|
||||
from tensorflow.python.util import nest
|
||||
from tensorflow.python.util.compat import collections_abc
|
||||
|
||||
|
||||
# Helper function.
|
||||
@ -445,7 +445,7 @@ class BaseDebugWrapperSession(session.SessionInterface):
|
||||
"""Check whether a possibly nested structure is empty."""
|
||||
if not nest.is_nested(x):
|
||||
return False
|
||||
if isinstance(x, collections_abc.Mapping):
|
||||
if isinstance(x, collections.Mapping):
|
||||
return is_empty(list(x.values()))
|
||||
for item in x:
|
||||
if not is_empty(item):
|
||||
|
@ -18,6 +18,7 @@ from __future__ import absolute_import
|
||||
from __future__ import division
|
||||
from __future__ import print_function
|
||||
|
||||
import collections
|
||||
import functools
|
||||
import sys
|
||||
|
||||
@ -52,7 +53,6 @@ from tensorflow.python.ops import math_ops
|
||||
from tensorflow.python.ops.ragged import ragged_tensor
|
||||
from tensorflow.python.types import distribute as distribute_types
|
||||
from tensorflow.python.util import nest
|
||||
from tensorflow.python.util.compat import collections_abc
|
||||
from tensorflow.python.util.deprecation import deprecated
|
||||
from tensorflow.python.util.tf_export import tf_export
|
||||
from tensorflow.tools.docs import doc_controls
|
||||
@ -143,7 +143,7 @@ def get_distributed_datasets_from_function(dataset_fn,
|
||||
|
||||
|
||||
@tf_export("distribute.DistributedIterator", v1=[])
|
||||
class DistributedIteratorInterface(collections_abc.Iterator,
|
||||
class DistributedIteratorInterface(collections.Iterator,
|
||||
distribute_types.Iterator):
|
||||
"""An iterator over `tf.distribute.DistributedDataset`.
|
||||
|
||||
@ -272,7 +272,7 @@ class DistributedIteratorInterface(collections_abc.Iterator,
|
||||
|
||||
|
||||
@tf_export("distribute.DistributedDataset", v1=[])
|
||||
class DistributedDatasetInterface(collections_abc.Iterable,
|
||||
class DistributedDatasetInterface(collections.Iterable,
|
||||
distribute_types.Iterable):
|
||||
# pylint: disable=line-too-long
|
||||
"""Represents a dataset distributed among devices and machines.
|
||||
|
@ -20,11 +20,9 @@ from __future__ import print_function
|
||||
from absl.testing import parameterized
|
||||
import numpy as np
|
||||
|
||||
|
||||
from tensorflow.python.dlpack import dlpack
|
||||
from tensorflow.python.framework import constant_op
|
||||
from tensorflow.python.framework import dtypes
|
||||
from tensorflow.python.framework import errors
|
||||
from tensorflow.python.framework import ops
|
||||
from tensorflow.python.platform import test
|
||||
|
||||
@ -97,12 +95,6 @@ class DLPackTest(parameterized.TestCase, test.TestCase):
|
||||
self.assertRaisesRegex(Exception, ".* is not supported by dlpack",
|
||||
UnsupportedComplex64)
|
||||
|
||||
def testMustPassTensorArgumentToDLPack(self):
|
||||
with self.assertRaisesRegex(
|
||||
errors.InvalidArgumentError,
|
||||
"The argument to `to_dlpack` must be a TF tensor, not Python object"):
|
||||
dlpack.to_dlpack([1])
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
ops.enable_eager_execution()
|
||||
|
@ -231,10 +231,7 @@ py_test(
|
||||
srcs = ["sequence_feature_column_integration_test.py"],
|
||||
python_version = "PY3",
|
||||
srcs_version = "PY2AND3",
|
||||
tags = [
|
||||
"no_mac",
|
||||
"no_pip",
|
||||
],
|
||||
tags = ["no_pip"],
|
||||
deps = [
|
||||
":feature_column_v2",
|
||||
"//tensorflow/python:client_testlib",
|
||||
|
@ -32,7 +32,6 @@ from tensorflow.python.framework import tensor_conversion_registry
|
||||
from tensorflow.python.framework import tensor_shape
|
||||
from tensorflow.python.framework import type_spec
|
||||
from tensorflow.python.types import internal
|
||||
from tensorflow.python.util.compat import collections_abc
|
||||
from tensorflow.python.util.lazy_loader import LazyLoader
|
||||
from tensorflow.python.util.tf_export import tf_export
|
||||
|
||||
@ -345,7 +344,7 @@ def internal_convert_n_to_tensor_or_indexed_slices(values,
|
||||
RuntimeError: If a registered conversion function returns an invalid
|
||||
value.
|
||||
"""
|
||||
if not isinstance(values, collections_abc.Iterable):
|
||||
if not isinstance(values, collections.Iterable):
|
||||
raise TypeError("values must be iterable.")
|
||||
ret = []
|
||||
for i, value in enumerate(values):
|
||||
|
@ -19,6 +19,7 @@ from __future__ import print_function
|
||||
|
||||
import abc
|
||||
import atexit
|
||||
import collections
|
||||
from collections import OrderedDict
|
||||
import functools
|
||||
import multiprocessing.pool
|
||||
@ -616,7 +617,7 @@ def standardize_sample_or_class_weights(x_weight, output_names, weight_type):
|
||||
'You should provide one `' + weight_type + '`'
|
||||
'array per model output.')
|
||||
return x_weight
|
||||
if isinstance(x_weight, collections_abc.Mapping):
|
||||
if isinstance(x_weight, collections.Mapping):
|
||||
generic_utils.check_for_unexpected_keys(weight_type, x_weight, output_names)
|
||||
x_weights = []
|
||||
for name in output_names:
|
||||
@ -863,7 +864,7 @@ def collect_per_output_metric_info(metrics,
|
||||
[metrics_module.clone_metric(m) for m in metrics])
|
||||
else:
|
||||
nested_metrics = [metrics]
|
||||
elif isinstance(metrics, collections_abc.Mapping):
|
||||
elif isinstance(metrics, collections.Mapping):
|
||||
generic_utils.check_for_unexpected_keys('metrics', metrics, output_names)
|
||||
nested_metrics = []
|
||||
for name in output_names:
|
||||
@ -1442,7 +1443,7 @@ def prepare_sample_weight_modes(training_endpoints, sample_weight_mode):
|
||||
ValueError: In case of invalid `sample_weight_mode` input.
|
||||
"""
|
||||
|
||||
if isinstance(sample_weight_mode, collections_abc.Mapping):
|
||||
if isinstance(sample_weight_mode, collections.Mapping):
|
||||
generic_utils.check_for_unexpected_keys(
|
||||
'sample_weight_mode', sample_weight_mode,
|
||||
[e.output_name for e in training_endpoints])
|
||||
@ -1535,7 +1536,7 @@ def prepare_loss_weights(training_endpoints, loss_weights=None):
|
||||
if loss_weights is None:
|
||||
for e in training_endpoints:
|
||||
e.loss_weight = 1.
|
||||
elif isinstance(loss_weights, collections_abc.Mapping):
|
||||
elif isinstance(loss_weights, collections.Mapping):
|
||||
generic_utils.check_for_unexpected_keys(
|
||||
'loss_weights', loss_weights,
|
||||
[e.output_name for e in training_endpoints])
|
||||
|
@ -30,12 +30,12 @@ from tensorflow.python.keras.engine.base_layer import Layer
|
||||
from tensorflow.python.keras.engine.input_spec import InputSpec
|
||||
from tensorflow.python.keras.utils import tf_utils
|
||||
from tensorflow.python.ops import array_ops
|
||||
from tensorflow.python.ops import control_flow_ops
|
||||
from tensorflow.python.ops import init_ops
|
||||
from tensorflow.python.ops import math_ops
|
||||
from tensorflow.python.ops import nn
|
||||
from tensorflow.python.ops import state_ops
|
||||
from tensorflow.python.ops import variables as tf_variables
|
||||
from tensorflow.python.platform import device_context
|
||||
from tensorflow.python.platform import tf_logging as logging
|
||||
from tensorflow.python.util.tf_export import keras_export
|
||||
|
||||
@ -514,7 +514,7 @@ class BatchNormalizationBase(Layer):
|
||||
use_fused_avg_updates = (
|
||||
ops.executing_eagerly_outside_functions() and
|
||||
isinstance(self.momentum, (float, int)) and
|
||||
enclosing_xla_context() is None)
|
||||
device_context.enclosing_tpu_context() is None)
|
||||
if use_fused_avg_updates:
|
||||
exponential_avg_factor = 1.0 - self.momentum
|
||||
else:
|
||||
@ -930,23 +930,6 @@ def replace_in_base_docstring(replacements):
|
||||
return string
|
||||
|
||||
|
||||
def enclosing_xla_context():
|
||||
"""Recursively find and return the XLAControlFlowContext."""
|
||||
graph = ops.get_default_graph()
|
||||
while graph is not None:
|
||||
# pylint: disable=protected-access
|
||||
context_ = graph._get_control_flow_context()
|
||||
# pylint: enable=protected-access
|
||||
while context_ is not None:
|
||||
if isinstance(context_, control_flow_ops.XLAControlFlowContext):
|
||||
return context_
|
||||
context_ = context_.outer_context
|
||||
# This may be a FuncGraph due to defuns or v2 control flow. We need to
|
||||
# find the original graph with the XLAControlFlowContext.
|
||||
graph = getattr(graph, 'outer_graph', None)
|
||||
return None
|
||||
|
||||
|
||||
@keras_export(v1=['keras.layers.BatchNormalization']) # pylint: disable=missing-docstring
|
||||
class BatchNormalization(BatchNormalizationBase):
|
||||
|
||||
|
@ -18,10 +18,11 @@ from __future__ import absolute_import
|
||||
from __future__ import division
|
||||
from __future__ import print_function
|
||||
|
||||
import collections
|
||||
|
||||
import numpy as np
|
||||
|
||||
from tensorflow.python.platform import test
|
||||
from tensorflow.python.util.compat import collections_abc
|
||||
|
||||
|
||||
class PreprocessingLayerTest(test.TestCase):
|
||||
@ -37,7 +38,7 @@ class PreprocessingLayerTest(test.TestCase):
|
||||
self.assertEqual(len(a), len(b))
|
||||
for a_value, b_value in zip(a, b):
|
||||
self.assertAllCloseOrEqual(a_value, b_value, msg=msg)
|
||||
elif isinstance(a, collections_abc.Mapping):
|
||||
elif isinstance(a, collections.Mapping):
|
||||
self.assertEqual(len(a), len(b))
|
||||
for key, a_value in a.items():
|
||||
b_value = b[key]
|
||||
|
@ -44,10 +44,14 @@ from tensorflow.python.platform import tf_logging as logging
|
||||
from tensorflow.python.training.tracking import base as trackable
|
||||
from tensorflow.python.training.tracking import data_structures
|
||||
from tensorflow.python.util import nest
|
||||
from tensorflow.python.util.compat import collections_abc
|
||||
from tensorflow.python.util.tf_export import keras_export
|
||||
from tensorflow.tools.docs import doc_controls
|
||||
|
||||
try:
|
||||
from collections import abc as collections_abc # pylint: disable=g-import-not-at-top
|
||||
except ImportError: # For Python 2
|
||||
import collections as collections_abc # pylint: disable=g-import-not-at-top
|
||||
|
||||
|
||||
RECURRENT_DROPOUT_WARNING_MSG = (
|
||||
'RNN `implementation=2` is not supported when `recurrent_dropout` is set. '
|
||||
|
@ -727,7 +727,6 @@ cuda_py_test(
|
||||
name = "matrix_solve_ls_op_test",
|
||||
size = "medium",
|
||||
srcs = ["matrix_solve_ls_op_test.py"],
|
||||
tags = ["no_mac"],
|
||||
deps = [
|
||||
"//tensorflow/python:array_ops",
|
||||
"//tensorflow/python:client_testlib",
|
||||
@ -790,7 +789,6 @@ tf_py_test(
|
||||
name = "parsing_ops_test",
|
||||
size = "medium",
|
||||
srcs = ["parsing_ops_test.py"],
|
||||
tags = ["no_mac"],
|
||||
deps = [
|
||||
"//tensorflow/core:protos_all_py",
|
||||
"//tensorflow/python:array_ops",
|
||||
|
@ -24,7 +24,6 @@ tf_py_test(
|
||||
name = "resource_ops_test",
|
||||
size = "small",
|
||||
srcs = ["resource_ops_test.py"],
|
||||
tags = ["no_mac"],
|
||||
deps = [
|
||||
"//tensorflow/core/kernels/boosted_trees:boosted_trees_proto_py",
|
||||
"//tensorflow/python:boosted_trees_ops",
|
||||
@ -40,7 +39,6 @@ tf_py_test(
|
||||
name = "prediction_ops_test",
|
||||
size = "small",
|
||||
srcs = ["prediction_ops_test.py"],
|
||||
tags = ["no_mac"],
|
||||
deps = [
|
||||
"//tensorflow/core/kernels/boosted_trees:boosted_trees_proto_py",
|
||||
"//tensorflow/python:array_ops",
|
||||
@ -71,7 +69,6 @@ tf_py_test(
|
||||
name = "training_ops_test",
|
||||
size = "small",
|
||||
srcs = ["training_ops_test.py"],
|
||||
tags = ["no_mac"],
|
||||
deps = [
|
||||
"//tensorflow/core/kernels/boosted_trees:boosted_trees_proto_py",
|
||||
"//tensorflow/python:array_ops",
|
||||
|
@ -4581,14 +4581,6 @@ class ControlFlowTest(test.TestCase, parameterized.TestCase):
|
||||
result = control_flow_ops.merge([v_f, v_t])
|
||||
self.evaluate(result)
|
||||
|
||||
def testSwitchEagerMode(self):
|
||||
if not context.executing_eagerly():
|
||||
return
|
||||
input_data = [1, 2, 3, 4]
|
||||
vf, vt = control_flow_ops.switch(input_data, False)
|
||||
self.assertAllEqual(vf, input_data)
|
||||
self.assertAllEqual(vt, [])
|
||||
|
||||
@test_util.run_deprecated_v1
|
||||
def testQIntArgAndRet(self):
|
||||
|
||||
|
@ -25,9 +25,7 @@ from tensorflow.python.eager import context
|
||||
from tensorflow.python.framework import errors
|
||||
from tensorflow.python.framework import ops
|
||||
from tensorflow.python.framework import sparse_tensor
|
||||
from tensorflow.python.framework import test_util
|
||||
from tensorflow.python.ops import bincount_ops
|
||||
from tensorflow.python.ops import gen_count_ops
|
||||
from tensorflow.python.ops import sparse_ops
|
||||
from tensorflow.python.ops.ragged import ragged_factory_ops
|
||||
from tensorflow.python.ops.ragged import ragged_tensor
|
||||
@ -836,121 +834,5 @@ class TestSparseCountFailureModes(test.TestCase):
|
||||
self.evaluate(bincount_ops.sparse_bincount(x, weights=weights, axis=-1))
|
||||
|
||||
|
||||
@test_util.run_all_in_graph_and_eager_modes
|
||||
@test_util.disable_tfrt
|
||||
class RawOpsTest(test.TestCase, parameterized.TestCase):
|
||||
|
||||
def testSparseCountSparseOutputBadIndicesShape(self):
|
||||
indices = [[[0], [0]], [[0], [1]], [[1], [0]], [[1], [2]]]
|
||||
values = [1, 1, 1, 10]
|
||||
weights = [1, 2, 4, 6]
|
||||
dense_shape = [2, 3]
|
||||
with self.assertRaisesRegex(errors.InvalidArgumentError,
|
||||
"Input indices must be a 2-dimensional tensor"):
|
||||
self.evaluate(
|
||||
gen_count_ops.SparseCountSparseOutput(
|
||||
indices=indices,
|
||||
values=values,
|
||||
dense_shape=dense_shape,
|
||||
weights=weights,
|
||||
binary_output=False))
|
||||
|
||||
def testSparseCountSparseOutputBadWeightsShape(self):
|
||||
indices = [[0, 0], [0, 1], [1, 0], [1, 2]]
|
||||
values = [1, 1, 1, 10]
|
||||
weights = [1, 2, 4]
|
||||
dense_shape = [2, 3]
|
||||
with self.assertRaisesRegex(errors.InvalidArgumentError,
|
||||
"Weights and values must have the same shape"):
|
||||
self.evaluate(
|
||||
gen_count_ops.SparseCountSparseOutput(
|
||||
indices=indices,
|
||||
values=values,
|
||||
dense_shape=dense_shape,
|
||||
weights=weights,
|
||||
binary_output=False))
|
||||
|
||||
def testSparseCountSparseOutputBadNumberOfValues(self):
|
||||
indices = [[0, 0], [0, 1], [1, 0]]
|
||||
values = [1, 1, 1, 10]
|
||||
weights = [1, 2, 4, 6]
|
||||
dense_shape = [2, 3]
|
||||
with self.assertRaisesRegex(
|
||||
errors.InvalidArgumentError,
|
||||
"Number of values must match first dimension of indices"):
|
||||
self.evaluate(
|
||||
gen_count_ops.SparseCountSparseOutput(
|
||||
indices=indices,
|
||||
values=values,
|
||||
dense_shape=dense_shape,
|
||||
weights=weights,
|
||||
binary_output=False))
|
||||
|
||||
def testRaggedCountSparseOutput(self):
|
||||
splits = [0, 4, 7]
|
||||
values = [1, 1, 2, 1, 2, 10, 5]
|
||||
weights = [1, 2, 3, 4, 5, 6, 7]
|
||||
output_indices, output_values, output_shape = self.evaluate(
|
||||
gen_count_ops.RaggedCountSparseOutput(
|
||||
splits=splits, values=values, weights=weights, binary_output=False))
|
||||
self.assertAllEqual([[0, 1], [0, 2], [1, 2], [1, 5], [1, 10]],
|
||||
output_indices)
|
||||
self.assertAllEqual([7, 3, 5, 7, 6], output_values)
|
||||
self.assertAllEqual([2, 11], output_shape)
|
||||
|
||||
def testRaggedCountSparseOutputBadWeightsShape(self):
|
||||
splits = [0, 4, 7]
|
||||
values = [1, 1, 2, 1, 2, 10, 5]
|
||||
weights = [1, 2, 3, 4, 5, 6]
|
||||
with self.assertRaisesRegex(errors.InvalidArgumentError,
|
||||
"Weights and values must have the same shape"):
|
||||
self.evaluate(
|
||||
gen_count_ops.RaggedCountSparseOutput(
|
||||
splits=splits,
|
||||
values=values,
|
||||
weights=weights,
|
||||
binary_output=False))
|
||||
|
||||
def testRaggedCountSparseOutputEmptySplits(self):
|
||||
splits = []
|
||||
values = [1, 1, 2, 1, 2, 10, 5]
|
||||
weights = [1, 2, 3, 4, 5, 6, 7]
|
||||
with self.assertRaisesRegex(
|
||||
errors.InvalidArgumentError,
|
||||
"Must provide at least 2 elements for the splits argument"):
|
||||
self.evaluate(
|
||||
gen_count_ops.RaggedCountSparseOutput(
|
||||
splits=splits,
|
||||
values=values,
|
||||
weights=weights,
|
||||
binary_output=False))
|
||||
|
||||
def testRaggedCountSparseOutputBadSplitsStart(self):
|
||||
splits = [1, 7]
|
||||
values = [1, 1, 2, 1, 2, 10, 5]
|
||||
weights = [1, 2, 3, 4, 5, 6, 7]
|
||||
with self.assertRaisesRegex(errors.InvalidArgumentError,
|
||||
"Splits must start with 0"):
|
||||
self.evaluate(
|
||||
gen_count_ops.RaggedCountSparseOutput(
|
||||
splits=splits,
|
||||
values=values,
|
||||
weights=weights,
|
||||
binary_output=False))
|
||||
|
||||
def testRaggedCountSparseOutputBadSplitsEnd(self):
|
||||
splits = [0, 5]
|
||||
values = [1, 1, 2, 1, 2, 10, 5]
|
||||
weights = [1, 2, 3, 4, 5, 6, 7]
|
||||
with self.assertRaisesRegex(errors.InvalidArgumentError,
|
||||
"Splits must end with the number of values"):
|
||||
self.evaluate(
|
||||
gen_count_ops.RaggedCountSparseOutput(
|
||||
splits=splits,
|
||||
values=values,
|
||||
weights=weights,
|
||||
binary_output=False))
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
test.main()
|
||||
|
@ -70,6 +70,8 @@ from __future__ import absolute_import
|
||||
from __future__ import division
|
||||
from __future__ import print_function
|
||||
|
||||
import collections
|
||||
|
||||
import numpy as np
|
||||
import six
|
||||
from six.moves import builtins
|
||||
@ -98,7 +100,6 @@ from tensorflow.python.util import compat
|
||||
from tensorflow.python.util import deprecation
|
||||
from tensorflow.python.util import dispatch
|
||||
from tensorflow.python.util import nest
|
||||
from tensorflow.python.util.compat import collections_abc
|
||||
from tensorflow.python.util.tf_export import tf_export
|
||||
|
||||
# Aliases for some automatically-generated names.
|
||||
@ -3492,7 +3493,7 @@ def add_n(inputs, name=None):
|
||||
ValueError: If `inputs` don't all have same shape and dtype or the shape
|
||||
cannot be inferred.
|
||||
"""
|
||||
if not inputs or not isinstance(inputs, collections_abc.Iterable):
|
||||
if not inputs or not isinstance(inputs, collections.Iterable):
|
||||
raise ValueError("inputs must be an iterable of at least one "
|
||||
"Tensor/IndexedSlices with the same dtype and shape")
|
||||
inputs = ops.convert_n_to_tensor_or_indexed_slices(inputs)
|
||||
@ -3625,9 +3626,9 @@ def sigmoid(x, name=None):
|
||||
|
||||
Returns:
|
||||
A Tensor with the same type as `x`.
|
||||
|
||||
|
||||
Usage Example:
|
||||
|
||||
|
||||
>>> x = tf.constant([-128.0, 0.0, 128.0], dtype=tf.float32)
|
||||
>>> tf.sigmoid(x)
|
||||
<tf.Tensor: shape=(3,), dtype=float32,
|
||||
|
@ -18,6 +18,7 @@ from __future__ import absolute_import
|
||||
from __future__ import division
|
||||
from __future__ import print_function
|
||||
|
||||
import collections
|
||||
import functools
|
||||
import numbers
|
||||
import os
|
||||
@ -3269,7 +3270,7 @@ def conv_transpose(input, # pylint: disable=redefined-builtin
|
||||
[input, filter, output_shape]) as name:
|
||||
if tensor_util.is_tensor(output_shape):
|
||||
n = output_shape.shape[0] - 2
|
||||
elif isinstance(output_shape, collections_abc.Sized):
|
||||
elif isinstance(output_shape, collections.Sized):
|
||||
n = len(output_shape) - 2
|
||||
else:
|
||||
raise ValueError("output_shape must be a tensor or sized collection.")
|
||||
|
@ -27,7 +27,6 @@ from six.moves import xrange # pylint: disable=redefined-builtin
|
||||
from tensorflow.python.eager import def_function
|
||||
from tensorflow.python.framework import constant_op
|
||||
from tensorflow.python.framework import dtypes
|
||||
from tensorflow.python.framework import errors
|
||||
from tensorflow.python.framework import ops
|
||||
from tensorflow.python.framework import tensor_spec
|
||||
from tensorflow.python.framework import test_util
|
||||
@ -1217,46 +1216,6 @@ class DataFormatDimMapTest(test_lib.TestCase):
|
||||
y_val = self.evaluate(y)
|
||||
self.assertAllEqual(y_val, y_val_expected)
|
||||
|
||||
@test_util.disable_xla("XLA catches the error and rethrows as different one")
|
||||
def testInvalidLength(self):
|
||||
x = [-4, -3, -2, -1, 0, 1, 2, 3]
|
||||
with self.assertRaisesRegex(errors.InvalidArgumentError,
|
||||
"Source format must be of length 4 or 5"):
|
||||
op = nn_ops.data_format_dim_map(
|
||||
x, src_format="12345678", dst_format="87654321")
|
||||
with test_util.use_gpu():
|
||||
self.evaluate(op)
|
||||
|
||||
@test_util.disable_xla("XLA catches the error and rethrows as different one")
|
||||
def testDuplicateSrc(self):
|
||||
x = [-4, -3, -2, -1, 0, 1, 2, 3]
|
||||
with self.assertRaisesRegex(
|
||||
errors.InvalidArgumentError,
|
||||
"Destination and source format must determine a permutation"):
|
||||
op = nn_ops.data_format_dim_map(x, src_format="1233", dst_format="4321")
|
||||
with test_util.use_gpu():
|
||||
self.evaluate(op)
|
||||
|
||||
@test_util.disable_xla("XLA catches the error and rethrows as different one")
|
||||
def testDuplicateDst(self):
|
||||
x = [-4, -3, -2, -1, 0, 1, 2, 3]
|
||||
with self.assertRaisesRegex(
|
||||
errors.InvalidArgumentError,
|
||||
"Destination and source format must determine a permutation"):
|
||||
op = nn_ops.data_format_dim_map(x, src_format="1234", dst_format="3321")
|
||||
with test_util.use_gpu():
|
||||
self.evaluate(op)
|
||||
|
||||
@test_util.disable_xla("XLA catches the error and rethrows as different one")
|
||||
def testExtraSpecifiers(self):
|
||||
x = [-4, -3, -2, -1, 0, 1, 2, 3]
|
||||
with self.assertRaisesRegex(
|
||||
errors.InvalidArgumentError,
|
||||
"Destination and source format must determine a permutation"):
|
||||
op = nn_ops.data_format_dim_map(x, src_format="1234", dst_format="5321")
|
||||
with test_util.use_gpu():
|
||||
self.evaluate(op)
|
||||
|
||||
|
||||
class DataFormatVectorPermuteTest(test_lib.TestCase):
|
||||
|
||||
@ -1358,60 +1317,6 @@ class DataFormatVectorPermuteTest(test_lib.TestCase):
|
||||
y_val = self.evaluate(y)
|
||||
self.assertAllEqual(y_val, [[7, 4], [4, 5], [5, 1], [9, 3]])
|
||||
|
||||
@test_util.disable_xla("XLA catches the error and rethrows as different one")
|
||||
def testInvalidLength(self):
|
||||
x = [0, 1, 2, 3]
|
||||
with self.assertRaisesRegex(errors.InvalidArgumentError,
|
||||
"Source format must be of length 4 or 5"):
|
||||
op = nn_ops.data_format_vec_permute(
|
||||
x, src_format="12345678", dst_format="87654321")
|
||||
with test_util.use_gpu():
|
||||
self.evaluate(op)
|
||||
|
||||
@test_util.disable_xla("XLA catches the error and rethrows as different one")
|
||||
def testDuplicateSrc(self):
|
||||
x = [0, 1, 2, 3]
|
||||
with self.assertRaisesRegex(
|
||||
errors.InvalidArgumentError,
|
||||
"Destination and source format must determine a permutation"):
|
||||
op = nn_ops.data_format_vec_permute(
|
||||
x, src_format="1233", dst_format="4321")
|
||||
with test_util.use_gpu():
|
||||
self.evaluate(op)
|
||||
|
||||
@test_util.disable_xla("XLA catches the error and rethrows as different one")
|
||||
def testDuplicateDst(self):
|
||||
x = [0, 1, 2, 3]
|
||||
with self.assertRaisesRegex(
|
||||
errors.InvalidArgumentError,
|
||||
"Destination and source format must determine a permutation"):
|
||||
op = nn_ops.data_format_vec_permute(
|
||||
x, src_format="1234", dst_format="3321")
|
||||
with test_util.use_gpu():
|
||||
self.evaluate(op)
|
||||
|
||||
@test_util.disable_xla("XLA catches the error and rethrows as different one")
|
||||
def testExtraSpecifiers(self):
|
||||
x = [0, 1, 2, 3]
|
||||
with self.assertRaisesRegex(
|
||||
errors.InvalidArgumentError,
|
||||
"Destination and source format must determine a permutation"):
|
||||
op = nn_ops.data_format_vec_permute(
|
||||
x, src_format="1234", dst_format="5321")
|
||||
with test_util.use_gpu():
|
||||
self.evaluate(op)
|
||||
|
||||
@test_util.disable_xla("XLA catches the error and rethrows as different one")
|
||||
def test2DNoWH(self):
|
||||
x = [[0, 1], [2, 3]]
|
||||
with self.assertRaisesRegex(
|
||||
errors.InvalidArgumentError,
|
||||
"Format specifier must contain H and W for 2D case"):
|
||||
op = nn_ops.data_format_vec_permute(
|
||||
x, src_format="1234", dst_format="4321")
|
||||
with test_util.use_gpu():
|
||||
self.evaluate(op)
|
||||
|
||||
|
||||
@test_util.run_all_in_graph_and_eager_modes
|
||||
class AvgPoolTest(test_lib.TestCase):
|
||||
|
@ -18,22 +18,16 @@ from __future__ import absolute_import
|
||||
from __future__ import division
|
||||
from __future__ import print_function
|
||||
|
||||
from absl.testing import parameterized
|
||||
|
||||
from tensorflow.python.eager import context
|
||||
from tensorflow.python.framework import constant_op
|
||||
from tensorflow.python.framework import errors
|
||||
from tensorflow.python.framework import ops
|
||||
from tensorflow.python.framework import test_util
|
||||
from tensorflow.python.ops import gen_data_flow_ops
|
||||
from tensorflow.python.ops import gen_math_ops
|
||||
from tensorflow.python.ops import gen_string_ops
|
||||
from tensorflow.python.platform import test
|
||||
|
||||
|
||||
@test_util.run_all_in_graph_and_eager_modes
|
||||
@test_util.disable_tfrt
|
||||
class RawOpsTest(test.TestCase, parameterized.TestCase):
|
||||
class RawOpsTest(test.TestCase):
|
||||
|
||||
def testSimple(self):
|
||||
x = constant_op.constant(1)
|
||||
@ -64,29 +58,6 @@ class RawOpsTest(test.TestCase, parameterized.TestCase):
|
||||
gen_math_ops.Any(input=x, axis=0),
|
||||
gen_math_ops.Any(input=x, axis=0, keep_dims=False))
|
||||
|
||||
@parameterized.parameters([[0, 8]], [[-1, 6]])
|
||||
def testStringNGramsBadDataSplits(self, splits):
|
||||
data = ["aa", "bb", "cc", "dd", "ee", "ff"]
|
||||
with self.assertRaisesRegex(errors.InvalidArgumentError,
|
||||
"Invalid split value"):
|
||||
self.evaluate(
|
||||
gen_string_ops.string_n_grams(
|
||||
data=data,
|
||||
data_splits=splits,
|
||||
separator="",
|
||||
ngram_widths=[2],
|
||||
left_pad="",
|
||||
right_pad="",
|
||||
pad_width=0,
|
||||
preserve_short_sequences=False))
|
||||
|
||||
def testGetSessionHandle(self):
|
||||
if context.executing_eagerly():
|
||||
with self.assertRaisesRegex(
|
||||
errors.FailedPreconditionError,
|
||||
"GetSessionHandle called on null session state"):
|
||||
gen_data_flow_ops.GetSessionHandle(value=[1])
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
ops.enable_eager_execution()
|
||||
|
@ -21,17 +21,14 @@ from __future__ import print_function
|
||||
from absl.testing import parameterized
|
||||
import numpy as np
|
||||
|
||||
from tensorflow.python.eager import context
|
||||
from tensorflow.python.framework import constant_op
|
||||
from tensorflow.python.framework import dtypes
|
||||
from tensorflow.python.framework import errors
|
||||
from tensorflow.python.framework import ops
|
||||
from tensorflow.python.framework import sparse_tensor
|
||||
from tensorflow.python.framework import test_util
|
||||
# Need array_grad to register gradient for Identity.
|
||||
from tensorflow.python.ops import array_grad # pylint: disable=unused-import
|
||||
from tensorflow.python.ops import array_ops
|
||||
from tensorflow.python.ops import gen_sparse_ops
|
||||
from tensorflow.python.ops import gradient_checker_v2 as gradient_checker
|
||||
from tensorflow.python.ops import math_ops
|
||||
# Need sparse_grad to register gradient for SparseToDense.
|
||||
@ -184,57 +181,5 @@ class SparseOpsTest(test_util.TensorFlowTestCase, parameterized.TestCase):
|
||||
self.assertAllEqual(expected, result)
|
||||
|
||||
|
||||
@test_util.run_all_in_graph_and_eager_modes
|
||||
class RawOpsTest(test_util.TensorFlowTestCase, parameterized.TestCase):
|
||||
|
||||
def testSparseFillEmptyRowsGrad(self):
|
||||
reverse_index_map = [2, 1]
|
||||
grad_values = [0, 1, 2, 3]
|
||||
d_values, d_default_value = self.evaluate(
|
||||
gen_sparse_ops.SparseFillEmptyRowsGrad(
|
||||
reverse_index_map=reverse_index_map, grad_values=grad_values))
|
||||
self.assertAllEqual([2, 1], d_values)
|
||||
self.assertEqual(3, d_default_value)
|
||||
|
||||
def testSparseFillEmptyRowsGradNegativeIndexMapValue(self):
|
||||
reverse_index_map = [2, -1]
|
||||
grad_values = [0, 1, 2, 3]
|
||||
with self.assertRaisesRegex(
|
||||
errors.InvalidArgumentError,
|
||||
r'Elements in reverse index must be in \[0, 4\)'):
|
||||
self.evaluate(
|
||||
gen_sparse_ops.SparseFillEmptyRowsGrad(
|
||||
reverse_index_map=reverse_index_map, grad_values=grad_values))
|
||||
|
||||
def testSparseFillEmptyRowsGradLargeIndexMapValue(self):
|
||||
reverse_index_map = [2, 10]
|
||||
grad_values = [0, 1, 2, 3]
|
||||
with self.assertRaisesRegex(
|
||||
errors.InvalidArgumentError,
|
||||
r'Elements in reverse index must be in \[0, 4\)'):
|
||||
self.evaluate(
|
||||
gen_sparse_ops.SparseFillEmptyRowsGrad(
|
||||
reverse_index_map=reverse_index_map, grad_values=grad_values))
|
||||
|
||||
def testSparseFillEmptyRowsGradMatrix(self):
|
||||
reverse_index_map = [0, 1]
|
||||
grad_values = [[0, 1], [2, 3]]
|
||||
# Note: Eager mode and graph mode throw different errors here. Graph mode
|
||||
# will fail with a ValueError from the shape checking logic, while Eager
|
||||
# will fail with an InvalidArgumentError from the kernel itself.
|
||||
if context.executing_eagerly():
|
||||
with self.assertRaisesRegex(errors.InvalidArgumentError,
|
||||
r'grad_values must be a vector'):
|
||||
self.evaluate(
|
||||
gen_sparse_ops.SparseFillEmptyRowsGrad(
|
||||
reverse_index_map=reverse_index_map, grad_values=grad_values))
|
||||
else:
|
||||
with self.assertRaisesRegex(ValueError,
|
||||
r'Shape must be rank 1 but is rank 2'):
|
||||
self.evaluate(
|
||||
gen_sparse_ops.SparseFillEmptyRowsGrad(
|
||||
reverse_index_map=reverse_index_map, grad_values=grad_values))
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
googletest.main()
|
||||
|
@ -18,6 +18,7 @@ from __future__ import absolute_import
|
||||
from __future__ import division
|
||||
from __future__ import print_function
|
||||
|
||||
import collections as collections_lib
|
||||
import copy
|
||||
import enum # pylint: disable=g-bad-import-order
|
||||
import functools
|
||||
@ -46,7 +47,6 @@ from tensorflow.python.util import deprecation
|
||||
from tensorflow.python.util import function_utils
|
||||
from tensorflow.python.util import tf_contextlib
|
||||
from tensorflow.python.util import tf_inspect
|
||||
from tensorflow.python.util.compat import collections_abc
|
||||
from tensorflow.python.util.tf_export import tf_export
|
||||
|
||||
__all__ = [
|
||||
@ -77,13 +77,13 @@ class _PartitionInfo(object):
|
||||
ValueError: If `full_shape` or `var_offset` differ in length. If
|
||||
`var_offset` exceeds `full_shape` in any dimension.
|
||||
"""
|
||||
if not isinstance(full_shape, collections_abc.Sequence) or isinstance(
|
||||
if not isinstance(full_shape, collections_lib.Sequence) or isinstance(
|
||||
full_shape, six.string_types):
|
||||
raise TypeError(
|
||||
"`full_shape` must be a sequence (like tuple or list) instead of " +
|
||||
type(full_shape).__name__)
|
||||
|
||||
if not isinstance(var_offset, collections_abc.Sequence) or isinstance(
|
||||
if not isinstance(var_offset, collections_lib.Sequence) or isinstance(
|
||||
var_offset, six.string_types):
|
||||
raise TypeError(
|
||||
"`var_offset` must be a sequence (like tuple or list) instead of " +
|
||||
@ -151,7 +151,7 @@ class _PartitionInfo(object):
|
||||
ValueError: If `shape` is not the same length as `self.full_shape`. If
|
||||
the variable is partitioned in more than one dimension.
|
||||
"""
|
||||
if not isinstance(shape, collections_abc.Sequence) or isinstance(
|
||||
if not isinstance(shape, collections_lib.Sequence) or isinstance(
|
||||
shape, six.string_types):
|
||||
raise TypeError(
|
||||
"`shape` must be a sequence (like tuple or list) instead of " +
|
||||
@ -451,7 +451,7 @@ class _VariableStore(object):
|
||||
synchronization=VariableSynchronization.AUTO,
|
||||
aggregation=VariableAggregation.NONE):
|
||||
is_scalar = (
|
||||
shape is not None and isinstance(shape, collections_abc.Sequence) and
|
||||
shape is not None and isinstance(shape, collections_lib.Sequence) and
|
||||
not shape)
|
||||
# Partitioned variable case
|
||||
if partitioner is not None and not is_scalar:
|
||||
@ -2511,7 +2511,7 @@ def _call_partitioner(partitioner, shape, dtype):
|
||||
"shape: %s" % shape)
|
||||
|
||||
slicing = partitioner(shape=shape, dtype=dtype)
|
||||
if not isinstance(slicing, collections_abc.Sequence):
|
||||
if not isinstance(slicing, collections_lib.Sequence):
|
||||
raise ValueError("Partitioner must return a sequence, but saw: %s" %
|
||||
slicing)
|
||||
if len(slicing) != shape.ndims:
|
||||
|
@ -1129,16 +1129,9 @@ PYBIND11_MODULE(_pywrap_tfe, m) {
|
||||
// DLPack functions
|
||||
m.def("TFE_ToDlpackCapsule", [](py::handle& o) {
|
||||
PyObject* eager_tensor_pyobject_ptr = o.ptr();
|
||||
TFE_TensorHandle* thandle = EagerTensor_Handle(eager_tensor_pyobject_ptr);
|
||||
tensorflow::Safe_TF_StatusPtr status =
|
||||
tensorflow::make_safe(TF_NewStatus());
|
||||
|
||||
if (!EagerTensor_CheckExact(eager_tensor_pyobject_ptr)) {
|
||||
status->status = tensorflow::errors::InvalidArgument(
|
||||
"The argument to `to_dlpack` must be a TF tensor, not Python object");
|
||||
tensorflow::MaybeRaiseRegisteredFromTFStatus(status.get());
|
||||
}
|
||||
|
||||
TFE_TensorHandle* thandle = EagerTensor_Handle(eager_tensor_pyobject_ptr);
|
||||
void* dlm_ptr = tensorflow::TFE_HandleToDLPack(thandle, status.get());
|
||||
tensorflow::MaybeRaiseRegisteredFromTFStatus(status.get());
|
||||
|
||||
|
@ -24,6 +24,7 @@ from __future__ import division
|
||||
from __future__ import print_function
|
||||
|
||||
import argparse
|
||||
import collections
|
||||
import os
|
||||
import re
|
||||
import sys
|
||||
@ -50,7 +51,6 @@ from tensorflow.python.saved_model import signature_constants
|
||||
from tensorflow.python.tools import saved_model_aot_compile
|
||||
from tensorflow.python.tools import saved_model_utils
|
||||
from tensorflow.python.tpu import tpu
|
||||
from tensorflow.python.util.compat import collections_abc
|
||||
|
||||
|
||||
_XLA_DEBUG_OPTIONS_URL = (
|
||||
@ -241,7 +241,7 @@ def _print_args(arguments, argument_type='Argument', indent=0):
|
||||
in_print(' %s' % element)
|
||||
elif isinstance(element, tensor_spec.TensorSpec):
|
||||
print((indent + 1) * ' ' + '%s: %s' % (element.name, repr(element)))
|
||||
elif (isinstance(element, collections_abc.Iterable) and
|
||||
elif (isinstance(element, collections.Iterable) and
|
||||
not isinstance(element, dict)):
|
||||
in_print(' DType: %s' % type(element).__name__)
|
||||
in_print(' Value: [', end='')
|
||||
|
@ -1474,9 +1474,7 @@ class CudnnRnnSequenceTensorDescriptor
|
||||
static port::StatusOr<CudnnRnnSequenceTensorDescriptor> Create(
|
||||
GpuExecutor* parent, int max_seq_length, int batch_size, int data_size,
|
||||
cudnnDataType_t data_type) {
|
||||
if (max_seq_length <= 0) {
|
||||
return port::Status(port::error::INVALID_ARGUMENT, "max_seq_length <= 0");
|
||||
}
|
||||
CHECK_GT(max_seq_length, 0);
|
||||
int dims[] = {batch_size, data_size, 1};
|
||||
int strides[] = {dims[1] * dims[2], dims[2], 1};
|
||||
TensorDescriptor tensor_desc = CreateTensorDescriptor();
|
||||
@ -1497,9 +1495,7 @@ class CudnnRnnSequenceTensorDescriptor
|
||||
const absl::Span<const int>& seq_lengths, bool time_major,
|
||||
cudnnDataType_t data_type) {
|
||||
#if CUDNN_VERSION >= 7201
|
||||
if (max_seq_length <= 0) {
|
||||
return port::Status(port::error::INVALID_ARGUMENT, "max_seq_length <= 0");
|
||||
}
|
||||
CHECK_GT(max_seq_length, 0);
|
||||
int dims[] = {batch_size, data_size, 1};
|
||||
int strides[] = {dims[1] * dims[2], dims[2], 1};
|
||||
TensorDescriptor tensor_desc = CreateTensorDescriptor();
|
||||
|
@ -59,7 +59,7 @@ load(
|
||||
# not contain rc or alpha, only numbers.
|
||||
# Also update tensorflow/core/public/version.h
|
||||
# and tensorflow/tools/pip_package/setup.py
|
||||
VERSION = "2.3.2"
|
||||
VERSION = "2.3.0"
|
||||
VERSION_MAJOR = VERSION.split(".")[0]
|
||||
|
||||
# Sanitize a dependency so that it works correctly from code that includes
|
||||
|
@ -56,7 +56,6 @@ function build_libtensorflow_tarball() {
|
||||
if [ "${TF_NEED_CUDA}" == "1" ]; then
|
||||
BAZEL_OPTS="${BAZEL_OPTS} --config=cuda --crosstool_top=//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda10.1:toolchain"
|
||||
export TF_NEED_ROCM=0
|
||||
export TF_CUDA_COMPUTE_CAPABILITIES="sm_35,sm_50,sm_60,sm_70,sm_75"
|
||||
fi
|
||||
bazel clean --expunge
|
||||
yes "" | ./configure
|
||||
|
@ -58,7 +58,6 @@ ${DOCKER_BINARY} run \
|
||||
-e "TF_NEED_HDFS=0" \
|
||||
-e "TF_NEED_CUDA=${TF_NEED_CUDA}" \
|
||||
-e "TF_NEED_TENSORRT=${TF_NEED_CUDA}" \
|
||||
-e "TF_CUDA_COMPUTE_CAPABILITIES=${TF_CUDA_COMPUTE_CAPABILITIES}" \
|
||||
-e "TF_NEED_ROCM=${TF_NEED_ROCM}" \
|
||||
-e "TF_NEED_OPENCL_SYCL=0" \
|
||||
"${DOCKER_IMAGE}" \
|
||||
|
@ -1,27 +0,0 @@
|
||||
#!/bin/bash
|
||||
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
# ==============================================================================
|
||||
echo "chmod go+w lib_package/*" >> tensorflow/tools/ci_build/linux/libtensorflow.sh
|
||||
echo "bazel clean --expunge" >> tensorflow/tools/ci_build/linux/libtensorflow.sh
|
||||
|
||||
# Install latest bazel
|
||||
source tensorflow/tools/ci_build/release/common.sh
|
||||
install_bazelisk
|
||||
|
||||
# Pick a version of xcode
|
||||
export DEVELOPER_DIR=/Applications/Xcode_10.3.app/Contents/Developer
|
||||
sudo xcode-select -s "${DEVELOPER_DIR}"
|
||||
|
||||
tensorflow/tools/ci_build/osx/libtensorflow_cpu.sh
|
@ -1,51 +0,0 @@
|
||||
#!/bin/bash
|
||||
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
# ==============================================================================
|
||||
set -e
|
||||
set -x
|
||||
|
||||
source tensorflow/tools/ci_build/release/common.sh
|
||||
install_bazelisk
|
||||
|
||||
# Pick a more recent version of xcode
|
||||
export DEVELOPER_DIR=/Applications/Xcode_10.3.app/Contents/Developer
|
||||
sudo xcode-select -s "${DEVELOPER_DIR}"
|
||||
python3.5 -m virtualenv tf_build_env --system-site-packages
|
||||
source tf_build_env/bin/activate
|
||||
|
||||
# Install macos pip dependencies
|
||||
install_macos_pip_deps sudo pip3.5
|
||||
|
||||
# Run configure.
|
||||
export TF_NEED_CUDA=0
|
||||
export CC_OPT_FLAGS='-mavx'
|
||||
export TF2_BEHAVIOR=1
|
||||
export PYTHON_BIN_PATH=$(which python3.5)
|
||||
yes "" | "$PYTHON_BIN_PATH" configure.py
|
||||
|
||||
tag_filters="-no_oss,-oss_serial,-nomac,-no_mac,-no_oss_py35,-v1only,-gpu,-tpu,-benchmark-test"
|
||||
|
||||
# Get the default test targets for bazel.
|
||||
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh
|
||||
|
||||
# Run tests
|
||||
set +e
|
||||
bazel test --test_output=errors --config=opt \
|
||||
--action_env=TF2_BEHAVIOR="${TF2_BEHAVIOR}" \
|
||||
--build_tag_filters="${tag_filters}" \
|
||||
--test_tag_filters="${tag_filters}" -- \
|
||||
${DEFAULT_BAZEL_TARGETS} \
|
||||
-//tensorflow/lite/...
|
||||
test_xml_summary_exit
|
@ -1,51 +0,0 @@
|
||||
#!/bin/bash
|
||||
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
# ==============================================================================
|
||||
set -e
set -x

source tensorflow/tools/ci_build/release/common.sh
install_bazelisk

# Pick a more recent version of xcode
export DEVELOPER_DIR=/Applications/Xcode_10.3.app/Contents/Developer
sudo xcode-select -s "${DEVELOPER_DIR}"

# Install macos pip dependencies
install_macos_pip_deps sudo pip3.5

# Export required variables for running pip_new.sh
export OS_TYPE="MACOS"
export CONTAINER_TYPE="CPU"
export TF_PYTHON_VERSION='python3.5'
export TF_BUILD_BOTH_CPU_PACKAGES=1

# Run configure.
export TF_NEED_CUDA=0
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which ${TF_PYTHON_VERSION})
yes "" | "$PYTHON_BIN_PATH" configure.py

# Export optional variables for running pip.sh
export TF_BUILD_FLAGS="--config=opt --config=v2"
export TF_TEST_FLAGS="--define=no_tensorflow_py_deps=true --test_lang_filters=py --test_output=errors --verbose_failures=true --keep_going --test_env=TF2_BEHAVIOR=1"
export TF_TEST_TARGETS="//tensorflow/python/..."
export TF_PIP_TESTS="test_pip_virtualenv_non_clean test_pip_virtualenv_clean"
export TF_TEST_FILTER_TAGS='-nomac,-no_mac,-no_oss,-oss_serial,-no_oss_py35,-gpu,-tpu,-benchmark-test'
#export IS_NIGHTLY=0 # Not nightly; uncomment if building from tf repo.
export TF_PROJECT_NAME="tensorflow"
export TF_PIP_TEST_ROOT="pip_test"

./tensorflow/tools/ci_build/builds/pip_new.sh
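pip_new.sh is driven entirely by exported environment variables, so a stripped-down driver would look roughly like the sketch below; the concrete values are placeholders and only the variable names already used by the deleted script are assumed to matter.

    # Hypothetical minimal driver for the pip packaging helper.
    export OS_TYPE="MACOS"
    export CONTAINER_TYPE="CPU"
    export TF_PYTHON_VERSION='python3.8'        # placeholder interpreter
    export TF_BUILD_FLAGS="--config=opt --config=v2"
    export TF_PIP_TEST_ROOT="pip_test"
    ./tensorflow/tools/ci_build/builds/pip_new.sh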
@ -1,51 +0,0 @@
#!/bin/bash
set -e
set -x

source tensorflow/tools/ci_build/release/common.sh
install_bazelisk

# Pick a more recent version of xcode
export DEVELOPER_DIR=/Applications/Xcode_10.3.app/Contents/Developer
sudo xcode-select -s "${DEVELOPER_DIR}"
python3.6 -m virtualenv tf_build_env --system-site-packages
source tf_build_env/bin/activate

# Install macos pip dependencies
install_macos_pip_deps sudo pip3.6

# Run configure.
export TF_NEED_CUDA=0
export CC_OPT_FLAGS='-mavx'
export TF2_BEHAVIOR=1
export PYTHON_BIN_PATH=$(which python3.6)
yes "" | "$PYTHON_BIN_PATH" configure.py

tag_filters="-no_oss,-oss_serial,-nomac,-no_mac,-no_oss_py36,-v1only,-gpu,-tpu,-benchmark-test"

# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh

# Run tests
set +e
bazel test --test_output=errors --config=opt \
  --action_env=TF2_BEHAVIOR="${TF2_BEHAVIOR}" \
  --build_tag_filters="${tag_filters}" \
  --test_tag_filters="${tag_filters}" -- \
  ${DEFAULT_BAZEL_TARGETS} \
  -//tensorflow/lite/...
test_xml_summary_exit
@ -1,51 +0,0 @@
#!/bin/bash
set -e
set -x

source tensorflow/tools/ci_build/release/common.sh
install_bazelisk

# Pick a more recent version of xcode
export DEVELOPER_DIR=/Applications/Xcode_10.3.app/Contents/Developer
sudo xcode-select -s "${DEVELOPER_DIR}"

# Install macos pip dependencies
install_macos_pip_deps sudo pip3.6

# Export required variables for running pip_new.sh
export OS_TYPE="MACOS"
export CONTAINER_TYPE="CPU"
export TF_PYTHON_VERSION='python3.6'
export TF_BUILD_BOTH_CPU_PACKAGES=1

# Run configure.
export TF_NEED_CUDA=0
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which ${TF_PYTHON_VERSION})
yes "" | "$PYTHON_BIN_PATH" configure.py

# Export optional variables for running pip.sh
export TF_BUILD_FLAGS="--config=opt --config=v2"
export TF_TEST_FLAGS="--define=no_tensorflow_py_deps=true --test_lang_filters=py --test_output=errors --verbose_failures=true --keep_going --test_env=TF2_BEHAVIOR=1"
export TF_TEST_TARGETS="//tensorflow/python/..."
export TF_PIP_TESTS="test_pip_virtualenv_non_clean test_pip_virtualenv_clean"
export TF_TEST_FILTER_TAGS='-nomac,-no_mac,-no_oss,-oss_serial,-no_oss_py35,-v1only,-gpu,-tpu,-benchmark-test'
#export IS_NIGHTLY=0 # Not nightly; uncomment if building from tf repo.
export TF_PROJECT_NAME="tensorflow"
export TF_PIP_TEST_ROOT="pip_test"

./tensorflow/tools/ci_build/builds/pip_new.sh
@ -1,51 +0,0 @@
#!/bin/bash
set -e
set -x

source tensorflow/tools/ci_build/release/common.sh
install_bazelisk

# Pick a more recent version of xcode
export DEVELOPER_DIR=/Applications/Xcode_10.3.app/Contents/Developer
sudo xcode-select -s "${DEVELOPER_DIR}"
python -m virtualenv tf_build_env --system-site-packages
source tf_build_env/bin/activate

# Install macos pip dependencies
install_macos_pip_deps sudo pip3.7

# Run configure.
export TF_NEED_CUDA=0
export CC_OPT_FLAGS='-mavx'
export TF2_BEHAVIOR=1
export PYTHON_BIN_PATH=$(which python3.7)
yes "" | "$PYTHON_BIN_PATH" configure.py

tag_filters="-no_oss,-oss_serial,-nomac,-no_mac$(maybe_skip_v1),-gpu,-tpu,-benchmark-test"

# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh

# Run tests
set +e
bazel test --test_output=errors --config=opt \
  --action_env=TF2_BEHAVIOR="${TF2_BEHAVIOR}" \
  --build_tag_filters="${tag_filters}" \
  --test_tag_filters="${tag_filters}" -- \
  ${DEFAULT_BAZEL_TARGETS} \
  -//tensorflow/lite/...
test_xml_summary_exit
@ -1,51 +0,0 @@
#!/bin/bash
set -e
set -x

source tensorflow/tools/ci_build/release/common.sh
install_bazelisk

# Pick a more recent version of xcode
export DEVELOPER_DIR=/Applications/Xcode_10.3.app/Contents/Developer
sudo xcode-select -s "${DEVELOPER_DIR}"

# Install macos pip dependencies
install_macos_pip_deps sudo pip3.7

# Export required variables for running pip_new.sh
export OS_TYPE="MACOS"
export CONTAINER_TYPE="CPU"
export TF_PYTHON_VERSION='python3.7'
export TF_BUILD_BOTH_CPU_PACKAGES=1

# Run configure.
export TF_NEED_CUDA=0
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which ${TF_PYTHON_VERSION})
yes "" | "$PYTHON_BIN_PATH" configure.py

# Export optional variables for running pip.sh
export TF_BUILD_FLAGS="--config=opt --config=v2"
export TF_TEST_FLAGS="--define=no_tensorflow_py_deps=true --test_lang_filters=py --test_output=errors --verbose_failures=true --keep_going --test_env=TF2_BEHAVIOR=1"
export TF_TEST_TARGETS="//tensorflow/python/..."
export TF_PIP_TESTS="test_pip_virtualenv_non_clean test_pip_virtualenv_clean"
export TF_TEST_FILTER_TAGS='-nomac,-no_mac,-no_oss,-oss_serial,-no_oss_py37,-v1only,-gpu,-tpu,-benchmark-test'
#export IS_NIGHTLY=0 # Not nightly; uncomment if building from tf repo.
export TF_PROJECT_NAME="tensorflow"
export TF_PIP_TEST_ROOT="pip_test"

./tensorflow/tools/ci_build/builds/pip_new.sh
@ -1,51 +0,0 @@
#!/bin/bash
set -e
set -x

source tensorflow/tools/ci_build/release/common.sh
install_bazelisk

# Pick a more recent version of xcode
export DEVELOPER_DIR=/Applications/Xcode_10.3.app/Contents/Developer
sudo xcode-select -s "${DEVELOPER_DIR}"
python -m virtualenv tf_build_env --system-site-packages
source tf_build_env/bin/activate

# Install macos pip dependencies
install_macos_pip_deps sudo pip3.8

# Run configure.
export TF_NEED_CUDA=0
export CC_OPT_FLAGS='-mavx'
export TF2_BEHAVIOR=1
export PYTHON_BIN_PATH=$(which python3.8)
yes "" | "$PYTHON_BIN_PATH" configure.py

tag_filters="-no_oss,-oss_serial,-nomac,-no_mac$(maybe_skip_v1),-gpu,-tpu,-benchmark-test"

# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh

# Run tests
set +e
bazel test --test_output=errors --config=opt \
  --action_env=TF2_BEHAVIOR="${TF2_BEHAVIOR}" \
  --build_tag_filters="${tag_filters}" \
  --test_tag_filters="${tag_filters}" -- \
  ${DEFAULT_BAZEL_TARGETS} \
  -//tensorflow/lite/...
test_xml_summary_exit
@ -1,51 +0,0 @@
#!/bin/bash
set -e
set -x

source tensorflow/tools/ci_build/release/common.sh
install_bazelisk

# Pick a more recent version of xcode
export DEVELOPER_DIR=/Applications/Xcode_10.3.app/Contents/Developer
sudo xcode-select -s "${DEVELOPER_DIR}"

# Install macos pip dependencies
install_macos_pip_deps sudo pip3.8

# Export required variables for running pip_new.sh
export OS_TYPE="MACOS"
export CONTAINER_TYPE="CPU"
export TF_PYTHON_VERSION='python3.8'
export TF_BUILD_BOTH_CPU_PACKAGES=1

# Run configure.
export TF_NEED_CUDA=0
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which ${TF_PYTHON_VERSION})
yes "" | "$PYTHON_BIN_PATH" configure.py

# Export optional variables for running pip.sh
export TF_BUILD_FLAGS="--config=opt --config=v2"
export TF_TEST_FLAGS="--define=no_tensorflow_py_deps=true --test_lang_filters=py --test_output=errors --verbose_failures=true --keep_going --test_env=TF2_BEHAVIOR=1"
export TF_TEST_TARGETS="//tensorflow/python/..."
export TF_PIP_TESTS="test_pip_virtualenv_non_clean test_pip_virtualenv_clean"
export TF_TEST_FILTER_TAGS='-nomac,-no_mac,-no_oss,-oss_serial,-no_oss_py38,-v1only,-gpu,-tpu,-benchmark-test'
#export IS_NIGHTLY=0 # Not nightly; uncomment if building from tf repo.
export TF_PROJECT_NAME="tensorflow"
export TF_PIP_TEST_ROOT="pip_test"

./tensorflow/tools/ci_build/builds/pip_new.sh
@ -1,40 +0,0 @@
#!/bin/bash
set -e

# Source the external common scripts.
source tensorflow/tools/ci_build/release/common.sh


# Install latest bazel
install_bazelisk
which bazel

# Install realpath
sudo apt-get install realpath

# Update the version string to nightly
if [ -n "${IS_NIGHTLY_BUILD}" ]; then
  ./tensorflow/tools/ci_build/update_version.py --nightly
fi

./tensorflow/tools/ci_build/linux/libtensorflow.sh

# Copy the nightly version update script
if [ -n "${IS_NIGHTLY_BUILD}" ]; then
  cp tensorflow/tools/ci_build/builds/libtensorflow_nightly_symlink.sh lib_package
fi
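The libtensorflow job above only rewrites the version string when the CI environment sets IS_NIGHTLY_BUILD. The same guard pattern, reduced to a sketch that assumes nothing beyond what the deleted script already uses, is:

    # Bump the version string only for nightly jobs; release jobs leave it untouched.
    if [ -n "${IS_NIGHTLY_BUILD}" ]; then
      ./tensorflow/tools/ci_build/update_version.py --nightly
    fi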
@ -1,48 +0,0 @@
#!/bin/bash
set -e
set -x

source tensorflow/tools/ci_build/release/common.sh

install_ubuntu_16_pip_deps pip3.5
# Update bazel
install_bazelisk

# Run configure.
export TF_NEED_GCP=1
export TF_NEED_HDFS=1
export TF_NEED_S3=1
export TF_NEED_CUDA=0
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which python3.5)
export TF2_BEHAVIOR=1
yes "" | "$PYTHON_BIN_PATH" configure.py
tag_filters="-no_oss,-oss_serial,-gpu,-tpu,-benchmark-test,-no_oss_py35,-v1only"

# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh

# Run tests
set +e
bazel test --test_output=errors --config=opt --test_lang_filters=py \
  --crosstool_top=//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda10.1:toolchain \
  --linkopt=-lrt \
  --action_env=TF2_BEHAVIOR="${TF2_BEHAVIOR}" \
  --build_tag_filters="${tag_filters}" \
  --test_tag_filters="${tag_filters}" -- \
  ${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/...
test_xml_summary_exit
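These Ubuntu CPU jobs get their target list by sourcing PRESUBMIT_BUILD_TARGETS.sh, which populates DEFAULT_BAZEL_TARGETS, and then subtract //tensorflow/lite/... on the command line. As a sketch, assuming the sourced script only defines that one variable:

    # DEFAULT_BAZEL_TARGETS is defined by the sourced presubmit script.
    source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh
    bazel test --config=opt -- ${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/...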
@ -1,52 +0,0 @@
#!/bin/bash
set -e
set -x

source tensorflow/tools/ci_build/release/common.sh

install_ubuntu_16_pip_deps pip3.5
# Update bazel
install_bazelisk

# Export required variables for running pip.sh
export OS_TYPE="UBUNTU"
export CONTAINER_TYPE="CPU"
export TF_PYTHON_VERSION='python3.5'

# Run configure.
export TF_NEED_GCP=1
export TF_NEED_HDFS=1
export TF_NEED_S3=1
export TF_NEED_CUDA=0
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which ${TF_PYTHON_VERSION})
yes "" | "$PYTHON_BIN_PATH" configure.py

# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh

# Export optional variables for running pip.sh
export TF_BUILD_FLAGS="--config=opt --config=v2 --crosstool_top=//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda10.1:toolchain"
export TF_TEST_FLAGS="--define=no_tensorflow_py_deps=true --test_lang_filters=py --test_output=errors --verbose_failures=true --keep_going --test_env=TF2_BEHAVIOR=1"
export TF_TEST_TARGETS="${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/... "
export TF_PIP_TESTS="test_pip_virtualenv_non_clean test_pip_virtualenv_clean"
export TF_TEST_FILTER_TAGS='-no_oss,-oss_serial,-no_oss_py35,-v1only'
#export IS_NIGHTLY=0 # Not nightly; uncomment if building from tf repo.
export TF_PROJECT_NAME="tensorflow_cpu"
export TF_PIP_TEST_ROOT="pip_test"

./tensorflow/tools/ci_build/builds/pip_new.sh
@ -1,48 +0,0 @@
#!/bin/bash
set -e
set -x

source tensorflow/tools/ci_build/release/common.sh

install_ubuntu_16_pip_deps pip3.6
# Update bazel
install_bazelisk

# Run configure.
export TF_NEED_GCP=1
export TF_NEED_HDFS=1
export TF_NEED_S3=1
export TF_NEED_CUDA=0
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which python3.6)
export TF2_BEHAVIOR=1
yes "" | "$PYTHON_BIN_PATH" configure.py
tag_filters="-no_oss,-oss_serial,-gpu,-tpu,-benchmark-test,-no_oss_py36,-v1only"

# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh

# Run tests
set +e
bazel test --test_output=errors --config=opt --test_lang_filters=py \
  --crosstool_top=//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda10.1:toolchain \
  --linkopt=-lrt \
  --action_env=TF2_BEHAVIOR="${TF2_BEHAVIOR}" \
  --build_tag_filters="${tag_filters}" \
  --test_tag_filters="${tag_filters}" -- \
  ${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/...
test_xml_summary_exit
@ -1,52 +0,0 @@
#!/bin/bash
set -e
set -x

source tensorflow/tools/ci_build/release/common.sh

install_ubuntu_16_pip_deps pip3.6
# Update bazel
install_bazelisk

# Export required variables for running pip.sh
export OS_TYPE="UBUNTU"
export CONTAINER_TYPE="CPU"
export TF_PYTHON_VERSION='python3.6'

# Run configure.
export TF_NEED_GCP=1
export TF_NEED_HDFS=1
export TF_NEED_S3=1
export TF_NEED_CUDA=0
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which ${TF_PYTHON_VERSION})
yes "" | "$PYTHON_BIN_PATH" configure.py

# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh

# Export optional variables for running pip.sh
export TF_BUILD_FLAGS="--config=opt --config=v2 --crosstool_top=//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda10.1:toolchain"
export TF_TEST_FLAGS="--define=no_tensorflow_py_deps=true --test_lang_filters=py --test_output=errors --verbose_failures=true --keep_going --test_env=TF2_BEHAVIOR=1"
export TF_TEST_TARGETS="${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/... "
export TF_PIP_TESTS="test_pip_virtualenv_non_clean test_pip_virtualenv_clean"
export TF_TEST_FILTER_TAGS='-no_oss,-oss_serial,-no_oss_py36,-v1only'
#export IS_NIGHTLY=0 # Not nightly; uncomment if building from tf repo.
export TF_PROJECT_NAME="tensorflow_cpu"
export TF_PIP_TEST_ROOT="pip_test"

./tensorflow/tools/ci_build/builds/pip_new.sh
@ -1,48 +0,0 @@
#!/bin/bash
set -e
set -x

source tensorflow/tools/ci_build/release/common.sh

install_ubuntu_16_pip_deps pip3.7
# Update bazel
install_bazelisk

# Run configure.
export TF_NEED_GCP=1
export TF_NEED_HDFS=1
export TF_NEED_S3=1
export TF_NEED_CUDA=0
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which python3.7)
export TF2_BEHAVIOR=1
yes "" | "$PYTHON_BIN_PATH" configure.py
tag_filters="-no_oss,-oss_serial,-gpu,-tpu,-benchmark-test,-no_oss_py37,-v1only"

# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh

# Run tests
set +e
bazel test --test_output=errors --config=opt --test_lang_filters=py \
  --crosstool_top=//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda10.1:toolchain \
  --linkopt=-lrt \
  --action_env=TF2_BEHAVIOR="${TF2_BEHAVIOR}" \
  --build_tag_filters="${tag_filters}" \
  --test_tag_filters="${tag_filters}" -- \
  ${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/...
test_xml_summary_exit
@ -1,52 +0,0 @@
#!/bin/bash
set -e
set -x

source tensorflow/tools/ci_build/release/common.sh

install_ubuntu_16_pip_deps pip3.7
# Update bazel
install_bazelisk

# Export required variables for running pip.sh
export OS_TYPE="UBUNTU"
export CONTAINER_TYPE="CPU"
export TF_PYTHON_VERSION='python3.7'

# Run configure.
export TF_NEED_GCP=1
export TF_NEED_HDFS=1
export TF_NEED_S3=1
export TF_NEED_CUDA=0
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which ${TF_PYTHON_VERSION})
yes "" | "$PYTHON_BIN_PATH" configure.py

# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh

# Export optional variables for running pip.sh
export TF_BUILD_FLAGS="--config=opt --config=v2 --crosstool_top=//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda10.1:toolchain"
export TF_TEST_FLAGS="--define=no_tensorflow_py_deps=true --test_lang_filters=py --test_output=errors --verbose_failures=true --keep_going --test_env=TF2_BEHAVIOR=1"
export TF_TEST_TARGETS="${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/... "
export TF_PIP_TESTS="test_pip_virtualenv_non_clean test_pip_virtualenv_clean"
export TF_TEST_FILTER_TAGS='-no_oss,-oss_serial,-no_oss_py37,-v1only'
#export IS_NIGHTLY=0 # Not nightly; uncomment if building from tf repo.
export TF_PROJECT_NAME="tensorflow_cpu"
export TF_PIP_TEST_ROOT="pip_test"

./tensorflow/tools/ci_build/builds/pip_new.sh
@ -1,48 +0,0 @@
#!/bin/bash
set -e
set -x

source tensorflow/tools/ci_build/release/common.sh

install_ubuntu_16_pip_deps pip3.8
# Update bazel
install_bazelisk

# Run configure.
export TF_NEED_GCP=1
export TF_NEED_HDFS=1
export TF_NEED_S3=1
export TF_NEED_CUDA=0
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which python3.8)
export TF2_BEHAVIOR=1
yes "" | "$PYTHON_BIN_PATH" configure.py
tag_filters="-no_oss,-oss_serial,-gpu,-tpu,-benchmark-test,-no_oss_py38,-v1only"

# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh

# Run tests
set +e
bazel test --test_output=errors --config=opt --test_lang_filters=py \
  --crosstool_top=//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda10.1:toolchain \
  --linkopt=-lrt \
  --action_env=TF2_BEHAVIOR="${TF2_BEHAVIOR}" \
  --build_tag_filters="${tag_filters}" \
  --test_tag_filters="${tag_filters}" -- \
  ${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/...
test_xml_summary_exit
@ -1,52 +0,0 @@
#!/bin/bash
set -e
set -x

source tensorflow/tools/ci_build/release/common.sh

install_ubuntu_16_pip_deps pip3.8
# Update bazel
install_bazelisk

# Export required variables for running pip.sh
export OS_TYPE="UBUNTU"
export CONTAINER_TYPE="CPU"
export TF_PYTHON_VERSION='python3.8'

# Run configure.
export TF_NEED_GCP=1
export TF_NEED_HDFS=1
export TF_NEED_S3=1
export TF_NEED_CUDA=0
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which ${TF_PYTHON_VERSION})
yes "" | "$PYTHON_BIN_PATH" configure.py

# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh

# Export optional variables for running pip.sh
export TF_BUILD_FLAGS="--config=opt --config=v2 --crosstool_top=//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda10.1:toolchain"
export TF_TEST_FLAGS="--define=no_tensorflow_py_deps=true --test_lang_filters=py --test_output=errors --verbose_failures=true --keep_going --test_env=TF2_BEHAVIOR=1"
export TF_TEST_TARGETS="${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/... "
export TF_PIP_TESTS="test_pip_virtualenv_non_clean test_pip_virtualenv_clean"
export TF_TEST_FILTER_TAGS='-no_oss,-oss_serial,-no_oss_py38,-v1only'
#export IS_NIGHTLY=0 # Not nightly; uncomment if building from tf repo.
export TF_PROJECT_NAME="tensorflow_cpu"
export TF_PIP_TEST_ROOT="pip_test"

./tensorflow/tools/ci_build/builds/pip_new.sh
@ -1,40 +0,0 @@
set -e

# Source the external common scripts.
source tensorflow/tools/ci_build/release/common.sh


# Install latest bazel
install_bazelisk
which bazel

# Install realpath
sudo apt-get install realpath

export TF_NEED_CUDA=1

# Update the version string to nightly
if [ -n "${IS_NIGHTLY_BUILD}" ]; then
  ./tensorflow/tools/ci_build/update_version.py --nightly
fi

./tensorflow/tools/ci_build/linux/libtensorflow.sh

# Copy the nightly version update script
if [ -n "${IS_NIGHTLY_BUILD}" ]; then
  cp tensorflow/tools/ci_build/builds/libtensorflow_nightly_symlink.sh lib_package
fi
@ -1,61 +0,0 @@
#!/bin/bash
set -e
set -x

source tensorflow/tools/ci_build/release/common.sh

install_ubuntu_16_pip_deps pip3.6
# Update Bazel to the desired version
install_bazelisk

# Run configure.
export TF_NEED_GCP=1
export TF_NEED_HDFS=1
export TF_NEED_S3=1
export TF_NEED_CUDA=1
export TF_CUDA_VERSION=10
export TF_CUDNN_VERSION=7
export TF_NEED_TENSORRT=1
export TENSORRT_INSTALL_PATH=/usr/local/tensorrt
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which python3.6)
export LD_LIBRARY_PATH="/usr/local/cuda:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:$TENSORRT_INSTALL_PATH/lib"
export TF_CUDA_COMPUTE_CAPABILITIES=sm_35,sm_37,sm_52,sm_60,sm_61,compute_70

yes "" | "$PYTHON_BIN_PATH" configure.py

########################
## Build GPU pip package
########################
bazel build --config=opt \
  --crosstool_top=//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda10.1:toolchain \
  tensorflow/tools/pip_package:build_pip_package

# Set TF nightly flag so we get the proper version of estimator
if [[ "$IS_NIGHTLY" == 1 ]]; then
  NIGHTLY_FLAG="--nightly_flag"
fi

PIP_WHL_DIR=whl
mkdir -p ${PIP_WHL_DIR}
PIP_WHL_DIR=$(readlink -f ${PIP_WHL_DIR}) # Get absolute path
bazel-bin/tensorflow/tools/pip_package/build_pip_package "${PIP_WHL_DIR}" "${NIGHTLY_FLAG}"
WHL_PATH=$(ls "${PIP_WHL_DIR}"/*.whl)

cp "${WHL_PATH}" "$(pwd)"/.
chmod +x tensorflow/tools/ci_build/builds/docker_cpu_pip.sh
docker run -e "BAZEL_VERSION=${BAZEL_VERSION}" -e "CI_BUILD_USER=$(id -u -n)" -e "CI_BUILD_UID=$(id -u)" -e "CI_BUILD_GROUP=$(id -g -n)" -e "CI_BUILD_GID=$(id -g)" -e "CI_BUILD_HOME=/bazel_pip" -v "$(pwd)":/bazel_pip tensorflow/tensorflow:devel "./bazel_pip/tensorflow/tools/ci_build/builds/with_the_same_user" "./bazel_pip/tensorflow/tools/ci_build/builds/docker_cpu_pip.sh"
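The packaging job above first builds the build_pip_package helper and then uses it to emit a wheel before handing the result to a tensorflow/tensorflow:devel container. The wheel-building half, reduced to a sketch with an illustrative output directory, looks like:

    # Build the pip-package builder, then write a wheel into ./whl (illustrative path).
    bazel build --config=opt tensorflow/tools/pip_package:build_pip_package
    mkdir -p whl
    bazel-bin/tensorflow/tools/pip_package/build_pip_package "$(pwd)/whl"
    ls whl/*.whl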
@ -1,60 +0,0 @@
#!/bin/bash
set -e
set -x

source tensorflow/tools/ci_build/release/common.sh

install_ubuntu_16_pip_deps pip3.5
# Update bazel
install_bazelisk

# Run configure.
export TF_NEED_GCP=1
export TF_NEED_HDFS=1
export TF_NEED_S3=1
export TF_NEED_CUDA=1
export TF_CUDA_VERSION=10
export TF_CUDNN_VERSION=7
export TF_NEED_TENSORRT=1
export TENSORRT_INSTALL_PATH=/usr/local/tensorrt
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which python3.5)
export TF2_BEHAVIOR=1
export PROJECT_NAME="tensorflow_gpu"
export LD_LIBRARY_PATH="/usr/local/cuda:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:$TENSORRT_INSTALL_PATH/lib"
export TF_CUDA_COMPUTE_CAPABILITIES=sm_35,sm_37,sm_52,sm_60,sm_61,compute_70

yes "" | "$PYTHON_BIN_PATH" configure.py

# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh

tag_filters="gpu,requires-gpu,-no_gpu,-no_oss,-oss_serial,-no_oss_py35"

set +e
bazel test --config=cuda --config=opt \
  --crosstool_top=//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda10.1:toolchain \
  --linkopt=-lrt \
  --action_env=TF2_BEHAVIOR="${TF2_BEHAVIOR}" \
  --test_lang_filters=py \
  --test_tag_filters=${tag_filters} \
  --build_tag_filters=${tag_filters} \
  --test_timeout="300,450,1200,3600" --local_test_jobs=4 \
  --test_output=errors --verbose_failures=true --keep_going \
  --run_under=//tensorflow/tools/ci_build/gpu_build:parallel_gpu_execute \
  -- ${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/...
test_xml_summary_exit
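Every GPU job here configures CUDA entirely through environment variables and then runs configure.py non-interactively by piping empty answers into it. A condensed sketch of that step, using the same versions these scripts set and an illustrative interpreter name, is:

    # Answer configure.py's prompts from the environment instead of interactively.
    export TF_NEED_CUDA=1
    export TF_CUDA_VERSION=10
    export TF_CUDNN_VERSION=7
    export TF_CUDA_COMPUTE_CAPABILITIES=sm_35,sm_37,sm_52,sm_60,sm_61,compute_70
    yes "" | python3 configure.py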
@ -1,69 +0,0 @@
#!/bin/bash
set -e
set -x

source tensorflow/tools/ci_build/release/common.sh

install_ubuntu_16_pip_deps pip3.5
# Update bazel
install_bazelisk

# Export required variables for running pip.sh
export OS_TYPE="UBUNTU"
export CONTAINER_TYPE="GPU"
export TF_PYTHON_VERSION='python3.5'

# Run configure.
export TF_NEED_GCP=1
export TF_NEED_HDFS=1
export TF_NEED_S3=1
export TF_NEED_CUDA=1
export TF_CUDA_VERSION=10
export TF_CUDNN_VERSION=7
export TF_NEED_TENSORRT=1
export TENSORRT_INSTALL_PATH=/usr/local/tensorrt
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which ${TF_PYTHON_VERSION})
export PROJECT_NAME="tensorflow_gpu"
export LD_LIBRARY_PATH="/usr/local/cuda:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:$TENSORRT_INSTALL_PATH/lib"
export TF_CUDA_COMPUTE_CAPABILITIES=sm_35,sm_37,sm_52,sm_60,sm_61,compute_70

yes "" | "$PYTHON_BIN_PATH" configure.py

# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh

# Export optional variables for running pip.sh
export TF_TEST_FILTER_TAGS='gpu,requires-gpu,-no_gpu,-no_oss,-oss_serial,-no_oss_py35'
export TF_BUILD_FLAGS="--config=opt --config=v2 --config=cuda --distinct_host_configuration=false \
--action_env=TF_CUDA_VERSION --action_env=TF_CUDNN_VERSION --crosstool_top=//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda10.1:toolchain "
export TF_TEST_FLAGS="--test_tag_filters=${TF_TEST_FILTER_TAGS} --build_tag_filters=${TF_TEST_FILTER_TAGS} \
--distinct_host_configuration=false \
--action_env=TF_CUDA_VERSION --action_env=TF_CUDNN_VERSION --test_env=TF2_BEHAVIOR=1 \
--config=cuda --test_output=errors --local_test_jobs=4 --test_lang_filters=py \
--verbose_failures=true --keep_going --define=no_tensorflow_py_deps=true \
--run_under=//tensorflow/tools/ci_build/gpu_build:parallel_gpu_execute "
export TF_TEST_TARGETS="${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/... "
export TF_PIP_TESTS="test_pip_virtualenv_non_clean test_pip_virtualenv_clean"
#export IS_NIGHTLY=0 # Not nightly; uncomment if building from tf repo.
export TF_PROJECT_NAME=${PROJECT_NAME}
export TF_PIP_TEST_ROOT="pip_test"

# To build both tensorflow and tensorflow-gpu pip packages
export TF_BUILD_BOTH_GPU_PACKAGES=1

./tensorflow/tools/ci_build/builds/pip_new.sh
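The GPU pip jobs serialize device access by capping local test jobs and wrapping every test in the parallel_gpu_execute runner. Shown in isolation, with an illustrative target pattern rather than the sourced DEFAULT_BAZEL_TARGETS, the relevant flag combination is:

    # Limit concurrent GPU tests and wrap each one in the GPU-serializing runner.
    bazel test --config=cuda --config=opt \
      --local_test_jobs=4 \
      --run_under=//tensorflow/tools/ci_build/gpu_build:parallel_gpu_execute \
      -- //tensorflow/python/...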
@ -1,60 +0,0 @@
#!/bin/bash
set -e
set -x

source tensorflow/tools/ci_build/release/common.sh

install_ubuntu_16_pip_deps pip3.6
# Update bazel
install_bazelisk

# Run configure.
export TF_NEED_GCP=1
export TF_NEED_HDFS=1
export TF_NEED_S3=1
export TF_NEED_CUDA=1
export TF_CUDA_VERSION=10
export TF_CUDNN_VERSION=7
export TF_NEED_TENSORRT=1
export TENSORRT_INSTALL_PATH=/usr/local/tensorrt
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which python3.6)
export TF2_BEHAVIOR=1
export PROJECT_NAME="tensorflow_gpu"
export LD_LIBRARY_PATH="/usr/local/cuda:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:$TENSORRT_INSTALL_PATH/lib"
export TF_CUDA_COMPUTE_CAPABILITIES=sm_35,sm_37,sm_52,sm_60,sm_61,compute_70

yes "" | "$PYTHON_BIN_PATH" configure.py

# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh

tag_filters="gpu,requires-gpu,-no_gpu,-no_oss,-oss_serial,-no_oss_py36"

set +e
bazel test --config=cuda --config=opt \
  --crosstool_top=//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda10.1:toolchain \
  --linkopt=-lrt \
  --action_env=TF2_BEHAVIOR="${TF2_BEHAVIOR}" \
  --test_lang_filters=py \
  --test_tag_filters=${tag_filters} \
  --build_tag_filters=${tag_filters} \
  --test_timeout="300,450,1200,3600" --local_test_jobs=4 \
  --test_output=errors --verbose_failures=true --keep_going \
  --run_under=//tensorflow/tools/ci_build/gpu_build:parallel_gpu_execute \
  -- ${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/...
test_xml_summary_exit
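The nonpip jobs deliberately switch errexit off right before the bazel invocation so that test_xml_summary_exit, defined in the sourced common.sh, can collect the XML summaries and choose the exit code itself. The pattern in isolation, with an illustrative target:

    set -e              # fail fast during setup
    # ... setup steps ...
    set +e              # let bazel's exit status reach the summary helper instead of aborting
    bazel test --test_output=errors -- //tensorflow/python/...
    test_xml_summary_exit   # provided by tensorflow/tools/ci_build/release/common.sh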
@ -1,69 +0,0 @@
#!/bin/bash
set -e
set -x

source tensorflow/tools/ci_build/release/common.sh

install_ubuntu_16_pip_deps pip3.6
# Update bazel
install_bazelisk

# Export required variables for running pip.sh
export OS_TYPE="UBUNTU"
export CONTAINER_TYPE="GPU"
export TF_PYTHON_VERSION='python3.6'

# Run configure.
export TF_NEED_GCP=1
export TF_NEED_HDFS=1
export TF_NEED_S3=1
export TF_NEED_CUDA=1
export TF_CUDA_VERSION=10
export TF_CUDNN_VERSION=7
export TF_NEED_TENSORRT=1
export TENSORRT_INSTALL_PATH=/usr/local/tensorrt
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which ${TF_PYTHON_VERSION})
export PROJECT_NAME="tensorflow_gpu"
export LD_LIBRARY_PATH="/usr/local/cuda:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:$TENSORRT_INSTALL_PATH/lib"
export TF_CUDA_COMPUTE_CAPABILITIES=sm_35,sm_37,sm_52,sm_60,sm_61,compute_70

yes "" | "$PYTHON_BIN_PATH" configure.py

# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh

# Export optional variables for running pip.sh
export TF_TEST_FILTER_TAGS='gpu,requires-gpu,-no_gpu,-no_oss,-oss_serial,-no_oss_py36'
export TF_BUILD_FLAGS="--config=opt --config=v2 --config=cuda --distinct_host_configuration=false \
--action_env=TF_CUDA_VERSION --action_env=TF_CUDNN_VERSION --crosstool_top=//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda10.1:toolchain "
export TF_TEST_FLAGS="--test_tag_filters=${TF_TEST_FILTER_TAGS} --build_tag_filters=${TF_TEST_FILTER_TAGS} \
--distinct_host_configuration=false \
--action_env=TF_CUDA_VERSION --action_env=TF_CUDNN_VERSION --test_env=TF2_BEHAVIOR=1 \
--config=cuda --test_output=errors --local_test_jobs=4 --test_lang_filters=py \
--verbose_failures=true --keep_going --define=no_tensorflow_py_deps=true \
--run_under=//tensorflow/tools/ci_build/gpu_build:parallel_gpu_execute "
export TF_TEST_TARGETS="${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/... "
export TF_PIP_TESTS="test_pip_virtualenv_non_clean test_pip_virtualenv_clean"
#export IS_NIGHTLY=0 # Not nightly; uncomment if building from tf repo.
export TF_PROJECT_NAME=${PROJECT_NAME}
export TF_PIP_TEST_ROOT="pip_test"

# To build both tensorflow and tensorflow-gpu pip packages
export TF_BUILD_BOTH_GPU_PACKAGES=1

./tensorflow/tools/ci_build/builds/pip_new.sh
@ -1,60 +0,0 @@
#!/bin/bash
set -e
set -x

source tensorflow/tools/ci_build/release/common.sh

install_ubuntu_16_pip_deps pip3.7
# Update bazel
install_bazelisk

# Run configure.
export TF_NEED_GCP=1
export TF_NEED_HDFS=1
export TF_NEED_S3=1
export TF_NEED_CUDA=1
export TF_CUDA_VERSION=10
export TF_CUDNN_VERSION=7
export TF_NEED_TENSORRT=1
export TENSORRT_INSTALL_PATH=/usr/local/tensorrt
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which python3.7)
export TF2_BEHAVIOR=1
export PROJECT_NAME="tensorflow_gpu"
export LD_LIBRARY_PATH="/usr/local/cuda:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:$TENSORRT_INSTALL_PATH/lib"
export TF_CUDA_COMPUTE_CAPABILITIES=sm_35,sm_37,sm_52,sm_60,sm_61,compute_70

yes "" | "$PYTHON_BIN_PATH" configure.py

# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh

tag_filters="gpu,requires-gpu,-no_gpu,-no_oss,-oss_serial,-no_oss_py37"

set +e
bazel test --config=cuda --config=opt \
  --crosstool_top=//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda10.1:toolchain \
  --linkopt=-lrt \
  --action_env=TF2_BEHAVIOR="${TF2_BEHAVIOR}" \
  --test_lang_filters=py \
  --build_tag_filters=${tag_filters} \
  --test_tag_filters=${tag_filters} \
  --test_timeout="300,450,1200,3600" --local_test_jobs=4 \
  --test_output=errors --verbose_failures=true --keep_going \
  --run_under=//tensorflow/tools/ci_build/gpu_build:parallel_gpu_execute \
  -- ${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/...
test_xml_summary_exit
@ -1,69 +0,0 @@
#!/bin/bash
set -e
set -x

source tensorflow/tools/ci_build/release/common.sh

install_ubuntu_16_pip_deps pip3.7
# Update bazel
install_bazelisk

# Export required variables for running pip.sh
export OS_TYPE="UBUNTU"
export CONTAINER_TYPE="GPU"
export TF_PYTHON_VERSION='python3.7'

# Run configure.
export TF_NEED_GCP=1
export TF_NEED_HDFS=1
export TF_NEED_S3=1
export TF_NEED_CUDA=1
export TF_CUDA_VERSION=10
export TF_CUDNN_VERSION=7
export TF_NEED_TENSORRT=1
export TENSORRT_INSTALL_PATH=/usr/local/tensorrt
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which ${TF_PYTHON_VERSION})
export PROJECT_NAME="tensorflow_gpu"
export LD_LIBRARY_PATH="/usr/local/cuda:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:$TENSORRT_INSTALL_PATH/lib"
export TF_CUDA_COMPUTE_CAPABILITIES=sm_35,sm_37,sm_52,sm_60,sm_61,compute_70

yes "" | "$PYTHON_BIN_PATH" configure.py

# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh

# Export optional variables for running pip.sh
export TF_TEST_FILTER_TAGS='gpu,requires-gpu,-no_gpu,-no_oss,-oss_serial,-no_oss_py37'
export TF_BUILD_FLAGS="--config=opt --config=v2 --config=cuda --distinct_host_configuration=false \
--action_env=TF_CUDA_VERSION --action_env=TF_CUDNN_VERSION --crosstool_top=//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda10.1:toolchain "
export TF_TEST_FLAGS="--test_tag_filters=${TF_TEST_FILTER_TAGS} --build_tag_filters=${TF_TEST_FILTER_TAGS} \
--distinct_host_configuration=false \
--action_env=TF_CUDA_VERSION --action_env=TF_CUDNN_VERSION --test_env=TF2_BEHAVIOR=1 \
--config=cuda --test_output=errors --local_test_jobs=4 --test_lang_filters=py \
--verbose_failures=true --keep_going --define=no_tensorflow_py_deps=true \
--run_under=//tensorflow/tools/ci_build/gpu_build:parallel_gpu_execute "
export TF_TEST_TARGETS="${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/... "
export TF_PIP_TESTS="test_pip_virtualenv_non_clean test_pip_virtualenv_clean"
#export IS_NIGHTLY=0 # Not nightly; uncomment if building from tf repo.
export TF_PROJECT_NAME=${PROJECT_NAME}
export TF_PIP_TEST_ROOT="pip_test"

# To build both tensorflow and tensorflow-gpu pip packages
export TF_BUILD_BOTH_GPU_PACKAGES=1

./tensorflow/tools/ci_build/builds/pip_new.sh
@ -1,60 +0,0 @@
#!/bin/bash
set -e
|
||||
set -x
|
||||
|
||||
source tensorflow/tools/ci_build/release/common.sh
|
||||
|
||||
install_ubuntu_16_pip_deps pip3.8
|
||||
# Update bazel
|
||||
update_bazel_linux
|
||||
|
||||
# Run configure.
|
||||
export TF_NEED_GCP=1
|
||||
export TF_NEED_HDFS=1
|
||||
export TF_NEED_S3=1
|
||||
export TF_NEED_CUDA=1
|
||||
export TF_CUDA_VERSION=10
|
||||
export TF_CUDNN_VERSION=7
|
||||
export TF_NEED_TENSORRT=1
|
||||
export TENSORRT_INSTALL_PATH=/usr/local/tensorrt
|
||||
export CC_OPT_FLAGS='-mavx'
|
||||
export PYTHON_BIN_PATH=$(which python3.8)
|
||||
export TF2_BEHAVIOR=1
|
||||
export PROJECT_NAME="tensorflow_gpu"
|
||||
export LD_LIBRARY_PATH="/usr/local/cuda:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:$TENSORRT_INSTALL_PATH/lib"
|
||||
export TF_CUDA_COMPUTE_CAPABILITIES=sm_35,sm_37,sm_52,sm_60,sm_61,compute_70
|
||||
|
||||
yes "" | "$PYTHON_BIN_PATH" configure.py
|
||||
|
||||
# Get the default test targets for bazel.
|
||||
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh
|
||||
|
||||
tag_filters="gpu,requires-gpu,-no_gpu,-no_oss,-oss_serial,-no_oss_py38"
|
||||
|
||||
set +e
bazel test --config=cuda --config=opt \
  --crosstool_top=//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda10.1:toolchain \
  --linkopt=-lrt \
  --action_env=TF2_BEHAVIOR="${TF2_BEHAVIOR}" \
  --test_lang_filters=py \
  --build_tag_filters=${tag_filters} \
  --test_tag_filters=${tag_filters} \
  --test_timeout="300,450,1200,3600" --local_test_jobs=4 \
  --test_output=errors --verbose_failures=true --keep_going \
  --run_under=//tensorflow/tools/ci_build/gpu_build:parallel_gpu_execute \
  -- ${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/...
test_xml_summary_exit
@ -1,69 +0,0 @@
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x

source tensorflow/tools/ci_build/release/common.sh

install_ubuntu_16_pip_deps pip3.8
# Update bazel
update_bazel_linux

# Export required variables for running pip.sh
export OS_TYPE="UBUNTU"
export CONTAINER_TYPE="GPU"
export TF_PYTHON_VERSION='python3.8'

# Run configure.
export TF_NEED_GCP=1
export TF_NEED_HDFS=1
export TF_NEED_S3=1
export TF_NEED_CUDA=1
export TF_CUDA_VERSION=10
export TF_CUDNN_VERSION=7
export TF_NEED_TENSORRT=1
export TENSORRT_INSTALL_PATH=/usr/local/tensorrt
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which ${TF_PYTHON_VERSION})
export PROJECT_NAME="tensorflow_gpu"
export LD_LIBRARY_PATH="/usr/local/cuda:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:$TENSORRT_INSTALL_PATH/lib"
export TF_CUDA_COMPUTE_CAPABILITIES=sm_35,sm_37,sm_52,sm_60,sm_61,compute_70

yes "" | "$PYTHON_BIN_PATH" configure.py

# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh

# Export optional variables for running pip.sh
export TF_TEST_FILTER_TAGS='gpu,requires-gpu,-no_gpu,-no_oss,-oss_serial,-no_oss_py38'
export TF_BUILD_FLAGS="--config=opt --config=v2 --config=cuda --distinct_host_configuration=false \
--action_env=TF_CUDA_VERSION --action_env=TF_CUDNN_VERSION --crosstool_top=//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda10.1:toolchain "
export TF_TEST_FLAGS="--test_tag_filters=${TF_TEST_FILTER_TAGS} --build_tag_filters=${TF_TEST_FILTER_TAGS} \
--distinct_host_configuration=false \
--action_env=TF_CUDA_VERSION --action_env=TF_CUDNN_VERSION --test_env=TF2_BEHAVIOR=1 \
--config=cuda --test_output=errors --local_test_jobs=4 --test_lang_filters=py \
--verbose_failures=true --keep_going --define=no_tensorflow_py_deps=true \
--run_under=//tensorflow/tools/ci_build/gpu_build:parallel_gpu_execute "
export TF_TEST_TARGETS="${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/... "
export TF_PIP_TESTS="test_pip_virtualenv_non_clean test_pip_virtualenv_clean"
#export IS_NIGHTLY=0 # Not nightly; uncomment if building from tf repo.
export TF_PROJECT_NAME=${PROJECT_NAME}
export TF_PIP_TEST_ROOT="pip_test"

# To build both tensorflow and tensorflow-gpu pip packages
export TF_BUILD_BOTH_GPU_PACKAGES=1

./tensorflow/tools/ci_build/builds/pip_new.sh
@ -1,36 +0,0 @@
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e

# Install latest bazel
source tensorflow/tools/ci_build/release/common.sh
install_bazelisk
which bazel

# We need py3 lint
sudo pip3 install pep8

# TODO(gunan): figure out why we get stuck with later versions of pylint.
# Install pylint.
sudo python3 -m pip install setuptools --upgrade
sudo python2 -m pip install pylint==1.6.4
sudo python3 -m pip install pylint==1.6.4

# TODO(yifeif): print pylint version for debug. remove later.
python3 -m pylint --version

# Run tensorflow sanity checks.
tensorflow/tools/ci_build/ci_sanity.sh
@ -1,20 +0,0 @@
:: Copyright 2019 The TensorFlow Authors. All Rights Reserved.
::
:: Licensed under the Apache License, Version 2.0 (the "License");
:: you may not use this file except in compliance with the License.
:: You may obtain a copy of the License at
::
::     http://www.apache.org/licenses/LICENSE-2.0
::
:: Unless required by applicable law or agreed to in writing, software
:: distributed under the License is distributed on an "AS IS" BASIS,
:: WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
:: See the License for the specific language governing permissions and
:: limitations under the License.
:: =============================================================================

CALL tensorflow\tools\ci_build\release\common_win.bat

call tensorflow\tools\ci_build\windows\cpu\bazel\run_libtensorflow.bat || exit /b 1

copy lib_package %TF_ARTIFACTS_DIR%\lib_package
Some files were not shown because too many files have changed in this diff.