Compare commits

...

90 Commits

Author SHA1 Message Date
Mihai Maruseac
9edbe5075f
Merge pull request from tensorflow-jenkins/relnotes-2.3.2-15740
Update release notes for TensorFlow 2.3.2
2021-01-04 12:20:31 -08:00
Mihai Maruseac
eedec23bd1
Update RELEASE.md 2021-01-04 12:17:48 -08:00
Mihai Maruseac
b7d531a6d4
Merge pull request from tensorflow/mm-cherry-pick-sqlite-bump-on-r2.3
Update SQLite to the latest sqlite-amalgamation-3340000
2021-01-04 11:35:04 -08:00
Yong Tang
603164448b Update SQLite to the latest sqlite-amalgamation-3340000
This PR updates SQLite to the latest sqlite-amalgamation-3340000

Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
2021-01-04 11:23:32 -08:00
Mihai Maruseac
1574dfb39a
Merge pull request from tensorflow/mm-cherrypick-0cc38aaa4064fd9e79101994ce9872c6d91f816b-on-r2.3
Prevent uninitialized memory access in `GraphConstructor::MakeEdge`
2020-12-17 09:42:36 -08:00
Mihai Maruseac
7c3e81c7de
Merge pull request from tensorflow/mm-cherrypick-14755416e364f17fb1870882fa778c7fec7f16e3-on-r2.3
Prevent CHECK-fail in LSTM/GRU with zero-length input.
2020-12-17 09:42:20 -08:00
Mihai Maruseac
e0d717917d
Merge pull request from tensorflow/mm-cherrypick-c1e1fc899ad5f8c725dcbb6470069890b5060bc7-on-r2.3
Mark `MemmappedTensorAllocator` as returning opaque handle.
2020-12-17 09:41:51 -08:00
Mihai Maruseac
6eda33cc3d
Merge pull request from tensorflow/mm-cherrypick-ebc70b7a592420d3d2f359e4b1694c236b82c7ae-on-r2.3
Validate that `DataFormat*` attributes form a permutation.
2020-12-17 09:41:31 -08:00
Mihai Maruseac
4e2714214b
Merge pull request from tensorflow/mm-cherrypick-ace0c15a22f7f054abcc1f53eabbcb0a1239a9e2-on-r2.3
Default initialize fixed point Eigen types.
2020-12-17 09:41:09 -08:00
Mihai Maruseac
804dab6a4f
Merge pull request from geetachavan1/cherrypicks_CI7J6
[Cherrypick:r2.3] Don't do fused average updates inside XLA context as it may create extra tf.cond which causes OOM on TPUs.
2020-12-17 09:24:54 -08:00
Mihai Maruseac
201f7297d1
Merge pull request from tensorflow-jenkins/version-numbers-2.3.2-2004
Update version numbers for TensorFlow 2.3.2
2020-12-17 09:21:07 -08:00
TensorFlow Release Automation
e9cbe6a5f4 Update version numbers to 2.3.2 2020-12-16 16:48:59 -08:00
TensorFlow Release Automation
001d766a4b Insert release notes place-fill 2020-12-16 16:35:03 -08:00
Mihai Maruseac
effc167fb5
Merge pull request from tensorflow/mm-cherry-pick-pcre-fixes-on-r2.3
Update PCRE library from 8.42 to 8.44
2020-12-16 09:54:04 -08:00
Mihai Maruseac
91941e070e
Merge pull request from tensorflow/mm-cherry-pick-java-fixes-on-r2.3
Bump junit from 4.11 to 4.13.1.
2020-12-16 09:49:25 -08:00
Mihai Maruseac
6e4de84b0b
Merge pull request from tensorflow/mm-cherry-pick-libjpeg-turbo-on-r2.3
Bump libjpeg-turbo from 2.0.4 to 2.0.5
2020-12-16 09:46:18 -08:00
Mihai Maruseac
8a8d683296
Merge pull request from tensorflow/mm-fix-sanity-build-on-r2.3
Fix sanity build
2020-12-15 17:48:59 -08:00
Mihai Maruseac
6265eea3c6 Fix sanity build 2020-12-15 17:47:28 -08:00
Mihai Maruseac
887ca8a8a7
Merge pull request from tensorflow/mm-disable-segfault-tests-on-r2.3
Disable a few tests.
2020-12-15 15:46:26 -08:00
Mihai Maruseac
36cb57fc75 Disable a few tests.
These tests now segfault after a dependency below us was updated.
2020-12-15 15:33:25 -08:00
Ruoxin Sang
62cdfd542c Don't do fused average updates inside XLA context as it may create extra tf.cond which causes OOM on TPUs.
PiperOrigin-RevId: 327294174
Change-Id: I7caa62d77e5c86a6afe7aaca22c7231d8f2304b6
2020-12-15 14:39:08 -08:00
Mihai Maruseac
f3c5b4ea29
Merge pull request from angerson/r2.3
Dockerfiles: Pin pip version and fix apt-get
2020-12-09 15:36:10 -08:00
Austin Anderson
a260612a4c Pin pip version and fix apt-get 2020-12-09 15:16:22 -08:00
Yong Tang
948cde7077 Update PCRE library from 8.42 to 8.44
This PR updates PCRE library from 8.42 to 8.44.

Note there is a CVE related to the old 8.42 (https://nvd.nist.gov/vuln/detail/CVE-2019-20838#VulnChangeHistorySection)

Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
2020-12-09 11:23:40 -08:00
Mihai Maruseac
16bca6296a Prevent uninitialized memory access in GraphConstructor::MakeEdge
The `MakeEdge` implementation assumes that there exists an output at `output_index` of the `src` node and an input at `input_index` of the `dst` node. However, if this is not the case, this results in accessing data out of bounds. Because we are accessing an array that is a private member of a class, and only in read-only mode, this usually results only in uninitialized memory access. However, it is reasonable to think that malicious users could manipulate these indexes to actually read data outside the class, thus resulting in information leakage and further exploits.

PiperOrigin-RevId: 346343288
Change-Id: I2127da27c2023d27f26efd39afa6c853385cab6f
2020-12-08 18:04:11 -08:00
Mihai Maruseac
b95ccc06e0 Prevent CHECK-fail in LSTM/GRU with zero-length input.
PiperOrigin-RevId: 346239181
Change-Id: I5f233dbc076aab7bb4e31ba24f5abd4eaf99ea4f
2020-12-08 18:03:03 -08:00
Mihai Maruseac
211469d43f Mark MemmappedTensorAllocator as returning opaque handle.
This allocator is used for `ImmutableConstantOp` and it returns a handle to the contents of a memory mapped file which is supposed to represent a tensor.

For tensors of complex types (resources, variables and strings), allocators which are not marked as returning opaque handles will call placement new to initialize each element. This means writing to the buffer. However, in our case, the buffer is immutable and already contains the tensor data. Hence, writing to it is both destructive and causes a crash.

PiperOrigin-RevId: 345786451
Change-Id: I46369c50fa60b3431709ffe068a728d3061f49c4
2020-12-08 18:00:18 -08:00
Mihai Maruseac
0b289c33be Validate that DataFormat* attributes form a permutation.
The `src_format` and `dst_format` attributes for the `DataFormatDimMap` and `DataFormatVecPermute` raw ops are supposed to determine a permutation. However, this was not validated and could result in uninitialized memory accesses as well as writes outside of bounds and potential crashes.

While here, we also test that the format attributes have the needed length, add tests for all validation failure cases, remove unnecessary calls to `strings::StrCat`, and fix a few grammar errors.

This will be cherry-picked on the supported release branches.

PiperOrigin-RevId: 346135579
Change-Id: I1c76392382c89ad8f072d5bc93d70669851eb404
2020-12-08 17:58:22 -08:00
Mihai Maruseac
05778aea11 Default initialize fixed point Eigen types.
In certain cases, tensors are filled with default values of the type. But, for these fixed point types, these values were uninitialized. Thus, we would have uninitialized memory access bugs, some of which were caught by MSAN.

PiperOrigin-RevId: 344101137
Change-Id: I14555fda74dca3b5f1582da9008901937e3f14e2
2020-12-08 17:51:45 -08:00
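A hedged illustration of the commit above with a toy fixed-point wrapper (hypothetical `QInt8Sketch`, not Eigen's actual `QInt8`/`QUInt8` types): before the fix the hand-written default constructor left the raw value uninitialized, so "filling with default values of the type" copied garbage; initializing the member makes those defaults well defined.

#include <cstdio>
#include <vector>

// Toy analogue of a fixed-point element type such as Eigen's QInt8 (names
// and layout are illustrative only).
struct QInt8Sketch {
  // Before the fix, the empty default constructor left `value` uninitialized;
  // after the fix it is default-initialized to zero.
  QInt8Sketch() : value(0) {}
  explicit QInt8Sketch(signed char v) : value(v) {}
  signed char value;
};

int main() {
  // "Filled with default values of the type": every element is now a
  // well-defined 0 rather than whatever bytes the allocation happened to hold.
  std::vector<QInt8Sketch> buffer(4, QInt8Sketch());
  for (const auto& q : buffer) std::printf("%d ", static_cast<int>(q.value));
  std::printf("\n");  // prints: 0 0 0 0
}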
Yong Tang
344778c33f Bump libjpeg-turbo from 2.0.4 to 2.0.5
It looks like the latest libjpeg-turbo is 2.0.5 so this PR
bumps the version (currently on 2.0.4).

Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
2020-11-13 09:55:48 -08:00
dependabot[bot]
e99de30c59 Bump junit in /tensorflow/java/maven/spark-tensorflow-connector
Bumps [junit](https://github.com/junit-team/junit4) from 4.11 to 4.13.1.
- [Release notes](https://github.com/junit-team/junit4/releases)
- [Changelog](https://github.com/junit-team/junit4/blob/main/doc/ReleaseNotes4.11.md)
- [Commits](https://github.com/junit-team/junit4/compare/r4.11...r4.13.1)

Signed-off-by: dependabot[bot] <support@github.com>
2020-10-13 16:11:07 -07:00
dependabot[bot]
1b20c051f4 Bump junit in /tensorflow/java/maven/tensorflow-hadoop
Bumps [junit](https://github.com/junit-team/junit4) from 4.11 to 4.13.1.
- [Release notes](https://github.com/junit-team/junit4/releases)
- [Changelog](https://github.com/junit-team/junit4/blob/main/doc/ReleaseNotes4.11.md)
- [Commits](https://github.com/junit-team/junit4/compare/r4.11...r4.13.1)

Signed-off-by: dependabot[bot] <support@github.com>
2020-10-13 16:11:07 -07:00
Mihai Maruseac
5681c179ef
Merge pull request from tensorflow/update_v
Remove Unsupported gpu architecture compute_80
2020-10-02 11:27:39 -07:00
Geeta Chavan
9133b1663a Remove Unsupported gpu architecture compute_80 2020-10-02 11:07:24 -07:00
Mihai Maruseac
483ccdffbc
Merge pull request from geetachavan1/cherrypicks_XBAAK
[Cherrypick:r2.3] Fix libtensorflow CUDA compute capabilities.
2020-10-01 15:55:26 -07:00
Amit Patankar
9f72a2cce3 Fix libtensorflow CUDA compute capabilities.
PiperOrigin-RevId: 330987201
Change-Id: I1a10343229216f70da922c163cc9ebcbfe069947
2020-10-01 15:45:45 -07:00
Mihai Maruseac
7a0fe48cd6
Merge pull request from geetachavan1/cherrypicks_89NDC
[CherryPick:r2.3]Add cuda compute configs to the variable list for libtensorflow GPU builds.
2020-10-01 14:59:24 -07:00
Amit Patankar
4a20f6eb6b Add cuda compute configs to the variable list for libtensorflow GPU builds.
PiperOrigin-RevId: 330570975
Change-Id: Iafdd2711c505152cc719b2ce636a9aa18691e00f
2020-10-01 14:10:46 -07:00
Mihai Maruseac
fcc4b966f1
Merge pull request from tensorflow-jenkins/version-numbers-2.3.1-16251
Update version numbers for TensorFlow 2.3.1
2020-09-21 18:57:17 -07:00
TensorFlow Release Automation
4cf223069a Update version numbers to 2.3.1 2020-09-21 18:55:42 -07:00
Mihai Maruseac
eee8224728
Merge pull request from tensorflow-jenkins/relnotes-2.3.1-24672
Update release notes for TensorFlow 2.3.1
2020-09-21 18:52:12 -07:00
Mihai Maruseac
0d41b1dfc9
Update RELEASE.md 2020-09-21 18:51:57 -07:00
TensorFlow Release Automation
d99bd631ea Insert release notes place-fill 2020-09-21 17:27:07 -07:00
Mihai Maruseac
d71d3ce252
Merge pull request from tensorflow/mihaimaruseac-patch-1-1
Fix missing import
2020-09-20 20:45:21 -07:00
Mihai Maruseac
9c91596d4d
Fix missing import 2020-09-20 20:45:05 -07:00
Mihai Maruseac
f9f12f6186
Merge pull request from tensorflow/mihaimaruseac-patch-4
Solve leftover from merge conflict
2020-09-20 15:26:15 -07:00
Mihai Maruseac
3ed271b0b0
Solve leftover from merge conflict 2020-09-20 15:25:12 -07:00
Mihai Maruseac
9cf3773b71
Merge pull request from tensorflow/mm-patch-r2.3
Patch for TF 2.3.1
2020-09-20 12:32:27 -07:00
Mihai Maruseac
92d5b97a0a Fix undefined behavior in tf.raw_ops.Switch in eager mode.
PiperOrigin-RevId: 332578058
Change-Id: I9727571d2f21476b10d8aa27c1b7176564b76ac9
2020-09-20 10:35:44 -07:00
Mihai Maruseac
d8c69c287f Fix multiple vulnerabilities in tf.experimental.dlpack.to_dlpack.
We have a use after free caused by memory corruption, a segmentation fault caused by memory corruption, several memory leaks, and undefined behavior when taking a reference to a nullptr.

PiperOrigin-RevId: 332568894
Change-Id: Ife0fc05e103b35325094ae5d822ee5fdea764572
2020-09-20 10:07:45 -07:00
Mihai Maruseac
156872df9b Fix heap buffer overflow in tf.raw_ops.SparseFillEmptyRowsGrad.
Also add tests, as they were lacking.

PiperOrigin-RevId: 332566071
Change-Id: I44277578e26ff5fb3fdb0dcbba6e91b2ec3e7859
2020-09-20 09:50:29 -07:00
Mihai Maruseac
b3075637e9 Fix multiple vulnerabilities in tf.raw_ops.*CountSparseOutput.
Also add tests for these API points, both for the happy paths and for the vulnerable ones.

PiperOrigin-RevId: 332563222
Change-Id: Ib3b52116a83a134c2e742a7c66e5e956db8fba05
2020-09-20 08:24:21 -07:00
Mihai Maruseac
42783a6d9e Prevent integer truncation from 64 to 32 bits.
The `tensorflow::Shard` function's last argument must be a two-argument function where both arguments are `int64` (`long long`, 64 bits). However, there are usages where code passes in a function whose arguments are `int` or `int32` (32 bits). In these cases, it is possible that the integer truncation would later cause a segfault or other unexpected behavior.

PiperOrigin-RevId: 332560414
Change-Id: Ief649406babc8d4f60b3e7a9d573cbcc5ce5b767
2020-09-19 19:59:09 -07:00
Mihai Maruseac
cb1422c9f2 Prevent int64 to int truncation in Shard API usage.
The function argument in `Shard` must be a function of two `int64` arguments. However, we are passing in a function with two `int` arguments. Thus, for large workloads, these arguments get truncated from positive `int64` values to negative `int` ones, resulting in a buffer out of bounds write.

PiperOrigin-RevId: 332557334
Change-Id: I236c9a2e7f53580e520571da8ba941a3aa9fa0b5
2020-09-19 19:42:12 -07:00
Mihai Maruseac
0315fa7140 Prevent format string vulnerability in tf.strings.as_string.
The `printf` format specifier only allows `#`, `0`, `-`, `+` and space as flag characters. Others are interpreted as width/precision/length modifier or conversion specifiers. If a character does not fit into any of these sets `printf` just displays it.

Also add a test suite for `tf.strings.as_string`, and fix the issue where the flag character was used only if a width was specified.

PiperOrigin-RevId: 332553548
Change-Id: Ie57cf2a7c14d1a36097642794c14329db669bbba
2020-09-19 19:30:25 -07:00
Mihai Maruseac
ce90127d70 Prevent segfault in GetSessionHandle{,V2}.
In eager mode, session state is null.

PiperOrigin-RevId: 332548597
Change-Id: If094812c2e094044220b9ba28f7d7601be042f38
2020-09-19 19:27:58 -07:00
Mihai Maruseac
892d5e599f Validate data_splits for tf.StringNGrams.
Without validation, we can cause a heap buffer overflow which results in data leakage and/or segfaults.

PiperOrigin-RevId: 332543478
Change-Id: Iee5bda24497a195d09d122355502480830b1b317
2020-09-19 19:16:38 -07:00
Mihai Maruseac
0fde760d7e Remove merge conflict marker from failed merge resolution 2020-09-19 18:57:03 -07:00
Mihai Maruseac
b98674e737 Fix bad import 2020-09-19 18:54:46 -07:00
Mihai Maruseac
9a64529bd4 Validate NodeDefs from FunctionDefLibrary of a GraphDef.
We already validated `NodeDef`s from a `GraphDef` but missed validating those from the `FunctionDefLibrary`. Thus, some maliciously crafted models could evade detection and cause denial of service due to a `CHECK`-fail.

PiperOrigin-RevId: 332536309
Change-Id: I052efe919ff1fe2f90815e286a1aa4c54c7b94ff
2020-09-19 18:47:14 -07:00
Mihai Maruseac
b3925917c4 [tflite] Ensure ResolveAxis properly handles negative inputs.
In Python, a list `l` of length `n` allows indexing with negative indices, `l[i]`. The only constraint is that `n + i` is non-negative. Code in `ResolveAxis` assumes this constraint holds and only checks it using a `DCHECK`. But the macro is a no-op in non-debug builds, so violations can result in reading from negative offsets (buffer underflows).

PiperOrigin-RevId: 332530683
Change-Id: I464e073fee618054ae3719a3679739007bb3f3bc
2020-09-19 18:20:15 -07:00
Mihai Maruseac
cd671a90fc [tflite] Ensure MatchingDim does not allow buffer overflow.
We check in `MatchingDim` that both arguments have the same dimensionality; however, that is a `DCHECK`, only enabled when building in debug mode. Hence, it could be possible to cause buffer overflows by passing in a tensor with larger dimensions as the second argument. To fix, we now make `MatchingDim` return the minimum of the two sizes.

A much better fix would be to return a status object but that requires refactoring a large part of the codebase for minor benefits.

PiperOrigin-RevId: 332526127
Change-Id: If627d0d2c80a685217b6e0d1e64b0872dbf1c5e4
2020-09-19 18:09:59 -07:00
Mihai Maruseac
1a506aef22 [tflite] Ensure input tensors don't have nullptr buffers.
A crafted TFLite model can force a node to have as input a tensor backed by a `nullptr` buffer. That is, by carefully changing the buffer index in the flatbuffer serialization, we can force the TFLite interpreter to consider a read-only tensor to be a read-write one and assume that there is an operator that has this tensor as output, writing to it and allocating memory before the tensor is used as input. If this does not happen, we get memory corruption.

PiperOrigin-RevId: 332524692
Change-Id: I57ef175152a29020af9ab041dc959e5631dce40f
2020-09-19 18:02:04 -07:00
Mihai Maruseac
094329d0dc [tflite] Ensure inputs and outputs don't overlap.
If a model uses the same tensor for both an input and an output then this can result in data loss and memory corruption. This should not happen.

PiperOrigin-RevId: 332522916
Change-Id: If0905b142415a9dfceaf2d181872f2a8fb88f48a
2020-09-18 19:24:40 -07:00
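A minimal standalone sketch of that validation (simplified; the actual subgraph code, which avoids building a set because the arrays are short, appears truncated at the end of this diff): no tensor index may appear in both the input list and the output list of a node.

#include <cstdio>
#include <unordered_set>
#include <vector>

// Returns true if the two index lists are disjoint; sharing a tensor between
// inputs and outputs is exactly what the commit above forbids.
bool InputsAndOutputsDisjoint(const std::vector<int>& inputs,
                              const std::vector<int>& outputs) {
  std::unordered_set<int> input_set(inputs.begin(), inputs.end());
  for (int tensor_index : outputs) {
    if (input_set.count(tensor_index) > 0) return false;
  }
  return true;
}

int main() {
  std::printf("%d\n", InputsAndOutputsDisjoint({0, 1}, {2}));  // 1: ok
  std::printf("%d\n", InputsAndOutputsDisjoint({0, 1}, {1}));  // 0: overlap
}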
Mihai Maruseac
7e283f97d8 [tflite] Make GetOptionalInputTensor the same as GetInput.
With the previous change, there is no more need for two separate APIs. We will deprecate `GetOptionalInputTensor` in the future.

PiperOrigin-RevId: 332513386
Change-Id: Id7110271c25ebd6126ad8c82a493e37e0e0756b3
2020-09-18 18:43:46 -07:00
Mihai Maruseac
42ed6ac868 [tflite] Test for kTfLiteOptionalTensor in GetInput.
`GetInput`, `GetVariableInput` and `GetOutput` all fail to check for the case where `node->inputs->data[index]` is the special `kTfLiteOptionalTensor` value (-1), which then causes `context->tensors[node->inputs->data[index]]` to read from an invalid memory location.

This fix makes `GetInput` and related functions return `nullptr` in those cases, asking the caller to check for `nullptr`. This is better than having `GetOptionalInputTensor` and `GetOptionalOutputTensor` (which does not exist but could be added), as using the patched `GetInput` in error would be caught by a sanitizer test in the default optimized build (due to the `-fsanitize=null` option).

PiperOrigin-RevId: 332512190
Change-Id: Iabca54da2f2de02b6ece3c38b54f76d4277d689e
2020-09-18 18:31:53 -07:00
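A rough sketch under simplified types (hypothetical names, not the TFLite C API): the special index -1 must be caught before it is used to index the context's tensor array, and the accessor then returns nullptr for the caller to check.

#include <cstdio>
#include <vector>

// Simplified stand-ins, just enough to show the check.
struct TensorSketch { const char* name; };
constexpr int kOptionalTensorSketch = -1;  // models kTfLiteOptionalTensor

struct ContextSketch { std::vector<TensorSketch> tensors; };
struct NodeSketch { std::vector<int> inputs; };

// Returns nullptr both for out-of-range indices and for the "optional tensor
// not provided" marker, instead of indexing tensors[-1].
const TensorSketch* GetInputSketch(const ContextSketch& ctx,
                                   const NodeSketch& node, int index) {
  if (index < 0 || index >= static_cast<int>(node.inputs.size())) return nullptr;
  const int tensor_index = node.inputs[index];
  if (tensor_index == kOptionalTensorSketch) return nullptr;
  return &ctx.tensors[tensor_index];
}

int main() {
  ContextSketch ctx{{{"a"}, {"b"}}};
  NodeSketch node{{1, kOptionalTensorSketch}};
  const TensorSketch* t0 = GetInputSketch(ctx, node, 0);
  const TensorSketch* t1 = GetInputSketch(ctx, node, 1);
  std::printf("%s %s\n", t0 ? t0->name : "null", t1 ? t1->name : "null");  // b null
}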
Mihai Maruseac
00c7ed7ce8 [tflite] Validate segment ids for segment_sum.
Segment identifiers in segment_sum should be in a 1-D tensor of same size as the first dimension of the input. The values of the tensor should be integers from {0, 1, 2, ... k-1}, where k is the first dimension of the input. The segment identifiers must not contain jumps and must be increasing.

See https://www.tensorflow.org/api_docs/python/tf/math#Segmentation as the source for these constraints.

PiperOrigin-RevId: 332510942
Change-Id: I898beaba00642c918bcd4b4d4ce893ebb190d869
2020-09-18 17:16:38 -07:00
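A minimal standalone sketch of those constraints (hypothetical `ValidSegmentIdsSketch`, not the TFLite kernel): segment ids must be non-negative, start at 0, never decrease, never jump by more than one, and number exactly one per row of the input.

#include <cstdio>
#include <vector>

// Validate segment ids for a segment_sum-style op.
bool ValidSegmentIdsSketch(const std::vector<int>& segment_ids, int input_rows) {
  if (static_cast<int>(segment_ids.size()) != input_rows) return false;
  int previous = -1;                       // forces the first id to be 0
  for (int id : segment_ids) {
    if (id < 0) return false;              // negative id -> out-of-bounds write
    if (id < previous || id > previous + 1) return false;  // decrease or jump
    previous = id;
  }
  return true;
}

int main() {
  std::printf("%d\n", ValidSegmentIdsSketch({0, 0, 1, 2}, 4));  // 1: valid
  std::printf("%d\n", ValidSegmentIdsSketch({0, 3}, 2));        // 0: jump
  std::printf("%d\n", ValidSegmentIdsSketch({-1, 0}, 2));       // 0: negative
}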
Mihai Maruseac
2369d14f9d [tflite] Don't check for buffers on every subgraph.
Buffers in the model are allocated globally, hence it makes sense to check for
their presence only once (O(1)) instead of on every subgraph (O(n)).

PiperOrigin-RevId: 323677724
Change-Id: I2da0c381093006828cc4c80f03dec8a917782861
2020-09-18 16:29:33 -07:00
Mihai Maruseac
b5013e29cc
Merge pull request from tensorflow/mm-cherry-pick-sqlite-fix-r2.3
Bump sqlite to 3.33.0
2020-09-18 11:17:17 -07:00
Mihai Maruseac
454d1f4af1 Bump sqlite to 3.33.0
This should handle CVE-2020-15358.

PiperOrigin-RevId: 332484006
Change-Id: Id2e7c4e877fcfaa53184fd21139a00f3234a5e3d
2020-09-18 11:14:37 -07:00
Mihai Maruseac
254c701480
Merge pull request from tensorflow/mm-scipy-r2.3
Remove scipy dependency.
2020-09-18 09:41:05 -07:00
Mihai Maruseac
b566517cdc
Merge pull request from tensorflow/revert-41669-cherrypicks_GFYF1
Revert "2.3.0-rc2 cherry-pick request: Cherry pick library threadsafestatus "
2020-09-18 07:28:14 -07:00
Mihai Maruseac
7dadd9a7c8
Revert "2.3.0-rc2 cherry-pick request: Cherry pick library threadsafestatus " 2020-09-18 07:27:59 -07:00
Mihai Maruseac
88b3e4b855
Merge pull request from lgeiger/cherry-pick-abc-collections
[CherryPick:r2.3] Fix deprecated usage of collections ABC
2020-09-18 06:11:53 -07:00
Mihai Maruseac
397958a455
Merge pull request from minglotus-6/cherrypicks_GFYF1
2.3.0-rc2 cherry-pick request: Cherry pick library threadsafestatus
2020-09-18 06:11:14 -07:00
Mihai Maruseac
ca6b1d5f7e
Merge pull request from tensorflow/mm-windows-gpu-rename-single-pip-py38-r23
Also fix single pip Windows GPU renaming on python 3.8
2020-09-17 19:41:17 -07:00
Mihai Maruseac
202086f36e Also fix single pip Windows GPU renaming on python 3.8 2020-09-17 19:40:37 -07:00
Mihai Maruseac
d623774c83
Merge pull request from tensorflow/mm-fix-win-gpu-rename-r2.3
Fix rename of gpu pips for single pip package on Windows GPU
2020-09-17 19:35:15 -07:00
Mihai Maruseac
114ff8c84f Fix rename of gpu pips for single pip package on Windows GPU 2020-09-17 19:33:54 -07:00
Mihai Maruseac
0ea0e6d434
Merge pull request from tensorflow/mm-disable-flaky-test
Disable a flaky test on mac py38
2020-09-17 16:59:51 -07:00
Mihai Maruseac
bbf2a5d4b7 Disable a flaky test on mac py38 2020-09-17 16:58:44 -07:00
Mihai Maruseac
6a0eb6f25a
Merge pull request from tensorflow/mm-r23-create-rel
Add rel/ pool
2020-09-17 13:11:41 -07:00
Mihai Maruseac
a1f4941a5c Add rel/ pool 2020-09-17 12:56:08 -07:00
Mihai Maruseac
19c30411dc
Merge pull request from tensorflow/mihaimaruseac-patch-1
Pin Estimator nightly to latest version used during release
2020-09-17 11:14:19 -07:00
Mihai Maruseac
c418ff6ae9
Update common_win.bat 2020-09-17 11:13:38 -07:00
Mihai Maruseac
24c2db1394
Pin Estimator nightly to latest version used during release 2020-09-17 11:11:27 -07:00
Mihai Maruseac
65341f73d1 Remove scipy dependency.
See , , .
2020-07-29 10:48:39 -07:00
Mingming Liu
6ee77b0b8e Move helper class ThreadSafeStatus into a separate file with unit test.
PiperOrigin-RevId: 321974213
Change-Id: I78ceb91618c40da799097aa4d2048be0bf182c16
2020-07-23 17:36:37 +00:00
Mingming Liu
0cb65c025f Resolved merge conflict in core/kernels/BUILD 2020-07-23 17:34:33 +00:00
Lukas Geiger
00615b43cb Fix deprecated usage of collections ABC 2020-07-14 00:15:22 +02:00
133 changed files with 3264 additions and 205 deletions
RELEASE.md
tensorflow
c/eager
cc/saved_model
core
java/maven
spark-tensorflow-connector
tensorflow-hadoop
lite
python
stream_executor/cuda
tensorflow.bzl
tools/ci_build

View File

@ -1,3 +1,78 @@
# Release 2.3.2
## Bug Fixes and Other Changes
* Fixes an access to uninitialized memory in Eigen code
([CVE-2020-26266](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-26266))
* Fixes a security vulnerability caused by lack of validation in
`tf.raw_ops.DataFormatVecPermute` and `tf.raw_ops.DataFormatDimMap`
([CVE-2020-26267](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-26267))
* Fixes a vulnerability caused by attempting to write to immutable memory region in
`tf.raw_ops.ImmutableConst`
([CVE-2020-26268](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-26268))
* Fixes a `CHECK`-fail in LSTM with zero-length input
([CVE-2020-26270](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-26270))
* Fixes a security vulnerability caused by accessing heap data outside of bounds
when loading a specially crafted `SavedModel`
([CVE-2020-26271](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-26271))
* Solves an OOM issue on TPUs when XLA contexts use fused average updates
* Updates `libjpeg-turbo` to `2.0.5` to handle
[CVE-2020-13790](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13790).
* Updates `junit` to `4.13.1` to handle
[CVE-2020-15250](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15250).
* Updates `PCRE` to `8.44` to handle
[CVE-2019-20838](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20838)
and
[CVE-2020-14155](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14155).
* Updates `sqlite3` to `3.34.0` to keep in sync with master branch.
# Release 2.3.1
## Bug Fixes and Other Changes
* Fixes an undefined behavior causing a segfault in `tf.raw_ops.Switch`
([CVE-2020-15190](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15190))
* Fixes three vulnerabilities in conversion to DLPack format
([CVE-2020-15191](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15191),
[CVE-2020-15192](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15192),
[CVE-2020-15193](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15193))
* Fixes two vulnerabilities in `SparseFillEmptyRowsGrad`
([CVE-2020-15194](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15194),
[CVE-2020-15195](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15195))
* Fixes several vulnerabilities in `RaggedCountSparseOutput` and
`SparseCountSparseOutput` operations
([CVE-2020-15196](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15196),
[CVE-2020-15197](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15197),
[CVE-2020-15198](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15198),
[CVE-2020-15199](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15199),
[CVE-2020-15200](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15200),
[CVE-2020-15201](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15201))
* Fixes an integer truncation vulnerability in code using the work sharder API
([CVE-2020-15202](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15202))
* Fixes a format string vulnerability in `tf.strings.as_string`
([CVE-2020-15203](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15203))
* Fixes segfault raised by calling session-only ops in eager mode
([CVE-2020-15204](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15204))
* Fixes data leak and potential ASLR violation from `tf.raw_ops.StringNGrams`
([CVE-2020-15205](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15205))
* Fixes segfaults caused by incomplete `SavedModel` validation
([CVE-2020-15206](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15206))
* Fixes a data corruption due to a bug in negative indexing support in TFLite
([CVE-2020-15207](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15207))
* Fixes a data corruption due to dimension mismatch in TFLite
([CVE-2020-15208](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15208))
* Fixes several vulnerabilities in TFLite saved model format
([CVE-2020-15209](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15209),
[CVE-2020-15210](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15210),
[CVE-2020-15211](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15211))
* Fixes several vulnerabilities in TFLite implementation of segment sum
([CVE-2020-15212](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15212),
[CVE-2020-15213](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15213),
[CVE-2020-15214](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15214))
* Updates `sqlite3` to `3.33.00` to handle
[CVE-2020-15358](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15358).
* Fixes deprecated usage of `collections` API
* Removes `scipy` dependency from `setup.py` since TensorFlow does not need it
to install the pip package
# Release 2.3.0
## Major Features and Improvements

View File

@ -248,21 +248,36 @@ void TFE_CallDLManagedTensorDeleter(void* dlm_ptr) {
}
void* TFE_HandleToDLPack(TFE_TensorHandle* h, TF_Status* status) {
auto tf_dlm_context = GetDlContext(h, status);
if (!status->status.ok()) {
return nullptr;
}
auto* tf_dlm_data = TFE_TensorHandleDevicePointer(h, status);
if (!status->status.ok()) {
return nullptr;
}
const Tensor* tensor = GetTensorFromHandle(h, status);
TF_DataType data_type = static_cast<TF_DataType>(tensor->dtype());
TensorReference tensor_ref(*tensor); // This will call buf_->Ref()
auto tf_dlm_type = GetDlDataType(data_type, status);
if (!status->status.ok()) {
return nullptr;
}
TensorReference tensor_ref(*tensor); // This will call buf_->Ref()
auto* tf_dlm_tensor_ctx = new TfDlManagedTensorCtx(tensor_ref);
tf_dlm_tensor_ctx->reference = tensor_ref;
DLManagedTensor* dlm_tensor = &tf_dlm_tensor_ctx->tensor;
dlm_tensor->manager_ctx = tf_dlm_tensor_ctx;
dlm_tensor->deleter = &DLManagedTensorDeleter;
dlm_tensor->dl_tensor.ctx = GetDlContext(h, status);
dlm_tensor->dl_tensor.ctx = tf_dlm_context;
int ndim = tensor->dims();
dlm_tensor->dl_tensor.ndim = ndim;
dlm_tensor->dl_tensor.data = TFE_TensorHandleDevicePointer(h, status);
dlm_tensor->dl_tensor.dtype = GetDlDataType(data_type, status);
dlm_tensor->dl_tensor.data = tf_dlm_data;
dlm_tensor->dl_tensor.dtype = tf_dlm_type;
std::vector<int64_t>* shape_arr = &tf_dlm_tensor_ctx->shape;
std::vector<int64_t>* stride_arr = &tf_dlm_tensor_ctx->strides;
@ -275,13 +290,14 @@ void* TFE_HandleToDLPack(TFE_TensorHandle* h, TF_Status* status) {
(*stride_arr)[i] = (*shape_arr)[i + 1] * (*stride_arr)[i + 1];
}
dlm_tensor->dl_tensor.shape = &(*shape_arr)[0];
dlm_tensor->dl_tensor.shape = shape_arr->data();
// There are two ways to represent compact row-major data
// 1) nullptr indicates tensor is compact and row-majored.
// 2) fill in the strides array as the real case for compact row-major data.
// Here we choose option 2, since some frameworks didn't handle the strides
// argument properly.
dlm_tensor->dl_tensor.strides = &(*stride_arr)[0];
dlm_tensor->dl_tensor.strides = stride_arr->data();
dlm_tensor->dl_tensor.byte_offset =
0; // TF doesn't handle the strides and byte_offsets here
return static_cast<void*>(dlm_tensor);

View File

@ -21,6 +21,7 @@ limitations under the License.
#include "tensorflow/cc/saved_model/loader_util.h"
#include "tensorflow/cc/saved_model/reader.h"
#include "tensorflow/core/framework/attr_value.pb.h"
#include "tensorflow/core/framework/function.pb.h"
#include "tensorflow/core/framework/node_def.pb.h"
#include "tensorflow/core/framework/tensor.pb.h"
#include "tensorflow/core/lib/io/path.h"
@ -72,26 +73,41 @@ uint64 GetLatencyMicroseconds(const uint64 start_microseconds) {
// Ensure that constant tensors loaded from the saved model have valid shape.
// Also ensure that constant nodes have a value assigned to them.
// TODO(b/154763635): this is temporary and will be replaced with a better audit
static Status ValidateNode(const NodeDef& node) {
const auto node_iterator = node.attr().find("value");
if (node_iterator != node.attr().end()) {
AttrValue node_value = node_iterator->second;
if (node_value.has_tensor()) {
const PartialTensorShape node_shape(node_value.tensor().tensor_shape());
if (node_shape.num_elements() < 0) {
return errors::FailedPrecondition(
"Saved model contains node \"", node.name(), "\" (op \"", node.op(),
"\") which initializes from a tensor with ",
node_shape.num_elements(), " elements");
}
}
} else if (node.op() == "Const") {
return errors::FailedPrecondition(
"Saved model contains node \"", node.name(),
"\" which is a constant tensor but no value has been provided");
}
return Status::OK();
}
static Status ValidateSavedTensors(const GraphDef& graph_def) {
for (const auto& node : graph_def.node()) {
const auto node_iterator = node.attr().find("value");
if (node_iterator != node.attr().end()) {
AttrValue node_value = node_iterator->second;
if (node_value.has_tensor()) {
const PartialTensorShape node_shape(node_value.tensor().tensor_shape());
if (node_shape.num_elements() < 0) {
return errors::FailedPrecondition(
"Saved model contains node \"", node.name(), "\" (op \"",
node.op(), "\") which initializes from a tensor with ",
node_shape.num_elements(), " elements");
}
TF_RETURN_IF_ERROR(ValidateNode(node));
}
if (graph_def.has_library()) {
const FunctionDefLibrary& library = graph_def.library();
for (const auto& function : library.function()) {
for (const auto& node : function.node_def()) {
TF_RETURN_IF_ERROR(ValidateNode(node));
}
} else if (node.op() == "Const") {
return errors::FailedPrecondition(
"Saved model contains node \"", node.name(),
"\" which is a constant tensor but no value has been provided");
}
}
return Status::OK();
}

View File

@ -307,7 +307,12 @@ Status KernelAndDeviceOp::Run(
if (outputs != nullptr) {
outputs->clear();
for (int i = 0; i < context.num_outputs(); ++i) {
outputs->push_back(Tensor(*context.mutable_output(i)));
const auto* output_tensor = context.mutable_output(i);
if (output_tensor != nullptr) {
outputs->push_back(Tensor(*output_tensor));
} else {
outputs->push_back(Tensor());
}
}
}
return Status::OK();

View File

@ -44,6 +44,7 @@ limitations under the License.
#include "tensorflow/core/lib/gtl/inlined_vector.h"
#include "tensorflow/core/lib/strings/scanner.h"
#include "tensorflow/core/lib/strings/str_util.h"
#include "tensorflow/core/platform/errors.h"
#include "tensorflow/core/platform/logging.h"
#include "tensorflow/core/platform/macros.h"
#include "tensorflow/core/public/version.h"
@ -1425,6 +1426,17 @@ void GraphConstructor::Undo() {
Status GraphConstructor::MakeEdge(Node* src, int output_index, Node* dst,
int input_index) {
if (output_index >= src->num_outputs()) {
return errors::InvalidArgument(
"Output ", output_index, " of node ", src->name(),
" does not exist. Node only has ", src->num_outputs(), " outputs.");
}
if (input_index >= dst->num_inputs()) {
return errors::InvalidArgument(
"Input ", input_index, " of node ", dst->name(),
" does not exist. Node only has ", dst->num_inputs(), " inputs.");
}
DataType src_out = src->output_type(output_index);
DataType dst_in = dst->input_type(input_index);
if (!TypesCompatible(dst_in, src_out)) {

View File

@ -6085,6 +6085,24 @@ tf_kernel_library(
deps = STRING_DEPS,
)
tf_cc_test(
name = "as_string_op_test",
size = "small",
srcs = ["as_string_op_test.cc"],
deps = [
":as_string_op",
":ops_testutil",
":ops_util",
"//tensorflow/core:core_cpu",
"//tensorflow/core:framework",
"//tensorflow/core:lib",
"//tensorflow/core:protos_all_cc",
"//tensorflow/core:test",
"//tensorflow/core:test_main",
"//tensorflow/core:testlib",
],
)
tf_kernel_library(
name = "unicode_ops",
prefix = "unicode_ops",

View File

@ -65,9 +65,26 @@ class AsStringOp : public OpKernel {
OP_REQUIRES(ctx, !(scientific && shortest),
errors::InvalidArgument(
"Cannot select both scientific and shortest notation"));
format_ = "%";
if (!fill_string.empty()) {
switch (fill_string[0]) {
case ' ':
case '+':
case '-':
case '0':
case '#':
strings::Appendf(&format_, "%s", fill_string.c_str());
break;
default:
bool fill_not_supported = true;
OP_REQUIRES(ctx, !fill_not_supported,
errors::InvalidArgument("Fill argument not supported: \"",
fill_string, "\""));
}
}
if (width > -1) {
strings::Appendf(&format_, "%s%d", fill_string.c_str(), width);
strings::Appendf(&format_, "%d", width);
}
if (precision > -1) {
strings::Appendf(&format_, ".%d", precision);

View File

@ -0,0 +1,245 @@
/* Copyright 2020 The TensorFlow Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
#include "tensorflow/core/framework/fake_input.h"
#include "tensorflow/core/framework/node_def_builder.h"
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/framework/tensor_testutil.h"
#include "tensorflow/core/framework/types.h"
#include "tensorflow/core/kernels/ops_testutil.h"
#include "tensorflow/core/kernels/ops_util.h"
#include "tensorflow/core/lib/core/status_test_util.h"
namespace tensorflow {
namespace {
class AsStringGraphTest : public OpsTestBase {
protected:
Status Init(DataType input_type, const string& fill = "", int width = -1,
int precision = -1, bool scientific = false,
bool shortest = false) {
TF_CHECK_OK(NodeDefBuilder("op", "AsString")
.Input(FakeInput(input_type))
.Attr("fill", fill)
.Attr("precision", precision)
.Attr("scientific", scientific)
.Attr("shortest", shortest)
.Attr("width", width)
.Finalize(node_def()));
return InitOp();
}
};
TEST_F(AsStringGraphTest, Int8) {
TF_ASSERT_OK(Init(DT_INT8));
AddInputFromArray<int8>(TensorShape({3}), {-42, 0, 42});
TF_ASSERT_OK(RunOpKernel());
Tensor expected(allocator(), DT_STRING, TensorShape({3}));
test::FillValues<tstring>(&expected, {"-42", "0", "42"});
test::ExpectTensorEqual<tstring>(expected, *GetOutput(0));
}
TEST_F(AsStringGraphTest, Int64) {
TF_ASSERT_OK(Init(DT_INT64));
AddInputFromArray<int64>(TensorShape({3}), {-42, 0, 42});
TF_ASSERT_OK(RunOpKernel());
Tensor expected(allocator(), DT_STRING, TensorShape({3}));
test::FillValues<tstring>(&expected, {"-42", "0", "42"});
test::ExpectTensorEqual<tstring>(expected, *GetOutput(0));
}
TEST_F(AsStringGraphTest, FloatDefault) {
TF_ASSERT_OK(Init(DT_FLOAT));
AddInputFromArray<float>(TensorShape({4}), {-42, 0, 3.14159, 42});
TF_ASSERT_OK(RunOpKernel());
Tensor expected(allocator(), DT_STRING, TensorShape({4}));
test::FillValues<tstring>(
&expected, {"-42.000000", "0.000000", "3.141590", "42.000000"});
test::ExpectTensorEqual<tstring>(expected, *GetOutput(0));
}
TEST_F(AsStringGraphTest, FloatScientific) {
TF_ASSERT_OK(Init(DT_FLOAT, /*fill=*/"", /*width=*/-1, /*precision=*/-1,
/*scientific=*/true));
AddInputFromArray<float>(TensorShape({4}), {-42, 0, 3.14159, 42});
TF_ASSERT_OK(RunOpKernel());
Tensor expected(allocator(), DT_STRING, TensorShape({4}));
test::FillValues<tstring>(&expected, {"-4.200000e+01", "0.000000e+00",
"3.141590e+00", "4.200000e+01"});
test::ExpectTensorEqual<tstring>(expected, *GetOutput(0));
}
TEST_F(AsStringGraphTest, FloatShortest) {
TF_ASSERT_OK(Init(DT_FLOAT, /*fill=*/"", /*width=*/-1, /*precision=*/-1,
/*scientific=*/false, /*shortest=*/true));
AddInputFromArray<float>(TensorShape({4}), {-42, 0, 3.14159, 42});
TF_ASSERT_OK(RunOpKernel());
Tensor expected(allocator(), DT_STRING, TensorShape({4}));
test::FillValues<tstring>(&expected, {"-42", "0", "3.14159", "42"});
test::ExpectTensorEqual<tstring>(expected, *GetOutput(0));
}
TEST_F(AsStringGraphTest, FloatPrecisionOnly) {
TF_ASSERT_OK(Init(DT_FLOAT, /*fill=*/"", /*width=*/-1, /*precision=*/2));
AddInputFromArray<float>(TensorShape({4}), {-42, 0, 3.14159, 42});
TF_ASSERT_OK(RunOpKernel());
Tensor expected(allocator(), DT_STRING, TensorShape({4}));
test::FillValues<tstring>(&expected, {"-42.00", "0.00", "3.14", "42.00"});
test::ExpectTensorEqual<tstring>(expected, *GetOutput(0));
}
TEST_F(AsStringGraphTest, FloatWidthOnly) {
TF_ASSERT_OK(Init(DT_FLOAT, /*fill=*/"", /*width=*/5));
AddInputFromArray<float>(TensorShape({4}), {-42, 0, 3.14159, 42});
TF_ASSERT_OK(RunOpKernel());
Tensor expected(allocator(), DT_STRING, TensorShape({4}));
test::FillValues<tstring>(
&expected, {"-42.000000", "0.000000", "3.141590", "42.000000"});
test::ExpectTensorEqual<tstring>(expected, *GetOutput(0));
}
TEST_F(AsStringGraphTest, Float_5_2_Format) {
TF_ASSERT_OK(Init(DT_FLOAT, /*fill=*/"", /*width=*/5, /*precision=*/2));
AddInputFromArray<float>(TensorShape({4}), {-42, 0, 3.14159, 42});
TF_ASSERT_OK(RunOpKernel());
Tensor expected(allocator(), DT_STRING, TensorShape({4}));
test::FillValues<tstring>(&expected, {"-42.00", " 0.00", " 3.14", "42.00"});
test::ExpectTensorEqual<tstring>(expected, *GetOutput(0));
}
TEST_F(AsStringGraphTest, Complex) {
TF_ASSERT_OK(Init(DT_COMPLEX64, /*fill=*/"", /*width=*/5, /*precision=*/2));
AddInputFromArray<complex64>(TensorShape({3}), {{-4, 2}, {0}, {3.14159, -1}});
TF_ASSERT_OK(RunOpKernel());
Tensor expected(allocator(), DT_STRING, TensorShape({3}));
test::FillValues<tstring>(
&expected, {"(-4.00, 2.00)", "( 0.00, 0.00)", "( 3.14,-1.00)"});
test::ExpectTensorEqual<tstring>(expected, *GetOutput(0));
}
TEST_F(AsStringGraphTest, Bool) {
TF_ASSERT_OK(Init(DT_BOOL));
AddInputFromArray<bool>(TensorShape({2}), {true, false});
TF_ASSERT_OK(RunOpKernel());
Tensor expected(allocator(), DT_STRING, TensorShape({2}));
test::FillValues<tstring>(&expected, {"true", "false"});
test::ExpectTensorEqual<tstring>(expected, *GetOutput(0));
}
TEST_F(AsStringGraphTest, String) {
Status s = Init(DT_STRING);
ASSERT_EQ(error::INVALID_ARGUMENT, s.code());
ASSERT_TRUE(absl::StrContains(
s.error_message(),
"Value for attr 'T' of string is not in the list of allowed values"));
}
TEST_F(AsStringGraphTest, OnlyOneOfScientificAndShortest) {
Status s = Init(DT_FLOAT, /*fill=*/"", /*width=*/-1, /*precision=*/-1,
/*scientific=*/true, /*shortest=*/true);
ASSERT_EQ(error::INVALID_ARGUMENT, s.code());
ASSERT_TRUE(
absl::StrContains(s.error_message(),
"Cannot select both scientific and shortest notation"));
}
TEST_F(AsStringGraphTest, NoShortestForNonFloat) {
Status s = Init(DT_INT32, /*fill=*/"", /*width=*/-1, /*precision=*/-1,
/*scientific=*/false, /*shortest=*/true);
ASSERT_EQ(error::INVALID_ARGUMENT, s.code());
ASSERT_TRUE(absl::StrContains(
s.error_message(),
"scientific and shortest format not supported for datatype"));
}
TEST_F(AsStringGraphTest, NoScientificForNonFloat) {
Status s = Init(DT_INT32, /*fill=*/"", /*width=*/-1, /*precision=*/-1,
/*scientific=*/true);
ASSERT_EQ(error::INVALID_ARGUMENT, s.code());
ASSERT_TRUE(absl::StrContains(
s.error_message(),
"scientific and shortest format not supported for datatype"));
}
TEST_F(AsStringGraphTest, NoPrecisionForNonFloat) {
Status s = Init(DT_INT32, /*fill=*/"", /*width=*/-1, /*precision=*/5);
ASSERT_EQ(error::INVALID_ARGUMENT, s.code());
ASSERT_TRUE(absl::StrContains(s.error_message(),
"precision not supported for datatype"));
}
TEST_F(AsStringGraphTest, LongFill) {
Status s = Init(DT_INT32, /*fill=*/"asdf");
ASSERT_EQ(error::INVALID_ARGUMENT, s.code());
ASSERT_TRUE(absl::StrContains(s.error_message(),
"Fill string must be one or fewer characters"));
}
TEST_F(AsStringGraphTest, FillWithZero) {
TF_ASSERT_OK(Init(DT_INT64, /*fill=*/"0", /*width=*/4));
AddInputFromArray<int64>(TensorShape({3}), {-42, 0, 42});
TF_ASSERT_OK(RunOpKernel());
Tensor expected(allocator(), DT_STRING, TensorShape({3}));
test::FillValues<tstring>(&expected, {"-042", "0000", "0042"});
test::ExpectTensorEqual<tstring>(expected, *GetOutput(0));
}
TEST_F(AsStringGraphTest, FillWithSpace) {
TF_ASSERT_OK(Init(DT_INT64, /*fill=*/" ", /*width=*/4));
AddInputFromArray<int64>(TensorShape({3}), {-42, 0, 42});
TF_ASSERT_OK(RunOpKernel());
Tensor expected(allocator(), DT_STRING, TensorShape({3}));
test::FillValues<tstring>(&expected, {" -42", " 0", " 42"});
test::ExpectTensorEqual<tstring>(expected, *GetOutput(0));
}
TEST_F(AsStringGraphTest, FillWithChar1) {
TF_ASSERT_OK(Init(DT_INT64, /*fill=*/"-", /*width=*/4));
AddInputFromArray<int64>(TensorShape({3}), {-42, 0, 42});
TF_ASSERT_OK(RunOpKernel());
Tensor expected(allocator(), DT_STRING, TensorShape({3}));
test::FillValues<tstring>(&expected, {"-42 ", "0 ", "42 "});
test::ExpectTensorEqual<tstring>(expected, *GetOutput(0));
}
TEST_F(AsStringGraphTest, FillWithChar3) {
Status s = Init(DT_INT32, /*fill=*/"s");
ASSERT_EQ(error::INVALID_ARGUMENT, s.code());
ASSERT_TRUE(
absl::StrContains(s.error_message(), "Fill argument not supported"));
}
TEST_F(AsStringGraphTest, FillWithChar4) {
Status s = Init(DT_INT32, /*fill=*/"n");
ASSERT_EQ(error::INVALID_ARGUMENT, s.code());
ASSERT_TRUE(
absl::StrContains(s.error_message(), "Fill argument not supported"));
}
} // end namespace
} // end namespace tensorflow

View File

@ -193,7 +193,8 @@ struct LaunchBatchBandedTriangularSolve {
Shard(worker_threads.num_threads, worker_threads.workers, batch_size,
cost_per_unit,
[&in_x, &in_y, adjoint, lower, &bcast, out](int start, int limit) {
[&in_x, &in_y, adjoint, lower, &bcast, out](int64 start,
int64 limit) {
SequentialBandedTriangularSolveKernel<Scalar>::Run(
in_x, in_y, lower, adjoint, bcast, out, start, limit);
});

View File

@ -121,7 +121,7 @@ class BoostedTreesTrainingPredictOp : public OpKernel {
auto do_work = [&resource, &bucketized_features, &cached_tree_ids,
&cached_node_ids, &output_partial_logits,
&output_node_ids, latest_tree,
this](int32 start, int32 end) {
this](int64 start, int64 end) {
for (int32 i = start; i < end; ++i) {
int32 tree_id = cached_tree_ids(i);
int32 node_id = cached_node_ids(i);
@ -237,7 +237,7 @@ class BoostedTreesPredictOp : public OpKernel {
const int32 last_tree = resource->num_trees() - 1;
auto do_work = [&resource, &bucketized_features, &output_logits, last_tree,
this](int32 start, int32 end) {
this](int64 start, int64 end) {
for (int32 i = start; i < end; ++i) {
std::vector<float> tree_logits(logits_dimension_, 0.0);
int32 tree_id = 0;
@ -340,7 +340,7 @@ class BoostedTreesExampleDebugOutputsOp : public OpKernel {
// path. Note: feature_ids has one less value than logits_path because the
// first value of each logit path will be the bias.
auto do_work = [&resource, &bucketized_features, &output_debug_info,
last_tree](int32 start, int32 end) {
last_tree](int64 start, int64 end) {
for (int32 i = start; i < end; ++i) {
// Proto to store debug outputs, per example.
boosted_trees::DebugOutput example_debug_info;

View File

@ -178,10 +178,30 @@ class SparseCount : public OpKernel {
const Tensor& weights = context->input(3);
bool use_weights = weights.NumElements() > 0;
OP_REQUIRES(context, TensorShapeUtils::IsMatrix(indices.shape()),
errors::InvalidArgument(
"Input indices must be a 2-dimensional tensor. Got: ",
indices.shape().DebugString()));
if (use_weights) {
OP_REQUIRES(
context, weights.shape() == values.shape(),
errors::InvalidArgument(
"Weights and values must have the same shape. Weight shape: ",
weights.shape().DebugString(),
"; values shape: ", values.shape().DebugString()));
}
bool is_1d = shape.NumElements() == 1;
int num_batches = is_1d ? 1 : shape.flat<int64>()(0);
int num_values = values.NumElements();
OP_REQUIRES(context, num_values == indices.shape().dim_size(0),
errors::InvalidArgument(
"Number of values must match first dimension of indices.",
"Got ", num_values,
" values, indices shape: ", indices.shape().DebugString()));
const auto indices_values = indices.matrix<int64>();
const auto values_values = values.flat<T>();
const auto weight_values = weights.flat<W>();
@ -235,12 +255,33 @@ class RaggedCount : public OpKernel {
bool use_weights = weights.NumElements() > 0;
bool is_1d = false;
if (use_weights) {
OP_REQUIRES(
context, weights.shape() == values.shape(),
errors::InvalidArgument(
"Weights and values must have the same shape. Weight shape: ",
weights.shape().DebugString(),
"; values shape: ", values.shape().DebugString()));
}
const auto splits_values = splits.flat<int64>();
const auto values_values = values.flat<T>();
const auto weight_values = weights.flat<W>();
int num_batches = splits.NumElements() - 1;
int num_values = values.NumElements();
OP_REQUIRES(
context, num_batches > 0,
errors::InvalidArgument(
"Must provide at least 2 elements for the splits argument"));
OP_REQUIRES(context, splits_values(0) == 0,
errors::InvalidArgument("Splits must start with 0, not with ",
splits_values(0)));
OP_REQUIRES(context, splits_values(num_batches) == num_values,
errors::InvalidArgument(
"Splits must end with the number of values, got ",
splits_values(num_batches), " instead of ", num_values));
auto per_batch_counts = BatchedMap<W>(num_batches);
T max_value = 0;
int batch_idx = 0;

View File

@ -223,7 +223,7 @@ struct CropAndResize<CPUDevice, T> {
const int depth = crops.dimension(3);
// Sharding across boxes.
auto CropAndResizePerBox = [&](int start_box, int limit_box) {
auto CropAndResizePerBox = [&](int64 start_box, int64 limit_box) {
for (int b = start_box; b < limit_box; ++b) {
const float y1 = boxes(b, 0);
const float x1 = boxes(b, 1);
@ -449,7 +449,7 @@ struct CropAndResizeBackpropImage<CPUDevice, T> {
grads_image.setZero();
auto CropAndResizeBackImgPerBox = [&](int start_box, int limit_box) {
auto CropAndResizeBackImgPerBox = [&](int64 start_box, int64 limit_box) {
for (int b = start_box; b < limit_box; ++b) {
const float y1 = boxes(b, 0);
const float x1 = boxes(b, 1);

View File

@ -18,16 +18,52 @@ limitations under the License.
#define EIGEN_USE_THREADS
#include "tensorflow/core/kernels/data_format_ops.h"
#include <map>
#include "third_party/eigen3/unsupported/Eigen/CXX11/Tensor"
#include "tensorflow/core/framework/op_kernel.h"
#include "tensorflow/core/framework/register_types.h"
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/platform/errors.h"
namespace tensorflow {
typedef Eigen::ThreadPoolDevice CPUDevice;
typedef Eigen::GpuDevice GPUDevice;
// Ensure that `src` and `dst` define a valid permutation.
// Ops defined in this file assume that user specifies a permutation via two
// string attributes. This check validates that these attributes properly define
// it to prevent security vulnerabilities.
static bool IsValidPermutation(const std::string& src, const std::string& dst) {
if (src.size() != dst.size()) {
return false;
}
std::map<char, bool> characters;
// Every character in `src` must be present only once
for (const auto c : src) {
if (characters[c]) {
return false;
}
characters[c] = true;
}
// Every character in `dst` must show up in `src` exactly once
for (const auto c : dst) {
if (!characters[c]) {
return false;
}
characters[c] = false;
}
// At this point, characters[] has been switched to true and false exactly
// once for all character in `src` (and `dst`) so we have a valid permutation
return true;
}
template <typename Device, typename T>
class DataFormatDimMapOp : public OpKernel {
public:
@ -37,15 +73,20 @@ class DataFormatDimMapOp : public OpKernel {
OP_REQUIRES_OK(context, context->GetAttr("src_format", &src_format));
string dst_format;
OP_REQUIRES_OK(context, context->GetAttr("dst_format", &dst_format));
OP_REQUIRES(context, src_format.size() == 4,
errors::InvalidArgument(strings::StrCat(
"Source format must of length 4, received src_format = ",
src_format)));
OP_REQUIRES(context, src_format.size() == 4 || src_format.size() == 5,
errors::InvalidArgument(
"Source format must be of length 4 or 5, received "
"src_format = ",
src_format));
OP_REQUIRES(context, dst_format.size() == 4 || dst_format.size() == 5,
errors::InvalidArgument("Destination format must be of length "
"4 or 5, received dst_format = ",
dst_format));
OP_REQUIRES(
context, dst_format.size() == 4,
errors::InvalidArgument(strings::StrCat(
"Destination format must of length 4, received dst_format = ",
dst_format)));
context, IsValidPermutation(src_format, dst_format),
errors::InvalidArgument(
"Destination and source format must determine a permutation, got ",
src_format, " and ", dst_format));
dst_idx_ = Tensor(DT_INT32, {static_cast<int64>(src_format.size())});
for (int i = 0; i < src_format.size(); ++i) {
for (int j = 0; j < dst_format.size(); ++j) {
@ -77,8 +118,22 @@ class DataFormatVecPermuteOp : public OpKernel {
: OpKernel(context) {
string src_format;
OP_REQUIRES_OK(context, context->GetAttr("src_format", &src_format));
OP_REQUIRES(context, src_format.size() == 4 || src_format.size() == 5,
errors::InvalidArgument(
"Source format must be of length 4 or 5, received "
"src_format = ",
src_format));
string dst_format;
OP_REQUIRES_OK(context, context->GetAttr("dst_format", &dst_format));
OP_REQUIRES(context, dst_format.size() == 4 || dst_format.size() == 5,
errors::InvalidArgument("Destination format must be of length "
"4 or 5, received dst_format = ",
dst_format));
OP_REQUIRES(
context, IsValidPermutation(src_format, dst_format),
errors::InvalidArgument(
"Destination and source format must determine a permutation, got ",
src_format, " and ", dst_format));
src_format_ = src_format;
dst_format_ = dst_format;
}
@ -124,6 +179,10 @@ class DataFormatVecPermuteOp : public OpKernel {
};
keep_only_spatial_dimensions(&src_format_str);
keep_only_spatial_dimensions(&dst_format_str);
OP_REQUIRES(context,
src_format_str.size() == 2 && dst_format_str.size() == 2,
errors::InvalidArgument(
"Format specifier must contain H and W for 2D case"));
}
ComputeDstIndex(src_format_str, dst_format_str, input.dims(), &dst_idx);

View File

@ -62,6 +62,12 @@ class MemmappedTensorAllocator : public Allocator {
void set_delete_on_deallocate() { delete_on_deallocate_ = true; }
// Make sure tensors or complex types (strings, variants, resources) don't get
// their constructor called via a placement new since that would require
// writing to immutable data.
// See also: tensorflow/core/framework/typed_allocator.h
bool AllocatesOpaqueHandle() const override { return true; }
private:
std::unique_ptr<ReadOnlyMemoryRegion> memory_region_;
// If there is an error during allocation we keep it in this status.

View File

@ -95,7 +95,8 @@ struct NthElementFunctor<CPUDevice, T> {
const int last_dim = input_tensor.dim_size(input_tensor.dims() - 1);
// Allocate each row to different shard.
auto SubNthElement = [&, input, output, last_dim, n](int start, int limit) {
auto SubNthElement = [&, input, output, last_dim, n](int64 start,
int64 limit) {
// std::nth_element would rearrange the array, so we need a new buffer.
std::vector<T> buf(last_dim);

View File

@ -70,8 +70,8 @@ struct TruncatedNormalFunctor<CPUDevice, T> {
auto do_work = [samples_per_batch, num_elements, &ctx, &means, &stddevs,
&minvals, &maxvals, &gen, &output,
kStdDevsInsideBoundsToUseRandnSampler](int start_batch,
int limit_batch) {
kStdDevsInsideBoundsToUseRandnSampler](int64 start_batch,
int64 limit_batch) {
// Capturing "gen" by-value would only make a copy for the _shared_
// lambda. Since we want to let each worker have its own copy, we pass
// "gen" by reference and explicitly do a copy assignment here.
@ -333,8 +333,8 @@ struct TruncatedNormalFunctorV2<CPUDevice, T> {
auto do_work = [num_batches, samples_per_batch, &ctx, &bcast, &means,
&stddevs, &minvals, &maxvals, &gen, &output,
kStdDevsInsideBoundsToUseRandnSampler](int start_output,
int limit_output) {
kStdDevsInsideBoundsToUseRandnSampler](int64 start_output,
int64 limit_output) {
// Capturing "gen" by-value would only make a copy for the _shared_
// lambda. Since we want to let each worker have its own copy, we pass
// "gen" by reference and explicitly do a copy assignment here.

View File

@ -182,7 +182,7 @@ struct RandomBinomialFunctor<CPUDevice, T, U> {
// the sample shape and [H1, ... Hm] for the batch shape of the samples.
// We have B1 * ... * Bk samples per batch member we need.
auto DoWork = [num_batches, samples_per_batch, &bcast, &counts, &probs,
&gen, &output](int start_output, int limit_output) {
&gen, &output](int64 start_output, int64 limit_output) {
// Vectorized intermediate calculations for uniform rejection sampling.
// We always generate at most 4 samples.
Eigen::array<T, 4> z;

View File

@ -205,7 +205,7 @@ class RandomGammaOp : public OpKernel {
// avoid a couple flops which can be done on a per-alpha basis.
auto DoWork = [samples_per_alpha, num_alphas, &rng, samples_flat,
alpha_flat](int start_output, int limit_output) {
alpha_flat](int64 start_output, int64 limit_output) {
using Eigen::numext::exp;
using Eigen::numext::log;
using Eigen::numext::log1p;

View File

@ -97,7 +97,7 @@ struct PoissonFunctor<CPUDevice, T, U> {
typedef random::UniformDistribution<random::PhiloxRandom, CT> Uniform;
auto DoWork = [num_samples, num_rate, &rng, samples_flat, rate_flat](
int start_output, int limit_output) {
int64 start_output, int64 limit_output) {
// Capturing "rng" by value would only make a copy for the _shared_
// lambda. Since we want to let each worker have its own copy, we pass
// "rng" by reference and explicitly do a copy assignment.

View File

@ -16,6 +16,7 @@ limitations under the License.
// See docs in ../ops/data_flow_ops.cc.
#include <limits.h>
#include <vector>
#include "tensorflow/core/common_runtime/device.h"
@ -27,6 +28,7 @@ limitations under the License.
#include "tensorflow/core/framework/types.h"
#include "tensorflow/core/lib/core/errors.h"
#include "tensorflow/core/lib/gtl/map_util.h"
#include "tensorflow/core/platform/errors.h"
#include "tensorflow/core/platform/logging.h"
#include "tensorflow/core/platform/macros.h"
#include "tensorflow/core/platform/mutex.h"
@ -42,7 +44,11 @@ class GetSessionHandleOp : public OpKernel {
void Compute(OpKernelContext* ctx) override {
const Tensor& val = ctx->input(0);
int64 id = ctx->session_state()->GetNewId();
auto session_state = ctx->session_state();
OP_REQUIRES(ctx, session_state != nullptr,
errors::FailedPrecondition(
"GetSessionHandle called on null session state"));
int64 id = session_state->GetNewId();
TensorStore::TensorAndKey tk{val, id, requested_device()};
OP_REQUIRES_OK(ctx, ctx->tensor_store()->AddTensor(name(), tk));

View File

@ -232,6 +232,9 @@ class SparseFillEmptyRowsGradOp : public OpKernel {
context, TensorShapeUtils::IsVector(reverse_index_map_t->shape()),
errors::InvalidArgument("reverse_index_map must be a vector, saw: ",
reverse_index_map_t->shape().DebugString()));
OP_REQUIRES(context, TensorShapeUtils::IsVector(grad_values_t->shape()),
errors::InvalidArgument("grad_values must be a vector, saw: ",
grad_values_t->shape().DebugString()));
const auto reverse_index_map = reverse_index_map_t->vec<int64>();
const auto grad_values = grad_values_t->vec<T>();
@ -260,8 +263,13 @@ class SparseFillEmptyRowsGradOp : public OpKernel {
// Locate the index of the output of the forward prop associated
// with this location in the input of the forward prop. Copy
// the gradient into it. Mark it as visited.
d_values(i) = grad_values(reverse_index_map(i));
visited(reverse_index_map(i)) = true;
int64 reverse_index = reverse_index_map(i);
OP_REQUIRES(
context, 0 <= reverse_index && reverse_index < N_full,
errors::InvalidArgument("Elements in reverse index must be in [0, ",
N_full, ") but got ", reverse_index));
d_values(i) = grad_values(reverse_index);
visited(reverse_index) = true;
}
for (int j = 0; j < N_full; ++j) {
// The default value gradient gets the accumulated remainder of
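
The gradient kernel now verifies that `grad_values` is a vector and that every entry of `reverse_index_map` lies in `[0, N_full)` before using it as an index, turning an out-of-bounds read into an InvalidArgument error. A standalone sketch of the same check in plain Python; the names mirror the kernel but nothing here is a TensorFlow API:

    # Standalone sketch of the bounds check added above; pure Python, no TF.
    def sparse_fill_empty_rows_grad(reverse_index_map, grad_values):
        n_full = len(grad_values)
        d_values = []
        visited = [False] * n_full
        for idx in reverse_index_map:
            # Reject indices outside [0, n_full) instead of reading out of bounds.
            if not 0 <= idx < n_full:
                raise ValueError(
                    f"Elements in reverse index must be in [0, {n_full}) "
                    f"but got {idx}")
            d_values.append(grad_values[idx])
            visited[idx] = True
        # Unvisited positions accumulate into the default-value gradient.
        d_default = sum(v for v, seen in zip(grad_values, visited) if not seen)
        return d_values, d_default

    print(sparse_fill_empty_rows_grad([2, 1], [0, 1, 2, 3]))   # ([2, 1], 3)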

View File

@ -252,7 +252,7 @@ class StatelessRandomGammaOp : public StatelessRandomOpBase {
// avoid a couple flops which can be done on a per-alpha basis.
auto DoWork = [samples_per_alpha, num_alphas, &random, samples_flat,
alpha_flat](int start_output, int limit_output) {
alpha_flat](int64 start_output, int64 limit_output) {
// Capturing "random" by-value would only make a copy for the _shared_
// lambda. Since we want to let each worker have its own copy, we pass
// "random" by reference and explicitly do a copy assignment.

View File

@ -19,6 +19,7 @@ limitations under the License.
#include "absl/strings/ascii.h"
#include "absl/strings/str_cat.h"
#include "tensorflow/core/framework/op_kernel.h"
#include "tensorflow/core/platform/errors.h"
namespace tensorflow {
namespace text {
@ -60,6 +61,18 @@ class StringNGramsOp : public tensorflow::OpKernel {
OP_REQUIRES_OK(context, context->input("data_splits", &splits));
const auto& splits_vec = splits->flat<SPLITS_TYPE>();
// Validate that the splits are valid indices into data
const int input_data_size = data->flat<tstring>().size();
const int splits_vec_size = splits_vec.size();
for (int i = 0; i < splits_vec_size; ++i) {
bool valid_splits = splits_vec(i) >= 0;
valid_splits = valid_splits && (splits_vec(i) <= input_data_size);
OP_REQUIRES(
context, valid_splits,
errors::InvalidArgument("Invalid split value ", splits_vec(i),
", must be in [0,", input_data_size, "]"));
}
int num_batch_items = splits_vec.size() - 1;
tensorflow::Tensor* ngrams_splits;
OP_REQUIRES_OK(
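
The loop added in this hunk rejects any entry of `data_splits` that is negative or larger than the number of data elements, which is exactly what the "Invalid split value" test further down exercises. A minimal pure-Python analogue of the check, assuming nothing about the op beyond the invariant stated in its error message:

    # Minimal analogue of the data_splits validation; pure Python.
    def validate_splits(data, splits):
        """Every split must be a valid index into `data`, i.e. in [0, len(data)]."""
        n = len(data)
        for s in splits:
            if not 0 <= s <= n:
                raise ValueError(f"Invalid split value {s}, must be in [0,{n}]")

    data = ["aa", "bb", "cc", "dd", "ee", "ff"]
    validate_splits(data, [0, 4, 6])        # ok
    for bad in ([0, 8], [-1, 6]):           # same cases as the test below
        try:
            validate_splits(data, bad)
        except ValueError as e:
            print(e)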

View File

@ -136,7 +136,7 @@ struct TopKFunctor<CPUDevice, T> {
return Status::OK();
}
auto SortIndices = [&](int start_batch, int limit_batch) {
auto SortIndices = [&](int64 start_batch, int64 limit_batch) {
for (int32 b = start_batch; b < limit_batch; ++b) {
const T* input_data = &input(b, 0);
const auto stable_comp = [input_data](const int32 a, const int32 b) {

View File

@ -22,7 +22,7 @@ limitations under the License.
// tensorflow/tools/pip_package/setup.py
#define TF_MAJOR_VERSION 2
#define TF_MINOR_VERSION 3
#define TF_PATCH_VERSION 0
#define TF_PATCH_VERSION 2
// TF_VERSION_SUFFIX is non-empty for pre-releases (e.g. "-alpha", "-alpha.1",
// "-beta", "-rc", "-rc.1")

View File

@ -35,7 +35,7 @@
<java.version>1.8</java.version>
<spark.version>2.4.5</spark.version>
<yarn.api.version>2.7.3</yarn.api.version>
<junit.version>4.11</junit.version>
<junit.version>4.13.1</junit.version>
</properties>
<build>

View File

@ -16,7 +16,7 @@
<maven.compiler.target>1.6</maven.compiler.target>
<hadoop.version>2.6.0</hadoop.version>
<protobuf.version>3.5.1</protobuf.version>
<junit.version>4.11</junit.version>
<junit.version>4.13.1</junit.version>
</properties>
<licenses>

View File

@ -18,6 +18,7 @@ limitations under the License.
#include <algorithm>
#include "tensorflow/lite/arena_planner.h"
#include "tensorflow/lite/builtin_ops.h"
#include "tensorflow/lite/c/common.h"
#include "tensorflow/lite/context_util.h"
#include "tensorflow/lite/core/api/tensor_utils.h"
@ -567,6 +568,33 @@ TfLiteStatus Subgraph::CheckTensorIndices(const char* label, const int* indices,
return kTfLiteOk;
}
// We have two arrays and we need to check that elements from one array don't
// show up in the other. We could sort both arrays and then iterate with two
// pointers from start to finish, always advancing the smaller one, but since
// these arrays are usually short (<25 elements for inputs, usually <3 for
// outputs), this might be slower than the naive approach (if the arrays have
// sizes n and m, with n >> m ~ O(1), the first approach is O(n log n) whereas
// the other is O(n)). Plus, sorting the input and output arrays might not be
// something we want, as it destroys the ordering of elements.
//
// If it turns out that this is an issue, we can switch to the other algorithm.
TfLiteStatus Subgraph::CheckInputAndOutputForOverlap(const int* input_indices,
int num_inputs,
const int* output_indices,
int num_outputs) {
for (int i = 0; i < num_inputs; i++) {
for (int j = 0; j < num_outputs; j++) {
if (input_indices[i] == output_indices[j]) {
ReportError("Tensor %d is both input %d and output %d\n",
input_indices[i], i, j);
consistent_ = false;
return kTfLiteError;
}
}
}
return kTfLiteOk;
}
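
The comment above weighs sorting against the naive double loop and settles on the latter because the index lists are tiny; the function then simply compares every input index against every output index. A short standalone sketch of the same check in plain Python; `report_error` is a stand-in for the subgraph's error reporter, not a TFLite API:

    # Standalone sketch of the naive O(n*m) overlap check described above.
    def check_input_and_output_for_overlap(input_indices, output_indices,
                                           report_error=print):
        """Return False if any tensor index is used as both an input and an output."""
        for i, inp in enumerate(input_indices):
            for j, out in enumerate(output_indices):
                if inp == out:
                    report_error(f"Tensor {inp} is both input {i} and output {j}")
                    return False
        return True

    print(check_input_and_output_for_overlap([0, 1, 2], [3, 4]))   # True
    print(check_input_and_output_for_overlap([0, 1, 2], [2]))      # error, False
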
namespace {
// Multiply two sizes and return true if overflow occurred;
// This is based off tensorflow/overflow.h but is simpler as we already
@ -688,6 +716,16 @@ TfLiteStatus Subgraph::AddNodeWithParameters(
&context_,
CheckTensorIndices("node outputs", outputs.data(), outputs.size()));
// For builtin ops, inputs and outputs must not overlap. Custom ops must do
// this check by themselves if they don't support overlapping tensors. This
// distinction is to allow custom ops to just forward a tensor, reusing it as
// both input and output.
if (builtin_data != nullptr) {
TF_LITE_ENSURE_OK(&context_, CheckInputAndOutputForOverlap(
inputs.data(), inputs.size(),
outputs.data(), outputs.size()));
}
int new_node_index = nodes_and_registration_.size();
if (node_index) *node_index = new_node_index;
nodes_and_registration_.resize(nodes_and_registration_.size() + 1);
@ -934,6 +972,19 @@ TfLiteStatus Subgraph::Invoke() {
tensor->data_is_stale) {
TF_LITE_ENSURE_STATUS(EnsureTensorDataIsReadable(tensor_index));
}
if (tensor->data.raw == nullptr && tensor->bytes > 0) {
if (registration.builtin_code == kTfLiteBuiltinReshape && i == 1) {
// In general, having a tensor here with no buffer will be an error.
// However, for the reshape operator, the second input tensor is only
// used for the shape, not for the data. Thus, a null buffer is ok.
continue;
} else {
// In all other cases, we need to return an error as otherwise we will
// trigger a null pointer dereference (likely).
ReportError("Input tensor %d lacks data", tensor_index);
return kTfLiteError;
}
}
}
if (check_cancelled_func_ != nullptr &&

View File

@ -433,6 +433,15 @@ class Subgraph {
TfLiteStatus CheckTensorIndices(const char* label, const int* indices,
int length);
// Check that the input indices and the output indices don't overlap.
// This is needed because the same tensor must not be used both as input and
// output for an operator.
// NOTE: this sets consistent_ to false if an input index overlaps an output
// index.
TfLiteStatus CheckInputAndOutputForOverlap(const int* input_indices,
int num_inputs,
const int* output_indices,
int num_outputs);
// Compute the number of bytes required to represent a tensor with dimensions
// specified by the array dims (of length dims_size). Returns the status code
// and bytes.

View File

@ -609,7 +609,12 @@ TfLiteStatus InterpreterBuilder::operator()(
auto* buffers = model_->buffers();
if (subgraphs->size() == 0) {
error_reporter_->Report("No subgraph in the model.\n");
TF_LITE_REPORT_ERROR(error_reporter_, "No subgraph in the model.\n");
return cleanup_and_error();
}
if (!buffers) {
TF_LITE_REPORT_ERROR(error_reporter_, "No buffers in the model.\n");
return cleanup_and_error();
}
@ -630,10 +635,10 @@ TfLiteStatus InterpreterBuilder::operator()(
(*interpreter)->subgraph(subgraph_index);
auto operators = subgraph->operators();
auto tensors = subgraph->tensors();
if (!operators || !tensors || !buffers) {
error_reporter_->Report(
"Did not get operators, tensors, or buffers in subgraph %d.\n",
subgraph_index);
if (!operators || !tensors) {
TF_LITE_REPORT_ERROR(error_reporter_,
"Did not get operators or tensors in subgraph %d.\n",
subgraph_index);
return cleanup_and_error();
}
if (modified_subgraph->AddTensors(tensors->size()) != kTfLiteOk) {

View File

@ -70,6 +70,9 @@ inline bool ResolveAxis(const int num_dims, const int* axis,
// eg: For num_dims=3, [0, 1, 2] is the same as [-3, -2, -1] */
int current = axis[idx] < 0 ? (axis[idx] + num_dims) : axis[idx];
TFLITE_DCHECK(current >= 0 && current < num_dims);
if (current < 0 || current >= num_dims) {
return false;
}
bool is_dup = false;
for (int j = 0; j < *out_num_axis; ++j) {
if (out_axis[j] == current) {
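
The DCHECK above is now backed by a real runtime check, so an axis that is still out of range after adding `num_dims` (to resolve negative axes) makes the reduction fail in release builds instead of only tripping in debug builds. A small sketch of the resolve-then-validate-then-deduplicate logic, mirroring the loop in plain Python rather than the TFLite API:

    # Sketch of negative-axis resolution with validation and de-duplication.
    def resolve_axis(num_dims, axis):
        out_axis = []
        for a in axis:
            # For num_dims=3, [0, 1, 2] is the same as [-3, -2, -1].
            current = a + num_dims if a < 0 else a
            if not 0 <= current < num_dims:
                raise ValueError(f"axis {a} out of range for rank {num_dims}")
            if current not in out_axis:     # skip duplicates
                out_axis.append(current)
        return out_axis

    print(resolve_axis(3, [-1, 0, 2]))   # [2, 0]
    try:
        resolve_axis(3, [5])
    except ValueError as e:
        print(e)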

View File

@ -432,7 +432,7 @@ int MatchingArraySize(const ArrayType1& array1, int index1,
inline int MatchingDim(const RuntimeShape& shape1, int index1,
const RuntimeShape& shape2, int index2) {
TFLITE_DCHECK_EQ(shape1.Dims(index1), shape2.Dims(index2));
return shape1.Dims(index1);
return std::min(shape1.Dims(index1), shape2.Dims(index2));
}
template <typename... Args>

View File

@ -30,27 +30,48 @@ inline int SizeOfDimension(const TfLiteTensor* t, int dim) {
}
inline const TfLiteTensor* GetInput(const TfLiteContext* context,
const TfLiteNode* node, int index) {
return &context->tensors[node->inputs->data[index]];
const int tensor_index = node->inputs->data[index];
if (tensor_index < 0) {
return nullptr;
}
return &context->tensors[tensor_index];
}
// Note: you must check that the result is not null:
// TfLiteTensor* my_tensor = GetVariableInput(context, node, kMyTensorIdx);
// TF_LITE_ENSURE(context, my_tensor != nullptr);
inline TfLiteTensor* GetVariableInput(TfLiteContext* context,
const TfLiteNode* node, int index) {
TfLiteTensor* tensor = &context->tensors[node->inputs->data[index]];
const int tensor_index = node->inputs->data[index];
if (tensor_index < 0) {
return nullptr;
}
TfLiteTensor* tensor = &context->tensors[tensor_index];
return (tensor->is_variable) ? tensor : nullptr;
}
inline TfLiteTensor* GetOutput(TfLiteContext* context, const TfLiteNode* node,
int index) {
return &context->tensors[node->outputs->data[index]];
const int tensor_index = node->outputs->data[index];
if (tensor_index < 0) {
return nullptr;
}
return &context->tensors[tensor_index];
}
inline TfLiteTensor* GetTemporary(TfLiteContext* context,
const TfLiteNode* node, int index) {
return &context->tensors[node->temporaries->data[index]];
const int tensor_index = node->temporaries->data[index];
if (tensor_index < 0) {
return nullptr;
}
return &context->tensors[tensor_index];
}
inline const TfLiteTensor* GetIntermediates(TfLiteContext* context,
const TfLiteNode* node, int index) {
return &context->tensors[node->intermediates->data[index]];
const int tensor_index = node->intermediates->data[index];
if (tensor_index < 0) {
return nullptr;
}
return &context->tensors[tensor_index];
}
inline int NumInputs(const TfLiteNode* node) { return node->inputs->size; }
inline int NumOutputs(const TfLiteNode* node) { return node->outputs->size; }
@ -73,12 +94,7 @@ inline int64_t NumElements(const TfLiteTensor* t) {
inline const TfLiteTensor* GetOptionalInputTensor(TfLiteContext* context,
const TfLiteNode* node,
int index) {
const bool use_tensor = index < node->inputs->size &&
node->inputs->data[index] != kTfLiteOptionalTensor;
if (use_tensor) {
return &context->tensors[node->inputs->data[index]];
}
return nullptr;
return GetInput(context, node, index);
}
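
All of the accessors above now translate a negative tensor index (TFLite uses -1, `kTfLiteOptionalTensor`, for optional tensors) into a null pointer instead of indexing `context->tensors` with it, and `GetOptionalInputTensor` becomes a plain alias for `GetInput`. A hedged Python sketch of that convention; the list-based "context" below is purely illustrative:

    # Illustrative sketch of the "negative index means optional/absent" convention.
    KTFLITE_OPTIONAL_TENSOR = -1          # stand-in for kTfLiteOptionalTensor

    def get_input(tensors, node_inputs, index):
        """Return the tensor for node_inputs[index], or None if it is optional."""
        tensor_index = node_inputs[index]
        if tensor_index < 0:              # optional tensor was not provided
            return None
        return tensors[tensor_index]

    tensors = ["t0", "t1", "t2"]
    node_inputs = [2, KTFLITE_OPTIONAL_TENSOR]
    print(get_input(tensors, node_inputs, 0))   # 't2'
    print(get_input(tensors, node_inputs, 1))   # None -> caller must check
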
// Determines whether tensor is constant.

View File

@ -34,11 +34,24 @@ TfLiteStatus ResizeOutputTensor(TfLiteContext* context,
const TfLiteTensor* data,
const TfLiteTensor* segment_ids,
TfLiteTensor* output) {
int max_index = -1;
// Segment ids should have the same cardinality as the first input dimension,
// and they should increase by at most 1 at a time, starting from 0
// (e.g., [0, 0, 1, 2, 3] is valid).
const int segment_id_size = segment_ids->dims->data[0];
if (segment_id_size > 0) {
max_index = segment_ids->data.i32[segment_id_size - 1];
TF_LITE_ENSURE_EQ(context, segment_id_size, data->dims->data[0]);
int previous_segment_id = -1;
for (int i = 0; i < segment_id_size; i++) {
const int current_segment_id = GetTensorData<int32_t>(segment_ids)[i];
if (i == 0) {
TF_LITE_ENSURE_EQ(context, current_segment_id, 0);
} else {
int delta = current_segment_id - previous_segment_id;
TF_LITE_ENSURE(context, delta == 0 || delta == 1);
}
previous_segment_id = current_segment_id;
}
const int max_index = previous_segment_id;
const int data_rank = NumDimensions(data);
TfLiteIntArray* output_shape = TfLiteIntArrayCreate(NumDimensions(data));
output_shape->data[0] = max_index + 1;
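
Instead of trusting the last element of `segment_ids` to be the maximum, the resize logic now walks the whole vector and enforces that it starts at 0 and never jumps by more than 1, which also pins down the output's first dimension. A pure-Python sketch of the same invariant, mirroring the loop rather than the TFLite API:

    # Pure-Python sketch of the segment_ids validation added above.
    def validate_segment_ids(segment_ids, data_rows):
        if len(segment_ids) != data_rows:
            raise ValueError("segment_ids must match the first data dimension")
        previous = -1
        for i, current in enumerate(segment_ids):
            if i == 0:
                if current != 0:
                    raise ValueError("segment_ids must start at 0")
            elif current - previous not in (0, 1):
                raise ValueError("segment_ids must increase by at most 1")
            previous = current
        return previous + 1                     # number of output segments

    print(validate_segment_ids([0, 0, 1, 2, 3], data_rows=5))   # 4
    for bad in ([0, 3, 1], [0, 3, 5], [-1, 0, 1]):              # cases from the tests below
        try:
            validate_segment_ids(bad, data_rows=3)
        except ValueError as e:
            print(e)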

View File

@ -110,5 +110,37 @@ TEST(SegmentSumOpModelTest, Float32Test_ThreeDimensions) {
EXPECT_THAT(model.GetOutputShape(), ElementsAreArray({2, 2, 1}));
}
TEST(SegmentSumOpModelTest, TestFailIfSegmentsAreNotSorted) {
SegmentSumOpModel<int32_t> model({TensorType_INT32, {3, 2}},
{TensorType_INT32, {3}});
model.PopulateTensor<int32_t>(model.data(), {1, 2, 3, 4, 5, 6});
model.PopulateTensor<int32_t>(model.segment_ids(), {0, 3, 1});
ASSERT_EQ(model.InvokeUnchecked(), kTfLiteError);
}
TEST(SegmentSumOpModelTest, TestFailIfSegmentsAreNotConsecutive) {
SegmentSumOpModel<int32_t> model({TensorType_INT32, {3, 2}},
{TensorType_INT32, {3}});
model.PopulateTensor<int32_t>(model.data(), {1, 2, 3, 4, 5, 6});
model.PopulateTensor<int32_t>(model.segment_ids(), {0, 3, 5});
ASSERT_EQ(model.InvokeUnchecked(), kTfLiteError);
}
TEST(SegmentSumOpModelTest, TestFailIfSegmentsAreNegative) {
SegmentSumOpModel<int32_t> model({TensorType_INT32, {3, 2}},
{TensorType_INT32, {3}});
model.PopulateTensor<int32_t>(model.data(), {1, 2, 3, 4, 5, 6});
model.PopulateTensor<int32_t>(model.segment_ids(), {-1, 0, 1});
ASSERT_EQ(model.InvokeUnchecked(), kTfLiteError);
}
TEST(SegmentSumOpModelTest, TestFailIfSegmentsAreNotTheRightCardinality) {
SegmentSumOpModel<int32_t> model({TensorType_INT32, {3, 2}},
{TensorType_INT32, {2}});
model.PopulateTensor<int32_t>(model.data(), {1, 2, 3, 4, 5, 6});
model.PopulateTensor<int32_t>(model.segment_ids(), {0, 1});
ASSERT_EQ(model.InvokeUnchecked(), kTfLiteError);
}
} // namespace
} // namespace tflite

View File

@ -18,7 +18,6 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import contextlib
from six.moves import xrange # pylint: disable=redefined-builtin
@ -37,6 +36,7 @@ from tensorflow.python.platform import tf_logging as logging
from tensorflow.python.util import compat
from tensorflow.python.util import nest
from tensorflow.python.util import tf_inspect
from tensorflow.python.util.compat import collections_abc
from tensorflow.python.util.tf_export import tf_export
_XLA_COMPILE_ATTR = '_xla_compile_id'
@ -329,7 +329,7 @@ def _compile_internal(computation, inputs=None):
if inputs is None:
inputs = []
if not isinstance(inputs, collections.Sequence):
if not isinstance(inputs, collections_abc.Sequence):
raise TypeError('inputs must be a list')
# Flatten inputs.
@ -428,15 +428,15 @@ def is_flat(outputs):
"""
# If outputs is a list or tuple, check if it has any nested structure. If
# there is, then outputs is non-flat.
if isinstance(outputs, collections.Sequence):
if isinstance(outputs, collections_abc.Sequence):
for o in outputs:
if (isinstance(o, collections.Sequence) or
isinstance(o, collections.Mapping) or
if (isinstance(o, collections_abc.Sequence) or
isinstance(o, collections_abc.Mapping) or
hasattr(o.__class__, '__attrs_attrs__')):
return False
# If outputs is a dict, it is non-flat.
if isinstance(outputs, collections.Mapping):
if isinstance(outputs, collections_abc.Mapping):
return False
# If outputs is from the attrs library, it is non-flat.
@ -467,7 +467,7 @@ def _postprocess_flat_outputs(outputs):
if outputs is None:
outputs = tuple()
# If the computation only returned one value, make it a tuple.
if not isinstance(outputs, collections.Sequence):
if not isinstance(outputs, collections_abc.Sequence):
outputs = (outputs,)
# Append `no_op` here so that return value of this function always contains
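
This and the following Python hunks swap `collections.Sequence`, `collections.Mapping`, and `collections.Iterable` for their `collections.abc` counterparts via `tensorflow.python.util.compat.collections_abc`, since the top-level aliases were deprecated in Python 3.3 and removed in 3.10. A small standard-library sketch of the portable import plus a simplified analogue of the `is_flat` helper shown above (the fallback branch is the same pattern a later hunk removes from recurrent.py in favor of the shared compat module):

    # Portable access to the abstract base classes; the top-level aliases
    # (collections.Sequence etc.) no longer exist on Python >= 3.10.
    try:
        from collections import abc as collections_abc   # Python 3
    except ImportError:                                   # Python 2 fallback
        import collections as collections_abc

    def is_flat(outputs):
        """True if `outputs` has no nested sequence/mapping structure (simplified)."""
        if isinstance(outputs, collections_abc.Mapping):
            return False
        if isinstance(outputs, collections_abc.Sequence) and not isinstance(outputs, str):
            return all(isinstance(o, str)
                       or not isinstance(o, (collections_abc.Sequence,
                                             collections_abc.Mapping))
                       for o in outputs)
        return True

    print(is_flat([1, 2, 3]))          # True
    print(is_flat([1, [2, 3]]))        # False
    print(is_flat({"a": 1}))           # False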

View File

@ -18,7 +18,6 @@ from __future__ import division
from __future__ import print_function
import abc
import collections
import functools
import sys
import threading
@ -72,6 +71,7 @@ from tensorflow.python.util import deprecation
from tensorflow.python.util import function_utils
from tensorflow.python.util import lazy_loader
from tensorflow.python.util import nest as tf_nest
from tensorflow.python.util.compat import collections_abc
from tensorflow.python.util.tf_export import tf_export
# Loaded lazily due to a circular dependency (roughly
@ -103,7 +103,7 @@ tf_export("data.UNKNOWN_CARDINALITY").export_constant(__name__, "UNKNOWN")
@tf_export("data.Dataset", v1=[])
@six.add_metaclass(abc.ABCMeta)
class DatasetV2(collections.Iterable, tracking_base.Trackable,
class DatasetV2(collections_abc.Iterable, tracking_base.Trackable,
composite_tensor.CompositeTensor):
"""Represents a potentially large set of elements.

View File

@ -18,7 +18,6 @@ from __future__ import division
from __future__ import print_function
import abc
import collections
import threading
import warnings
@ -41,6 +40,7 @@ from tensorflow.python.ops import gen_experimental_dataset_ops
from tensorflow.python.training.saver import BaseSaverBuilder
from tensorflow.python.training.tracking import base as trackable
from tensorflow.python.util import deprecation
from tensorflow.python.util.compat import collections_abc
from tensorflow.python.util.tf_export import tf_export
@ -543,7 +543,7 @@ class IteratorResourceDeleter(object):
@tf_export("data.Iterator", v1=[])
@six.add_metaclass(abc.ABCMeta)
class IteratorBase(collections.Iterator, trackable.Trackable,
class IteratorBase(collections_abc.Iterator, trackable.Trackable,
composite_tensor.CompositeTensor):
"""Represents an iterator of a `tf.data.Dataset`.

View File

@ -440,7 +440,7 @@ def type_spec_from_value(element, use_fallback=True):
if isinstance(element, tuple):
if hasattr(element, "_fields") and isinstance(
element._fields, collections.Sequence) and all(
element._fields, collections_abc.Sequence) and all(
isinstance(f, six.string_types) for f in element._fields):
if isinstance(element, wrapt.ObjectProxy):
element_type = type(element.__wrapped__)

View File

@ -99,7 +99,6 @@ from __future__ import division
from __future__ import print_function
import abc
import collections
import re
import threading
@ -113,6 +112,7 @@ from tensorflow.python.framework import ops
from tensorflow.python.platform import tf_logging
from tensorflow.python.training import monitored_session
from tensorflow.python.util import nest
from tensorflow.python.util.compat import collections_abc
# Helper function.
@ -445,7 +445,7 @@ class BaseDebugWrapperSession(session.SessionInterface):
"""Check whether a possibly nested structure is empty."""
if not nest.is_nested(x):
return False
if isinstance(x, collections.Mapping):
if isinstance(x, collections_abc.Mapping):
return is_empty(list(x.values()))
for item in x:
if not is_empty(item):

View File

@ -18,7 +18,6 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import functools
import sys
@ -53,6 +52,7 @@ from tensorflow.python.ops import math_ops
from tensorflow.python.ops.ragged import ragged_tensor
from tensorflow.python.types import distribute as distribute_types
from tensorflow.python.util import nest
from tensorflow.python.util.compat import collections_abc
from tensorflow.python.util.deprecation import deprecated
from tensorflow.python.util.tf_export import tf_export
from tensorflow.tools.docs import doc_controls
@ -143,7 +143,7 @@ def get_distributed_datasets_from_function(dataset_fn,
@tf_export("distribute.DistributedIterator", v1=[])
class DistributedIteratorInterface(collections.Iterator,
class DistributedIteratorInterface(collections_abc.Iterator,
distribute_types.Iterator):
"""An iterator over `tf.distribute.DistributedDataset`.
@ -272,7 +272,7 @@ class DistributedIteratorInterface(collections.Iterator,
@tf_export("distribute.DistributedDataset", v1=[])
class DistributedDatasetInterface(collections.Iterable,
class DistributedDatasetInterface(collections_abc.Iterable,
distribute_types.Iterable):
# pylint: disable=line-too-long
"""Represents a dataset distributed among devices and machines.

View File

@ -20,9 +20,11 @@ from __future__ import print_function
from absl.testing import parameterized
import numpy as np
from tensorflow.python.dlpack import dlpack
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import errors
from tensorflow.python.framework import ops
from tensorflow.python.platform import test
@ -95,6 +97,12 @@ class DLPackTest(parameterized.TestCase, test.TestCase):
self.assertRaisesRegex(Exception, ".* is not supported by dlpack",
UnsupportedComplex64)
def testMustPassTensorArgumentToDLPack(self):
with self.assertRaisesRegex(
errors.InvalidArgumentError,
"The argument to `to_dlpack` must be a TF tensor, not Python object"):
dlpack.to_dlpack([1])
if __name__ == "__main__":
ops.enable_eager_execution()

View File

@ -231,7 +231,10 @@ py_test(
srcs = ["sequence_feature_column_integration_test.py"],
python_version = "PY3",
srcs_version = "PY2AND3",
tags = ["no_pip"],
tags = [
"no_mac",
"no_pip",
],
deps = [
":feature_column_v2",
"//tensorflow/python:client_testlib",

View File

@ -32,6 +32,7 @@ from tensorflow.python.framework import tensor_conversion_registry
from tensorflow.python.framework import tensor_shape
from tensorflow.python.framework import type_spec
from tensorflow.python.types import internal
from tensorflow.python.util.compat import collections_abc
from tensorflow.python.util.lazy_loader import LazyLoader
from tensorflow.python.util.tf_export import tf_export
@ -344,7 +345,7 @@ def internal_convert_n_to_tensor_or_indexed_slices(values,
RuntimeError: If a registered conversion function returns an invalid
value.
"""
if not isinstance(values, collections.Iterable):
if not isinstance(values, collections_abc.Iterable):
raise TypeError("values must be iterable.")
ret = []
for i, value in enumerate(values):

View File

@ -19,7 +19,6 @@ from __future__ import print_function
import abc
import atexit
import collections
from collections import OrderedDict
import functools
import multiprocessing.pool
@ -617,7 +616,7 @@ def standardize_sample_or_class_weights(x_weight, output_names, weight_type):
'You should provide one `' + weight_type + '`'
'array per model output.')
return x_weight
if isinstance(x_weight, collections.Mapping):
if isinstance(x_weight, collections_abc.Mapping):
generic_utils.check_for_unexpected_keys(weight_type, x_weight, output_names)
x_weights = []
for name in output_names:
@ -864,7 +863,7 @@ def collect_per_output_metric_info(metrics,
[metrics_module.clone_metric(m) for m in metrics])
else:
nested_metrics = [metrics]
elif isinstance(metrics, collections.Mapping):
elif isinstance(metrics, collections_abc.Mapping):
generic_utils.check_for_unexpected_keys('metrics', metrics, output_names)
nested_metrics = []
for name in output_names:
@ -1443,7 +1442,7 @@ def prepare_sample_weight_modes(training_endpoints, sample_weight_mode):
ValueError: In case of invalid `sample_weight_mode` input.
"""
if isinstance(sample_weight_mode, collections.Mapping):
if isinstance(sample_weight_mode, collections_abc.Mapping):
generic_utils.check_for_unexpected_keys(
'sample_weight_mode', sample_weight_mode,
[e.output_name for e in training_endpoints])
@ -1536,7 +1535,7 @@ def prepare_loss_weights(training_endpoints, loss_weights=None):
if loss_weights is None:
for e in training_endpoints:
e.loss_weight = 1.
elif isinstance(loss_weights, collections.Mapping):
elif isinstance(loss_weights, collections_abc.Mapping):
generic_utils.check_for_unexpected_keys(
'loss_weights', loss_weights,
[e.output_name for e in training_endpoints])

View File

@ -30,12 +30,12 @@ from tensorflow.python.keras.engine.base_layer import Layer
from tensorflow.python.keras.engine.input_spec import InputSpec
from tensorflow.python.keras.utils import tf_utils
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import init_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import nn
from tensorflow.python.ops import state_ops
from tensorflow.python.ops import variables as tf_variables
from tensorflow.python.platform import device_context
from tensorflow.python.platform import tf_logging as logging
from tensorflow.python.util.tf_export import keras_export
@ -514,7 +514,7 @@ class BatchNormalizationBase(Layer):
use_fused_avg_updates = (
ops.executing_eagerly_outside_functions() and
isinstance(self.momentum, (float, int)) and
device_context.enclosing_tpu_context() is None)
enclosing_xla_context() is None)
if use_fused_avg_updates:
exponential_avg_factor = 1.0 - self.momentum
else:
@ -930,6 +930,23 @@ def replace_in_base_docstring(replacements):
return string
def enclosing_xla_context():
"""Recursively find and return the XLAControlFlowContext."""
graph = ops.get_default_graph()
while graph is not None:
# pylint: disable=protected-access
context_ = graph._get_control_flow_context()
# pylint: enable=protected-access
while context_ is not None:
if isinstance(context_, control_flow_ops.XLAControlFlowContext):
return context_
context_ = context_.outer_context
# This may be a FuncGraph due to defuns or v2 control flow. We need to
# find the original graph with the XLAControlFlowContext.
graph = getattr(graph, 'outer_graph', None)
return None
@keras_export(v1=['keras.layers.BatchNormalization']) # pylint: disable=missing-docstring
class BatchNormalization(BatchNormalizationBase):

View File

@ -18,11 +18,10 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import numpy as np
from tensorflow.python.platform import test
from tensorflow.python.util.compat import collections_abc
class PreprocessingLayerTest(test.TestCase):
@ -38,7 +37,7 @@ class PreprocessingLayerTest(test.TestCase):
self.assertEqual(len(a), len(b))
for a_value, b_value in zip(a, b):
self.assertAllCloseOrEqual(a_value, b_value, msg=msg)
elif isinstance(a, collections.Mapping):
elif isinstance(a, collections_abc.Mapping):
self.assertEqual(len(a), len(b))
for key, a_value in a.items():
b_value = b[key]

View File

@ -44,14 +44,10 @@ from tensorflow.python.platform import tf_logging as logging
from tensorflow.python.training.tracking import base as trackable
from tensorflow.python.training.tracking import data_structures
from tensorflow.python.util import nest
from tensorflow.python.util.compat import collections_abc
from tensorflow.python.util.tf_export import keras_export
from tensorflow.tools.docs import doc_controls
try:
from collections import abc as collections_abc # pylint: disable=g-import-not-at-top
except ImportError: # For Python 2
import collections as collections_abc # pylint: disable=g-import-not-at-top
RECURRENT_DROPOUT_WARNING_MSG = (
'RNN `implementation=2` is not supported when `recurrent_dropout` is set. '

View File

@ -727,6 +727,7 @@ cuda_py_test(
name = "matrix_solve_ls_op_test",
size = "medium",
srcs = ["matrix_solve_ls_op_test.py"],
tags = ["no_mac"],
deps = [
"//tensorflow/python:array_ops",
"//tensorflow/python:client_testlib",
@ -789,6 +790,7 @@ tf_py_test(
name = "parsing_ops_test",
size = "medium",
srcs = ["parsing_ops_test.py"],
tags = ["no_mac"],
deps = [
"//tensorflow/core:protos_all_py",
"//tensorflow/python:array_ops",

View File

@ -24,6 +24,7 @@ tf_py_test(
name = "resource_ops_test",
size = "small",
srcs = ["resource_ops_test.py"],
tags = ["no_mac"],
deps = [
"//tensorflow/core/kernels/boosted_trees:boosted_trees_proto_py",
"//tensorflow/python:boosted_trees_ops",
@ -39,6 +40,7 @@ tf_py_test(
name = "prediction_ops_test",
size = "small",
srcs = ["prediction_ops_test.py"],
tags = ["no_mac"],
deps = [
"//tensorflow/core/kernels/boosted_trees:boosted_trees_proto_py",
"//tensorflow/python:array_ops",
@ -69,6 +71,7 @@ tf_py_test(
name = "training_ops_test",
size = "small",
srcs = ["training_ops_test.py"],
tags = ["no_mac"],
deps = [
"//tensorflow/core/kernels/boosted_trees:boosted_trees_proto_py",
"//tensorflow/python:array_ops",

View File

@ -4581,6 +4581,14 @@ class ControlFlowTest(test.TestCase, parameterized.TestCase):
result = control_flow_ops.merge([v_f, v_t])
self.evaluate(result)
def testSwitchEagerMode(self):
if not context.executing_eagerly():
return
input_data = [1, 2, 3, 4]
vf, vt = control_flow_ops.switch(input_data, False)
self.assertAllEqual(vf, input_data)
self.assertAllEqual(vt, [])
@test_util.run_deprecated_v1
def testQIntArgAndRet(self):

View File

@ -25,7 +25,9 @@ from tensorflow.python.eager import context
from tensorflow.python.framework import errors
from tensorflow.python.framework import ops
from tensorflow.python.framework import sparse_tensor
from tensorflow.python.framework import test_util
from tensorflow.python.ops import bincount_ops
from tensorflow.python.ops import gen_count_ops
from tensorflow.python.ops import sparse_ops
from tensorflow.python.ops.ragged import ragged_factory_ops
from tensorflow.python.ops.ragged import ragged_tensor
@ -834,5 +836,121 @@ class TestSparseCountFailureModes(test.TestCase):
self.evaluate(bincount_ops.sparse_bincount(x, weights=weights, axis=-1))
@test_util.run_all_in_graph_and_eager_modes
@test_util.disable_tfrt
class RawOpsTest(test.TestCase, parameterized.TestCase):
def testSparseCountSparseOutputBadIndicesShape(self):
indices = [[[0], [0]], [[0], [1]], [[1], [0]], [[1], [2]]]
values = [1, 1, 1, 10]
weights = [1, 2, 4, 6]
dense_shape = [2, 3]
with self.assertRaisesRegex(errors.InvalidArgumentError,
"Input indices must be a 2-dimensional tensor"):
self.evaluate(
gen_count_ops.SparseCountSparseOutput(
indices=indices,
values=values,
dense_shape=dense_shape,
weights=weights,
binary_output=False))
def testSparseCountSparseOutputBadWeightsShape(self):
indices = [[0, 0], [0, 1], [1, 0], [1, 2]]
values = [1, 1, 1, 10]
weights = [1, 2, 4]
dense_shape = [2, 3]
with self.assertRaisesRegex(errors.InvalidArgumentError,
"Weights and values must have the same shape"):
self.evaluate(
gen_count_ops.SparseCountSparseOutput(
indices=indices,
values=values,
dense_shape=dense_shape,
weights=weights,
binary_output=False))
def testSparseCountSparseOutputBadNumberOfValues(self):
indices = [[0, 0], [0, 1], [1, 0]]
values = [1, 1, 1, 10]
weights = [1, 2, 4, 6]
dense_shape = [2, 3]
with self.assertRaisesRegex(
errors.InvalidArgumentError,
"Number of values must match first dimension of indices"):
self.evaluate(
gen_count_ops.SparseCountSparseOutput(
indices=indices,
values=values,
dense_shape=dense_shape,
weights=weights,
binary_output=False))
def testRaggedCountSparseOutput(self):
splits = [0, 4, 7]
values = [1, 1, 2, 1, 2, 10, 5]
weights = [1, 2, 3, 4, 5, 6, 7]
output_indices, output_values, output_shape = self.evaluate(
gen_count_ops.RaggedCountSparseOutput(
splits=splits, values=values, weights=weights, binary_output=False))
self.assertAllEqual([[0, 1], [0, 2], [1, 2], [1, 5], [1, 10]],
output_indices)
self.assertAllEqual([7, 3, 5, 7, 6], output_values)
self.assertAllEqual([2, 11], output_shape)
def testRaggedCountSparseOutputBadWeightsShape(self):
splits = [0, 4, 7]
values = [1, 1, 2, 1, 2, 10, 5]
weights = [1, 2, 3, 4, 5, 6]
with self.assertRaisesRegex(errors.InvalidArgumentError,
"Weights and values must have the same shape"):
self.evaluate(
gen_count_ops.RaggedCountSparseOutput(
splits=splits,
values=values,
weights=weights,
binary_output=False))
def testRaggedCountSparseOutputEmptySplits(self):
splits = []
values = [1, 1, 2, 1, 2, 10, 5]
weights = [1, 2, 3, 4, 5, 6, 7]
with self.assertRaisesRegex(
errors.InvalidArgumentError,
"Must provide at least 2 elements for the splits argument"):
self.evaluate(
gen_count_ops.RaggedCountSparseOutput(
splits=splits,
values=values,
weights=weights,
binary_output=False))
def testRaggedCountSparseOutputBadSplitsStart(self):
splits = [1, 7]
values = [1, 1, 2, 1, 2, 10, 5]
weights = [1, 2, 3, 4, 5, 6, 7]
with self.assertRaisesRegex(errors.InvalidArgumentError,
"Splits must start with 0"):
self.evaluate(
gen_count_ops.RaggedCountSparseOutput(
splits=splits,
values=values,
weights=weights,
binary_output=False))
def testRaggedCountSparseOutputBadSplitsEnd(self):
splits = [0, 5]
values = [1, 1, 2, 1, 2, 10, 5]
weights = [1, 2, 3, 4, 5, 6, 7]
with self.assertRaisesRegex(errors.InvalidArgumentError,
"Splits must end with the number of values"):
self.evaluate(
gen_count_ops.RaggedCountSparseOutput(
splits=splits,
values=values,
weights=weights,
binary_output=False))
if __name__ == "__main__":
test.main()
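
The new tests above encode the invariants the kernel now enforces on ragged `splits`: at least two elements, the first must be 0, and the last must equal the number of values. A tiny standalone validator restating those three rules in plain Python; this is not the TensorFlow op itself:

    # Standalone restatement of the splits invariants exercised by the tests above.
    def validate_ragged_splits(splits, values):
        if len(splits) < 2:
            raise ValueError(
                "Must provide at least 2 elements for the splits argument")
        if splits[0] != 0:
            raise ValueError("Splits must start with 0")
        if splits[-1] != len(values):
            raise ValueError("Splits must end with the number of values")

    values = [1, 1, 2, 1, 2, 10, 5]
    validate_ragged_splits([0, 4, 7], values)          # ok: two rows of 4 and 3
    for bad in ([], [1, 7], [0, 5]):                   # the three failing cases
        try:
            validate_ragged_splits(bad, values)
        except ValueError as e:
            print(e)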

View File

@ -70,8 +70,6 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import numpy as np
import six
from six.moves import builtins
@ -100,6 +98,7 @@ from tensorflow.python.util import compat
from tensorflow.python.util import deprecation
from tensorflow.python.util import dispatch
from tensorflow.python.util import nest
from tensorflow.python.util.compat import collections_abc
from tensorflow.python.util.tf_export import tf_export
# Aliases for some automatically-generated names.
@ -3493,7 +3492,7 @@ def add_n(inputs, name=None):
ValueError: If `inputs` don't all have same shape and dtype or the shape
cannot be inferred.
"""
if not inputs or not isinstance(inputs, collections.Iterable):
if not inputs or not isinstance(inputs, collections_abc.Iterable):
raise ValueError("inputs must be an iterable of at least one "
"Tensor/IndexedSlices with the same dtype and shape")
inputs = ops.convert_n_to_tensor_or_indexed_slices(inputs)
@ -3626,9 +3625,9 @@ def sigmoid(x, name=None):
Returns:
A Tensor with the same type as `x`.
Usage Example:
>>> x = tf.constant([-128.0, 0.0, 128.0], dtype=tf.float32)
>>> tf.sigmoid(x)
<tf.Tensor: shape=(3,), dtype=float32,

View File

@ -18,7 +18,6 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import functools
import numbers
import os
@ -3270,7 +3269,7 @@ def conv_transpose(input, # pylint: disable=redefined-builtin
[input, filter, output_shape]) as name:
if tensor_util.is_tensor(output_shape):
n = output_shape.shape[0] - 2
elif isinstance(output_shape, collections.Sized):
elif isinstance(output_shape, collections_abc.Sized):
n = len(output_shape) - 2
else:
raise ValueError("output_shape must be a tensor or sized collection.")

View File

@ -27,6 +27,7 @@ from six.moves import xrange # pylint: disable=redefined-builtin
from tensorflow.python.eager import def_function
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import errors
from tensorflow.python.framework import ops
from tensorflow.python.framework import tensor_spec
from tensorflow.python.framework import test_util
@ -1216,6 +1217,46 @@ class DataFormatDimMapTest(test_lib.TestCase):
y_val = self.evaluate(y)
self.assertAllEqual(y_val, y_val_expected)
@test_util.disable_xla("XLA catches the error and rethrows as different one")
def testInvalidLength(self):
x = [-4, -3, -2, -1, 0, 1, 2, 3]
with self.assertRaisesRegex(errors.InvalidArgumentError,
"Source format must be of length 4 or 5"):
op = nn_ops.data_format_dim_map(
x, src_format="12345678", dst_format="87654321")
with test_util.use_gpu():
self.evaluate(op)
@test_util.disable_xla("XLA catches the error and rethrows as different one")
def testDuplicateSrc(self):
x = [-4, -3, -2, -1, 0, 1, 2, 3]
with self.assertRaisesRegex(
errors.InvalidArgumentError,
"Destination and source format must determine a permutation"):
op = nn_ops.data_format_dim_map(x, src_format="1233", dst_format="4321")
with test_util.use_gpu():
self.evaluate(op)
@test_util.disable_xla("XLA catches the error and rethrows as different one")
def testDuplicateDst(self):
x = [-4, -3, -2, -1, 0, 1, 2, 3]
with self.assertRaisesRegex(
errors.InvalidArgumentError,
"Destination and source format must determine a permutation"):
op = nn_ops.data_format_dim_map(x, src_format="1234", dst_format="3321")
with test_util.use_gpu():
self.evaluate(op)
@test_util.disable_xla("XLA catches the error and rethrows as different one")
def testExtraSpecifiers(self):
x = [-4, -3, -2, -1, 0, 1, 2, 3]
with self.assertRaisesRegex(
errors.InvalidArgumentError,
"Destination and source format must determine a permutation"):
op = nn_ops.data_format_dim_map(x, src_format="1234", dst_format="5321")
with test_util.use_gpu():
self.evaluate(op)
class DataFormatVectorPermuteTest(test_lib.TestCase):
@ -1317,6 +1358,60 @@ class DataFormatVectorPermuteTest(test_lib.TestCase):
y_val = self.evaluate(y)
self.assertAllEqual(y_val, [[7, 4], [4, 5], [5, 1], [9, 3]])
@test_util.disable_xla("XLA catches the error and rethrows as different one")
def testInvalidLength(self):
x = [0, 1, 2, 3]
with self.assertRaisesRegex(errors.InvalidArgumentError,
"Source format must be of length 4 or 5"):
op = nn_ops.data_format_vec_permute(
x, src_format="12345678", dst_format="87654321")
with test_util.use_gpu():
self.evaluate(op)
@test_util.disable_xla("XLA catches the error and rethrows as different one")
def testDuplicateSrc(self):
x = [0, 1, 2, 3]
with self.assertRaisesRegex(
errors.InvalidArgumentError,
"Destination and source format must determine a permutation"):
op = nn_ops.data_format_vec_permute(
x, src_format="1233", dst_format="4321")
with test_util.use_gpu():
self.evaluate(op)
@test_util.disable_xla("XLA catches the error and rethrows as different one")
def testDuplicateDst(self):
x = [0, 1, 2, 3]
with self.assertRaisesRegex(
errors.InvalidArgumentError,
"Destination and source format must determine a permutation"):
op = nn_ops.data_format_vec_permute(
x, src_format="1234", dst_format="3321")
with test_util.use_gpu():
self.evaluate(op)
@test_util.disable_xla("XLA catches the error and rethrows as different one")
def testExtraSpecifiers(self):
x = [0, 1, 2, 3]
with self.assertRaisesRegex(
errors.InvalidArgumentError,
"Destination and source format must determine a permutation"):
op = nn_ops.data_format_vec_permute(
x, src_format="1234", dst_format="5321")
with test_util.use_gpu():
self.evaluate(op)
@test_util.disable_xla("XLA catches the error and rethrows as different one")
def test2DNoWH(self):
x = [[0, 1], [2, 3]]
with self.assertRaisesRegex(
errors.InvalidArgumentError,
"Format specifier must contain H and W for 2D case"):
op = nn_ops.data_format_vec_permute(
x, src_format="1234", dst_format="4321")
with test_util.use_gpu():
self.evaluate(op)
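
These tests pin down the new validation in the `DataFormatDimMap` and `DataFormatVecPermute` kernels: format strings must be of length 4 or 5, and the source/destination pair must form a true permutation (no duplicates, no extra specifiers). A short standalone version of the "must determine a permutation" condition in plain Python, not the TF kernel:

    # Standalone version of the "formats must determine a permutation" check.
    def check_formats(src_format, dst_format):
        if len(src_format) not in (4, 5):
            raise ValueError("Source format must be of length 4 or 5")
        # A permutation exists iff both strings have the same length and use
        # the same characters, each exactly once.
        if (len(dst_format) != len(src_format)
                or len(set(src_format)) != len(src_format)
                or set(src_format) != set(dst_format)):
            raise ValueError(
                "Destination and source format must determine a permutation")

    check_formats("NHWC", "NCHW")                      # ok
    for src, dst in [("12345678", "87654321"),         # bad length
                     ("1233", "4321"),                 # duplicate in source
                     ("1234", "3321"),                 # duplicate in destination
                     ("1234", "5321")]:                # extra specifier
        try:
            check_formats(src, dst)
        except ValueError as e:
            print(e)
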
@test_util.run_all_in_graph_and_eager_modes
class AvgPoolTest(test_lib.TestCase):

View File

@ -18,16 +18,22 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from absl.testing import parameterized
from tensorflow.python.eager import context
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import errors
from tensorflow.python.framework import ops
from tensorflow.python.framework import test_util
from tensorflow.python.ops import gen_data_flow_ops
from tensorflow.python.ops import gen_math_ops
from tensorflow.python.ops import gen_string_ops
from tensorflow.python.platform import test
@test_util.run_all_in_graph_and_eager_modes
class RawOpsTest(test.TestCase):
@test_util.disable_tfrt
class RawOpsTest(test.TestCase, parameterized.TestCase):
def testSimple(self):
x = constant_op.constant(1)
@ -58,6 +64,29 @@ class RawOpsTest(test.TestCase):
gen_math_ops.Any(input=x, axis=0),
gen_math_ops.Any(input=x, axis=0, keep_dims=False))
@parameterized.parameters([[0, 8]], [[-1, 6]])
def testStringNGramsBadDataSplits(self, splits):
data = ["aa", "bb", "cc", "dd", "ee", "ff"]
with self.assertRaisesRegex(errors.InvalidArgumentError,
"Invalid split value"):
self.evaluate(
gen_string_ops.string_n_grams(
data=data,
data_splits=splits,
separator="",
ngram_widths=[2],
left_pad="",
right_pad="",
pad_width=0,
preserve_short_sequences=False))
def testGetSessionHandle(self):
if context.executing_eagerly():
with self.assertRaisesRegex(
errors.FailedPreconditionError,
"GetSessionHandle called on null session state"):
gen_data_flow_ops.GetSessionHandle(value=[1])
if __name__ == "__main__":
ops.enable_eager_execution()

View File

@ -21,14 +21,17 @@ from __future__ import print_function
from absl.testing import parameterized
import numpy as np
from tensorflow.python.eager import context
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import errors
from tensorflow.python.framework import ops
from tensorflow.python.framework import sparse_tensor
from tensorflow.python.framework import test_util
# Need array_grad to register gradient for Identity.
from tensorflow.python.ops import array_grad # pylint: disable=unused-import
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import gen_sparse_ops
from tensorflow.python.ops import gradient_checker_v2 as gradient_checker
from tensorflow.python.ops import math_ops
# Need sparse_grad to register gradient for SparseToDense.
@ -181,5 +184,57 @@ class SparseOpsTest(test_util.TensorFlowTestCase, parameterized.TestCase):
self.assertAllEqual(expected, result)
@test_util.run_all_in_graph_and_eager_modes
class RawOpsTest(test_util.TensorFlowTestCase, parameterized.TestCase):
def testSparseFillEmptyRowsGrad(self):
reverse_index_map = [2, 1]
grad_values = [0, 1, 2, 3]
d_values, d_default_value = self.evaluate(
gen_sparse_ops.SparseFillEmptyRowsGrad(
reverse_index_map=reverse_index_map, grad_values=grad_values))
self.assertAllEqual([2, 1], d_values)
self.assertEqual(3, d_default_value)
def testSparseFillEmptyRowsGradNegativeIndexMapValue(self):
reverse_index_map = [2, -1]
grad_values = [0, 1, 2, 3]
with self.assertRaisesRegex(
errors.InvalidArgumentError,
r'Elements in reverse index must be in \[0, 4\)'):
self.evaluate(
gen_sparse_ops.SparseFillEmptyRowsGrad(
reverse_index_map=reverse_index_map, grad_values=grad_values))
def testSparseFillEmptyRowsGradLargeIndexMapValue(self):
reverse_index_map = [2, 10]
grad_values = [0, 1, 2, 3]
with self.assertRaisesRegex(
errors.InvalidArgumentError,
r'Elements in reverse index must be in \[0, 4\)'):
self.evaluate(
gen_sparse_ops.SparseFillEmptyRowsGrad(
reverse_index_map=reverse_index_map, grad_values=grad_values))
def testSparseFillEmptyRowsGradMatrix(self):
reverse_index_map = [0, 1]
grad_values = [[0, 1], [2, 3]]
# Note: Eager mode and graph mode throw different errors here. Graph mode
# will fail with a ValueError from the shape checking logic, while Eager
# will fail with an InvalidArgumentError from the kernel itself.
if context.executing_eagerly():
with self.assertRaisesRegex(errors.InvalidArgumentError,
r'grad_values must be a vector'):
self.evaluate(
gen_sparse_ops.SparseFillEmptyRowsGrad(
reverse_index_map=reverse_index_map, grad_values=grad_values))
else:
with self.assertRaisesRegex(ValueError,
r'Shape must be rank 1 but is rank 2'):
self.evaluate(
gen_sparse_ops.SparseFillEmptyRowsGrad(
reverse_index_map=reverse_index_map, grad_values=grad_values))
if __name__ == '__main__':
googletest.main()

View File

@ -18,7 +18,6 @@ from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections as collections_lib
import copy
import enum # pylint: disable=g-bad-import-order
import functools
@ -47,6 +46,7 @@ from tensorflow.python.util import deprecation
from tensorflow.python.util import function_utils
from tensorflow.python.util import tf_contextlib
from tensorflow.python.util import tf_inspect
from tensorflow.python.util.compat import collections_abc
from tensorflow.python.util.tf_export import tf_export
__all__ = [
@ -77,13 +77,13 @@ class _PartitionInfo(object):
ValueError: If `full_shape` or `var_offset` differ in length. If
`var_offset` exceeds `full_shape` in any dimension.
"""
if not isinstance(full_shape, collections_lib.Sequence) or isinstance(
if not isinstance(full_shape, collections_abc.Sequence) or isinstance(
full_shape, six.string_types):
raise TypeError(
"`full_shape` must be a sequence (like tuple or list) instead of " +
type(full_shape).__name__)
if not isinstance(var_offset, collections_lib.Sequence) or isinstance(
if not isinstance(var_offset, collections_abc.Sequence) or isinstance(
var_offset, six.string_types):
raise TypeError(
"`var_offset` must be a sequence (like tuple or list) instead of " +
@ -151,7 +151,7 @@ class _PartitionInfo(object):
ValueError: If `shape` is not the same length as `self.full_shape`. If
the variable is partitioned in more than one dimension.
"""
if not isinstance(shape, collections_lib.Sequence) or isinstance(
if not isinstance(shape, collections_abc.Sequence) or isinstance(
shape, six.string_types):
raise TypeError(
"`shape` must be a sequence (like tuple or list) instead of " +
@ -451,7 +451,7 @@ class _VariableStore(object):
synchronization=VariableSynchronization.AUTO,
aggregation=VariableAggregation.NONE):
is_scalar = (
shape is not None and isinstance(shape, collections_lib.Sequence) and
shape is not None and isinstance(shape, collections_abc.Sequence) and
not shape)
# Partitioned variable case
if partitioner is not None and not is_scalar:
@ -2511,7 +2511,7 @@ def _call_partitioner(partitioner, shape, dtype):
"shape: %s" % shape)
slicing = partitioner(shape=shape, dtype=dtype)
if not isinstance(slicing, collections_lib.Sequence):
if not isinstance(slicing, collections_abc.Sequence):
raise ValueError("Partitioner must return a sequence, but saw: %s" %
slicing)
if len(slicing) != shape.ndims:

View File

@ -1129,9 +1129,16 @@ PYBIND11_MODULE(_pywrap_tfe, m) {
// DLPack functions
m.def("TFE_ToDlpackCapsule", [](py::handle& o) {
PyObject* eager_tensor_pyobject_ptr = o.ptr();
TFE_TensorHandle* thandle = EagerTensor_Handle(eager_tensor_pyobject_ptr);
tensorflow::Safe_TF_StatusPtr status =
tensorflow::make_safe(TF_NewStatus());
if (!EagerTensor_CheckExact(eager_tensor_pyobject_ptr)) {
status->status = tensorflow::errors::InvalidArgument(
"The argument to `to_dlpack` must be a TF tensor, not Python object");
tensorflow::MaybeRaiseRegisteredFromTFStatus(status.get());
}
TFE_TensorHandle* thandle = EagerTensor_Handle(eager_tensor_pyobject_ptr);
void* dlm_ptr = tensorflow::TFE_HandleToDLPack(thandle, status.get());
tensorflow::MaybeRaiseRegisteredFromTFStatus(status.get());

View File

@ -24,7 +24,6 @@ from __future__ import division
from __future__ import print_function
import argparse
import collections
import os
import re
import sys
@ -51,6 +50,7 @@ from tensorflow.python.saved_model import signature_constants
from tensorflow.python.tools import saved_model_aot_compile
from tensorflow.python.tools import saved_model_utils
from tensorflow.python.tpu import tpu
from tensorflow.python.util.compat import collections_abc
_XLA_DEBUG_OPTIONS_URL = (
@ -241,7 +241,7 @@ def _print_args(arguments, argument_type='Argument', indent=0):
in_print(' %s' % element)
elif isinstance(element, tensor_spec.TensorSpec):
print((indent + 1) * ' ' + '%s: %s' % (element.name, repr(element)))
elif (isinstance(element, collections.Iterable) and
elif (isinstance(element, collections_abc.Iterable) and
not isinstance(element, dict)):
in_print(' DType: %s' % type(element).__name__)
in_print(' Value: [', end='')

View File

@ -1474,7 +1474,9 @@ class CudnnRnnSequenceTensorDescriptor
static port::StatusOr<CudnnRnnSequenceTensorDescriptor> Create(
GpuExecutor* parent, int max_seq_length, int batch_size, int data_size,
cudnnDataType_t data_type) {
CHECK_GT(max_seq_length, 0);
if (max_seq_length <= 0) {
return port::Status(port::error::INVALID_ARGUMENT, "max_seq_length <= 0");
}
int dims[] = {batch_size, data_size, 1};
int strides[] = {dims[1] * dims[2], dims[2], 1};
TensorDescriptor tensor_desc = CreateTensorDescriptor();
@ -1495,7 +1497,9 @@ class CudnnRnnSequenceTensorDescriptor
const absl::Span<const int>& seq_lengths, bool time_major,
cudnnDataType_t data_type) {
#if CUDNN_VERSION >= 7201
CHECK_GT(max_seq_length, 0);
if (max_seq_length <= 0) {
return port::Status(port::error::INVALID_ARGUMENT, "max_seq_length <= 0");
}
int dims[] = {batch_size, data_size, 1};
int strides[] = {dims[1] * dims[2], dims[2], 1};
TensorDescriptor tensor_desc = CreateTensorDescriptor();
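
The CHECK_GT above aborts the whole process on bad input; the fix returns an INVALID_ARGUMENT status instead, so a malformed RNN descriptor becomes an error the caller can handle. A rough Python analogue of the same trade-off (a typed, catchable error rather than a hard abort); nothing here is a StreamExecutor API:

    # Sketch of the CHECK-to-recoverable-error pattern: raise a typed error the
    # caller can catch instead of aborting on max_seq_length <= 0.
    def create_sequence_descriptor(max_seq_length, batch_size, data_size):
        if max_seq_length <= 0:
            raise ValueError("max_seq_length <= 0")   # recoverable, caller decides
        dims = [batch_size, data_size, 1]
        strides = [dims[1] * dims[2], dims[2], 1]
        return dims, strides

    print(create_sequence_descriptor(16, 4, 8))       # ([4, 8, 1], [8, 1, 1])
    try:
        create_sequence_descriptor(0, 4, 8)
    except ValueError as e:
        print("rejected:", e)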

View File

@ -59,7 +59,7 @@ load(
# not contain rc or alpha, only numbers.
# Also update tensorflow/core/public/version.h
# and tensorflow/tools/pip_package/setup.py
VERSION = "2.3.0"
VERSION = "2.3.2"
VERSION_MAJOR = VERSION.split(".")[0]
# Sanitize a dependency so that it works correctly from code that includes

View File

@ -56,6 +56,7 @@ function build_libtensorflow_tarball() {
if [ "${TF_NEED_CUDA}" == "1" ]; then
BAZEL_OPTS="${BAZEL_OPTS} --config=cuda --crosstool_top=//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda10.1:toolchain"
export TF_NEED_ROCM=0
export TF_CUDA_COMPUTE_CAPABILITIES="sm_35,sm_50,sm_60,sm_70,sm_75"
fi
bazel clean --expunge
yes "" | ./configure

View File

@ -58,6 +58,7 @@ ${DOCKER_BINARY} run \
-e "TF_NEED_HDFS=0" \
-e "TF_NEED_CUDA=${TF_NEED_CUDA}" \
-e "TF_NEED_TENSORRT=${TF_NEED_CUDA}" \
-e "TF_CUDA_COMPUTE_CAPABILITIES=${TF_CUDA_COMPUTE_CAPABILITIES}" \
-e "TF_NEED_ROCM=${TF_NEED_ROCM}" \
-e "TF_NEED_OPENCL_SYCL=0" \
"${DOCKER_IMAGE}" \

View File

@ -0,0 +1,27 @@
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
echo "chmod go+w lib_package/*" >> tensorflow/tools/ci_build/linux/libtensorflow.sh
echo "bazel clean --expunge" >> tensorflow/tools/ci_build/linux/libtensorflow.sh
# Install latest bazel
source tensorflow/tools/ci_build/release/common.sh
install_bazelisk
# Pick a version of xcode
export DEVELOPER_DIR=/Applications/Xcode_10.3.app/Contents/Developer
sudo xcode-select -s "${DEVELOPER_DIR}"
tensorflow/tools/ci_build/osx/libtensorflow_cpu.sh

View File

@ -0,0 +1,51 @@
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
install_bazelisk
# Pick a more recent version of xcode
export DEVELOPER_DIR=/Applications/Xcode_10.3.app/Contents/Developer
sudo xcode-select -s "${DEVELOPER_DIR}"
python3.5 -m virtualenv tf_build_env --system-site-packages
source tf_build_env/bin/activate
# Install macos pip dependencies
install_macos_pip_deps sudo pip3.5
# Run configure.
export TF_NEED_CUDA=0
export CC_OPT_FLAGS='-mavx'
export TF2_BEHAVIOR=1
export PYTHON_BIN_PATH=$(which python3.5)
yes "" | "$PYTHON_BIN_PATH" configure.py
tag_filters="-no_oss,-oss_serial,-nomac,-no_mac,-no_oss_py35,-v1only,-gpu,-tpu,-benchmark-test"
# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh
# Run tests
set +e
bazel test --test_output=errors --config=opt \
--action_env=TF2_BEHAVIOR="${TF2_BEHAVIOR}" \
--build_tag_filters="${tag_filters}" \
--test_tag_filters="${tag_filters}" -- \
${DEFAULT_BAZEL_TARGETS} \
-//tensorflow/lite/...
test_xml_summary_exit

View File

@ -0,0 +1,51 @@
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
install_bazelisk
# Pick a more recent version of xcode
export DEVELOPER_DIR=/Applications/Xcode_10.3.app/Contents/Developer
sudo xcode-select -s "${DEVELOPER_DIR}"
# Install macos pip dependencies
install_macos_pip_deps sudo pip3.5
# Export required variables for running pip_new.sh
export OS_TYPE="MACOS"
export CONTAINER_TYPE="CPU"
export TF_PYTHON_VERSION='python3.5'
export TF_BUILD_BOTH_CPU_PACKAGES=1
# Run configure.
export TF_NEED_CUDA=0
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which ${TF_PYTHON_VERSION})
yes "" | "$PYTHON_BIN_PATH" configure.py
# Export optional variables for running pip.sh
export TF_BUILD_FLAGS="--config=opt --config=v2"
export TF_TEST_FLAGS="--define=no_tensorflow_py_deps=true --test_lang_filters=py --test_output=errors --verbose_failures=true --keep_going --test_env=TF2_BEHAVIOR=1"
export TF_TEST_TARGETS="//tensorflow/python/..."
export TF_PIP_TESTS="test_pip_virtualenv_non_clean test_pip_virtualenv_clean"
export TF_TEST_FILTER_TAGS='-nomac,-no_mac,-no_oss,-oss_serial,-no_oss_py35,-gpu,-tpu,-benchmark-test'
#export IS_NIGHTLY=0 # Not nightly; uncomment if building from tf repo.
export TF_PROJECT_NAME="tensorflow"
export TF_PIP_TEST_ROOT="pip_test"
./tensorflow/tools/ci_build/builds/pip_new.sh

View File

@ -0,0 +1,51 @@
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
install_bazelisk
# Pick a more recent version of xcode
export DEVELOPER_DIR=/Applications/Xcode_10.3.app/Contents/Developer
sudo xcode-select -s "${DEVELOPER_DIR}"
python3.6 -m virtualenv tf_build_env --system-site-packages
source tf_build_env/bin/activate
# Install macos pip dependencies
install_macos_pip_deps sudo pip3.6
# Run configure.
export TF_NEED_CUDA=0
export CC_OPT_FLAGS='-mavx'
export TF2_BEHAVIOR=1
export PYTHON_BIN_PATH=$(which python3.6)
yes "" | "$PYTHON_BIN_PATH" configure.py
tag_filters="-no_oss,-oss_serial,-nomac,-no_mac,-no_oss_py36,-v1only,-gpu,-tpu,-benchmark-test"
# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh
# Run tests
set +e
bazel test --test_output=errors --config=opt \
--action_env=TF2_BEHAVIOR="${TF2_BEHAVIOR}" \
--build_tag_filters="${tag_filters}" \
--test_tag_filters="${tag_filters}" -- \
${DEFAULT_BAZEL_TARGETS} \
-//tensorflow/lite/...
test_xml_summary_exit

View File

@ -0,0 +1,51 @@
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
install_bazelisk
# Pick a more recent version of xcode
export DEVELOPER_DIR=/Applications/Xcode_10.3.app/Contents/Developer
sudo xcode-select -s "${DEVELOPER_DIR}"
# Install macos pip dependencies
install_macos_pip_deps sudo pip3.6
# Export required variables for running pip_new.sh
export OS_TYPE="MACOS"
export CONTAINER_TYPE="CPU"
export TF_PYTHON_VERSION='python3.6'
export TF_BUILD_BOTH_CPU_PACKAGES=1
# Run configure.
export TF_NEED_CUDA=0
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which ${TF_PYTHON_VERSION})
yes "" | "$PYTHON_BIN_PATH" configure.py
# Export optional variables for running pip.sh
export TF_BUILD_FLAGS="--config=opt --config=v2"
export TF_TEST_FLAGS="--define=no_tensorflow_py_deps=true --test_lang_filters=py --test_output=errors --verbose_failures=true --keep_going --test_env=TF2_BEHAVIOR=1"
export TF_TEST_TARGETS="//tensorflow/python/..."
export TF_PIP_TESTS="test_pip_virtualenv_non_clean test_pip_virtualenv_clean"
export TF_TEST_FILTER_TAGS='-nomac,-no_mac,-no_oss,-oss_serial,-no_oss_py35,-v1only,-gpu,-tpu,-benchmark-test'
#export IS_NIGHTLY=0 # Not nightly; uncomment if building from tf repo.
export TF_PROJECT_NAME="tensorflow"
export TF_PIP_TEST_ROOT="pip_test"
./tensorflow/tools/ci_build/builds/pip_new.sh
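
The pip packaging step above is driven entirely by the exported TF_* variables; pip_new.sh reads them from the environment rather than taking flags. As a rough illustration only (this is not the real pip_new.sh, whose contents are not shown in this diff), a wrapper in this style could consume the variables like so:

# Hypothetical sketch, NOT the actual pip_new.sh -- it only illustrates how the
# exported TF_BUILD_FLAGS / TF_TEST_FLAGS / TF_TEST_TARGETS / TF_TEST_FILTER_TAGS
# settings could be consumed by a variable-driven wrapper.
build_and_test_pip_sketch() {
  bazel build ${TF_BUILD_FLAGS} //tensorflow/tools/pip_package:build_pip_package
  bazel test ${TF_TEST_FLAGS} \
    --test_tag_filters="${TF_TEST_FILTER_TAGS}" \
    -- ${TF_TEST_TARGETS}
}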

@@ -0,0 +1,51 @@
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
install_bazelisk
# Pick a more recent version of xcode
export DEVELOPER_DIR=/Applications/Xcode_10.3.app/Contents/Developer
sudo xcode-select -s "${DEVELOPER_DIR}"
python -m virtualenv tf_build_env --system-site-packages
source tf_build_env/bin/activate
# Install macos pip dependencies
install_macos_pip_deps sudo pip3.7
# Run configure.
export TF_NEED_CUDA=0
export CC_OPT_FLAGS='-mavx'
export TF2_BEHAVIOR=1
export PYTHON_BIN_PATH=$(which python3.7)
yes "" | "$PYTHON_BIN_PATH" configure.py
tag_filters="-no_oss,-oss_serial,-nomac,-no_mac$(maybe_skip_v1),-gpu,-tpu,-benchmark-test"
# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh
# Run tests
set +e
bazel test --test_output=errors --config=opt \
--action_env=TF2_BEHAVIOR="${TF2_BEHAVIOR}" \
--build_tag_filters="${tag_filters}" \
--test_tag_filters="${tag_filters}" -- \
${DEFAULT_BAZEL_TARGETS} \
-//tensorflow/lite/...
test_xml_summary_exit

@@ -0,0 +1,51 @@
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
install_bazelisk
# Pick a more recent version of xcode
export DEVELOPER_DIR=/Applications/Xcode_10.3.app/Contents/Developer
sudo xcode-select -s "${DEVELOPER_DIR}"
# Install macos pip dependencies
install_macos_pip_deps sudo pip3.7
# Export required variables for running pip_new.sh
export OS_TYPE="MACOS"
export CONTAINER_TYPE="CPU"
export TF_PYTHON_VERSION='python3.7'
export TF_BUILD_BOTH_CPU_PACKAGES=1
# Run configure.
export TF_NEED_CUDA=0
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which ${TF_PYTHON_VERSION})
yes "" | "$PYTHON_BIN_PATH" configure.py
# Export optional variables for running pip.sh
export TF_BUILD_FLAGS="--config=opt --config=v2"
export TF_TEST_FLAGS="--define=no_tensorflow_py_deps=true --test_lang_filters=py --test_output=errors --verbose_failures=true --keep_going --test_env=TF2_BEHAVIOR=1"
export TF_TEST_TARGETS="//tensorflow/python/..."
export TF_PIP_TESTS="test_pip_virtualenv_non_clean test_pip_virtualenv_clean"
export TF_TEST_FILTER_TAGS='-nomac,-no_mac,-no_oss,-oss_serial,-no_oss_py37,-v1only,-gpu,-tpu,-benchmark-test'
#export IS_NIGHTLY=0 # Not nightly; uncomment if building from tf repo.
export TF_PROJECT_NAME="tensorflow"
export TF_PIP_TEST_ROOT="pip_test"
./tensorflow/tools/ci_build/builds/pip_new.sh

@@ -0,0 +1,51 @@
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
install_bazelisk
# Pick a more recent version of xcode
export DEVELOPER_DIR=/Applications/Xcode_10.3.app/Contents/Developer
sudo xcode-select -s "${DEVELOPER_DIR}"
python -m virtualenv tf_build_env --system-site-packages
source tf_build_env/bin/activate
# Install macos pip dependencies
install_macos_pip_deps sudo pip3.8
# Run configure.
export TF_NEED_CUDA=0
export CC_OPT_FLAGS='-mavx'
export TF2_BEHAVIOR=1
export PYTHON_BIN_PATH=$(which python3.8)
yes "" | "$PYTHON_BIN_PATH" configure.py
tag_filters="-no_oss,-oss_serial,-nomac,-no_mac$(maybe_skip_v1),-gpu,-tpu,-benchmark-test"
# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh
# Run tests
set +e
bazel test --test_output=errors --config=opt \
--action_env=TF2_BEHAVIOR="${TF2_BEHAVIOR}" \
--build_tag_filters="${tag_filters}" \
--test_tag_filters="${tag_filters}" -- \
${DEFAULT_BAZEL_TARGETS} \
-//tensorflow/lite/...
test_xml_summary_exit

@@ -0,0 +1,51 @@
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
install_bazelisk
# Pick a more recent version of xcode
export DEVELOPER_DIR=/Applications/Xcode_10.3.app/Contents/Developer
sudo xcode-select -s "${DEVELOPER_DIR}"
# Install macos pip dependencies
install_macos_pip_deps sudo pip3.8
# Export required variables for running pip_new.sh
export OS_TYPE="MACOS"
export CONTAINER_TYPE="CPU"
export TF_PYTHON_VERSION='python3.8'
export TF_BUILD_BOTH_CPU_PACKAGES=1
# Run configure.
export TF_NEED_CUDA=0
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which ${TF_PYTHON_VERSION})
yes "" | "$PYTHON_BIN_PATH" configure.py
# Export optional variables for running pip.sh
export TF_BUILD_FLAGS="--config=opt --config=v2"
export TF_TEST_FLAGS="--define=no_tensorflow_py_deps=true --test_lang_filters=py --test_output=errors --verbose_failures=true --keep_going --test_env=TF2_BEHAVIOR=1"
export TF_TEST_TARGETS="//tensorflow/python/..."
export TF_PIP_TESTS="test_pip_virtualenv_non_clean test_pip_virtualenv_clean"
export TF_TEST_FILTER_TAGS='-nomac,-no_mac,-no_oss,-oss_serial,-no_oss_py38,-v1only,-gpu,-tpu,-benchmark-test'
#export IS_NIGHTLY=0 # Not nightly; uncomment if building from tf repo.
export TF_PROJECT_NAME="tensorflow"
export TF_PIP_TEST_ROOT="pip_test"
./tensorflow/tools/ci_build/builds/pip_new.sh

@@ -0,0 +1,40 @@
#!/bin/bash
# Copyright 2020 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
# Source the external common scripts.
source tensorflow/tools/ci_build/release/common.sh
# Install latest bazel
install_bazelisk
which bazel
# Install realpath
sudo apt-get install realpath
# Update the version string to nightly
if [ -n "${IS_NIGHTLY_BUILD}" ]; then
./tensorflow/tools/ci_build/update_version.py --nightly
fi
./tensorflow/tools/ci_build/linux/libtensorflow.sh
# Copy the nightly version update script
if [ -n "${IS_NIGHTLY_BUILD}" ]; then
cp tensorflow/tools/ci_build/builds/libtensorflow_nightly_symlink.sh lib_package
fi
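
The nightly-only steps above are gated on IS_NIGHTLY_BUILD being set by the calling job. A hedged illustration of the two invocation modes (the script name below is a placeholder, since file names are not shown in this diff):

# Placeholder script name; the actual path is not visible in this diff.
./libtensorflow_cpu_build.sh                       # release build: nightly steps are skipped
IS_NIGHTLY_BUILD=1 ./libtensorflow_cpu_build.sh    # nightly build: version bumped, symlink helper copied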

@@ -0,0 +1,48 @@
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
install_ubuntu_16_pip_deps pip3.5
# Update bazel
install_bazelisk
# Run configure.
export TF_NEED_GCP=1
export TF_NEED_HDFS=1
export TF_NEED_S3=1
export TF_NEED_CUDA=0
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which python3.5)
export TF2_BEHAVIOR=1
yes "" | "$PYTHON_BIN_PATH" configure.py
tag_filters="-no_oss,-oss_serial,-gpu,-tpu,-benchmark-test,-no_oss_py35,-v1only"
# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh
# Run tests
set +e
bazel test --test_output=errors --config=opt --test_lang_filters=py \
--crosstool_top=//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda10.1:toolchain \
--linkopt=-lrt \
--action_env=TF2_BEHAVIOR="${TF2_BEHAVIOR}" \
--build_tag_filters="${tag_filters}" \
--test_tag_filters="${tag_filters}" -- \
${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/...
test_xml_summary_exit
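
The same comma-separated tag string is passed to both --build_tag_filters and --test_tag_filters, and a leading "-" excludes a tag. A small illustration (the target name is made up for the example) of how the exclusion behaves:

# Hypothetical target, shown only to illustrate Bazel tag-filter semantics:
# any test tagged with one of the excluded tags (e.g. "no_oss" or "gpu") is
# skipped, while untagged tests and tests with other tags still run.
bazel test \
  --build_tag_filters="-no_oss,-oss_serial,-gpu,-tpu,-benchmark-test" \
  --test_tag_filters="-no_oss,-oss_serial,-gpu,-tpu,-benchmark-test" \
  -- //tensorflow/python/kernel_tests:example_test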

@@ -0,0 +1,52 @@
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
install_ubuntu_16_pip_deps pip3.5
# Update bazel
install_bazelisk
# Export required variables for running pip.sh
export OS_TYPE="UBUNTU"
export CONTAINER_TYPE="CPU"
export TF_PYTHON_VERSION='python3.5'
# Run configure.
export TF_NEED_GCP=1
export TF_NEED_HDFS=1
export TF_NEED_S3=1
export TF_NEED_CUDA=0
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which ${TF_PYTHON_VERSION})
yes "" | "$PYTHON_BIN_PATH" configure.py
# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh
# Export optional variables for running pip.sh
export TF_BUILD_FLAGS="--config=opt --config=v2 --crosstool_top=//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda10.1:toolchain"
export TF_TEST_FLAGS="--define=no_tensorflow_py_deps=true --test_lang_filters=py --test_output=errors --verbose_failures=true --keep_going --test_env=TF2_BEHAVIOR=1"
export TF_TEST_TARGETS="${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/... "
export TF_PIP_TESTS="test_pip_virtualenv_non_clean test_pip_virtualenv_clean"
export TF_TEST_FILTER_TAGS='-no_oss,-oss_serial,-no_oss_py35,-v1only'
#export IS_NIGHTLY=0 # Not nightly; uncomment if building from tf repo.
export TF_PROJECT_NAME="tensorflow_cpu"
export TF_PIP_TEST_ROOT="pip_test"
./tensorflow/tools/ci_build/builds/pip_new.sh

@@ -0,0 +1,48 @@
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
install_ubuntu_16_pip_deps pip3.6
# Update bazel
install_bazelisk
# Run configure.
export TF_NEED_GCP=1
export TF_NEED_HDFS=1
export TF_NEED_S3=1
export TF_NEED_CUDA=0
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which python3.6)
export TF2_BEHAVIOR=1
yes "" | "$PYTHON_BIN_PATH" configure.py
tag_filters="-no_oss,-oss_serial,-gpu,-tpu,-benchmark-test,-no_oss_py36,-v1only"
# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh
# Run tests
set +e
bazel test --test_output=errors --config=opt --test_lang_filters=py \
--crosstool_top=//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda10.1:toolchain \
--linkopt=-lrt \
--action_env=TF2_BEHAVIOR="${TF2_BEHAVIOR}" \
--build_tag_filters="${tag_filters}" \
--test_tag_filters="${tag_filters}" -- \
${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/...
test_xml_summary_exit

@@ -0,0 +1,52 @@
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
install_ubuntu_16_pip_deps pip3.6
# Update bazel
install_bazelisk
# Export required variables for running pip.sh
export OS_TYPE="UBUNTU"
export CONTAINER_TYPE="CPU"
export TF_PYTHON_VERSION='python3.6'
# Run configure.
export TF_NEED_GCP=1
export TF_NEED_HDFS=1
export TF_NEED_S3=1
export TF_NEED_CUDA=0
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which ${TF_PYTHON_VERSION})
yes "" | "$PYTHON_BIN_PATH" configure.py
# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh
# Export optional variables for running pip.sh
export TF_BUILD_FLAGS="--config=opt --config=v2 --crosstool_top=//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda10.1:toolchain"
export TF_TEST_FLAGS="--define=no_tensorflow_py_deps=true --test_lang_filters=py --test_output=errors --verbose_failures=true --keep_going --test_env=TF2_BEHAVIOR=1"
export TF_TEST_TARGETS="${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/... "
export TF_PIP_TESTS="test_pip_virtualenv_non_clean test_pip_virtualenv_clean"
export TF_TEST_FILTER_TAGS='-no_oss,-oss_serial,-no_oss_py36,-v1only'
#export IS_NIGHTLY=0 # Not nightly; uncomment if building from tf repo.
export TF_PROJECT_NAME="tensorflow_cpu"
export TF_PIP_TEST_ROOT="pip_test"
./tensorflow/tools/ci_build/builds/pip_new.sh

@@ -0,0 +1,48 @@
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
install_ubuntu_16_pip_deps pip3.7
# Update bazel
install_bazelisk
# Run configure.
export TF_NEED_GCP=1
export TF_NEED_HDFS=1
export TF_NEED_S3=1
export TF_NEED_CUDA=0
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which python3.7)
export TF2_BEHAVIOR=1
yes "" | "$PYTHON_BIN_PATH" configure.py
tag_filters="-no_oss,-oss_serial,-gpu,-tpu,-benchmark-test,-no_oss_py37,-v1only"
# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh
# Run tests
set +e
bazel test --test_output=errors --config=opt --test_lang_filters=py \
--crosstool_top=//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda10.1:toolchain \
--linkopt=-lrt \
--action_env=TF2_BEHAVIOR="${TF2_BEHAVIOR}" \
--build_tag_filters="${tag_filters}" \
--test_tag_filters="${tag_filters}" -- \
${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/...
test_xml_summary_exit

@@ -0,0 +1,52 @@
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
install_ubuntu_16_pip_deps pip3.7
# Update bazel
install_bazelisk
# Export required variables for running pip.sh
export OS_TYPE="UBUNTU"
export CONTAINER_TYPE="CPU"
export TF_PYTHON_VERSION='python3.7'
# Run configure.
export TF_NEED_GCP=1
export TF_NEED_HDFS=1
export TF_NEED_S3=1
export TF_NEED_CUDA=0
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which ${TF_PYTHON_VERSION})
yes "" | "$PYTHON_BIN_PATH" configure.py
# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh
# Export optional variables for running pip.sh
export TF_BUILD_FLAGS="--config=opt --config=v2 --crosstool_top=//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda10.1:toolchain"
export TF_TEST_FLAGS="--define=no_tensorflow_py_deps=true --test_lang_filters=py --test_output=errors --verbose_failures=true --keep_going --test_env=TF2_BEHAVIOR=1"
export TF_TEST_TARGETS="${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/... "
export TF_PIP_TESTS="test_pip_virtualenv_non_clean test_pip_virtualenv_clean"
export TF_TEST_FILTER_TAGS='-no_oss,-oss_serial,-no_oss_py37,-v1only'
#export IS_NIGHTLY=0 # Not nightly; uncomment if building from tf repo.
export TF_PROJECT_NAME="tensorflow_cpu"
export TF_PIP_TEST_ROOT="pip_test"
./tensorflow/tools/ci_build/builds/pip_new.sh

@@ -0,0 +1,48 @@
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
install_ubuntu_16_pip_deps pip3.8
# Update bazel
install_bazelisk
# Run configure.
export TF_NEED_GCP=1
export TF_NEED_HDFS=1
export TF_NEED_S3=1
export TF_NEED_CUDA=0
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which python3.8)
export TF2_BEHAVIOR=1
yes "" | "$PYTHON_BIN_PATH" configure.py
tag_filters="-no_oss,-oss_serial,-gpu,-tpu,-benchmark-test,-no_oss_py38,-v1only"
# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh
# Run tests
set +e
bazel test --test_output=errors --config=opt --test_lang_filters=py \
--crosstool_top=//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda10.1:toolchain \
--linkopt=-lrt \
--action_env=TF2_BEHAVIOR="${TF2_BEHAVIOR}" \
--build_tag_filters="${tag_filters}" \
--test_tag_filters="${tag_filters}" -- \
${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/...
test_xml_summary_exit

@@ -0,0 +1,52 @@
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
install_ubuntu_16_pip_deps pip3.8
# Update bazel
install_bazelisk
# Export required variables for running pip.sh
export OS_TYPE="UBUNTU"
export CONTAINER_TYPE="CPU"
export TF_PYTHON_VERSION='python3.8'
# Run configure.
export TF_NEED_GCP=1
export TF_NEED_HDFS=1
export TF_NEED_S3=1
export TF_NEED_CUDA=0
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which ${TF_PYTHON_VERSION})
yes "" | "$PYTHON_BIN_PATH" configure.py
# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh
# Export optional variables for running pip.sh
export TF_BUILD_FLAGS="--config=opt --config=v2 --crosstool_top=//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda10.1:toolchain"
export TF_TEST_FLAGS="--define=no_tensorflow_py_deps=true --test_lang_filters=py --test_output=errors --verbose_failures=true --keep_going --test_env=TF2_BEHAVIOR=1"
export TF_TEST_TARGETS="${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/... "
export TF_PIP_TESTS="test_pip_virtualenv_non_clean test_pip_virtualenv_clean"
export TF_TEST_FILTER_TAGS='-no_oss,-oss_serial,-no_oss_py38,-v1only'
#export IS_NIGHTLY=0 # Not nightly; uncomment if building from tf repo.
export TF_PROJECT_NAME="tensorflow_cpu"
export TF_PIP_TEST_ROOT="pip_test"
./tensorflow/tools/ci_build/builds/pip_new.sh

@@ -0,0 +1,40 @@
#!/bin/bash
# Copyright 2020 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
# Source the external common scripts.
source tensorflow/tools/ci_build/release/common.sh
# Install latest bazel
install_bazelisk
which bazel
# Install realpath
sudo apt-get install realpath
export TF_NEED_CUDA=1
# Update the version string to nightly
if [ -n "${IS_NIGHTLY_BUILD}" ]; then
./tensorflow/tools/ci_build/update_version.py --nightly
fi
./tensorflow/tools/ci_build/linux/libtensorflow.sh
# Copy the nightly version update script
if [ -n "${IS_NIGHTLY_BUILD}" ]; then
cp tensorflow/tools/ci_build/builds/libtensorflow_nightly_symlink.sh lib_package
fi

@@ -0,0 +1,61 @@
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
install_ubuntu_16_pip_deps pip3.6
# Update Bazel to the desired version
install_bazelisk
# Run configure.
export TF_NEED_GCP=1
export TF_NEED_HDFS=1
export TF_NEED_S3=1
export TF_NEED_CUDA=1
export TF_CUDA_VERSION=10
export TF_CUDNN_VERSION=7
export TF_NEED_TENSORRT=1
export TENSORRT_INSTALL_PATH=/usr/local/tensorrt
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which python3.6)
export LD_LIBRARY_PATH="/usr/local/cuda:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:$TENSORRT_INSTALL_PATH/lib"
export TF_CUDA_COMPUTE_CAPABILITIES=sm_35,sm_37,sm_52,sm_60,sm_61,compute_70
yes "" | "$PYTHON_BIN_PATH" configure.py
########################
## Build GPU pip package
########################
bazel build --config=opt \
--crosstool_top=//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda10.1:toolchain \
tensorflow/tools/pip_package:build_pip_package
# Set TF nightly flag so we get the proper version of estimator
if [[ "$IS_NIGHTLY" == 1 ]]; then
NIGHTLY_FLAG="--nightly_flag"
fi
PIP_WHL_DIR=whl
mkdir -p ${PIP_WHL_DIR}
PIP_WHL_DIR=$(readlink -f ${PIP_WHL_DIR}) # Get absolute path
bazel-bin/tensorflow/tools/pip_package/build_pip_package "${PIP_WHL_DIR}" "${NIGHTLY_FLAG}"
WHL_PATH=$(ls "${PIP_WHL_DIR}"/*.whl)
cp "${WHL_PATH}" "$(pwd)"/.
chmod +x tensorflow/tools/ci_build/builds/docker_cpu_pip.sh
docker run \
  -e "BAZEL_VERSION=${BAZEL_VERSION}" \
  -e "CI_BUILD_USER=$(id -u -n)" -e "CI_BUILD_UID=$(id -u)" \
  -e "CI_BUILD_GROUP=$(id -g -n)" -e "CI_BUILD_GID=$(id -g)" \
  -e "CI_BUILD_HOME=/bazel_pip" \
  -v "$(pwd)":/bazel_pip \
  tensorflow/tensorflow:devel \
  "./bazel_pip/tensorflow/tools/ci_build/builds/with_the_same_user" \
  "./bazel_pip/tensorflow/tools/ci_build/builds/docker_cpu_pip.sh"
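
After the wheel lands in the working directory, it can optionally be sanity-checked in a throwaway virtualenv before the Docker-based pip test runs. The snippet below is a hypothetical addition, not part of the original script; it reuses the WHL_PATH computed above.

# Hypothetical post-build smoke test (not in the original script):
python3.6 -m virtualenv /tmp/tf_wheel_check
source /tmp/tf_wheel_check/bin/activate
pip install "${WHL_PATH}"
python -c "import tensorflow as tf; print(tf.__version__)"
deactivate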

@@ -0,0 +1,60 @@
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
install_ubuntu_16_pip_deps pip3.5
# Update bazel
install_bazelisk
# Run configure.
export TF_NEED_GCP=1
export TF_NEED_HDFS=1
export TF_NEED_S3=1
export TF_NEED_CUDA=1
export TF_CUDA_VERSION=10
export TF_CUDNN_VERSION=7
export TF_NEED_TENSORRT=1
export TENSORRT_INSTALL_PATH=/usr/local/tensorrt
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which python3.5)
export TF2_BEHAVIOR=1
export PROJECT_NAME="tensorflow_gpu"
export LD_LIBRARY_PATH="/usr/local/cuda:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:$TENSORRT_INSTALL_PATH/lib"
export TF_CUDA_COMPUTE_CAPABILITIES=sm_35,sm_37,sm_52,sm_60,sm_61,compute_70
yes "" | "$PYTHON_BIN_PATH" configure.py
# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh
tag_filters="gpu,requires-gpu,-no_gpu,-no_oss,-oss_serial,-no_oss_py35"
set +e
bazel test --config=cuda --config=opt \
--crosstool_top=//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda10.1:toolchain \
--linkopt=-lrt \
--action_env=TF2_BEHAVIOR="${TF2_BEHAVIOR}" \
--test_lang_filters=py \
--test_tag_filters=${tag_filters} \
--build_tag_filters=${tag_filters} \
--test_timeout="300,450,1200,3600" --local_test_jobs=4 \
--test_output=errors --verbose_failures=true --keep_going \
--run_under=//tensorflow/tools/ci_build/gpu_build:parallel_gpu_execute \
-- ${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/...
test_xml_summary_exit

@@ -0,0 +1,69 @@
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
install_ubuntu_16_pip_deps pip3.5
# Update bazel
install_bazelisk
# Export required variables for running pip.sh
export OS_TYPE="UBUNTU"
export CONTAINER_TYPE="GPU"
export TF_PYTHON_VERSION='python3.5'
# Run configure.
export TF_NEED_GCP=1
export TF_NEED_HDFS=1
export TF_NEED_S3=1
export TF_NEED_CUDA=1
export TF_CUDA_VERSION=10
export TF_CUDNN_VERSION=7
export TF_NEED_TENSORRT=1
export TENSORRT_INSTALL_PATH=/usr/local/tensorrt
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which ${TF_PYTHON_VERSION})
export PROJECT_NAME="tensorflow_gpu"
export LD_LIBRARY_PATH="/usr/local/cuda:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:$TENSORRT_INSTALL_PATH/lib"
export TF_CUDA_COMPUTE_CAPABILITIES=sm_35,sm_37,sm_52,sm_60,sm_61,compute_70
yes "" | "$PYTHON_BIN_PATH" configure.py
# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh
# Export optional variables for running pip.sh
export TF_TEST_FILTER_TAGS='gpu,requires-gpu,-no_gpu,-no_oss,-oss_serial,-no_oss_py35'
export TF_BUILD_FLAGS="--config=opt --config=v2 --config=cuda --distinct_host_configuration=false \
--action_env=TF_CUDA_VERSION --action_env=TF_CUDNN_VERSION --crosstool_top=//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda10.1:toolchain "
export TF_TEST_FLAGS="--test_tag_filters=${TF_TEST_FILTER_TAGS} --build_tag_filters=${TF_TEST_FILTER_TAGS} \
--distinct_host_configuration=false \
--action_env=TF_CUDA_VERSION --action_env=TF_CUDNN_VERSION --test_env=TF2_BEHAVIOR=1 \
--config=cuda --test_output=errors --local_test_jobs=4 --test_lang_filters=py \
--verbose_failures=true --keep_going --define=no_tensorflow_py_deps=true \
--run_under=//tensorflow/tools/ci_build/gpu_build:parallel_gpu_execute "
export TF_TEST_TARGETS="${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/... "
export TF_PIP_TESTS="test_pip_virtualenv_non_clean test_pip_virtualenv_clean"
#export IS_NIGHTLY=0 # Not nightly; uncomment if building from tf repo.
export TF_PROJECT_NAME=${PROJECT_NAME}
export TF_PIP_TEST_ROOT="pip_test"
# To build both tensorflow and tensorflow-gpu pip packages
export TF_BUILD_BOTH_GPU_PACKAGES=1
./tensorflow/tools/ci_build/builds/pip_new.sh

@@ -0,0 +1,60 @@
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
install_ubuntu_16_pip_deps pip3.6
# Update bazel
install_bazelisk
# Run configure.
export TF_NEED_GCP=1
export TF_NEED_HDFS=1
export TF_NEED_S3=1
export TF_NEED_CUDA=1
export TF_CUDA_VERSION=10
export TF_CUDNN_VERSION=7
export TF_NEED_TENSORRT=1
export TENSORRT_INSTALL_PATH=/usr/local/tensorrt
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which python3.6)
export TF2_BEHAVIOR=1
export PROJECT_NAME="tensorflow_gpu"
export LD_LIBRARY_PATH="/usr/local/cuda:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:$TENSORRT_INSTALL_PATH/lib"
export TF_CUDA_COMPUTE_CAPABILITIES=sm_35,sm_37,sm_52,sm_60,sm_61,compute_70
yes "" | "$PYTHON_BIN_PATH" configure.py
# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh
tag_filters="gpu,requires-gpu,-no_gpu,-no_oss,-oss_serial,-no_oss_py36"
set +e
bazel test --config=cuda --config=opt \
--crosstool_top=//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda10.1:toolchain \
--linkopt=-lrt \
--action_env=TF2_BEHAVIOR="${TF2_BEHAVIOR}" \
--test_lang_filters=py \
--test_tag_filters=${tag_filters} \
--build_tag_filters=${tag_filters} \
--test_timeout="300,450,1200,3600" --local_test_jobs=4 \
--test_output=errors --verbose_failures=true --keep_going \
--run_under=//tensorflow/tools/ci_build/gpu_build:parallel_gpu_execute \
-- ${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/...
test_xml_summary_exit

@@ -0,0 +1,69 @@
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
install_ubuntu_16_pip_deps pip3.6
# Update bazel
install_bazelisk
# Export required variables for running pip.sh
export OS_TYPE="UBUNTU"
export CONTAINER_TYPE="GPU"
export TF_PYTHON_VERSION='python3.6'
# Run configure.
export TF_NEED_GCP=1
export TF_NEED_HDFS=1
export TF_NEED_S3=1
export TF_NEED_CUDA=1
export TF_CUDA_VERSION=10
export TF_CUDNN_VERSION=7
export TF_NEED_TENSORRT=1
export TENSORRT_INSTALL_PATH=/usr/local/tensorrt
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which ${TF_PYTHON_VERSION})
export PROJECT_NAME="tensorflow_gpu"
export LD_LIBRARY_PATH="/usr/local/cuda:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:$TENSORRT_INSTALL_PATH/lib"
export TF_CUDA_COMPUTE_CAPABILITIES=sm_35,sm_37,sm_52,sm_60,sm_61,compute_70
yes "" | "$PYTHON_BIN_PATH" configure.py
# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh
# Export optional variables for running pip.sh
export TF_TEST_FILTER_TAGS='gpu,requires-gpu,-no_gpu,-no_oss,-oss_serial,-no_oss_py36'
export TF_BUILD_FLAGS="--config=opt --config=v2 --config=cuda --distinct_host_configuration=false \
--action_env=TF_CUDA_VERSION --action_env=TF_CUDNN_VERSION --crosstool_top=//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda10.1:toolchain "
export TF_TEST_FLAGS="--test_tag_filters=${TF_TEST_FILTER_TAGS} --build_tag_filters=${TF_TEST_FILTER_TAGS} \
--distinct_host_configuration=false \
--action_env=TF_CUDA_VERSION --action_env=TF_CUDNN_VERSION --test_env=TF2_BEHAVIOR=1 \
--config=cuda --test_output=errors --local_test_jobs=4 --test_lang_filters=py \
--verbose_failures=true --keep_going --define=no_tensorflow_py_deps=true \
--run_under=//tensorflow/tools/ci_build/gpu_build:parallel_gpu_execute "
export TF_TEST_TARGETS="${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/... "
export TF_PIP_TESTS="test_pip_virtualenv_non_clean test_pip_virtualenv_clean"
#export IS_NIGHTLY=0 # Not nightly; uncomment if building from tf repo.
export TF_PROJECT_NAME=${PROJECT_NAME}
export TF_PIP_TEST_ROOT="pip_test"
# To build both tensorflow and tensorflow-gpu pip packages
export TF_BUILD_BOTH_GPU_PACKAGES=1
./tensorflow/tools/ci_build/builds/pip_new.sh

@@ -0,0 +1,60 @@
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
install_ubuntu_16_pip_deps pip3.7
# Update bazel
install_bazelisk
# Run configure.
export TF_NEED_GCP=1
export TF_NEED_HDFS=1
export TF_NEED_S3=1
export TF_NEED_CUDA=1
export TF_CUDA_VERSION=10
export TF_CUDNN_VERSION=7
export TF_NEED_TENSORRT=1
export TENSORRT_INSTALL_PATH=/usr/local/tensorrt
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which python3.7)
export TF2_BEHAVIOR=1
export PROJECT_NAME="tensorflow_gpu"
export LD_LIBRARY_PATH="/usr/local/cuda:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:$TENSORRT_INSTALL_PATH/lib"
export TF_CUDA_COMPUTE_CAPABILITIES=sm_35,sm_37,sm_52,sm_60,sm_61,compute_70
yes "" | "$PYTHON_BIN_PATH" configure.py
# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh
tag_filters="gpu,requires-gpu,-no_gpu,-no_oss,-oss_serial,-no_oss_py37"
set +e
bazel test --config=cuda --config=opt \
--crosstool_top=//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda10.1:toolchain \
--linkopt=-lrt \
--action_env=TF2_BEHAVIOR="${TF2_BEHAVIOR}" \
--test_lang_filters=py \
--build_tag_filters=${tag_filters} \
--test_tag_filters=${tag_filters} \
--test_timeout="300,450,1200,3600" --local_test_jobs=4 \
--test_output=errors --verbose_failures=true --keep_going \
--run_under=//tensorflow/tools/ci_build/gpu_build:parallel_gpu_execute \
-- ${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/...
test_xml_summary_exit

@@ -0,0 +1,69 @@
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
install_ubuntu_16_pip_deps pip3.7
# Update bazel
install_bazelisk
# Export required variables for running pip.sh
export OS_TYPE="UBUNTU"
export CONTAINER_TYPE="GPU"
export TF_PYTHON_VERSION='python3.7'
# Run configure.
export TF_NEED_GCP=1
export TF_NEED_HDFS=1
export TF_NEED_S3=1
export TF_NEED_CUDA=1
export TF_CUDA_VERSION=10
export TF_CUDNN_VERSION=7
export TF_NEED_TENSORRT=1
export TENSORRT_INSTALL_PATH=/usr/local/tensorrt
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which ${TF_PYTHON_VERSION})
export PROJECT_NAME="tensorflow_gpu"
export LD_LIBRARY_PATH="/usr/local/cuda:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:$TENSORRT_INSTALL_PATH/lib"
export TF_CUDA_COMPUTE_CAPABILITIES=sm_35,sm_37,sm_52,sm_60,sm_61,compute_70
yes "" | "$PYTHON_BIN_PATH" configure.py
# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh
# Export optional variables for running pip.sh
export TF_TEST_FILTER_TAGS='gpu,requires-gpu,-no_gpu,-no_oss,-oss_serial,-no_oss_py37'
export TF_BUILD_FLAGS="--config=opt --config=v2 --config=cuda --distinct_host_configuration=false \
--action_env=TF_CUDA_VERSION --action_env=TF_CUDNN_VERSION --crosstool_top=//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda10.1:toolchain "
export TF_TEST_FLAGS="--test_tag_filters=${TF_TEST_FILTER_TAGS} --build_tag_filters=${TF_TEST_FILTER_TAGS} \
--distinct_host_configuration=false \
--action_env=TF_CUDA_VERSION --action_env=TF_CUDNN_VERSION --test_env=TF2_BEHAVIOR=1 \
--config=cuda --test_output=errors --local_test_jobs=4 --test_lang_filters=py \
--verbose_failures=true --keep_going --define=no_tensorflow_py_deps=true \
--run_under=//tensorflow/tools/ci_build/gpu_build:parallel_gpu_execute "
export TF_TEST_TARGETS="${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/... "
export TF_PIP_TESTS="test_pip_virtualenv_non_clean test_pip_virtualenv_clean"
#export IS_NIGHTLY=0 # Not nightly; uncomment if building from tf repo.
export TF_PROJECT_NAME=${PROJECT_NAME}
export TF_PIP_TEST_ROOT="pip_test"
# To build both tensorflow and tensorflow-gpu pip packages
export TF_BUILD_BOTH_GPU_PACKAGES=1
./tensorflow/tools/ci_build/builds/pip_new.sh

@@ -0,0 +1,60 @@
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
install_ubuntu_16_pip_deps pip3.8
# Update bazel
update_bazel_linux
# Run configure.
export TF_NEED_GCP=1
export TF_NEED_HDFS=1
export TF_NEED_S3=1
export TF_NEED_CUDA=1
export TF_CUDA_VERSION=10
export TF_CUDNN_VERSION=7
export TF_NEED_TENSORRT=1
export TENSORRT_INSTALL_PATH=/usr/local/tensorrt
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which python3.8)
export TF2_BEHAVIOR=1
export PROJECT_NAME="tensorflow_gpu"
export LD_LIBRARY_PATH="/usr/local/cuda:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:$TENSORRT_INSTALL_PATH/lib"
export TF_CUDA_COMPUTE_CAPABILITIES=sm_35,sm_37,sm_52,sm_60,sm_61,compute_70
yes "" | "$PYTHON_BIN_PATH" configure.py
# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh
tag_filters="gpu,requires-gpu,-no_gpu,-no_oss,-oss_serial,-no_oss_py38"
set +e
bazel test --config=cuda --config=opt \
--crosstool_top=//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda10.1:toolchain \
--linkopt=-lrt \
--action_env=TF2_BEHAVIOR="${TF2_BEHAVIOR}" \
--test_lang_filters=py \
--build_tag_filters=${tag_filters} \
--test_tag_filters=${tag_filters} \
--test_timeout="300,450,1200,3600" --local_test_jobs=4 \
--test_output=errors --verbose_failures=true --keep_going \
--run_under=//tensorflow/tools/ci_build/gpu_build:parallel_gpu_execute \
-- ${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/...
test_xml_summary_exit

@@ -0,0 +1,69 @@
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
install_ubuntu_16_pip_deps pip3.8
# Update bazel
update_bazel_linux
# Export required variables for running pip.sh
export OS_TYPE="UBUNTU"
export CONTAINER_TYPE="GPU"
export TF_PYTHON_VERSION='python3.8'
# Run configure.
export TF_NEED_GCP=1
export TF_NEED_HDFS=1
export TF_NEED_S3=1
export TF_NEED_CUDA=1
export TF_CUDA_VERSION=10
export TF_CUDNN_VERSION=7
export TF_NEED_TENSORRT=1
export TENSORRT_INSTALL_PATH=/usr/local/tensorrt
export CC_OPT_FLAGS='-mavx'
export PYTHON_BIN_PATH=$(which ${TF_PYTHON_VERSION})
export PROJECT_NAME="tensorflow_gpu"
export LD_LIBRARY_PATH="/usr/local/cuda:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:$TENSORRT_INSTALL_PATH/lib"
export TF_CUDA_COMPUTE_CAPABILITIES=sm_35,sm_37,sm_52,sm_60,sm_61,compute_70
yes "" | "$PYTHON_BIN_PATH" configure.py
# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/PRESUBMIT_BUILD_TARGETS.sh
# Export optional variables for running pip.sh
export TF_TEST_FILTER_TAGS='gpu,requires-gpu,-no_gpu,-no_oss,-oss_serial,-no_oss_py38'
export TF_BUILD_FLAGS="--config=opt --config=v2 --config=cuda --distinct_host_configuration=false \
--action_env=TF_CUDA_VERSION --action_env=TF_CUDNN_VERSION --crosstool_top=//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda10.1:toolchain "
export TF_TEST_FLAGS="--test_tag_filters=${TF_TEST_FILTER_TAGS} --build_tag_filters=${TF_TEST_FILTER_TAGS} \
--distinct_host_configuration=false \
--action_env=TF_CUDA_VERSION --action_env=TF_CUDNN_VERSION --test_env=TF2_BEHAVIOR=1 \
--config=cuda --test_output=errors --local_test_jobs=4 --test_lang_filters=py \
--verbose_failures=true --keep_going --define=no_tensorflow_py_deps=true \
--run_under=//tensorflow/tools/ci_build/gpu_build:parallel_gpu_execute "
export TF_TEST_TARGETS="${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/... "
export TF_PIP_TESTS="test_pip_virtualenv_non_clean test_pip_virtualenv_clean"
#export IS_NIGHTLY=0 # Not nightly; uncomment if building from tf repo.
export TF_PROJECT_NAME=${PROJECT_NAME}
export TF_PIP_TEST_ROOT="pip_test"
# To build both tensorflow and tensorflow-gpu pip packages
export TF_BUILD_BOTH_GPU_PACKAGES=1
./tensorflow/tools/ci_build/builds/pip_new.sh

@@ -0,0 +1,36 @@
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
# Install latest bazel
source tensorflow/tools/ci_build/release/common.sh
install_bazelisk
which bazel
# We need py3 lint
sudo pip3 install pep8
# TODO(gunan): figure out why we get stuck with later versions of pylint.
# Install pylint.
sudo python3 -m pip install setuptools --upgrade
sudo python2 -m pip install pylint==1.6.4
sudo python3 -m pip install pylint==1.6.4
# TODO(yifeif): print pylint version for debug. remove later.
python3 -m pylint --version
# Run tensorflow sanity checks.
tensorflow/tools/ci_build/ci_sanity.sh
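
The lint environment pins pylint==1.6.4 for both Python 2 and Python 3 so local results stay comparable with CI. A hypothetical local spot check (the file path is only an example, and this is not part of the CI script):

# Hypothetical local lint spot check mirroring the CI pins; the path is illustrative.
python3 -m pip install --user pep8 pylint==1.6.4
python3 -m pylint tensorflow/python/framework/ops.py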

@@ -0,0 +1,20 @@
:: Copyright 2019 The TensorFlow Authors. All Rights Reserved.
::
:: Licensed under the Apache License, Version 2.0 (the "License");
:: you may not use this file except in compliance with the License.
:: You may obtain a copy of the License at
::
:: http://www.apache.org/licenses/LICENSE-2.0
::
:: Unless required by applicable law or agreed to in writing, software
:: distributed under the License is distributed on an "AS IS" BASIS,
:: WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
:: See the License for the specific language governing permissions and
:: limitations under the License.
:: =============================================================================
CALL tensorflow\tools\ci_build\release\common_win.bat
call tensorflow\tools\ci_build\windows\cpu\bazel\run_libtensorflow.bat || exit /b 1
copy lib_package %TF_ARTIFACTS_DIR%\lib_package

@@ -0,0 +1,20 @@
:: Copyright 2019 The TensorFlow Authors. All Rights Reserved.
::
:: Licensed under the Apache License, Version 2.0 (the "License");
:: you may not use this file except in compliance with the License.
:: You may obtain a copy of the License at
::
:: http://www.apache.org/licenses/LICENSE-2.0
::
:: Unless required by applicable law or agreed to in writing, software
:: distributed under the License is distributed on an "AS IS" BASIS,
:: WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
:: See the License for the specific language governing permissions and
:: limitations under the License.
:: =============================================================================
SET PYTHON_DIRECTORY=Python35
CALL tensorflow\tools\ci_build\release\common_win.bat
call tensorflow\tools\ci_build\windows\cpu\pip\run.bat --release_build --extra_build_flags "--config=v2 --define=no_tensorflow_py_deps=true" --extra_test_flags "--test_env=TF2_BEHAVIOR=1" --project_name "tensorflow_cpu"

@@ -0,0 +1,20 @@
:: Copyright 2019 The TensorFlow Authors. All Rights Reserved.
::
:: Licensed under the Apache License, Version 2.0 (the "License");
:: you may not use this file except in compliance with the License.
:: You may obtain a copy of the License at
::
:: http://www.apache.org/licenses/LICENSE-2.0
::
:: Unless required by applicable law or agreed to in writing, software
:: distributed under the License is distributed on an "AS IS" BASIS,
:: WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
:: See the License for the specific language governing permissions and
:: limitations under the License.
:: =============================================================================
SET PYTHON_DIRECTORY=Python36
CALL tensorflow\tools\ci_build\release\common_win.bat
call tensorflow\tools\ci_build\windows\cpu\pip\run.bat --release_build --extra_build_flags "--config=v2 --define=no_tensorflow_py_deps=true" --extra_test_flags "--test_env=TF2_BEHAVIOR=1" --project_name "tensorflow_cpu"

@@ -0,0 +1,20 @@
:: Copyright 2019 The TensorFlow Authors. All Rights Reserved.
::
:: Licensed under the Apache License, Version 2.0 (the "License");
:: you may not use this file except in compliance with the License.
:: You may obtain a copy of the License at
::
:: http://www.apache.org/licenses/LICENSE-2.0
::
:: Unless required by applicable law or agreed to in writing, software
:: distributed under the License is distributed on an "AS IS" BASIS,
:: WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
:: See the License for the specific language governing permissions and
:: limitations under the License.
:: =============================================================================
SET PYTHON_DIRECTORY=Python37
CALL tensorflow\tools\ci_build\release\common_win.bat
call tensorflow\tools\ci_build\windows\cpu\pip\run.bat --release_build --extra_build_flags "--config=v2 --define=no_tensorflow_py_deps=true" --extra_test_flags "--test_env=TF2_BEHAVIOR=1" --project_name "tensorflow_cpu"

@@ -0,0 +1,21 @@
:: Copyright 2019 The TensorFlow Authors. All Rights Reserved.
::
:: Licensed under the Apache License, Version 2.0 (the "License");
:: you may not use this file except in compliance with the License.
:: You may obtain a copy of the License at
::
:: http://www.apache.org/licenses/LICENSE-2.0
::
:: Unless required by applicable law or agreed to in writing, software
:: distributed under the License is distributed on an "AS IS" BASIS,
:: WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
:: See the License for the specific language governing permissions and
:: limitations under the License.
:: =============================================================================
SET PYTHON_DIRECTORY=Python38
CALL tensorflow\tools\ci_build\release\common_win.bat
call tensorflow\tools\ci_build\windows\cpu\pip\run.bat --release_build --extra_build_flags "--config=v2 --define=no_tensorflow_py_deps=true" --extra_test_flags "--test_env=TF2_BEHAVIOR=1" --project_name "tensorflow_cpu"

@@ -0,0 +1,20 @@
:: Copyright 2019 The TensorFlow Authors. All Rights Reserved.
::
:: Licensed under the Apache License, Version 2.0 (the "License");
:: you may not use this file except in compliance with the License.
:: You may obtain a copy of the License at
::
:: http://www.apache.org/licenses/LICENSE-2.0
::
:: Unless required by applicable law or agreed to in writing, software
:: distributed under the License is distributed on an "AS IS" BASIS,
:: WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
:: See the License for the specific language governing permissions and
:: limitations under the License.
:: =============================================================================
CALL tensorflow\tools\ci_build\release\common_win.bat
call tensorflow\tools\ci_build\windows\gpu\bazel\run_libtensorflow.bat || exit /b
copy lib_package %TF_ARTIFACTS_DIR%\lib_package

@@ -0,0 +1,21 @@
:: Copyright 2019 The TensorFlow Authors. All Rights Reserved.
::
:: Licensed under the Apache License, Version 2.0 (the "License");
:: you may not use this file except in compliance with the License.
:: You may obtain a copy of the License at
::
:: http://www.apache.org/licenses/LICENSE-2.0
::
:: Unless required by applicable law or agreed to in writing, software
:: distributed under the License is distributed on an "AS IS" BASIS,
:: WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
:: See the License for the specific language governing permissions and
:: limitations under the License.
:: =============================================================================
SET PYTHON_DIRECTORY=Python36
CALL tensorflow\tools\ci_build\release\common_win.bat
call tensorflow\tools\ci_build\windows\integration\gpu_pip_on_cpu\run.bat

Some files were not shown because too many files have changed in this diff.