Fix rendering of security advisories.

GitHub does not insert automatic links and smart code snippets in these files, so we have to do it manually.

PiperOrigin-RevId: 333195707
Change-Id: I1e2fed8ff207fbfce6eb8fb2b910d12bcab4100c
Mihai Maruseac 2020-09-22 17:41:48 -07:00 committed by TensorFlower Gardener
parent 4c71606397
commit c06650b697
25 changed files with 351 additions and 131 deletions

View File

@ -6,13 +6,30 @@ CVE-2020-15214
### Impact
In TensorFlow Lite, models using segment sum can trigger a write out of bounds /
segmentation fault if the segment ids are not sorted. The code assumes that the
segment ids are in increasing order, [using the last element of the tensor
holding them to determine the dimensionality of the output
tensor](https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/lite/kernels/segment_sum.cc#L39-L44):
```cc
if (segment_id_size > 0) {
  max_index = segment_ids->data.i32[segment_id_size - 1];
}
TfLiteIntArray* output_shape = TfLiteIntArrayCreate(NumDimensions(data));
output_shape->data[0] = max_index + 1;
```
This results in allocating insufficient memory for the output tensor and in a
[write outside the bounds of the output
array](https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/lite/kernels/internal/reference/reference_ops.h#L2625-L2631):
```cc
memset(output_data, 0, sizeof(T) * output_shape.FlatSize());
for (int i = 0; i < input_shape.Dims(0); i++) {
  int output_index = segment_ids_data[i];
  for (int j = 0; j < segment_flat_size; ++j) {
    output_data[output_index * segment_flat_size + j] +=
        input_data[i * segment_flat_size + j];
  }
}
```
This usually results in a segmentation fault, but depending on runtime
conditions it can provide a write gadget to be used in future memory
@ -22,8 +39,9 @@ corruption-based exploits.
TensorFlow 2.2.0, 2.3.0.
### Patches
We have patched the issue in
[204945b](https://github.com/tensorflow/tensorflow/commit/204945b) and will
release patch releases for all affected versions.
We recommend users to upgrade to TensorFlow 2.2.1 or 2.3.1.

View File

@ -8,15 +8,27 @@ In TensorFlow Lite models using segment sum can trigger a denial of service by
causing an out of memory allocation in the implementation of segment sum. Since
the code uses the last element of the tensor holding the segment ids to
determine the dimensionality of the output tensor, attackers can use a very
large value to trigger a [large
allocation](https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/lite/kernels/segment_sum.cc#L39-L48):
```cc
if (segment_id_size > 0) {
  max_index = segment_ids->data.i32[segment_id_size - 1];
}
TfLiteIntArray* output_shape = TfLiteIntArrayCreate(NumDimensions(data));
output_shape->data[0] = max_index + 1;
for (int i = 1; i < data_rank; ++i) {
  output_shape->data[i] = data->dims->data[i];
}
return context->ResizeTensor(context, output, output_shape);
```
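For scale, a standalone sketch (not TensorFlow code) of how an
attacker-controlled last segment id turns into an attacker-controlled first
output dimension; the element type and inner row size are made-up illustrative
values:
```cc
// Standalone sketch, not TensorFlow code: the last segment id is taken at face
// value, so a single huge value requests a correspondingly huge output buffer.
#include <cstdint>
#include <iostream>
#include <vector>

int main() {
  std::vector<int32_t> segment_ids = {0, 1, 2000000000};  // attacker-chosen last id
  int32_t max_index = segment_ids.back();                 // no validation of the value
  int64_t rows = static_cast<int64_t>(max_index) + 1;
  int64_t row_elements = 1024;                            // hypothetical inner size
  std::cout << "allocation request: " << rows * row_elements * sizeof(float)
            << " bytes\n";                                // roughly 8 TB
}
```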
### Vulnerable Versions
TensorFlow 2.2.0, 2.3.0.
### Patches
We have patched the issue in
[204945b](https://github.com/tensorflow/tensorflow/commit/204945b) and will
release patch releases for all affected versions.
We recommend users to upgrade to TensorFlow 2.2.1 or 2.3.1.

View File

@ -4,10 +4,19 @@
CVE-2020-15212
### Impact
In TensorFlow Lite, models using segment sum can trigger [writes outside the
bounds of heap allocated
buffers](https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/lite/kernels/internal/reference/reference_ops.h#L2625-L2631)
by inserting negative elements in the segment ids tensor:
```cc
for (int i = 0; i < input_shape.Dims(0); i++) {
  int output_index = segment_ids_data[i];
  for (int j = 0; j < segment_flat_size; ++j) {
    output_data[output_index * segment_flat_size + j] +=
        input_data[i * segment_flat_size + j];
  }
}
```
Users having access to `segment_ids_data` can alter `output_index` and then
write outside of the `output_data` buffer.
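As a standalone illustration (not TensorFlow code), a negative segment id
produces a negative element offset, i.e. a write before the start of the
output buffer:
```cc
// Standalone sketch, not TensorFlow code: a negative segment id yields a
// negative element offset, i.e. a write before the start of the output buffer.
#include <iostream>

int main() {
  const int segment_ids_data[] = {0, -4, 1};  // -4 is attacker-controlled
  const long segment_flat_size = 8;
  for (int i = 0; i < 3; ++i) {
    long offset = segment_ids_data[i] * segment_flat_size;
    std::cout << "row " << i << " writes starting at element offset " << offset
              << (offset < 0 ? "  <-- out of bounds" : "") << "\n";
  }
}
```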
@ -20,8 +29,9 @@ advanced exploits.
TensorFlow 2.2.0, 2.3.0.
### Patches
We have patched the issue in
[204945b](https://github.com/tensorflow/tensorflow/commit/204945b) and will
release patch releases for all affected versions.
We recommend users to upgrade to TensorFlow 2.2.1 or 2.3.1.

View File

@ -8,17 +8,38 @@ In TensorFlow Lite, saved models in the flatbuffer format use a double indexing
scheme: a model has a set of subgraphs, each subgraph has a set of operators and
each operator has a set of input/output tensors. The flatbuffer format uses
indices for the tensors, indexing into an array of tensors that is owned by the
subgraph. This results in a pattern of double array indexing when trying to
[get the data of each
tensor](https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/lite/kernels/kernel_util.cc#L36):
```cc
return &context->tensors[node->inputs->data[index]];
```
However, some operators can have optional tensors. To handle this
scenario, the flatbuffer model uses a negative `-1` value as [index for these
tensors](https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/lite/c/common.h#L82):
```cc
#define kTfLiteOptionalTensor (-1)
```
This results in [special casing during validation at model loading
time](https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/lite/core/subgraph.cc#L566-L580):
```cc
for (int i = 0; i < length; i++) {
  int index = indices[i];
  // Continue if index == kTfLiteOptionalTensor before additional comparisons
  // below, size_t(-1) is always >= context_tensors_size.
  if (index == kTfLiteOptionalTensor) {
    continue;
  }
  if (index < 0 || static_cast<size_t>(index) >= context_.tensors_size) {
    ReportError(
        "Invalid tensor index %d in %s. The subgraph has %d tensors\n", index,
        label, context_.tensors_size);
    consistent_ = false;
    return kTfLiteError;
  }
}
```
Unfortunately, this means that the `-1` index is a valid tensor index for any
operator, including those that don't expect optional inputs and including for
@ -33,9 +54,14 @@ TensorFlow 1.15.0, 1.15.1, 1.15.2, 1.15.3, 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.1.1,
2.2.0, 2.3.0.
### Patches
We have patched the issue in several commits
([46d5b0852](https://github.com/tensorflow/tensorflow/commit/46d5b0852),
[00302787b7](https://github.com/tensorflow/tensorflow/commit/00302787b7),
[e11f5558](https://github.com/tensorflow/tensorflow/commit/e11f5558),
[cd31fd0ce](https://github.com/tensorflow/tensorflow/commit/cd31fd0ce),
[1970c21](https://github.com/tensorflow/tensorflow/commit/1970c21), and
[fff2c83](https://github.com/tensorflow/tensorflow/commit/fff2c83)). We will
release patch releases for all versions between 1.15 and 2.3.
We recommend users to upgrade to TensorFlow 1.15.4, 2.0.3, 2.1.2, 2.2.1, or
2.3.1.

View File

@ -13,8 +13,9 @@ TensorFlow 1.15.0, 1.15.1, 1.15.2, 1.15.3, 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.1.1,
2.2.0, 2.3.0.
### Patches
We have patched the issue in
[d58c96946b](https://github.com/tensorflow/tensorflow/commit/d58c96946b) and
will release patch releases for all versions between 1.15 and 2.3.
We recommend users to upgrade to TensorFlow 1.15.4, 2.0.3, 2.1.2, 2.2.1, or
2.3.1.

View File

@ -8,8 +8,14 @@ A crafted TFLite model can force a node to have as input a tensor backed by a
`nullptr` buffer. This can be achieved by changing a buffer index in the
flatbuffer serialization to convert a read-only tensor to a read-write one. The
runtime assumes that these buffers are written to before a possible read, hence
they are [initialized with
`nullptr`](https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/lite/core/subgraph.cc#L1224-L1227):
```cc
TfLiteTensorReset(type, name, ConvertArrayToTfLiteIntArray(rank, dims),
                  GetLegacyQuantization(quantization),
                  /*buffer=*/nullptr, required_bytes, allocation_type,
                  nullptr, is_variable, &tensor);
```
However, by changing the buffer index for a tensor and implicitly converting
that tensor to be a read-write one, as there is nothing in the model that writes
@ -20,8 +26,9 @@ TensorFlow 1.15.0, 1.15.1, 1.15.2, 1.15.3, 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.1.1,
2.2.0, 2.3.0.
### Patches
We have patched the issue in
[0b5662bc](https://github.com/tensorflow/tensorflow/commit/0b5662bc) and will
release patch releases for all versions between 1.15 and 2.3.
We recommend users to upgrade to TensorFlow 1.15.4, 2.0.3, 2.1.2, 2.2.1, or
2.3.1.

View File

@ -4,9 +4,17 @@
CVE-2020-15208
### Impact
When determining the common dimension size of two tensors, [TFLite uses a
`DCHECK`](https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/lite/kernels/internal/types.h#L437-L442)
which is a no-op outside of debug compilation modes:
```cc
// Get common shape dim, DCHECKing that they all agree.
inline int MatchingDim(const RuntimeShape& shape1, int index1,
                       const RuntimeShape& shape2, int index2) {
  TFLITE_DCHECK_EQ(shape1.Dims(index1), shape2.Dims(index2));
  return shape1.Dims(index1);
}
```
Since the function always returns the dimension of the first tensor, malicious
attackers can craft cases where this is larger than that of the second tensor.
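A standalone sketch (not TensorFlow code) of why such a check offers no
protection in a release build, using the standard `assert` macro as a stand-in
for `TFLITE_DCHECK_EQ`:
```cc
// Standalone sketch, not TensorFlow code: assert() stands in for the DCHECK.
// With NDEBUG defined (the usual optimized build) the check compiles to
// nothing, so mismatched dimensions are returned without any diagnostic.
#include <cassert>
#include <iostream>

int MatchingDimSketch(int dim1, int dim2) {
  assert(dim1 == dim2);  // disappears when built with -DNDEBUG
  return dim1;           // always trusts the first tensor's dimension
}

int main() {
  // g++ -DNDEBUG sketch.cc && ./a.out  -> prints 16, no error
  std::cout << MatchingDimSketch(16, 4) << "\n";
}
```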
@ -18,8 +26,9 @@ TensorFlow 1.15.0, 1.15.1, 1.15.2, 1.15.3, 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.1.1,
2.2.0, 2.3.0.
### Patches
We have patched the issue in
[8ee24e7949a20](https://github.com/tensorflow/tensorflow/commit/8ee24e7949a20)
and will release patch releases for all versions between 1.15 and 2.3.
We recommend users to upgrade to TensorFlow 1.15.4, 2.0.3, 2.1.2, 2.2.1, or
2.3.1.

View File

@ -6,8 +6,15 @@ CVE-2020-15207
### Impact
To mimic Python's indexing with negative values, TFLite uses `ResolveAxis` to
convert negative values to positive indices. However, the check that the
converted index is valid is [only present in debug
builds](https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/lite/kernels/internal/reference/reduce.h#L68-L72):
```cc
// Handle negative index. A positive index 'p_idx' can be represented as a
// negative index 'n_idx' as: n_idx = p_idx-num_dims
// eg: For num_dims=3, [0, 1, 2] is the same as [-3, -2, -1] */
int current = axis[idx] < 0 ? (axis[idx] + num_dims) : axis[idx];
TFLITE_DCHECK(current >= 0 && current < num_dims);
```
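For illustration, a standalone sketch (not TensorFlow code) of the same
conversion arithmetic, showing that an out-of-range negative axis stays
negative after the adjustment:
```cc
// Standalone sketch, not TensorFlow code: the Python-style negative-axis
// adjustment leaves out-of-range values invalid, and only a debug-only check
// stands between them and the indexing that follows.
#include <iostream>

int main() {
  const int num_dims = 3;
  for (int axis : {-1, -3, -5, 7}) {
    int current = axis < 0 ? axis + num_dims : axis;
    bool valid = current >= 0 && current < num_dims;
    std::cout << "axis " << axis << " -> " << current
              << (valid ? "" : "  (out of range)") << "\n";
  }
}
```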
If the `DCHECK` does not trigger, then code execution moves ahead with a
negative index. This, in turn, results in accessing data out of bounds which
@ -18,8 +25,9 @@ TensorFlow 1.15.0, 1.15.1, 1.15.2, 1.15.3, 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.1.1,
2.2.0, 2.3.0.
### Patches
We have patched the issue in
[2d88f470dea2671b430884260f3626b1fe99830a](https://github.com/tensorflow/tensorflow/commit/2d88f470dea2671b430884260f3626b1fe99830a)
and will release patch releases for all versions between 1.15 and 2.3.
We recommend users to upgrade to TensorFlow 1.15.4, 2.0.3, 2.1.2, 2.2.1, or
2.3.1.

View File

@ -9,21 +9,26 @@ required keys results in segfaults and data corruption while loading the model.
This can cause a denial of service in products using `tensorflow-serving` or
other inference-as-a-service installments.
We have added fixes to this in
[f760f88b4267d981e13f4b302c437ae800445968](https://github.com/tensorflow/tensorflow/commit/f760f88b4267d981e13f4b302c437ae800445968)
and
[fcfef195637c6e365577829c4d67681695956e7d](https://github.com/tensorflow/tensorflow/commit/fcfef195637c6e365577829c4d67681695956e7d)
(both going into TensorFlow 2.2.0 and 2.3.0 but not yet backported to earlier
versions). However, this was not enough, as #41097 reports a different failure
mode.
### Vulnerable Versions
TensorFlow 1.15.0, 1.15.1, 1.15.2, 1.15.3, 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.1.1,
2.2.0, 2.3.0.
### Patches
We have patched the issue in
[adf095206f25471e864a8e63a0f1caef53a0e3a6](https://github.com/tensorflow/tensorflow/commit/adf095206f25471e864a8e63a0f1caef53a0e3a6)
and will release patch releases for all versions between 1.15 and 2.3. Patch
releases for versions between 1.15 and 2.1 will also contain cherry-picks of
[f760f88b4267d981e13f4b302c437ae800445968](https://github.com/tensorflow/tensorflow/commit/f760f88b4267d981e13f4b302c437ae800445968)
and
[fcfef195637c6e365577829c4d67681695956e7d](https://github.com/tensorflow/tensorflow/commit/fcfef195637c6e365577829c4d67681695956e7d).
We recommend users to upgrade to TensorFlow 1.15.4, 2.0.3, 2.1.2, 2.2.1, or
2.3.1.

View File

@ -24,8 +24,9 @@ TensorFlow 1.15.0, 1.15.1, 1.15.2, 1.15.3, 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.1.1,
2.2.0, 2.3.0.
### Patches
We have patched the issue in
[0462de5b544ed4731aa2fb23946ac22c01856b80](https://github.com/tensorflow/tensorflow/commit/0462de5b544ed4731aa2fb23946ac22c01856b80)
and will release patch releases for all versions between 1.15 and 2.3.
We recommend users to upgrade to TensorFlow 1.15.4, 2.0.3, 2.1.2, 2.2.1, or
2.3.1.

View File

@ -6,8 +6,11 @@ CVE-2020-15204
### Impact
In eager mode, TensorFlow does not set the session state. Hence, calling
`tf.raw_ops.GetSessionHandle` or `tf.raw_ops.GetSessionHandleV2` results in a
[null pointer
dereference](https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/core/kernels/session_ops.cc#L45):
```cc
int64 id = ctx->session_state()->GetNewId();
```
In the above snippet, in eager mode, `ctx->session_state()` returns `nullptr`.
Since the code immediately dereferences this, we get a segmentation fault.
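The generic shape of the problem, and of the fix, is sketched below in
standalone C++ (this is not the actual TensorFlow patch): a pointer that is
only populated in one execution mode must be checked before it is used.
```cc
// Standalone sketch, not the TensorFlow patch: a method reached through a
// pointer that is null in one execution mode must be guarded before use.
#include <iostream>

struct SessionState {
  long GetNewId() { return 42; }
};

struct Ctx {
  SessionState* session_state;  // left null in the "eager" mode of this sketch
};

long GetHandleId(Ctx* ctx) {
  if (ctx->session_state == nullptr) {  // the kind of check that was missing
    std::cerr << "no session state available\n";
    return -1;
  }
  return ctx->session_state->GetNewId();
}

int main() {
  Ctx eager{nullptr};
  std::cout << GetHandleId(&eager) << "\n";  // prints -1 instead of crashing
}
```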
@ -17,8 +20,9 @@ TensorFlow 1.15.0, 1.15.1, 1.15.2, 1.15.3, 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.1.1,
2.2.0, 2.3.0.
### Patches
We have patched the issue in
[9a133d73ae4b4664d22bd1aa6d654fec13c52ee1](https://github.com/tensorflow/tensorflow/commit/9a133d73ae4b4664d22bd1aa6d654fec13c52ee1)
and will release patch releases for all versions between 1.15 and 2.3.
We recommend users to upgrade to TensorFlow 1.15.4, 2.0.3, 2.1.2, 2.2.1, or
2.3.1.

View File

@ -7,8 +7,17 @@ CVE-2020-15203
By controlling the `fill` argument of
[`tf.strings.as_string`](https://www.tensorflow.org/api_docs/python/tf/strings/as_string),
a malicious attacker is able to trigger a format string vulnerability due to the
way the internal format used in a `printf` call is
[constructed](https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/core/kernels/as_string_op.cc#L68-L74):
```cc
format_ = "%";
if (width > -1) {
  strings::Appendf(&format_, "%s%d", fill_string.c_str(), width);
}
if (precision > -1) {
  strings::Appendf(&format_, ".%d", precision);
}
```
This can result in unexpected output:
```python
@ -54,8 +63,9 @@ TensorFlow 1.15.0, 1.15.1, 1.15.2, 1.15.3, 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.1.1,
2.2.0, 2.3.0.
### Patches
We have patched the issue in
[33be22c65d86256e6826666662e40dbdfe70ee83](https://github.com/tensorflow/tensorflow/commit/33be22c65d86256e6826666662e40dbdfe70ee83)
and will release patch releases for all versions between 1.15 and 2.3.
We recommend users to upgrade to TensorFlow 1.15.4, 2.0.3, 2.1.2, 2.2.1, or
2.3.1.

View File

@ -4,14 +4,24 @@
CVE-2020-15202
### Impact
The [`Shard`
API](https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/core/util/work_sharder.h#L59-L60)
in TensorFlow expects the last argument to be a function taking two `int64`
(i.e., `long long`) arguments:
```cc
void Shard(int max_parallelism, thread::ThreadPool* workers, int64 total,
           int64 cost_per_unit, std::function<void(int64, int64)> work);
```
However, there are several places in TensorFlow where a lambda taking `int` or
`int32` arguments is [being
used](https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/core/kernels/random_op.cc#L204-L205):
```cc
auto DoWork = [samples_per_alpha, num_alphas, &rng, samples_flat,
               alpha_flat](int start_output, int limit_output) {...};
Shard(worker_threads.num_threads, worker_threads.workers,
      num_alphas * samples_per_alpha, kElementCost, DoWork);
```
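A standalone sketch (not TensorFlow code) of the narrowing, assuming the usual
32-bit `int`: a callable declared with `int` parameters silently truncates the
64-bit arguments it is invoked with.
```cc
// Standalone sketch, not TensorFlow code: storing a lambda that takes int in a
// std::function declared with int64_t parameters silently narrows the
// arguments at the call boundary (assuming 32-bit int).
#include <cstdint>
#include <functional>
#include <iostream>

int main() {
  std::function<void(int64_t, int64_t)> work =
      [](int start, int limit) {  // narrower parameter types than the wrapper
        std::cout << "start=" << start << " limit=" << limit << "\n";
      };
  work(0, (int64_t{1} << 32) + 5);  // limit prints as 5, not 4294967301
}
```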
In these cases, if the amount of work to be parallelized is large enough,
integer truncation occurs. Depending on how the two arguments of the lambda are
@ -23,9 +33,11 @@ TensorFlow 1.15.0, 1.15.1, 1.15.2, 1.15.3, 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.1.1,
2.2.0, 2.3.0.
### Patches
We have patched the issue in
[27b417360cbd671ef55915e4bb6bb06af8b8a832](https://github.com/tensorflow/tensorflow/commit/27b417360cbd671ef55915e4bb6bb06af8b8a832)
and
[ca8c013b5e97b1373b3bb1c97ea655e69f31a575](https://github.com/tensorflow/tensorflow/commit/ca8c013b5e97b1373b3bb1c97ea655e69f31a575).
We will release patch releases for all versions between 1.15 and 2.3.
We recommend users to upgrade to TensorFlow 1.15.4, 2.0.3, 2.1.2, 2.2.1, or
2.3.1.

View File

@ -7,8 +7,16 @@ CVE-2020-15201
The `RaggedCountSparseOutput` implementation does not validate that the input
arguments form a valid ragged tensor. In particular, there is no validation that
the values in the `splits` tensor generate a valid partitioning of the `values`
tensor. Hence, this code is prone to [heap buffer
overflow](https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/core/kernels/count_ops.cc#L248-L251):
```cc
for (int idx = 0; idx < num_values; ++idx) {
  while (idx >= splits_values(batch_idx)) {
    batch_idx++;
  }
  // ...
}
```
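As a standalone illustration (not TensorFlow code), when the last split value
is smaller than `num_values` the inner loop keeps advancing the batch index
past the end of the splits array; the bounds check in the sketch exists only to
make the overrun visible:
```cc
// Standalone sketch, not TensorFlow code: with a splits array whose last entry
// is smaller than num_values, the inner while loop walks batch_idx off the end.
// The explicit bounds check here only exists to report the overrun.
#include <iostream>
#include <vector>

int main() {
  std::vector<int> splits = {0, 2, 3};  // claims only 3 values
  const int num_values = 5;             // but 5 values are supplied
  size_t batch_idx = 0;
  for (int idx = 0; idx < num_values; ++idx) {
    while (batch_idx < splits.size() && idx >= splits[batch_idx]) {
      ++batch_idx;
    }
    if (batch_idx >= splits.size()) {
      std::cout << "idx " << idx << ": batch_idx ran past the splits buffer\n";
      break;
    }
  }
}
```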
If `splits_values` does not end with a value that is at least `num_values`, then
the `while` loop condition will trigger a read outside of the bounds of
@ -18,8 +26,9 @@ If `split_values` does not end with a value at least `num_values` then the
TensorFlow 2.3.0.
### Patches
We have patched the issue in
[3cbb917b4714766030b28eba9fb41bb97ce9ee02](https://github.com/tensorflow/tensorflow/commit/3cbb917b4714766030b28eba9fb41bb97ce9ee02)
and will release a patch release.
We recommend users to upgrade to TensorFlow 2.3.1.

View File

@ -33,8 +33,9 @@ Trying to access that in the user code results in a segmentation fault.
TensorFlow 2.3.0.
### Patches
We have patched the issue in
[3cbb917b4714766030b28eba9fb41bb97ce9ee02](https://github.com/tensorflow/tensorflow/commit/3cbb917b4714766030b28eba9fb41bb97ce9ee02)
and will release a patch release.
We recommend users to upgrade to TensorFlow 2.3.1.

View File

@ -7,8 +7,12 @@ CVE-2020-15199
The `RaggedCountSparseOutput` does not validate that the input arguments form a
valid ragged tensor. In particular, there is no validation that the `splits`
tensor has the minimum required number of elements. Code uses this quantity to
[initialize a different data
structure](https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/core/kernels/count_ops.cc#L241-L244):
```cc
int num_batches = splits.NumElements() - 1;
auto per_batch_counts = BatchedMap<W>(num_batches);
```
Since `BatchedMap` is equivalent to a vector, it needs to have at least one
element to not be `nullptr`. If a user passes a `splits` tensor that is empty or
@ -19,8 +23,9 @@ system.
TensorFlow 2.3.0.
### Patches
We have patched the issue in
[3cbb917b4714766030b28eba9fb41bb97ce9ee02](https://github.com/tensorflow/tensorflow/commit/3cbb917b4714766030b28eba9fb41bb97ce9ee02)
and will release a patch release.
We recommend users to upgrade to TensorFlow 2.3.1.

View File

@ -7,8 +7,15 @@ CVE-2020-15198
The `SparseCountSparseOutput` implementation does not validate that the input
arguments form a valid sparse tensor. In particular, there is no validation that
the `indices` tensor has the same shape as the `values` one. The values in these
tensors are always [accessed in
parallel](https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/core/kernels/count_ops.cc#L193-L195):
```cc
for (int idx = 0; idx < num_values; ++idx) {
  int batch = is_1d ? 0 : indices_values(idx, 0);
  const auto& value = values_values(idx);
  // ...
}
```
Thus, a shape mismatch can result in accesses outside the bounds of heap
allocated buffers.
@ -17,8 +24,9 @@ allocated buffers.
TensorFlow 2.3.0.
### Patches
We have patched the issue in
[3cbb917b4714766030b28eba9fb41bb97ce9ee02](https://github.com/tensorflow/tensorflow/commit/3cbb917b4714766030b28eba9fb41bb97ce9ee02)
and will release a patch release.
We recommend users to upgrade to TensorFlow 2.3.1.

View File

@ -7,8 +7,11 @@ CVE-2020-15197
The `SparseCountSparseOutput` implementation does not validate that the input
arguments form a valid sparse tensor. In particular, there is no validation that
the `indices` tensor has rank 2. This tensor must be a matrix because code
assumes its elements are [accessed as elements of a
matrix](https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/core/kernels/count_ops.cc#L185):
```cc
const auto indices_values = indices.matrix<int64>();
```
However, malicious users can pass in tensors of different rank, resulting in a
`CHECK` assertion failure and a crash. This can be used to cause denial of
@ -19,8 +22,9 @@ of the input sparse tensor.
TensorFlow 2.3.0.
### Patches
We have patched the issue in
[3cbb917b4714766030b28eba9fb41bb97ce9ee02](https://github.com/tensorflow/tensorflow/commit/3cbb917b4714766030b28eba9fb41bb97ce9ee02)
and will release a patch release.
We recommend users to upgrade to TensorFlow 2.3.1.

View File

@ -6,13 +6,28 @@ CVE-2020-15196
### Impact
The `SparseCountSparseOutput` and `RaggedCountSparseOutput` implementations
don't validate that the `weights` tensor has the same shape as the data. The
check exists for `DenseCountSparseOutput`, where both tensors are [fully
specified](https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/core/kernels/count_ops.cc#L110-L117):
```cc
if (use_weights) {
  OP_REQUIRES(
      context, weights.shape() == data.shape(),
      errors::InvalidArgument(
          "Weights and data must have the same shape. Weight shape: ",
          weights.shape().DebugString(),
          "; data shape: ", data.shape().DebugString()));
}
```
In the sparse and ragged count ops, the weights are still accessed [in parallel
with the
data](https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/core/kernels/count_ops.cc#L199-L201):
```cc
for (int idx = 0; idx < num_values; ++idx) {
  int batch = is_1d ? 0 : indices_values(idx, 0);
  const auto& value = values_values(idx);
  per_batch_counts[batch][value] += weight_values(idx);
}
```
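As a standalone illustration (not TensorFlow code) of what the missing guard
allows, indexing a shorter weights array in lockstep with the values walks past
its end; `vector::at()` is used here purely to surface the overrun:
```cc
// Standalone sketch, not TensorFlow code: when weights has fewer entries than
// values, lockstep indexing runs past the end of weights. vector::at() is used
// only so the out-of-bounds access is reported instead of silently reading heap.
#include <iostream>
#include <stdexcept>
#include <vector>

int main() {
  std::vector<double> values = {1, 2, 3, 4};
  std::vector<double> weights = {0.5, 0.5};  // shorter than values
  double total = 0;
  try {
    for (size_t idx = 0; idx < values.size(); ++idx) {
      total += values[idx] * weights.at(idx);  // throws at idx == 2
    }
    std::cout << "total = " << total << "\n";
  } catch (const std::out_of_range&) {
    std::cout << "weights shorter than values: out-of-bounds access caught\n";
  }
}
```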
But, since there is no validation, a user passing fewer weights than the values
for the tensors can generate a read from outside the bounds of the heap buffer
@ -22,8 +37,9 @@ allocated for the weights.
TensorFlow 2.3.0.
### Patches
We have patched the issue in
[3cbb917b4714766030b28eba9fb41bb97ce9ee02](https://github.com/tensorflow/tensorflow/commit/3cbb917b4714766030b28eba9fb41bb97ce9ee02)
and will release a patch release.
We recommend users to upgrade to TensorFlow 2.3.1.

View File

@ -4,8 +4,11 @@
CVE-2020-15195
### Impact
The implementation of `SparseFillEmptyRowsGrad` uses a [double indexing
pattern](https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/core/kernels/sparse_fill_empty_rows_op.cc#L263-L269):
```cc
d_values(i) = grad_values(reverse_index_map(i));
```
It is possible for `reverse_index_map(i)` to be an index outside the bounds of
`grad_values`, thus resulting in a heap buffer overflow.
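A standalone sketch (not TensorFlow code) of the double-indexing pattern and of
the bounds check that prevents the overflow:
```cc
// Standalone sketch, not TensorFlow code: every index read from the first
// array must be validated against the size of the second before it is used.
#include <iostream>
#include <vector>

int main() {
  std::vector<long> reverse_index_map = {0, 2, 7};  // 7 is out of range
  std::vector<float> grad_values = {1.0f, 2.0f, 3.0f};
  std::vector<float> d_values(reverse_index_map.size(), 0.0f);

  for (size_t i = 0; i < reverse_index_map.size(); ++i) {
    long j = reverse_index_map[i];
    if (j < 0 || static_cast<size_t>(j) >= grad_values.size()) {
      std::cout << "index " << j << " is out of bounds, rejecting input\n";
      return 1;
    }
    d_values[i] = grad_values[j];  // safe only after the check above
  }
}
```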
@ -15,8 +18,9 @@ TensorFlow 1.15.0, 1.15.1, 1.15.2, 1.15.3, 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.1.1,
2.2.0, 2.3.0.
### Patches
We have patched the issue in
[390611e0d45c5793c7066110af37c8514e6a6c54](https://github.com/tensorflow/tensorflow/commit/390611e0d45c5793c7066110af37c8514e6a6c54)
and will release a patch release for all affected versions.
We recommend users to upgrade to TensorFlow 1.15.4, 2.0.3, 2.1.2, 2.2.1, or
2.3.1.

View File

@ -4,9 +4,18 @@
CVE-2020-15194
### Impact
The `SparseFillEmptyRowsGrad` implementation has [incomplete validation of the
shapes of its
arguments](https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/core/kernels/sparse_fill_empty_rows_op.cc#L235-L241):
```cc
OP_REQUIRES(
    context, TensorShapeUtils::IsVector(reverse_index_map_t->shape()),
    errors::InvalidArgument("reverse_index_map must be a vector, saw: ",
                            reverse_index_map_t->shape().DebugString()));
const auto reverse_index_map = reverse_index_map_t->vec<int64>();
const auto grad_values = grad_values_t->vec<T>();
```
Although `reverse_index_map_t` and `grad_values_t` are accessed in a similar
pattern, only `reverse_index_map_t` is validated to be of proper shape. Hence,
@ -18,8 +27,9 @@ TensorFlow 1.15.0, 1.15.1, 1.15.2, 1.15.3, 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.1.1,
2.2.0, 2.3.0.
### Patches
We have patched the issue in
[390611e0d45c5793c7066110af37c8514e6a6c54](https://github.com/tensorflow/tensorflow/commit/390611e0d45c5793c7066110af37c8514e6a6c54)
and will release a patch release for all affected versions.
We recommend users to upgrade to TensorFlow 1.15.4, 2.0.3, 2.1.2, 2.2.1, or
2.3.1.

View File

@ -4,8 +4,13 @@
CVE-2020-15193
### Impact
The implementation of `dlpack.to_dlpack` can be made to use uninitialized
memory resulting in further memory corruption. This is because the pybind11
glue code [assumes that the argument is a
tensor](https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/python/tfe_wrapper.cc#L1361):
```cc
TFE_TensorHandle* thandle = EagerTensor_Handle(eager_tensor_pyobject_ptr);
```
However, there is nothing stopping users from passing in a Python object instead of a tensor.
```python
@ -16,8 +21,13 @@ In [2]: tf.experimental.dlpack.to_dlpack([2])
...
```
The uninitialized memory address is due to a
[`reinterpret_cast`](https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/python/eager/pywrap_tensor.cc#L848-L850):
```cc
TFE_TensorHandle* EagerTensor_Handle(const PyObject* o) {
  return reinterpret_cast<const EagerTensor*>(o)->handle;
}
```
Since the `PyObject` is a Python object, not a TensorFlow Tensor, the cast to `EagerTensor` fails.
@ -25,8 +35,9 @@ Since the `PyObject` is a Python object, not a TensorFlow Tensor, the cast to `E
TensorFlow 2.2.0, 2.3.0.
### Patches
We have patched the issue in
[22e07fb204386768e5bcbea563641ea11f96ceb8](https://github.com/tensorflow/tensorflow/commit/22e07fb204386768e5bcbea563641ea11f96ceb8)
and will release a patch release for all affected versions.
We recommend users to upgrade to TensorFlow 2.2.1 or 2.3.1.

View File

@ -5,15 +5,25 @@ CVE-2020-15192
### Impact
If a user passes a list of strings to `dlpack.to_dlpack`, there is a memory leak
following an expected [validation failure](https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/c/eager/dlpack.cc#L100-L104):
```cc
status->status = tensorflow::errors::InvalidArgument(
    DataType_Name(static_cast<DataType>(data_type)),
    " is not supported by dlpack");
```
The leaked memory is [allocated
here](https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/c/eager/dlpack.cc#L256):
```cc
auto* tf_dlm_tensor_ctx = new TfDlManagedTensorCtx(tensor_ref);
```
The issue occurs because the `status` argument during validation failures [is not
properly checked](https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/c/eager/dlpack.cc#L265-L267):
```cc
dlm_tensor->dl_tensor.data = TFE_TensorHandleDevicePointer(h, status);
dlm_tensor->dl_tensor.dtype = GetDlDataType(data_type, status);
```
Since each of the above methods can return an error status, the `status` value
must be checked before continuing.
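A standalone sketch (not the TensorFlow patch) of the leak pattern and of the
kind of check that avoids it: an error path must not abandon the
already-allocated context object.
```cc
// Standalone sketch, not the TensorFlow patch: an error path that returns
// without releasing the context object leaks it; checking the status (or
// holding the object in a unique_ptr) prevents that.
#include <iostream>
#include <memory>

struct ManagedCtx { /* owns buffers in the real code */ };

bool FillTensor(bool input_is_valid) { return input_is_valid; }  // sets "status"

ManagedCtx* Export(bool input_is_valid) {
  auto ctx = std::make_unique<ManagedCtx>();  // would have been a raw `new`
  if (!FillTensor(input_is_valid)) {
    // With a raw pointer and no status check, returning here leaks ctx.
    return nullptr;                           // unique_ptr frees it instead
  }
  return ctx.release();                       // success: caller takes ownership
}

int main() {
  ManagedCtx* out = Export(false);
  std::cout << (out ? "exported\n" : "rejected without leaking\n");
  delete out;  // safe: deleting nullptr is a no-op
}
```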
@ -22,8 +32,9 @@ must be checked before continuing.
TensorFlow 2.2.0, 2.3.0.
### Patches
We have patched the issue in
[22e07fb204386768e5bcbea563641ea11f96ceb8](https://github.com/tensorflow/tensorflow/commit/22e07fb204386768e5bcbea563641ea11f96ceb8)
and will release a patch release for all affected versions.
We recommend users to upgrade to TensorFlow 2.2.1 or 2.3.1.

View File

@ -8,11 +8,19 @@ If a user passes an invalid argument to `dlpack.to_dlpack` the expected
validations will cause variables to bind to `nullptr` while setting a `status`
variable to the error condition.
However, this `status` argument is not [properly
checked](https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/c/eager/dlpack.cc#L265-L267):
```cc
dlm_tensor->dl_tensor.data = TFE_TensorHandleDevicePointer(h, status);
dlm_tensor->dl_tensor.dtype = GetDlDataType(data_type, status);
```
Hence, code following these methods will [bind references to null
pointers](https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/c/eager/dlpack.cc#L279-L285):
```cc
dlm_tensor->dl_tensor.shape = &(*shape_arr)[0];
dlm_tensor->dl_tensor.strides = &(*stride_arr)[0];
```
This is undefined behavior and reported as an error if compiling with
`-fsanitize=null`.
@ -21,8 +29,9 @@ This is undefined behavior and reported as an error if compiling with
TensorFlow 2.2.0, 2.3.0.
### Patches
We have patched the issue in
[22e07fb204386768e5bcbea563641ea11f96ceb8](https://github.com/tensorflow/tensorflow/commit/22e07fb204386768e5bcbea563641ea11f96ceb8)
and will release a patch release for all affected versions.
We recommend users to upgrade to TensorFlow 2.2.1 or 2.3.1.

View File

@ -10,8 +10,16 @@ operation takes as input a tensor and a boolean and outputs two tensors.
Depending on the boolean value, one of the tensors is exactly the input tensor
whereas the other one should be an empty tensor.
However, the eager runtime [traverses all tensors in the
output](https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/core/common_runtime/eager/kernel_and_device.cc#L308-L313):
```cc
if (outputs != nullptr) {
  outputs->clear();
  for (int i = 0; i < context.num_outputs(); ++i) {
    outputs->push_back(Tensor(*context.mutable_output(i)));
  }
}
```
Since only one of the tensors is defined, the other one is `nullptr`, hence we
are binding a reference to `nullptr`. This is undefined behavior and reported as
@ -23,8 +31,9 @@ TensorFlow 1.15.0, 1.15.1, 1.15.2, 1.15.3, 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.1.1,
2.2.0, 2.3.0.
### Patches
We have patched the issue in
[da8558533d925694483d2c136a9220d6d49d843c](https://github.com/tensorflow/tensorflow/commit/da8558533d925694483d2c136a9220d6d49d843c)
and will release a patch release for all affected versions.
We recommend users to upgrade to TensorFlow 1.15.4, 2.0.3, 2.1.2, 2.2.1, or
2.3.1.