* Internal cleanup PiperOrigin-RevId: 167636242 * Move the Keras API to tf.keras. PiperOrigin-RevId: 167638421 * Automated g4 rollback of changelist 167604306 PiperOrigin-RevId: 167639833 * Call HloComputation.Accept instead of HloInstruction.Accept to get all instructions profiled. RELNOTES: n/a PiperOrigin-RevId: 167640259 * Add fast math attributes to all generated methods when fast math is enabled. RELNOTES: n/a PiperOrigin-RevId: 167646637 * Extended ScratchSpace to expose its underlying scratch tensor object. PiperOrigin-RevId: 167649551 * Change zip(...)[1] to list(zip(...))[1], for Python 3 compatibility. PiperOrigin-RevId: 167654035 * Add scoped timer to log JIT compile times. RELNOTES: n/a PiperOrigin-RevId: 167656720 * Verify that predictions are in the expected range for ops that use thresholds, e.g. tf.contrib.metrics.streaming_auc. PiperOrigin-RevId: 167658134 * Internal change. PiperOrigin-RevId: 167658401 * Fix list formatting. PiperOrigin-RevId: 167660250 * Enable Java test. PiperOrigin-RevId: 167660276 * Add shape functions on debug ops. PiperOrigin-RevId: 167668811 * Increase session_bundle_test to a medium test. PiperOrigin-RevId: 167672587 * Include layout of convolution input data in the op_profile. PiperOrigin-RevId: 167680208 * Fix tf.sparse_add for SparseTensor with _ref typed values. Example: st = tf.SparseTensor( indices=[[1]], values=tf.Variable([1.0]), dense_shape=[1]) tf.sparse_add(st, st) PiperOrigin-RevId: 167681121 * Fix conversion to explicit scalar broadcast. The dimensions field of a broadcast HLO op is meant to be populated with the dimensions that are broadcasted, which in the case of a scalar is the empty vector. Generally, the rank of the operand of a broadcast op should always equal the size of the dimensions vector. PiperOrigin-RevId: 167686946 * Add 'unknown shape' shape functions on deprecated linalg ops. PiperOrigin-RevId: 167719029 * Be more careful in IsInitialized, and log when it is called on an unknown node_id.
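One of the changes above replaces `zip(...)[1]` with `list(zip(...))[1]` for Python 3 compatibility. The reason can be sketched in a few lines; the values here are illustrative, not the actual TensorFlow call site:

```python
# In Python 3, zip() returns a lazy iterator, so indexing it directly
# raises TypeError ('zip' object is not subscriptable). In Python 2 it
# returned a list, so zip(...)[1] used to work.
pairs = zip([1, 2, 3], ["a", "b", "c"])

# Materializing the iterator into a list works in both versions:
second = list(zip([1, 2, 3], ["a", "b", "c"]))[1]
print(second)  # (2, 'b')
```
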
PiperOrigin-RevId: 167722344 * tfdbg: Refactor graph-processing code out of debug_data.py. The basic idea is to separate the code in debug_data.py that handles graph structures into its own module (debug_graphs.py). This tackles an existing TODO item to simplify the code in debug_data.DebugDumpDir. In a later CL, code will be added to debug_graphs.DebugGraph to allow reconstruction of the original GraphDef, i.e., the GraphDef without the Copy* and Debug* nodes inserted by tfdbg. This will be useful for, among other things, the TensorBoard Debugger Plugin. PiperOrigin-RevId: 167726113 * internal PiperOrigin-RevId: 167727508 * Update MaxPoolV2Shape to support NCHW_VECT_C. PiperOrigin-RevId: 167732437 * Deleting tf.contrib.learn.dnn benchmark tests. PiperOrigin-RevId: 167741308 * Fix off-by-one documentation error. sequence_lengths is the actual length of the sequence and therefore should not be used for zero-based indexing. The code is correct but the documentation was misleading. PiperOrigin-RevId: 167742082 * contrib summaries work in eager-graph mode (with defun). As a side effect, fix issues related to using eager-defined variables in graph mode. PiperOrigin-RevId: 167744121 * Fix minor documentation error in ZlibInputStream. PiperOrigin-RevId: 167745218 * Sets the distributed training related properties of RunConfig based on TF_CONFIG. PiperOrigin-RevId: 167752997 * Improved documentation about eval ops in EstimatorSpec. PiperOrigin-RevId: 167753099 * Automated g4 rollback of changelist 156748870 PiperOrigin-RevId: 167753805 * Make cuda_solvers_gpu.cu.cc compile with nvcc8. PiperOrigin-RevId: 167754383 * Add csv dataset example to get_started/regression. PiperOrigin-RevId: 167754634 * Switches to OrderedDict to make the dictionary order deterministic so we have less randomness from graph building.
PiperOrigin-RevId: 167755072 * Add int8 version of fused_conv2d_bias_activation operator for the forward phase, and support side_input and scaling parameters in float and int8 versions. PiperOrigin-RevId: 167763219 * Make the text summary write no plugin data content. This is actually a safe removal because no logic makes use of the content of text plugin data. PiperOrigin-RevId: 167763880 * Avoid unnecessary buffer allocations & deallocations. Before this change, when we reached the end of a file, we would (1) clear the existing buffer (which at large buffer sizes typically involved deallocating it), (2) reserve a buffer (which at large buffer sizes is non-trivial), and (3) realize we had reached EOF, and therefore clear the buffer, deallocating it again. With this change, whenever the buffered reader detects an EOF condition, we remember it, so that we can short-circuit the above logic. The above optimization results in a more than 25x performance improvement for large buffers reading small files. PiperOrigin-RevId: 167766751 * [TF:XLA] In Literal: correctly handle operands with zero elements in Copy. PiperOrigin-RevId: 167769308 * Reduce batch size for resampler backward pass test, to speed up test. PiperOrigin-RevId: 167769539 * Remove `SimpleGraphExecutionState::costs_`, which is unused. PiperOrigin-RevId: 167772120 * Detect cycles when users add a control edge to a graph. PiperOrigin-RevId: 167773598 * Make writer_test avoid setting content to a string. The content field of the PluginData proto is going to be converted into a bytes field, and setting it to a string makes the test fail. Furthermore, the purpose of this test is to make sure that correct data is written, so setting the name of the plugin suffices. PiperOrigin-RevId: 167776457 * Propagate the original stack trace when exceptions caught by MonitoredSession are re-raised. PiperOrigin-RevId: 167781071 * Change trace.py to not access a graph as a default argument.
It checks for None and accesses the default graph inside the function. PiperOrigin-RevId: 167788815 * Added custom metric support for tf.estimator.Estimator. PiperOrigin-RevId: 167788891 * An eager Saver that allows restore on create. PiperOrigin-RevId: 167789332 * Make content field of PluginData a bytes field. The content field had previously been a string field, which had been problematic because string fields can only store UTF-8 strings. This problem can manifest in various ways. For instance, take the precision-recall curve plugin. Its summary collects data that scales in size based on the number of thresholds. When the content field is a string, the summary logic serializes the relevant data proto just fine when we only have a few thresholds (about 100). However, for large numbers of thresholds (i.e., around 200), the summary logic fails to serialize and throws a cryptic error. ValueError: '\x10\xc8\x01' has type str, but isn't valid UTF-8 encoding. Non-UTF-8 strings must be converted to unicode objects before being added. Changing the content field to a bytes field fixes this issue because bytes fields are not restricted to UTF-8 strings. I just happened to have needed a long enough string for the string to no longer be a valid UTF-8 one. PiperOrigin-RevId: 167790594 * Temporarily disable tf_should_use wrapper, since it can cause Python Graph/Operation/Tensor memory leaks. PiperOrigin-RevId: 167790657 * Ensure using "path" as a URI will keep working. PiperOrigin-RevId: 167793848 * Fix typo in graph transforms error message. PiperOrigin-RevId: 167796563 * Merge changes from GitHub. END_PUBLIC --- Commit 607816029
authored by Eugene Brevdo <ebrevdo@google.com> Committed by TensorFlower Gardener <gardener@tensorflow.org>: Extended ScratchSpace to expose its underlying scratch tensor object. PiperOrigin-RevId: 167649551 --- Commit db43fe68e
authored by A. Unique TensorFlower <gardener@tensorflow.org> Committed by TensorFlower Gardener <gardener@tensorflow.org>: Add fast math attributes to all generated methods when fast math is enabled. RELNOTES: n/a PiperOrigin-RevId: 167646637 --- Commit aebe8cc6f
authored by A. Unique TensorFlower <gardener@tensorflow.org> Committed by TensorFlower Gardener <gardener@tensorflow.org>: Call HloComputation.Accept instead of HloInstruction.Accept to get all instructions profiled. RELNOTES: n/a PiperOrigin-RevId: 167640259 --- Commit 0ab137cd8
authored by A. Unique TensorFlower <gardener@tensorflow.org> Committed by TensorFlower Gardener <gardener@tensorflow.org>: BEGIN_PUBLIC Automated g4 rollback of changelist 167604306 PiperOrigin-RevId: 167800256 * Update ops-related pbtxt files. PiperOrigin-RevId: 167802521 * Go: Update generated wrapper functions for TensorFlow ops. PiperOrigin-RevId: 167804076 * Add sloppy_interleave dataset operator. When feeding data at high speed into a model from variable-latency data sources, head-of-line blocking can be a significant concern when using a deterministic input pipeline, such as interleave. This change introduces a new non-deterministic dataset operator that avoids head-of-line blocking. PiperOrigin-RevId: 167810743 * Update ops-related pbtxt files. PiperOrigin-RevId: 167811375 * tfdbg: Fix Python 3 breakage in gRPC debug tests caused by bytes-type plugin_data content. PiperOrigin-RevId: 167812508 * [XLA] Rip CheckFusionNode() out of instruction, and move it into the HLO verifier instead. CheckFusionNode() is linear in the size of the fusion node, and was called once per Fuse(), leading to run-time quadratic in the fusion node's size. PiperOrigin-RevId: 167812735 * Disable tensorflow/contrib/data/python/kernel_tests/sloppy_transformation_dataset_op_test.py in cmake.
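The PluginData change described in the log above (converting a proto `string` field to a `bytes` field) hinges on the fact that the serialized bytes `'\x10\xc8\x01'` quoted in the error are not valid UTF-8. That failure can be reproduced without protobuf at all; this is an illustrative sketch, not the TensorBoard code:

```python
payload = b"\x10\xc8\x01"  # the serialized bytes quoted in the error message

# A proto `string` field must hold valid UTF-8, so storing this payload
# implies a decode step that fails: 0xC8 starts a two-byte UTF-8 sequence,
# but the following 0x01 is not a valid continuation byte.
try:
    payload.decode("utf-8")
    is_valid_utf8 = True
except UnicodeDecodeError:
    is_valid_utf8 = False

print(is_valid_utf8)  # False

# A proto `bytes` field stores the payload verbatim, so no decoding is
# attempted and serialization succeeds regardless of content.
```
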
/* Copyright 2017 The TensorFlow Authors. All Rights Reserved.
|
|
|
|
Licensed under the Apache License, Version 2.0 (the "License");
|
|
you may not use this file except in compliance with the License.
|
|
You may obtain a copy of the License at
|
|
|
|
http://www.apache.org/licenses/LICENSE-2.0
|
|
|
|
Unless required by applicable law or agreed to in writing, software
|
|
distributed under the License is distributed on an "AS IS" BASIS,
|
|
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
|
See the License for the specific language governing permissions and
|
|
limitations under the License.
|
|
==============================================================================*/
|
|
|
|
#include "tensorflow/compiler/xla/literal_util.h"
|
|
|
|
#include <vector>
|
|
|
|
#include "tensorflow/compiler/xla/array3d.h"
|
|
#include "tensorflow/compiler/xla/array4d.h"
|
|
#include "tensorflow/compiler/xla/layout_util.h"
|
|
#include "tensorflow/compiler/xla/shape_util.h"
|
|
#include "tensorflow/compiler/xla/test.h"
|
|
#include "tensorflow/compiler/xla/types.h"
|
|
#include "tensorflow/core/lib/core/status_test_util.h"
|
|
#include "tensorflow/core/platform/macros.h"
|
|
#include "tensorflow/core/platform/types.h"
|
|
|
|
namespace xla {
|
|
namespace {
|
|
|
|
using ::testing::ElementsAre;
|
|
|
|
class LiteralUtilTest : public ::testing::Test {
|
|
protected:
|
|
LiteralUtilTest() {
|
|
Array4D<float> arr4d({
|
|
// clang-format off
|
|
{ // i0=0
|
|
{ // i1=0
|
|
{1, 2, 3}, // i2=0
|
|
{4, 5, 6}, // i2=1
|
|
{7, 8, 9}, // i2=2
|
|
},
|
|
{ // i1=1
|
|
{11, 12, 13},
|
|
{14, 15, 16},
|
|
{17, 18, 19},
|
|
},
|
|
},
|
|
{ // i0=1
|
|
{ // i1=0
|
|
{101, 102, 103},
|
|
{104, 105, 106},
|
|
{107, 108, 109},
|
|
},
|
|
{ // i1=1
|
|
{201, 202, 203}, // i2=0
|
|
{204, 205, 206}, // i2=1
|
|
{207, 208, 209}, // i2=2
|
|
},
|
|
},
|
|
// clang-format on
|
|
});
|
|
|
|
layout_r2_dim0major_ = LayoutUtil::MakeLayout({1, 0});
|
|
layout_r2_dim0minor_ = LayoutUtil::MakeLayout({0, 1});
|
|
layout_r3_dim0major_ = LayoutUtil::MakeLayout({2, 1, 0});
|
|
layout_r3_dim0minor_ = LayoutUtil::MakeLayout({0, 1, 2});
|
|
layout_r4_dim0major_ = LayoutUtil::MakeLayout({3, 2, 1, 0});
|
|
layout_r4_dim0minor_ = LayoutUtil::MakeLayout({0, 1, 2, 3});
|
|
|
|
literal_r4_2x2x3x3_dim0major_ =
|
|
Literal::CreateR4FromArray4DWithLayout<float>(arr4d,
|
|
layout_r4_dim0major_);
|
|
literal_r4_2x2x3x3_dim0minor_ =
|
|
Literal::CreateR4FromArray4DWithLayout<float>(arr4d,
|
|
layout_r4_dim0minor_);
|
|
}
|
|
|
|
Layout layout_r2_dim0major_;
|
|
Layout layout_r2_dim0minor_;
|
|
Layout layout_r3_dim0major_;
|
|
Layout layout_r3_dim0minor_;
|
|
Layout layout_r4_dim0major_;
|
|
Layout layout_r4_dim0minor_;
|
|
std::unique_ptr<Literal> literal_r4_2x2x3x3_dim0major_;
|
|
std::unique_ptr<Literal> literal_r4_2x2x3x3_dim0minor_;
|
|
};
|
|
|
|
TEST_F(LiteralUtilTest, LiteralScalarToString) {
|
|
auto true_lit = Literal::CreateR0<bool>(true);
|
|
ASSERT_EQ("true", true_lit->ToString());
|
|
|
|
auto false_lit = Literal::CreateR0<bool>(false);
|
|
ASSERT_EQ("false", false_lit->ToString());
|
|
|
|
auto u32_lit = Literal::CreateR0<uint32>(42);
|
|
ASSERT_EQ("42", u32_lit->ToString());
|
|
|
|
auto s32_lit = Literal::CreateR0<int32>(-999);
|
|
ASSERT_EQ("-999", s32_lit->ToString());
|
|
|
|
auto f32_lit = Literal::CreateR0<float>(3.14f);
|
|
ASSERT_EQ("3.14", f32_lit->ToString());
|
|
|
|
auto f16_lit = Literal::CreateR0<half>(static_cast<half>(0.5f));
|
|
ASSERT_EQ("0.5", f16_lit->ToString());
|
|
}
|
|
|
|
TEST_F(LiteralUtilTest, LiteralVectorToString) {
|
|
auto pred_vec = Literal::CreateR1<bool>({true, false, true});
|
|
ASSERT_EQ("{101}", pred_vec->ToString());
|
|
}
|
|
|
|
TEST_F(LiteralUtilTest, R2ToString) {
|
|
const auto literal = Literal::CreateR2({{1, 2}, {3, 4}, {5, 6}});
|
|
const string expected = R"(s32[3,2] {
|
|
{ 1, 2 },
|
|
{ 3, 4 },
|
|
{ 5, 6 },
|
|
})";
|
|
ASSERT_EQ(expected, literal->ToString());
|
|
}
|
|
|
|
TEST_F(LiteralUtilTest, R3ToString) {
|
|
const auto literal = Literal::CreateR3({{{1}, {2}}, {{3}, {4}}, {{5}, {6}}});
|
|
const string expected = R"(s32[3,2,1] {
|
|
{ { 1 },
|
|
{ 2 } },
|
|
{ { 3 },
|
|
{ 4 } },
|
|
{ { 5 },
|
|
{ 6 } }
|
|
})";
|
|
ASSERT_EQ(expected, literal->ToString());
|
|
}
|
|
|
|
TEST_F(LiteralUtilTest, TupleToString) {
|
|
auto scalar = Literal::CreateR0<float>(1.0);
|
|
auto matrix = Literal::CreateR2<float>({{1.0, 2.0}, {3.0, 4.0}});
|
|
auto tuple = Literal::MakeTuple({scalar.get(), matrix.get()});
|
|
const string expected = R"((f32[], f32[2,2]) (
|
|
1,
|
|
f32[2,2] {
|
|
{ 1, 2 },
|
|
{ 3, 4 },
|
|
},
|
|
))";
|
|
ASSERT_EQ(expected, tuple->ToString());
|
|
}
|
|
|
|
TEST_F(LiteralUtilTest, CreateR3FromArray3d) {
|
|
// clang-format off
|
|
Array3D<float> array_3d({
|
|
{{1.0f, 2.0f},
|
|
{3.0f, 4.0f},
|
|
{5.0f, 6.0f}},
|
|
{{7.0f, 8.0f},
|
|
{9.0f, 10.0f},
|
|
{11.0f, 12.0f}},
|
|
});
|
|
// clang-format on
|
|
|
|
auto literal = Literal::CreateR3FromArray3D(array_3d);
|
|
EXPECT_THAT(literal->shape().dimensions(), ElementsAre(2, 3, 2));
|
|
string result = literal->ToString();
|
|
const string expected = R"(f32[2,3,2] {
|
|
{ { 1, 2 },
|
|
{ 3, 4 },
|
|
{ 5, 6 } },
|
|
{ { 7, 8 },
|
|
{ 9, 10 },
|
|
{ 11, 12 } }
|
|
})";
|
|
ASSERT_EQ(expected, result);
|
|
}
|
|
|
|
TEST_F(LiteralUtilTest, LiteralR4F32ProjectedStringifies) {
|
|
// clang-format off
|
|
auto literal = Literal::CreateR4Projected<float>({
|
|
{1, 2},
|
|
{1001, 1002},
|
|
{2001, 2002},
|
|
}, /*projection_p=*/1, /*projection_z=*/2);
|
|
// clang-format on
|
|
EXPECT_THAT(literal->shape().dimensions(), ElementsAre(1, 2, 3, 2));
|
|
string result = literal->ToString();
|
|
const string expected = R"(f32[1,2,3,2] {
|
|
{ // i0=0
|
|
{ // i1=0
|
|
{1, 2},
|
|
{1001, 1002},
|
|
{2001, 2002},
|
|
},
|
|
{ // i1=1
|
|
{1, 2},
|
|
{1001, 1002},
|
|
{2001, 2002},
|
|
},
|
|
},
|
|
})";
|
|
ASSERT_EQ(expected, result);
|
|
}
|
|
|
|
TEST_F(LiteralUtilTest, LiteralR4F32Stringifies) {
|
|
EXPECT_THAT(literal_r4_2x2x3x3_dim0major_->shape().dimensions(),
|
|
ElementsAre(2, 2, 3, 3));
|
|
string result = literal_r4_2x2x3x3_dim0major_->ToString();
|
|
const string expected = R"(f32[2,2,3,3] {
|
|
{ // i0=0
|
|
{ // i1=0
|
|
{1, 2, 3},
|
|
{4, 5, 6},
|
|
{7, 8, 9},
|
|
},
|
|
{ // i1=1
|
|
{11, 12, 13},
|
|
{14, 15, 16},
|
|
{17, 18, 19},
|
|
},
|
|
},
|
|
{ // i0=1
|
|
{ // i1=0
|
|
{101, 102, 103},
|
|
{104, 105, 106},
|
|
{107, 108, 109},
|
|
},
|
|
{ // i1=1
|
|
{201, 202, 203},
|
|
{204, 205, 206},
|
|
{207, 208, 209},
|
|
},
|
|
},
|
|
})";
|
|
ASSERT_EQ(expected, result);
|
|
}
|
|
|
|
TEST_F(LiteralUtilTest, EachCellR2F32) {
|
|
// clang-format off
|
|
auto literal = Literal::CreateR2<float>({
|
|
{3.1f, 4.2f},
|
|
{9.3f, 12.4f},
|
|
});
|
|
// clang-format on
|
|
std::vector<std::tuple<int64, int64, string>> seen;
|
|
literal->EachCellAsString(
|
|
[&seen](tensorflow::gtl::ArraySlice<int64> indices, const string& value) {
|
|
seen.emplace_back(indices[0], indices[1], value);
|
|
});
|
|
|
|
using Elem = std::tuple<int64, int64, string>;
|
|
std::vector<Elem> expected = {Elem(0, 0, "3.1"), Elem(0, 1, "4.2"),
|
|
Elem(1, 0, "9.3"), Elem(1, 1, "12.4")};
|
|
EXPECT_EQ(expected, seen);
|
|
}
|
|
|
|
TEST_F(LiteralUtilTest, ScalarEquality) {
|
|
// Test Literal::Equal with scalars.
|
|
auto f32_42 = Literal::CreateR0<float>(42.0);
|
|
auto f32_42_clone = Literal::CreateR0<float>(42.0);
|
|
|
|
EXPECT_TRUE(f32_42->Equal(*f32_42));
|
|
EXPECT_TRUE(f32_42->Equal(*f32_42_clone));
|
|
|
|
auto f32_123 = Literal::CreateR0<float>(123.0);
|
|
EXPECT_FALSE(f32_42->Equal(*f32_123));
|
|
|
|
auto f64_42 = Literal::CreateR0<double>(42.0);
|
|
EXPECT_FALSE(f32_42->Equal(*f64_42));
|
|
}
|
|
|
|
TEST_F(LiteralUtilTest, NonScalarEquality) {
|
|
// Test Literal::Equal with nonscalars.
|
|
auto matrix = Literal::CreateR2<float>({{1.0, 2.0}, {3.0, 4.0}});
|
|
auto matrix_clone = Literal::CreateR2<float>({{1.0, 2.0}, {3.0, 4.0}});
|
|
auto matrix_different = Literal::CreateR2<float>({{4.0, 3.0}, {1.0, 2.0}});
|
|
auto vector_literal = Literal::CreateR1<float>({1.0, 2.0, 3.0, 4.0});
|
|
auto scalar = Literal::CreateR0<float>(1.0);
|
|
|
|
EXPECT_TRUE(matrix->Equal(*matrix));
|
|
EXPECT_TRUE(matrix->Equal(*matrix_clone));
|
|
EXPECT_FALSE(matrix->Equal(*matrix_different));
|
|
EXPECT_FALSE(matrix->Equal(*vector_literal));
|
|
EXPECT_FALSE(matrix->Equal(*scalar));
|
|
}
|
|
|
|
TEST_F(LiteralUtilTest, DifferentLayoutEquality) {
|
|
// Test Literal::Equal with literals which have different layouts.
|
|
auto colmajor = MakeUnique<Literal>();
|
|
*colmajor->mutable_shape() = ShapeUtil::MakeShape(F32, {2, 2});
|
|
*colmajor->mutable_shape()->mutable_layout() = LayoutUtil::MakeLayout({0, 1});
|
|
colmajor->Reserve(4);
|
|
colmajor->Set<float>({0, 0}, 1.0);
|
|
colmajor->Set<float>({0, 1}, 2.0);
|
|
colmajor->Set<float>({1, 0}, 3.0);
|
|
colmajor->Set<float>({1, 1}, 4.0);
|
|
|
|
auto rowmajor = MakeUnique<Literal>();
|
|
*rowmajor->mutable_shape() = ShapeUtil::MakeShape(F32, {2, 2});
|
|
*rowmajor->mutable_shape()->mutable_layout() = LayoutUtil::MakeLayout({1, 0});
|
|
rowmajor->Reserve(4);
|
|
rowmajor->Set<float>({0, 0}, 1.0);
|
|
rowmajor->Set<float>({0, 1}, 2.0);
|
|
rowmajor->Set<float>({1, 0}, 3.0);
|
|
rowmajor->Set<float>({1, 1}, 4.0);
|
|
|
|
EXPECT_TRUE(rowmajor->Equal(*colmajor));
|
|
}
|
|
|
|
TEST_F(LiteralUtilTest, TupleEquality) {
|
|
// Test Literal::Equal with tuples.
|
|
auto scalar = Literal::CreateR0<float>(1.0);
|
|
auto matrix = Literal::CreateR2<float>({{1.0, 2.0}, {3.0, 4.0}});
|
|
auto tuple1 = Literal::MakeTuple({scalar.get(), matrix.get()});
|
|
|
|
// Tuple with the same elements. One element is shared with the original
|
|
// tuple, the other is a clone of the element in the original tuple.
|
|
auto scalar_clone = Literal::CreateR0<float>(1.0);
|
|
auto tuple2 = Literal::MakeTuple({scalar_clone.get(), matrix.get()});
|
|
EXPECT_TRUE(tuple1->Equal(*tuple2));
|
|
|
|
// Tuple with elements reversed.
|
|
auto reversed_tuple = Literal::MakeTuple({matrix.get(), scalar.get()});
|
|
EXPECT_FALSE(tuple1->Equal(*reversed_tuple));
|
|
|
|
// Tuple with different value.
|
|
auto scalar_42 = Literal::CreateR0<float>(42.0);
|
|
auto different_tuple = Literal::MakeTuple({scalar_42.get(), matrix.get()});
|
|
EXPECT_FALSE(tuple1->Equal(*different_tuple));
|
|
}
|
|
|
|
TEST_F(LiteralUtilTest, IsAllTuple) {
|
|
auto element1 = Literal::CreateR0<float>(0.0);
|
|
auto element2 = Literal::CreateR2<float>({{0.0, 0.0}, {0.0, 0.0}});
|
|
auto tuple = Literal::MakeTuple({element1.get(), element1.get()});
|
|
|
|
// Tuples should always return false for IsAll.
|
|
EXPECT_FALSE(tuple->IsAll(0));
|
|
EXPECT_FALSE(tuple->IsAll(1));
|
|
}
|
|
|
|
// Verifies that CreateFromShape works for tuples.
|
|
TEST_F(LiteralUtilTest, CreateFromShapeTuple) {
|
|
auto scalar = Literal::CreateR0<float>(0.0);
|
|
auto matrix = Literal::CreateR2<int32>({{0, 0}, {0, 0}});
|
|
auto tuple = Literal::MakeTuple({scalar.get(), matrix.get()});
|
|
|
|
auto x = Literal::CreateFromShape(tuple->shape());
|
|
EXPECT_TRUE(tuple->Equal(*x));
|
|
}
|
|
|
|
TEST_F(LiteralUtilTest, IsAll) {
|
|
EXPECT_TRUE(Literal::CreateR0<bool>(false)->IsAll(0));
|
|
EXPECT_TRUE(Literal::CreateR0<bool>(true)->IsAll(1));
|
|
EXPECT_FALSE(Literal::CreateR0<bool>(false)->IsAll(1));
|
|
EXPECT_FALSE(Literal::CreateR0<bool>(false)->IsAll(2));
|
|
EXPECT_FALSE(Literal::CreateR0<bool>(true)->IsAll(0));
|
|
EXPECT_FALSE(Literal::CreateR0<bool>(true)->IsAll(2));
|
|
EXPECT_FALSE(Literal::CreateR0<bool>(true)->IsAll(-1));
|
|
|
|
// We shouldn't reinterpret int8_min as an unsigned type and then decide that
|
|
// it is equal to 255.
|
|
auto int8_min = std::numeric_limits<int8>::min();
|
|
EXPECT_FALSE(Literal::CreateR0<uint8>(255)->IsAll(int8_min));
|
|
|
|
EXPECT_TRUE(Literal::CreateR0<float>(42.0)->IsAll(42));
|
|
EXPECT_FALSE(Literal::CreateR0<float>(42.0001)->IsAll(42));
|
|
|
|
EXPECT_TRUE(Literal::CreateR1<int>({100, 100, 100})->IsAll(100));
|
|
EXPECT_FALSE(Literal::CreateR1<double>({100, 100, 100.001})->IsAll(100));
|
|
|
|
EXPECT_TRUE(Literal::CreateR2<uint64>({{8, 8}, {8, 8}})->IsAll(8));
|
|
EXPECT_FALSE(Literal::CreateR2<uint64>({{8, 8}, {8, 9}})->IsAll(8));
|
|
EXPECT_FALSE(Literal::CreateR2<uint64>({{9, 8}, {8, 8}})->IsAll(8));
|
|
|
|
half h8(8.0f);
|
|
half h9(9.0f);
|
|
EXPECT_TRUE(Literal::CreateR2<half>({{h8}, {h8}})->IsAll(8));
|
|
EXPECT_FALSE(Literal::CreateR2<half>({{h8}, {h9}})->IsAll(8));
|
|
EXPECT_FALSE(Literal::CreateR2<half>({{h9}, {h8}})->IsAll(8));
|
|
|
|
auto uint64_max = std::numeric_limits<uint64>::max();
|
|
EXPECT_FALSE(Literal::CreateR2<uint64>(
|
|
{{uint64_max, uint64_max}, {uint64_max, uint64_max}})
|
|
->IsAll(-1));
|
|
}
|
|
|
|
TEST_F(LiteralUtilTest, IsAllFloat) {
|
|
// IsAllFloat always returns false when the literal is not floating-point.
|
|
EXPECT_FALSE(Literal::CreateR0<bool>(false)->IsAllFloat(0));
|
|
EXPECT_FALSE(Literal::CreateR0<int8>(0)->IsAllFloat(0));
|
|
EXPECT_FALSE(Literal::CreateR0<uint8>(0)->IsAllFloat(0));
|
|
EXPECT_FALSE(Literal::CreateR0<int>(0)->IsAllFloat(0));
|
|
|
|
EXPECT_TRUE(Literal::CreateR0<float>(0)->IsAllFloat(0));
|
|
EXPECT_TRUE(Literal::CreateR0<float>(.5)->IsAllFloat(.5));
|
|
EXPECT_TRUE(Literal::CreateR0<float>(-.5)->IsAllFloat(-.5));
|
|
EXPECT_FALSE(Literal::CreateR0<float>(-.5)->IsAllFloat(-.49));
|
|
EXPECT_FALSE(
|
|
Literal::CreateR2<float>({{0, 0, 0}, {0, .1, 0}})->IsAllFloat(0));
|
|
EXPECT_TRUE(
|
|
Literal::CreateR2<float>({{.5, .5, .5}, {.5, .5, .5}})->IsAllFloat(.5));
|
|
|
|
EXPECT_TRUE(Literal::CreateR0<double>(0)->IsAllFloat(0));
|
|
EXPECT_TRUE(Literal::CreateR0<double>(.5)->IsAllFloat(.5));
|
|
EXPECT_TRUE(Literal::CreateR0<double>(-.5)->IsAllFloat(-.5));
|
|
EXPECT_FALSE(Literal::CreateR0<double>(-.5)->IsAllFloat(-.49));
|
|
EXPECT_FALSE(
|
|
Literal::CreateR2<double>({{0, 0, 0}, {0, .1, 0}})->IsAllFloat(0));
|
|
}
|
|
|
|
TEST_F(LiteralUtilTest, IsZero) {
|
|
auto scalar_zero = Literal::CreateR0<float>(0.0f);
|
|
auto scalar_one = Literal::CreateR0<float>(1.0f);
|
|
EXPECT_TRUE(scalar_zero->IsZero({}));
|
|
EXPECT_FALSE(scalar_one->IsZero({}));
|
|
|
|
auto array = Literal::CreateR2<uint32>({{1, 2, 0, 3}, {1, 0, 1, 2}});
|
|
EXPECT_FALSE(array->IsZero({0, 1}));
|
|
EXPECT_TRUE(array->IsZero({0, 2}));
|
|
EXPECT_TRUE(array->IsZero({1, 1}));
|
|
EXPECT_FALSE(array->IsZero({1, 2}));
|
|
}
|
|
|
|
template <typename T>
|
|
class LiteralUtilTestTemplated : public ::testing::Test {};
|
|
|
|
using TestedTypes = ::testing::Types<float, int32, uint32>;
|
|
TYPED_TEST_CASE(LiteralUtilTestTemplated, TestedTypes);
|
|
|
|
TYPED_TEST(LiteralUtilTestTemplated, Relayout2x2) {
|
|
// Make a non-integer for floating point types.
|
|
TypeParam half = TypeParam(1) / TypeParam(2);
|
|
auto data = Literal::CreateR2<TypeParam>({{half, 2}, {3, 4}});
|
|
const Layout layout01 = LayoutUtil::MakeLayout({0, 1});
|
|
const Layout layout10 = LayoutUtil::MakeLayout({1, 0});
|
|
|
|
auto data01 = data->Relayout(layout01);
|
|
EXPECT_TRUE(LayoutUtil::Equal(data01->shape().layout(), layout01));
|
|
EXPECT_TRUE(data->Equal(*data01));
|
|
|
|
auto data10 = data->Relayout(layout10);
|
|
EXPECT_TRUE(LayoutUtil::Equal(data10->shape().layout(), layout10));
|
|
EXPECT_TRUE(data->Equal(*data10));
|
|
}
|
|
|
|
TEST_F(LiteralUtilTest, ReshapeR0) {
|
|
auto original = Literal::CreateR0<float>(1.7f);
|
|
auto reshape = original->Reshape(/*shape=*/{}).ConsumeValueOrDie();
|
|
EXPECT_TRUE(original->Equal(*reshape));
|
|
}
|
|
|
|
TEST_F(LiteralUtilTest, ReshapeR4) {
|
|
// clang-format off
|
|
// F32[1x3x2x4]
|
|
auto original = Literal::CreateR4WithLayout<float>({{
|
|
{{10, 11, 12, 13}, {14, 15, 16, 17}},
|
|
{{18, 19, 20, 21}, {22, 23, 24, 25}},
|
|
{{26, 27, 28, 29}, {30, 31, 32, 33}},
|
|
}}, layout_r4_dim0major_);
|
|
// F32[1x3x4x2]
|
|
auto expected = Literal::CreateR3WithLayout<float>({
|
|
{{10, 11}, {12, 13}, {14, 15}, {16, 17}},
|
|
{{18, 19}, {20, 21}, {22, 23}, {24, 25}},
|
|
{{26, 27}, {28, 29}, {30, 31}, {32, 33}},
|
|
}, layout_r3_dim0major_);
|
|
// clang-format on
|
|
auto reshape = original->Reshape({3, 4, 2}).ConsumeValueOrDie();
|
|
|
|
EXPECT_TRUE(expected->Equal(*reshape));
|
|
}
|
|
|
|
TEST_F(LiteralUtilTest, ReshapeR4Dim0Minor) {
|
|
// clang-format off
|
|
// F32[1x3x2x4]
|
|
auto original = Literal::CreateR4WithLayout<float>({{
|
|
{{10, 11, 12, 13}, {14, 15, 16, 17}},
|
|
{{18, 19, 20, 21}, {22, 23, 24, 25}},
|
|
{{26, 27, 28, 29}, {30, 31, 32, 33}},
|
|
}}, layout_r4_dim0minor_);
|
|
// F32[1x3x4x2]
|
|
auto expected = Literal::CreateR3WithLayout<float>({
|
|
{{10, 11}, {12, 13}, {14, 15}, {16, 17}},
|
|
{{18, 19}, {20, 21}, {22, 23}, {24, 25}},
|
|
{{26, 27}, {28, 29}, {30, 31}, {32, 33}},
|
|
}, layout_r3_dim0major_);
|
|
// clang-format on
|
|
auto reshape = original->Reshape({3, 4, 2}).ConsumeValueOrDie();
|
|
|
|
EXPECT_TRUE(expected->Equal(*reshape));
|
|
}
|
|
|
|
TEST_F(LiteralUtilTest, TransposeR0) {
|
|
auto original = Literal::CreateR0<float>(1.7f);
|
|
auto reshape = original->Transpose(/*permutation=*/{});
|
|
EXPECT_TRUE(original->Equal(*reshape));
|
|
}
|
|
|
|
TEST_F(LiteralUtilTest, TransposeR4) {
|
|
// clang-format off
|
|
// F32[1x3x2x4]
|
|
auto original = Literal::CreateR4<float>({{
|
|
{{10, 11, 12, 13}, {14, 15, 16, 17}},
|
|
{{18, 19, 20, 21}, {22, 23, 24, 25}},
|
|
{{26, 27, 28, 29}, {30, 31, 32, 33}},
|
|
}});
|
|
// clang-format on
|
|
auto reshape = original->Transpose(/*permutation=*/{2, 3, 0, 1});
|
|
|
|
reshape->EachCell<float>(
|
|
[&](tensorflow::gtl::ArraySlice<int64> indices, float value) {
|
|
EXPECT_EQ(value, original->Get<float>(
|
|
{indices[2], indices[3], indices[0], indices[1]}));
|
|
});
|
|
}
|
|
|
|
TEST_F(LiteralUtilTest, TestR4RelayoutEquivalence) {
|
|
// Tests that using Relayout on an array is equivalent to creating it in the
|
|
// target layout in the first place.
|
|
auto dim0minor_relaid_to_dim0major =
|
|
literal_r4_2x2x3x3_dim0minor_->Relayout(layout_r4_dim0major_);
|
|
EXPECT_TRUE(
|
|
literal_r4_2x2x3x3_dim0major_->Equal(*dim0minor_relaid_to_dim0major));
|
|
|
|
auto dim0major_relaid_to_dim0minor =
|
|
literal_r4_2x2x3x3_dim0major_->Relayout(layout_r4_dim0minor_);
|
|
EXPECT_TRUE(
|
|
literal_r4_2x2x3x3_dim0minor_->Equal(*dim0major_relaid_to_dim0minor));
|
|
}
|
|
|
|
TEST_F(LiteralUtilTest, TestR2LinearLayout) {
|
|
// Test expected memory layout of R2 dim0-minor (column-major) literal.
|
|
auto mat_dim0minor = Literal::CreateR2WithLayout<int>({{1, 2, 3}, {4, 5, 6}},
|
|
layout_r2_dim0minor_);
|
|
EXPECT_EQ(mat_dim0minor->s32s_size(), 6);
|
|
EXPECT_THAT(mat_dim0minor->s32s(), ElementsAre(1, 4, 2, 5, 3, 6));
|
|
|
|
// Test expected memory layout when using Relayout to row major.
|
|
auto relaid_mat_to_dim0major = mat_dim0minor->Relayout(layout_r2_dim0major_);
|
|
EXPECT_THAT(relaid_mat_to_dim0major->s32s(), ElementsAre(1, 2, 3, 4, 5, 6));
|
|
|
|
// Test expected memory layout of R2 created with dim0-major (row-major).
|
|
auto mat_dim0major = Literal::CreateR2WithLayout<int>({{1, 2, 3}, {4, 5, 6}},
|
|
layout_r2_dim0major_);
|
|
EXPECT_EQ(mat_dim0major->s32s_size(), 6);
|
|
EXPECT_THAT(mat_dim0major->s32s(), ElementsAre(1, 2, 3, 4, 5, 6));
|
|
|
|
// Test expected memory layout when using Relayout to column major.
|
|
auto relaid_mat_to_dim0minor = mat_dim0major->Relayout(layout_r2_dim0minor_);
|
|
EXPECT_THAT(relaid_mat_to_dim0minor->s32s(), ElementsAre(1, 4, 2, 5, 3, 6));
|
|
}
|
|
|
|
TEST_F(LiteralUtilTest, TestR3LinearLayout) {
|
|
// Test expected memory layout of R3 dim0-minor (column-major) literal.
|
|
Array3D<int> arr3d(
|
|
// clang-format off
|
|
{
|
|
{
|
|
{1, 2, 3},
|
|
{4, 5, 6},
|
|
},
|
|
{
|
|
{7, 8, 9},
|
|
{10, 11, 12},
|
|
},
|
|
}); // clang-format on
|
|
auto lit_dim0minor =
|
|
Literal::CreateR3FromArray3DWithLayout<int>(arr3d, layout_r3_dim0minor_);
|
|
|
|
EXPECT_EQ(lit_dim0minor->s32s_size(), 12);
|
|
std::vector<int> expected_dim0minor{1, 7, 4, 10, 2, 8, 5, 11, 3, 9, 6, 12};
|
|
EXPECT_THAT(lit_dim0minor->s32s(),
|
|
testing::ElementsAreArray(expected_dim0minor));
|
|
|
|
// Test expected memory layout when using Relayout to row major.
|
|
auto relaid_lit_to_dim0major = lit_dim0minor->Relayout(layout_r3_dim0major_);
|
|
std::vector<int> expected_dim0major{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12};
|
|
EXPECT_THAT(relaid_lit_to_dim0major->s32s(),
|
|
testing::ElementsAreArray(expected_dim0major));
|
|
|
|
// Test expected memory layout of R3 created with dim0-major (row-major).
|
|
auto lit_dim0major =
|
|
Literal::CreateR3FromArray3DWithLayout<int>(arr3d, layout_r3_dim0major_);
|
|
EXPECT_EQ(lit_dim0major->s32s_size(), 12);
|
|
EXPECT_THAT(lit_dim0major->s32s(),
|
|
testing::ElementsAreArray(expected_dim0major));
|
|
|
|
// Test expected memory layout when using Relayout to column major.
|
|
auto relaid_lit_to_dim0minor = lit_dim0major->Relayout(layout_r3_dim0minor_);
|
|
EXPECT_THAT(relaid_lit_to_dim0minor->s32s(),
|
|
testing::ElementsAreArray(expected_dim0minor));
|
|
}
|
|
|
|
TEST_F(LiteralUtilTest, SliceR0S32) {
|
|
auto input = Literal::CreateR0<int32>(1);
|
|
auto result = input->Slice({}, {});
|
|
EXPECT_TRUE(input->Equal(*result));
|
|
}
|
|
|
|
TEST_F(LiteralUtilTest, SliceR1F32) {
|
|
auto input = Literal::CreateR1<float>({1.0, 2.0, 3.0, 4.0, 5.0});
|
|
auto result = input->Slice({3}, {4});
|
|
auto expected = Literal::CreateR1<float>({4.0});
|
|
EXPECT_TRUE(expected->Equal(*result));
|
|
}
|
|
|
|
TEST_F(LiteralUtilTest, SliceR2U32) {
|
|
auto input_3x4 =
|
|
Literal::CreateR2<uint32>({{1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10, 11, 12}});
|
|
auto result = input_3x4->Slice({0, 2}, {2, 4});
|
|
auto expected = Literal::CreateR2<uint32>({{3, 4}, {7, 8}});
|
|
EXPECT_TRUE(expected->Equal(*result));
|
|
}
|
|
|
|
TEST_F(LiteralUtilTest, SliceR3U32Full) {
|
|
auto input_2x3x2 = Literal::CreateR3<uint32>(
|
|
{{{1, 2}, {3, 4}, {5, 6}}, {{7, 8}, {9, 10}, {11, 12}}});
|
|
auto result = input_2x3x2->Slice({0, 0, 0}, {2, 3, 2});
|
|
EXPECT_TRUE(input_2x3x2->Equal(*result));
|
|
}
|
|
|
|
TEST_F(LiteralUtilTest, PopulateR1S64) {
|
|
Literal output;
|
|
output.PopulateR1<int64>({77});
|
|
auto expected = Literal::CreateR1<int64>({77});
|
|
EXPECT_TRUE(output.Equal(*expected));
|
|
}
|
|
|
|
TEST_F(LiteralUtilTest, PopulateR2U64) {
|
|
Literal output;
|
|
output.PopulateR1<uint64>({{77, 88}});
|
|
auto expected = Literal::CreateR1<uint64>({{77, 88}});
|
|
EXPECT_TRUE(output.Equal(*expected));
|
|
}
|
|
|
|
TEST_F(LiteralUtilTest, PopulateWithValueR0F32) {
  Literal output;
  output.PopulateWithValue<float>(2.5f, {});
  auto expected = Literal::CreateR0<float>(2.5f);
  EXPECT_TRUE(output.Equal(*expected));
}

TEST_F(LiteralUtilTest, PopulateWithValueR1S64) {
  Literal output;
  output.PopulateWithValue<int64>(-7, {3});
  auto expected = Literal::CreateR1<int64>({-7, -7, -7});
  EXPECT_TRUE(output.Equal(*expected));
}

TEST_F(LiteralUtilTest, PopulateWithValueR2U64) {
  Literal output;
  output.PopulateWithValue<uint64>(42, {2, 2});
  auto expected = Literal::CreateR2<uint64>({{42, 42}, {42, 42}});
  EXPECT_TRUE(output.Equal(*expected));
}

TEST_F(LiteralUtilTest, PopulateWithValueR0F16) {
  Literal output;
  half h(0.25f);
  output.PopulateWithValue<half>(h, {});
  auto expected = Literal::CreateR0<half>(h);
  EXPECT_TRUE(output.Equal(*expected));
}

TEST_F(LiteralUtilTest, PopulateWithValueR1F16) {
  Literal output;
  half h(0.5f);
  output.PopulateWithValue<half>(h, {3});
  auto expected = Literal::CreateR1<half>({h, h, h});
  EXPECT_TRUE(output.Equal(*expected));
}

TEST_F(LiteralUtilTest, PopulateWithValueR2F16) {
  Literal output;
  half h(2.0f);
  output.PopulateWithValue<half>(h, {2, 2});
  auto expected = Literal::CreateR2<half>({{h, h}, {h, h}});
  EXPECT_TRUE(output.Equal(*expected));
}

TEST_F(LiteralUtilTest, ReplicateR2U32) {
  auto input =
      Literal::CreateR2<uint32>({{1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10, 11, 12}});
  auto output = input->Replicate<uint32>(3);
  auto expected = Literal::CreateR3<uint32>(
      {{{1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10, 11, 12}},
       {{1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10, 11, 12}},
       {{1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10, 11, 12}}});
  EXPECT_TRUE(output->Equal(*expected));
}

TEST_F(LiteralUtilTest, Copy) {
  const int64 dimensions[] = {17, 15, 34, 21};
  const int64 layouts[][4] = {
      {3, 2, 1, 0}, {0, 2, 1, 3}, {0, 1, 2, 3}, {2, 0, 3, 1}, {1, 3, 0, 2}};
  for (const auto& layout : layouts) {
    Shape shape = ShapeUtil::MakeShapeWithLayout(
        primitive_util::NativeToPrimitiveType<uint32>(), dimensions, layout);

    auto source = Literal::CreateFromShape(shape);
    const int64 zero_base[] = {0, 0, 0, 0};
    const int64 step[] = {1, 1, 1, 1};
    uint32 seqnr = 0;
    auto init_proc = [&](const std::vector<int64>& indexes) {
      source->Set(indexes, ++seqnr);
      return true;
    };
    ShapeUtil::ForEachIndex(source->shape(), zero_base, dimensions, step,
                            init_proc);

    auto blank = Literal::CreateFromShape(shape);
    const int64 src_base[] = {3, 1, 5, 7};
    const int64 dest_base[] = {6, 4, 12, 2};
    const int64 copy_size[] = {7, 8, 11, 9};
    TF_EXPECT_OK(blank->Copy(*source, src_base, dest_base, copy_size));

    std::vector<int64> source_indexes(TF_ARRAYSIZE(dimensions), 0);
    std::vector<int64> blank_indexes(TF_ARRAYSIZE(dimensions), 0);
    bool matched = true;
    auto check_proc = [&](const std::vector<int64>& indexes) {
      std::copy(indexes.begin(), indexes.end(), source_indexes.begin());
      std::transform(source_indexes.begin(), source_indexes.end(), src_base,
                     source_indexes.begin(), std::plus<int64>());
      std::copy(indexes.begin(), indexes.end(), blank_indexes.begin());
      std::transform(blank_indexes.begin(), blank_indexes.end(), dest_base,
                     blank_indexes.begin(), std::plus<int64>());
      auto bval = blank->Get<uint32>(blank_indexes);
      matched = (bval != 0 && bval == source->Get<uint32>(source_indexes));
      return matched;
    };

    ShapeUtil::ForEachIndex(source->shape(), zero_base, copy_size, step,
                            check_proc);
    EXPECT_TRUE(matched);
  }
}

TEST_F(LiteralUtilTest, CopyScalars) {
  auto zero = Literal::CreateR0<uint32>(0);
  auto nine = Literal::CreateR0<uint32>(9);
  TF_EXPECT_OK(zero->Copy(*nine, {}, {}, {}));
  EXPECT_TRUE(zero->Equal(*nine));

  auto vect = Literal::CreateR1<uint32>({3, 4, 9, 12, 5, 17, 21});
  TF_EXPECT_OK(zero->Copy(*vect, {5}, {}, {}));
  EXPECT_EQ(zero->Get<uint32>({}), 17);
  TF_EXPECT_OK(vect->Copy(*zero, {}, {4}, {}));
  EXPECT_EQ(vect->Get<uint32>({4}), 17);
}

TEST_F(LiteralUtilTest, CopyFromAndToZeroElement) {
  const Shape empty_r1_shape = ShapeUtil::MakeShape(F32, {0});
  const auto const_nine = Literal::CreateR1<float>({9});
  const auto const_empty = Literal::CreateFromShape(empty_r1_shape);

  {
    // Source contains dimension with zero elements.
    const auto empty = Literal::CreateFromShape(empty_r1_shape);
    auto nine = Literal::CreateR1<float>({9});

    TF_EXPECT_OK(nine->Copy(*empty, {0}, {0}, {0}));
    EXPECT_TRUE(nine->Equal(*const_nine));
  }

  {
    // Copy 0 element to destination with zero elements.
    const auto empty = Literal::CreateFromShape(empty_r1_shape);
    auto nine = Literal::CreateR1<float>({9});

    TF_EXPECT_OK(empty->Copy(*nine, {0}, {0}, {0}));
    EXPECT_TRUE(empty->Equal(*const_empty));
  }
}

TEST_F(LiteralUtilTest, F16) {
  // Verify that the internal data views are consistent and that they
  // are in little endian format.
  // TODO - modify if we make the data format machine endianness dependent.
  auto m1 = Literal::CreateFromShape(ShapeUtil::MakeShape(F16, {2, 2}));
  Literal* l1 = m1.get();
  const char* d1 = static_cast<const char*>(l1->InternalData());
  EXPECT_EQ(d1[0], 0);
  EXPECT_EQ(d1[1], 0);
  EXPECT_EQ(d1[2], 0);
  EXPECT_EQ(d1[3], 0);
  EXPECT_EQ(d1[4], 0);
  EXPECT_EQ(d1[5], 0);
  EXPECT_EQ(d1[6], 0);
  EXPECT_EQ(d1[7], 0);
  EXPECT_EQ(l1->InternalData(), l1->MutableInternalData());

  half h1(1.0f);
  half h2(2.0f);
  auto m2 = Literal::CreateR2<half>({{h1, h2}, {h2, h1}});
  Literal* l2 = m2.get();
  const char* d2 = static_cast<const char*>(l2->InternalData());
  EXPECT_EQ(d2[0], 0);
  EXPECT_EQ(d2[1], 0x3C);
  EXPECT_EQ(d2[2], 0);
  EXPECT_EQ(d2[3], 0x40);
  EXPECT_EQ(d2[4], 0);
  EXPECT_EQ(d2[5], 0x40);
  EXPECT_EQ(d2[6], 0);
  EXPECT_EQ(d2[7], 0x3C);
  EXPECT_EQ(l2->InternalData(), l2->MutableInternalData());
}

TEST_F(LiteralUtilTest, Populate) {
  struct PopulateData {
    std::vector<int64> dimensions;
    std::vector<int64> layout;
  } populate_data[] = {
      {{}, {}},
      {{0}, {0}},
      {{16}, {0}},
      {{2, 0}, {1, 0}},
      {{4, 16}, {1, 0}},
      {{21, 12}, {0, 1}},
      {{6, 11, 17}, {2, 0, 1}},
      {{6, 11, 5, 17}, {3, 2, 0, 1}},
  };
  for (const auto& data : populate_data) {
    Shape shape = ShapeUtil::MakeShapeWithLayout(
        primitive_util::NativeToPrimitiveType<uint32>(), data.dimensions,
        data.layout);
    auto literal = Literal::CreateFromShape(shape);
    auto generator = [&](tensorflow::gtl::ArraySlice<int64> indexes) -> uint32 {
      // Offsets from linear index just to avoid R0 literals being initialized
      // with zero.
      return literal->LinearIndex(indexes) + 17;
    };
    TF_EXPECT_OK(literal->Populate<uint32>(generator));

    std::vector<int64> zero_base(data.dimensions.size(), 0);
    std::vector<int64> step(data.dimensions.size(), 1);
    bool matched = true;
    auto check_function = [&](const std::vector<int64>& indexes) {
      auto value = literal->Get<uint32>(indexes);
      matched = matched && (value == generator(indexes));
      return matched;
    };
    ShapeUtil::ForEachIndex(literal->shape(), zero_base, data.dimensions, step,
                            check_function);
    EXPECT_TRUE(matched);
  }
}

TEST_F(LiteralUtilTest, ConvertR4) {
  // clang-format off
  auto original = Literal::CreateR4WithLayout<int8>({{
    {{10, 11, 12, 13}, {14, 15, 16, 17}},
    {{18, 19, 20, 21}, {22, 23, 24, 25}},
    {{26, 27, 28, 29}, {30, 31, 32, 33}},
  }}, layout_r4_dim0major_);
  auto expected = Literal::CreateR4WithLayout<uint32>({{
    {{10, 11, 12, 13}, {14, 15, 16, 17}},
    {{18, 19, 20, 21}, {22, 23, 24, 25}},
    {{26, 27, 28, 29}, {30, 31, 32, 33}},
  }}, layout_r4_dim0major_);
  // clang-format on
  TF_ASSERT_OK_AND_ASSIGN(std::unique_ptr<Literal> converted,
                          original->Convert(U32));

  EXPECT_TRUE(expected->Equal(*converted));
}

TEST_F(LiteralUtilTest, ConvertIfTypesMatch) {
  // clang-format off
  auto s8 = Literal::CreateR4WithLayout<int8>({{
    {{10, 0, 12, 0}, {0, 15, 0, 17}},
    {{0, 19, 0, 21}, {22, 0, 24, 0}},
    {{26, 0, 28, 0}, {0, 31, 0, 33}},
  }}, layout_r4_dim0major_);
  auto s32 = Literal::CreateR4WithLayout<int32>({{
    {{10, 0, 12, 0}, {0, 15, 0, 17}},
    {{0, 19, 0, 21}, {22, 0, 24, 0}},
    {{26, 0, 28, 0}, {0, 31, 0, 33}},
  }}, layout_r4_dim0major_);
  auto u32 = Literal::CreateR4WithLayout<uint32>({{
    {{10, 0, 12, 0}, {0, 15, 0, 17}},
    {{0, 19, 0, 21}, {22, 0, 24, 0}},
    {{26, 0, 28, 0}, {0, 31, 0, 33}},
  }}, layout_r4_dim0major_);
  auto s64 = Literal::CreateR4WithLayout<int64>({{
    {{10, 0, 12, 0}, {0, 15, 0, 17}},
    {{0, 19, 0, 21}, {22, 0, 24, 0}},
    {{26, 0, 28, 0}, {0, 31, 0, 33}},
  }}, layout_r4_dim0major_);
  auto u64 = Literal::CreateR4WithLayout<uint64>({{
    {{10, 0, 12, 0}, {0, 15, 0, 17}},
    {{0, 19, 0, 21}, {22, 0, 24, 0}},
    {{26, 0, 28, 0}, {0, 31, 0, 33}},
  }}, layout_r4_dim0major_);
  auto pred = Literal::CreateR4WithLayout<bool>({{
    {{true, false, true, false}, {false, true, false, true}},
    {{false, true, false, true}, {true, false, true, false}},
    {{true, false, true, false}, {false, true, false, true}},
  }}, layout_r4_dim0major_);
  auto int32_pred = Literal::CreateR4WithLayout<int32>({{
    {{1, 0, 1, 0}, {0, 1, 0, 1}},
    {{0, 1, 0, 1}, {1, 0, 1, 0}},
    {{1, 0, 1, 0}, {0, 1, 0, 1}},
  }}, layout_r4_dim0major_);
  auto f16 = Literal::CreateR4WithLayout<half>({{
    {{half(10.0), half(0.0), half(12.0), half(0.0)},
     {half(0.0), half(15.0), half(0.0), half(17.0)}},
    {{half(0.0), half(19.0), half(0.0), half(21.0)},
     {half(22.0), half(0.0), half(24.0), half(0.0)}},
    {{half(26.0), half(0.0), half(28.0), half(0.0)},
     {half(0.0), half(31.0), half(0.0), half(33.0)}},
  }}, layout_r4_dim0major_);
  auto f32 = Literal::CreateR4WithLayout<float>({{
    {{10.0f, 0.0f, 12.0f, 0.0f}, {0.0f, 15.0f, 0.0f, 17.0f}},
    {{0.0f, 19.0f, 0.0f, 21.0f}, {22.0f, 0.0f, 24.0f, 0.0f}},
    {{26.0f, 0.0f, 28.0f, 0.0f}, {0.0f, 31.0f, 0.0f, 33.0f}},
  }}, layout_r4_dim0major_);
  auto f64 = Literal::CreateR4WithLayout<double>({{
    {{10.0, 0.0, 12.0, 0.0}, {0.0, 15.0, 0.0, 17.0}},
    {{0.0, 19.0, 0.0, 21.0}, {22.0, 0.0, 24.0, 0.0}},
    {{26.0, 0.0, 28.0, 0.0}, {0.0, 31.0, 0.0, 33.0}},
  }}, layout_r4_dim0major_);
  // clang-format on
  std::unique_ptr<Literal> conv;

  conv = s8->Convert(U32).ConsumeValueOrDie();
  EXPECT_TRUE(conv->Equal(*u32));

  conv = s8->Convert(S32).ConsumeValueOrDie();
  EXPECT_TRUE(conv->Equal(*s32));

  conv = s8->Convert(U64).ConsumeValueOrDie();
  EXPECT_TRUE(conv->Equal(*u64));

  conv = s8->Convert(S64).ConsumeValueOrDie();
  EXPECT_TRUE(conv->Equal(*s64));

  conv = s8->Convert(PRED).ConsumeValueOrDie();
  EXPECT_TRUE(conv->Equal(*pred));

  conv = pred->Convert(S32).ConsumeValueOrDie();
  EXPECT_TRUE(conv->Equal(*int32_pred));

  conv = f32->Convert(S32).ConsumeValueOrDie();
  EXPECT_TRUE(conv->Equal(*s32));

  conv = f64->Convert(S32).ConsumeValueOrDie();
  EXPECT_TRUE(conv->Equal(*s32));

  conv = s32->Convert(F32).ConsumeValueOrDie();
  EXPECT_TRUE(conv->Equal(*f32));

  conv = f32->Convert(F16).ConsumeValueOrDie();
  EXPECT_TRUE(conv->Equal(*f16));

  conv = f64->Convert(F16).ConsumeValueOrDie();
  EXPECT_TRUE(conv->Equal(*f16));

  conv = s32->Convert(F16).ConsumeValueOrDie();
  EXPECT_TRUE(conv->Equal(*f16));

  conv = u32->Convert(F16).ConsumeValueOrDie();
  EXPECT_TRUE(conv->Equal(*f16));

  EXPECT_EQ(s32->Convert(TUPLE).status().code(),
            tensorflow::error::INVALID_ARGUMENT);
  EXPECT_EQ(s32->Convert(S16).status().code(),
            tensorflow::error::INVALID_ARGUMENT);
  EXPECT_EQ(s32->Convert(U16).status().code(),
            tensorflow::error::INVALID_ARGUMENT);
}

TEST_F(LiteralUtilTest, CopyFromProto_Bool) {
  LiteralProto p;
  p.mutable_shape()->set_element_type(PRED);
  for (int len = 0; len < 25; ++len) {
    p.mutable_shape()->clear_dimensions();
    p.mutable_shape()->add_dimensions(len);
    p.clear_preds();
    for (int i = 0; i < len; ++i) {
      p.add_preds((i % 2) == (len % 2));
    }

    Literal literal(p);
    ASSERT_EQ(len, literal.preds_size());
    int i = 0;
    for (auto it = literal.preds().begin(); it < literal.preds().end(); ++it) {
      EXPECT_EQ((i % 2) == (len % 2), *it);
      ++i;
    }
  }
}

// Note that f16 is currently stored in a byte array in little endian byte
// order.
TEST_F(LiteralUtilTest, ToProto_f16) {
  half h1(1.0f);
  half h2(2.0f);

  auto m = Literal::CreateR2<half>({{h1, h2}, {h2, h1}});
  Literal* l = m.get();
  EXPECT_EQ(4, ShapeUtil::ElementsIn(l->shape()));
  EXPECT_EQ(4, l->f16s().size());
  EXPECT_EQ(4, l->f16s_size());

  LiteralProto p = l->ToProto();
  EXPECT_EQ(4, ShapeUtil::ElementsIn(p.shape()));
  EXPECT_EQ(8, p.f16s().size());
  const char* d = p.f16s().data();
  EXPECT_EQ(d[0], 0);
  EXPECT_EQ(d[1], 0x3C);
  EXPECT_EQ(d[2], 0);
  EXPECT_EQ(d[3], 0x40);
  EXPECT_EQ(d[4], 0);
  EXPECT_EQ(d[5], 0x40);
  EXPECT_EQ(d[6], 0);
  EXPECT_EQ(d[7], 0x3C);
}

// Note that f16 is currently stored in a byte array in little endian byte
// order.
TEST_F(LiteralUtilTest, CopyFromProto_f16) {
  half h1(1.0f);
  half h2(2.0f);

  const char half_vals[8] = {0x00, 0x3C, 0x00, 0x40, 0x00, 0x40, 0x00, 0x3C};
  LiteralProto p;
  p.mutable_shape()->set_element_type(F16);
  p.mutable_shape()->clear_dimensions();
  p.mutable_shape()->add_dimensions(4);
  p.clear_f16s();
  p.set_f16s(half_vals, 8);

  Literal literal(p);
  ASSERT_EQ(4, literal.f16s_size());
  ASSERT_EQ(h1, literal.f16s(0));
  ASSERT_EQ(h2, literal.f16s(1));
  ASSERT_EQ(h2, literal.f16s(2));
  ASSERT_EQ(h1, literal.f16s(3));

  const std::vector<half>& r = literal.f16s();
  ASSERT_EQ(4, r.size());
  ASSERT_EQ(h1, r[0]);
  ASSERT_EQ(h2, r[1]);
  ASSERT_EQ(h2, r[2]);
  ASSERT_EQ(h1, r[3]);
}

}  // namespace
}  // namespace xla