Commit Graph

48 Commits

Yunxing Dai
827b2f4723 [XLA] Add a few helper functions around dynamic dimensions.
PiperOrigin-RevId: 282408969
Change-Id: I9058ee17ae239d24ed741246af4288905be10212
2019-11-25 12:52:48 -08:00
TensorFlower Gardener
266a0adca6 Merge pull request from nouiz:utils_pr
PiperOrigin-RevId: 267705184
2019-09-06 17:27:40 -07:00
Yunxing Dai
792abd2eaf [TF2XLA] Preserve the dynamic dimension (-1) when building a reshape.
This CL handles the problem of a [4] -> [2, 2] reshape, where 4 is a dynamic dimension.

- We need the producer of reshape's second operand to return "-1", indicating that a dimension is dynamic.
- Change GetDynamicSize's return type to S32 so we can support '-1'.
- Constant folding has been changed to return -1 when a dimension is dynamic and a special flag is passed to an op kernel.
- Resurrect the "dynamic dimension inference" feature in the XLA builder. Expect some breakage, as this feature is not heavily exercised.

PiperOrigin-RevId: 267465833
2019-09-05 15:48:43 -07:00
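As a loose illustration of the "-1" marker above: resolving a single -1 entry in a reshape target against a known element count works much like NumPy's inferred dimension. A minimal sketch (hypothetical helper, not XLA code):

```python
from math import prod

def resolve_reshape_dims(input_dims, target_dims):
    """Replace a single -1 in target_dims so element counts match (sketch)."""
    total = prod(input_dims)
    if -1 not in target_dims:
        assert prod(target_dims) == total, "element counts must match"
        return list(target_dims)
    # Product of the statically known target dimensions.
    known = prod(d for d in target_dims if d != -1)
    assert total % known == 0, "cannot infer the -1 dimension"
    return [total // known if d == -1 else d for d in target_dims]
```

For the commit's [4] -> [2, 2] example, `resolve_reshape_dims([4], [2, -1])` yields `[2, 2]`.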
Frederic Bastien
c2632730f8 Relax a check to enable an optimization on all element types. 2019-09-05 14:42:03 -07:00
Blake Hechtman
40cd7f9057 [XLA] Simplify ShapeUtil::AlignLayouts by removing degenerate dimensions.
[TF2XLA] Use iota instruction instead of constant literal.

PiperOrigin-RevId: 234076260
2019-02-14 20:12:13 -08:00
Cong Liu
79e040fd8a [XLA] Propagate is_dynamic_dimensions for some operations.
PiperOrigin-RevId: 230382946
2019-01-22 12:01:21 -08:00
Mark Heffernan
0d15dbe9c5 Remove deprecated methods in ShapeUtil.
No functional change. Replace with calls to Shape methods.

PiperOrigin-RevId: 228566953
2019-01-09 12:51:26 -08:00
Mark Heffernan
02ac99e6ad Add is_dynamic_dimension repeated bool field to Shape.
The bool indicates which of the dimensions in the Shape are dynamic. For dynamic dimensions, the 'dimensions' int64 value indicates an inclusive upper bound on the dimension size. This CL adds the field to ShapeProto and Shape and adds support to the parser. It is unused otherwise.

This change also includes a couple small changes to the Shape interface to preserve the invariant that the dimensions vector and the dynamic_dimensions vector are always the same size.

This may not be the final form for dynamic shape representation. Having int64 lower and upper bounds may be cleaner where static dimensions have equal lower and upper bound dimensions. However, this current representation is sufficiently expressive to start using dynamic shapes in other parts of XLA. Shape inference would be a natural first step. Changing the underlying shape representation during this initial process should not be too difficult.

PiperOrigin-RevId: 226434424
2018-12-20 20:13:52 -08:00
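The same-size invariant between the dimensions vector and the dynamic-dimension flags can be sketched as follows (names and representation are illustrative, not the actual xla::Shape API):

```python
class Shape:
    """Toy sketch: dimension sizes and dynamic flags kept in lockstep."""

    def __init__(self):
        self._dims = []     # size, or inclusive upper bound if dynamic
        self._dynamic = []  # parallel flags, one per dimension

    def add_dimension(self, size, is_dynamic=False):
        # Grow both vectors together so they never drift out of sync.
        self._dims.append(size)
        self._dynamic.append(is_dynamic)

    def delete_dimension(self, i):
        del self._dims[i]
        del self._dynamic[i]

    def is_dynamic_dimension(self, i):
        return self._dynamic[i]

    def rank(self):
        assert len(self._dims) == len(self._dynamic)
        return len(self._dims)
```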
Mark Heffernan
b55e7a9a82 Change Shape parsing from regexp matcher to parser.
Previously in the HLO parser/lexer, shapes were tokens identified using a complicated regular expression. This made augmenting the textual form of shapes difficult, such as would be necessary for dynamic shapes or tiling. To avoid ambiguity and other problems, a couple of changes were made to the HLO textual form, along with some related cleanup:

(1) Do not redundantly print the shape inside the constant HLO instruction's "operand" field. Previously, constant instructions were printed like:

    S32[2,2] constant(S32[2,2] {{1,2},{3,4}})

  Now this is printed as:

    S32[2,2] constant({{1,2},{3,4}})

  This avoids an ambiguity where the values of the literal can be misinterpreted as a layout. Also, the shape was printed inconsistently: only when the rank was greater than one.

(2) Remove ShapeUtil::ParseShapeString, replace with ParseShape function in hlo parser.

(3) Merge hlo_token.h into hlo_lexer.h. It is only used by the lexer and parser, which include that file; this also avoids potential confusion with the HLO token type.

(4) Fix b/112302613 by removing the unused Shape field in the sharding attribute of HLO text.

(5) As part of this change primitive element types are now keywords which simplifies parsing. The fallout is that a bunch of values in HLO text named "token" had to be renamed. Also, change the HLO name sanitizer to avoid these primitive type keywords.

PiperOrigin-RevId: 225546437
2018-12-14 08:37:21 -08:00
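For contrast with the parser approach adopted above, a regex-based matcher covers only the simplest shape forms and breaks down once dynamic shapes or tiling enter the syntax. A toy sketch of the regex style being replaced (illustrative only; not the actual lexer's pattern):

```python
import re

# Handles only the plainest forms, e.g. "f32[2,3]{1,0}" or "s32[]".
SHAPE_RE = re.compile(r"^(\w+)\[([\d,]*)\](?:\{([\d,]*)\})?$")

def parse_simple_shape(text):
    """Parse element type, dimensions, and optional layout (toy version)."""
    m = SHAPE_RE.match(text)
    if not m:
        raise ValueError(f"unparseable shape: {text!r}")
    elem, dims, layout = m.groups()
    to_ints = lambda s: [int(x) for x in s.split(",")] if s else []
    return elem, to_ints(dims), to_ints(layout) if layout else None
```

Anything beyond this — sparse layouts, tiling, dynamic bounds — quickly makes the pattern unmanageable, which is the motivation for a real parser.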
Mark Heffernan
bd737c846c Replace Shape with a C++ class in XLA.
No functional change. Rename the proto message Shape to ShapeProto and define an in-place replacement C++ class named Shape with an interface which mirrors the protobuf generated code interface. Having Shape as a C++ class enables greater flexibility in the interface, enables enforcement of invariants, and potential performance improvements.

PiperOrigin-RevId: 223252977
2018-11-28 16:08:45 -08:00
Mark Heffernan
f22eec10b6 Replace ProgramShape proto with a C++ class.
Rename the protobuf message ProgramShape to ProgramShapeProto and create a new ProgramShape C++ class with an interface which mirrors the protobuf generated code interface. This CL is a step toward replacing Shape proto with a C++ class. ProgramShape needs to be migrated first because ProgramShape contains Shapes.

PiperOrigin-RevId: 222435461
2018-11-21 11:22:17 -08:00
Justin Lebar
18d8ac6730 [XLA] Remove ShapeUtil::IsNil in favor of ShapeUtil::IsEmptyTuple.
No need to have two functions that do the same thing.

PiperOrigin-RevId: 221745815
2018-11-15 22:41:00 -08:00
A. Unique TensorFlower
d13ee0b7f8 Remove the padded_dimensions and padding_value fields from the Layout
protobuffer. These fields were never used nor supported.

PiperOrigin-RevId: 219828371
2018-11-02 11:10:54 -07:00
A. Unique TensorFlower
725dfe9cd0 internal change only.
PiperOrigin-RevId: 212754752
2018-09-12 21:26:37 -07:00
Tim Shen
6f879f891a [XLA] Rename all (Mutable)ArraySlice to absl::Span.
PiperOrigin-RevId: 210998142
2018-08-30 16:07:27 -07:00
Justin Lebar
9e9e11a3fb [XLA] Stop including str_util.h.
PiperOrigin-RevId: 210049592
2018-08-23 21:33:13 -07:00
Justin Lebar
e9b58d8f2e [XLA] Switch from tensorflow::str_util::Join to absl::StrJoin.
PiperOrigin-RevId: 210018843
2018-08-23 16:16:24 -07:00
Justin Lebar
90d9d2b194 [XLA] Use absl string types and functions instead of the TF versions.
Unfortunately this has to be one big patch, because e.g. absl::StrCat
doesn't accept a TF StringPiece, but as soon as we switch to
absl::string_view, we have to switch away from all of the TF functions.
PiperOrigin-RevId: 209957896
2018-08-23 10:27:38 -07:00
A. Unique TensorFlower
4c9e4ba5d3 Fix ShapeUtil::CompatibleIgnoringElementType for opaque types
Previously we had an asymmetric comparison when comparing an array type
with an opaque type: it returned false for array vs. opaque but true for
opaque vs. array.

PiperOrigin-RevId: 205706477
2018-07-23 12:42:53 -07:00
Benjamin Kramer
755503c4d7 [XLA:GPU] Support infeed of nested tuples
Outfeed already supports this, and with the shared queue management code this became easy.

PiperOrigin-RevId: 204370770
2018-07-12 15:05:02 -07:00
Justin Lebar
8737cd1ef4 [XLA] Make ShapeUtil::PermuteDimensions() properly permute layouts.
Although there was no API comment explaining what PermuteDimensions()
intended to do with layouts, the intent seems to be that if the input
shape has a layout, then

  TransposeIsBitcast(input_shape, PermuteDimensions(input_shape, perm),
                     InversePermutation(perm))

is true.

This was not previously correct.  (I can't really say what the old
behavior was, although maybe there's some clever interpretation of what
it used to do.)

PiperOrigin-RevId: 203290794
2018-07-04 10:08:24 -07:00
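The permutation bookkeeping behind the invariant above can be sketched with two small helpers (semantics are illustrative; the sketch only shows that permuting a dimension list and then applying the inverse permutation round-trips it):

```python
def permute_dimensions(dims, perm):
    """Place dims[perm[i]] at position i (illustrative convention)."""
    return [dims[p] for p in perm]

def inverse_permutation(perm):
    """Return inv such that permuting by perm then inv is the identity."""
    inv = [0] * len(perm)
    for i, p in enumerate(perm):
        inv[p] = i
    return inv
```

The commit's subtlety is that the real ShapeUtil::PermuteDimensions must also transform the layout, not just the dimension sizes, for the TransposeIsBitcast property to hold; the sketch above covers only the dimension half.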
Thomas Joerg
ed37c8a8a0 [XLA] A variant of the Equal method ignoring fp precision.
PiperOrigin-RevId: 202113177
2018-06-26 05:34:14 -07:00
Sanjoy Das
1c697bc909 Teach gather-reshape folding to work with degenerate dims
I was hoping not to do this, but the motivating benchmark for all this work has
reshapes on degenerate dimensions.  This also forced me to introduce a new node
to the analysis which isn't great (we don't want to replicate HLO inside
IndexedArrayAnalysis!) but this is the cleanest solution I can think of.

In brief I support gather-reshape folding with degenerate dimensions by
disallowing it in the core tricky part of the algorithm and instead reshaping
the degenerate dimensions "in and out" in a helper that calls the core part of
the folding logic.

Also worth calling out that, before, we weren't being conservative -- we
were just buggy. For instance, the CHECK_NE(candidate_operand_dim, 0) in
ComputeReshapePassthroughDimPairs can fail with degenerate dims.

I also made some other supporting changes:

 - I was not checking window bounds in ComputeArrayForGather.  I've fixed this
   and beefed up testing in this area (the hammer for all my nails).
 - Added a bunch of VLOG(3) info that was useful when debugging.
 - Added a simple helper to the test that makes the strings I'm matching against
   "whitespace insensitive" so that I can indent these.

I'm happy to pull these out into separate CLs if that makes reviewing easier but
for now I took the path of least resistance. :)

PiperOrigin-RevId: 200821883
2018-06-16 00:09:15 -07:00
Mark Heffernan
f01d25471d Add support for TOKEN type to CPU/GPU backends.
TOKENs will be used for ordering side-effecting operations. They are not materialized but can be contained in tuples and flow into and out of computations. This CL adds a trivial representation for the cpu and gpu backends to support TOKENs and modifies copy insertion to avoid making copies of tokens.

This also adds a Literal TOKEN which is required for the interpreter backend.

PiperOrigin-RevId: 200623120
2018-06-14 14:54:32 -07:00
Mark Heffernan
e1296c15a3 Fix assumptions that a Shape must be a tuple or an array.
A TOKEN primitive type was added with cl/199215963 and XLA also has an OPAQUE primitive type. However, in many places in XLA we assume either a tuple or array. This CL fixes many of those instances, but some may remain. Identified instances were discovered by searching for IsTuple or IsArray so the set of fixes is not exhaustive.

Also opportunistically addressed a couple potential points of confusion in the ShapeUtil interface:

(1) Rename ShapeUtil::HasZeroElements to ShapeUtil::IsZeroElementArray. The point of confusion here is that tuples can also have zero elements, yet HasZeroElements would check-fail on tuple shapes. The method no longer check-fails if the given shape is not an array.

(2) ShapeUtil::IsNil now returns true only for empty tuples. Previously it also returned true for zero-element array types which was confusing because ShapeUtil::MakeNil creates an empty tuple.

PiperOrigin-RevId: 200452672
2018-06-13 14:24:30 -07:00
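The distinction drawn in (1) and (2) can be sketched with two toy predicates (the shape model and names are assumptions, not the actual ShapeUtil API; here an array shape is a list of dimension sizes and a tuple shape is a Python tuple of shapes):

```python
def is_zero_element_array(shape):
    """True for array shapes with any zero dimension, e.g. [1, 0]."""
    return isinstance(shape, list) and 0 in shape

def is_nil(shape):
    """True only for the empty tuple -- not for zero-element arrays."""
    return isinstance(shape, tuple) and len(shape) == 0
```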
Michael Kuperstein
bbc2c612da [XLA] Delete StripDegenerateDimensions()
This is unused, and, as it turns out, is broken for sparse shapes.

PiperOrigin-RevId: 200313641
2018-06-12 18:59:38 -07:00
Mark Heffernan
14d4d1634d Add TOKEN primitive type.
The token type will be threaded through side-effecting ops to order them. Subsequent cls will add new opcodes and change side effecting operations to support this ordering.

This CL also does some cleanup in shape_util and layout_util where we have assumed that shapes are either arrays or tuples.

PiperOrigin-RevId: 199215963
2018-06-04 16:44:21 -07:00
A. Unique TensorFlower
b2b8dca583 [XLA] Fix bug in ShapeUtil::StripDegenerateDimensions
PiperOrigin-RevId: 194621163
2018-04-27 18:27:28 -07:00
Michael Kuperstein
8fd805fc79 [XLA] Parallelize HloEvaluator::HandleConvolution
This adds a parallel version of Literal::Populate, and uses it in the embarrassingly parallel convolution computation.

PiperOrigin-RevId: 192065277
2018-04-08 15:39:50 -07:00
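The embarrassingly parallel populate pattern mentioned above can be sketched by splitting the linear index space into chunks and filling each chunk on its own worker (hypothetical helper, not the actual Literal::Populate API):

```python
from concurrent.futures import ThreadPoolExecutor

def populate_parallel(size, generator, num_workers=4):
    """Fill out[i] = generator(i) for i in [0, size), chunked across workers."""
    out = [None] * size

    def fill(start, stop):
        for i in range(start, stop):
            out[i] = generator(i)

    chunk = max(1, (size + num_workers - 1) // num_workers)
    ranges = [(s, min(s + chunk, size)) for s in range(0, size, chunk)]
    with ThreadPoolExecutor(num_workers) as ex:
        # Drain the map iterator so all chunks complete before returning.
        list(ex.map(lambda r: fill(*r), ranges))
    return out
```

Each worker writes a disjoint slice of the output, so no locking is needed — the same property that makes the convolution computation embarrassingly parallel.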
A. Unique TensorFlower
4f0aa15e96 Fix ShapeUtil::CompatibleIgnoringElementType for scalar vs tuple comparison
Previously, if the lhs was a scalar and the rhs was a tuple of arbitrary
shape, they were reported as compatible, which is clearly wrong.

PiperOrigin-RevId: 188155575
2018-03-07 03:48:18 -08:00
Sanjoy Das
39a43c4f1d Introduce a ShapeUtil::ForEachIndexWithStatus, change index type to ArraySlice
This is not used yet, but I need it in a later CL.  I don't specifically need
the argument to be an ArraySlice, but it seemed cleaner than taking a const ref
to a vector.

No functional change intended.

PiperOrigin-RevId: 187352376
2018-02-28 11:11:26 -08:00
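The ForEachIndexWithStatus idea — visiting a bounded index space with early exit on failure — can be sketched as an odometer loop (names and the bool-for-status convention are illustrative assumptions, not the real API):

```python
def for_each_index_with_status(dims, visitor):
    """Call visitor(index_tuple) for every index; stop and return False
    on the first visitor failure, else return True."""
    if 0 in dims:
        return True  # zero-element space: nothing to visit
    index = [0] * len(dims)
    while True:
        if not visitor(tuple(index)):
            return False
        # Odometer-style increment, most-minor dimension last.
        for d in reversed(range(len(dims))):
            index[d] += 1
            if index[d] < dims[d]:
                break
            index[d] = 0
        else:
            return True  # wrapped around: all indices visited
```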
Yuanzhong Xu
fabf6ddede [XLA] An HLO pass that folds BF16/F32 conversions: if an HLO already supports BF16 input/output, conversions before/after it will be removed and the HLO's input/output types will be converted to BF16.
Also updates HloVerifier to allow mixed precision if requested. If an HLO has both F32 and BF16 inputs, ShapeInference will use F32 as the output type.

PiperOrigin-RevId: 185407143
2018-02-12 11:30:18 -08:00
A. Unique TensorFlower
7d1d459548 [XLA:TPU] Initial HLO parser/stringifier support for sparse formats
- Add methods for manipulating sparse literals to xla::Literal

- Make LayoutUtil::HumanString handle sparse layouts

- Make ShapeUtil::ParseShape handle sparse shapes

- Syntax for shapes has changed:

  - Old way of expressing layouts still works, e.g. f32[1,2,3]{2,1,0}

  - Can now make dense format explicit: f32[1,2,3]dense{2,1,0}

  - Can express sparse layouts; the max_sparse_elements value is in the
    braces, e.g.: f32[1,2,3]sparse{10}

  - The shape should not include braces for the layout if the shape is scalar;
    e.g. f32[]{} is not valid shape syntax.

  - The shape should not include braces for the layout if the shape is a dense
    rank-1 array; e.g. f32[10]{0} is not valid shape syntax

  - Sparse literals use a dictionary-like syntax, e.g.:
    f32[2,3,4]sparse{10} {[0,1,2]: 10, [1,2,3]: 11}

  - For rank-1 sparse arrays, the square brackets around indices may be omitted, e.g.:
    f32[100]sparse{10} {5: 10, 20: 30}

PiperOrigin-RevId: 181813837
2018-01-12 17:27:13 -08:00
A. Unique TensorFlower
7fd2c7a7f8 [XLA] Add format field to layout
Format will describe the method used to store array data in memory. Currently
only DENSE is supported, which represents the way XLA currently stores arrays.

Scalars have a DENSE format. Tuples and opaque shapes use INVALID_FORMAT.

Adds checks to code that uses minor_to_major to ensure the layout is dense.

PiperOrigin-RevId: 179475450
2017-12-18 15:18:59 -08:00
A. Unique TensorFlower
b115a9fc73 [XLA] Enhancement to source tensor indexing.
Change ElementalIrEmitter::ElementwiseSourceIndex to use the target index as
a source index for the case where the two tensors have the same shape but
different element types.
This improves the implementation of fusion kernels by avoiding the calculation
of the dimensional indices from the linear index for the source tensors.

PiperOrigin-RevId: 177036769
2017-11-27 10:29:08 -08:00
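The saving described above comes from skipping the per-element recovery of a multi-dimensional index from a linear index; that recovery is a chain of div/mod operations, which is exactly what reusing the target's index for a same-shaped source avoids. A sketch of the work being skipped (row-major assumption, illustrative only):

```python
def delinearize(linear, dims):
    """Row-major linear index -> multi-dimensional index (sketch)."""
    index = []
    for d in reversed(dims):
        index.append(linear % d)  # one mod per dimension...
        linear //= d              # ...and one divide per dimension
    return tuple(reversed(index))
```

When source and target share a shape (even with different element types), the target's already-computed multi-index can be used directly and this whole loop disappears from the fused kernel.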
A. Unique TensorFlower
f226eb3717 [XLA] Adds a C64 type to XLA, with actual compilation support coming soon.
PiperOrigin-RevId: 173172916
2017-10-23 14:54:43 -07:00
Chris Leary
c0a4c7ffc2 [XLA] Fix bug in ShapeUtil::ShapeIs that would lead to type inference errors.
PiperOrigin-RevId: 168323589
2017-09-11 20:14:33 -07:00
Chris Leary
8903e5fc72 [XLA] Make ShapeUtil::ParseShapeString more complete.
Handle tuples, nested tuples, and more element types.

PiperOrigin-RevId: 165826211
2017-08-19 16:36:37 -07:00
Kay Zhu
beeaade460 Resubmit a reverted change. Original description:
[XLA] Enable HloEvaluator for constant folding, also merged a few operations
from hlo_constant_folding to hlo_evaluator.

Additionally:
- In ShapeUtil::ForEachIndex:
    * fix a bug where visitor is called when the shape has zero elements (e.g., F32{1,0})
    * added test case for ForEachIndex.

- In HloEvaluator:
    * Instead of copying and caching a Constant instruction, return the literal directly if the instruction is constant.
    * Fix an issue where TUPLE and OPAQUE primitives are not keyed in the templated typed_visitor.
    * Use (fixed) LiteralUtil::Populate to populate resulting literal, fixes the preexisting bug in the evaluator where R0 and shape with zero size dimensions are not handled.
    * Refactor ElementWiseUnaryOp and HandleCompare to be templatized on the operand's type.
    * Refactor IsFinite to be top level since it is only applicable to floats and the return type is always boolean.
    * Change from std::remainder to std::fmod for kRemainder to be compliant with existing XLA behavior.
    * Change from std::max and std::min to std::fmax and std::fmin to handle NaNs.
    * Minor comments fix.

PiperOrigin-RevId: 158330052
2017-06-07 16:01:11 -07:00
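The std::fmod vs. std::remainder and fmax/fmin points above are easy to see concretely; a short sketch using Python's math module (the fmax helper is a sketch of the C fmax NaN contract, not XLA code):

```python
import math

# fmod keeps the dividend's sign; remainder rounds the quotient to
# nearest, so it can be negative even for positive operands.
assert math.fmod(5.0, 3.0) == 2.0
assert math.remainder(5.0, 3.0) == -1.0

def fmax(a, b):
    """C fmax-style NaN handling: prefer the non-NaN operand,
    unlike a naive comparison-based max."""
    if math.isnan(a):
        return b
    if math.isnan(b):
        return a
    return max(a, b)
```

The kRemainder change keeps XLA's existing fmod-style semantics, and the fmax/fmin change keeps NaN operands from silently winning min/max reductions.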
Mark Heffernan
05412bd367 [XLA] Simplify Shape traversal visitors.
Simplify shape traversal visitors in ShapeUtil and ShapeTree. Add a non-Status form because most uses of the traversal methods do not use it, and remove is_leaf parameter from ShapeTree.ForEach* as it is not frequently used.

PiperOrigin-RevId: 158201574
2017-06-06 15:46:30 -07:00
Kay Zhu
2ff1d7bf04 Automated g4 rollback of changelist 157174708
PiperOrigin-RevId: 157253080
2017-05-26 13:04:21 -07:00
Kay Zhu
405f70c6de [XLA] Enable HloEvaluator for constant folding, also merged a few operations
from hlo_constant_folding to hlo_evaluator.

Additionally:
- In ShapeUtil::ForEachIndex:
    * fix a bug where visitor is called when the shape has zero elements (e.g., F32{1,0})
    * added test case for ForEachIndex.

- In HloEvaluator:
    * Instead of copying and caching a Constant instruction, return the literal directly if the instruction is constant.
    * Fix an issue where TUPLE and OPAQUE primitives are not keyed in the templated typed_visitor.
    * Use (fixed) LiteralUtil::Populate to populate resulting literal, fixes the preexisting bug in the evaluator where R0 and shape with zero size dimensions are not handled.
    * Refactor ElementWiseUnaryOp and HandleCompare to be templatized on the operand's type.
    * Refactor IsFinite to be top level since it is only applicable to floats and the return type is always boolean.
    * Change from std::remainder to std::fmod for kRemainder to be compliant with existing XLA behavior.
    * Change from std::max and std::min to std::fmax and std::fmin to handle NaNs.
    * Minor comments fix.

- Disables constant_folding and reshape-motion for ClientLibraryTestBase so that constant folding does not affect the code paths intended to be exercised by the test. In the longer term we plan to change all Constants to Parameters and re-enable constant_folding in tests.

PiperOrigin-RevId: 157174708
2017-05-25 17:36:25 -07:00
A. Unique TensorFlower
7266ea5133 Generalize layout assignment for bitcast reshapes.
Currently, only special cases are detected where we can assign a layout so
that the reshape is a bitcast. This CL generalizes this and if it is possible
at all, a layout will be assigned so that the reshape is a bitcast.

PiperOrigin-RevId: 156162657
2017-05-16 03:23:00 -07:00
A. Unique TensorFlower
c83bb86589 Replaced user defined matchers by gmock matchers.
Change: 154420642
2017-04-27 07:47:50 -07:00
A. Unique TensorFlower
5c8acccfc9 Using GMock matchers in XLA tests.
Change: 152823724
2017-04-11 10:05:47 -07:00
Bjarke Hammersholt Roune
863bab3420 Improvements to HLO text format printing.
Change: 145374835
2017-01-23 21:42:46 -08:00
Eli Bendersky
c789fbe3cb [XLA] VLOG more details when ShapeUtil::Equal returns false
Also turns CompareShapes into an internal implementation function, since it's
only used by Equal. CompareShapes will VLOG what it doesn't like about lhs and
rhs, which should make the message VLOG-ed by Equal easier to grok.

Also adding a bit more testing for unequal shapes.

Shape::Equal is used in many places, not only tests. This CL provides more details in cases of CHECKs on Shape::Equal etc.
Change: 144841866
2017-01-18 09:26:41 -08:00
Peter Hawkins
1e67c90e2c Initial open-source release of XLA: Accelerated Linear Algebra.
XLA is a compiler-based linear algebra execution engine that targets CPUs, GPUs and custom accelerators.

XLA is still experimental; we are releasing it early to get the community involved.
Change: 143990941
2017-01-09 12:26:35 -08:00