STT-tensorflow/tensorflow/compiler/tf2xla/xla_compilation_device.h

/* Copyright 2017 The TensorFlow Authors. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
#ifndef TENSORFLOW_COMPILER_TF2XLA_XLA_COMPILATION_DEVICE_H_
#define TENSORFLOW_COMPILER_TF2XLA_XLA_COMPILATION_DEVICE_H_

#include <memory>

#include "tensorflow/core/common_runtime/local_device.h"
#include "tensorflow/core/framework/device_base.h"
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/lib/core/status.h"
#include "tensorflow/core/platform/mem.h"
#include "tensorflow/core/public/session_options.h"

namespace tensorflow {

// This class is defined in xla_compilation_device.cc; it is forward-declared
// here only so that XlaCompilationDevice's allocator_ member can be declared.
class XlaCompilationAllocator;
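
// Conceptually, this allocator hands out storage for exactly one
// XlaExpression per allocation, so a "Tensor" on this device is really a
// handle to a symbolic value. A minimal sketch of the idea (an assumption
// based on the .cc implementation, not a contract of this header;
// port::AlignedMalloc comes from tensorflow/core/platform/mem.h, included
// above):
//
//   void* XlaCompilationAllocator::AllocateRaw(size_t alignment,
//                                              size_t num_bytes) {
//     // Ignores num_bytes: every allocation holds a single XlaExpression.
//     void* p = port::AlignedMalloc(sizeof(XlaExpression), alignment);
//     return new (p) XlaExpression();  // placement-new a default expression
//   }
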
// This is a 'dummy' TensorFlow device that is only used to execute a
// subgraph of XLA compilation Ops to construct a compiled version
// of the subgraph's computation. It has a 'dummy' allocator that
// backs each Tensor with an XlaExpression. The shape of the Tensor
// matches the shape of the XlaExpression.
//
// We deliberately don't register a device factory because we *never*
// want placement to put Ops on a compilation device. The device is created
// manually, not using a factory.
//
// XLA compilation is not thread-safe. OpKernels registered on the
// XlaCompilationDevice must not use threads or concurrency.
class XlaCompilationDevice : public LocalDevice {
 public:
  XlaCompilationDevice(const SessionOptions& options, DeviceType type);

  ~XlaCompilationDevice() override;

  Allocator* GetAllocator(AllocatorAttributes attr) override;

  void Compute(OpKernel* op_kernel, OpKernelContext* context) override;

  Status Sync() override;

  Status MakeTensorFromProto(const TensorProto& tensor_proto,
                             const AllocatorAttributes alloc_attrs,
                             Tensor* tensor) override;

 private:
  std::unique_ptr<XlaCompilationAllocator> allocator_;
};
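
// A minimal usage sketch (hypothetical, for illustration only; it mirrors
// how XlaCompiler constructs the device by hand, since no factory is
// registered). DEVICE_CPU_XLA_JIT is assumed to come from
// tensorflow/compiler/tf2xla/xla_op_registry.h:
//
//   SessionOptions options;
//   std::unique_ptr<XlaCompilationDevice> device(
//       new XlaCompilationDevice(options, DeviceType(DEVICE_CPU_XLA_JIT)));
//   device->Compute(op_kernel, op_context);  // builds XLA ops; moves no data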

}  // namespace tensorflow

#endif  // TENSORFLOW_COMPILER_TF2XLA_XLA_COMPILATION_DEVICE_H_