STT-tensorflow/tensorflow/compiler/aot/compile.h
Eugene Brevdo b26e1efece [TF] Fixes and new features to saved_model_cli aot_compile_cpu, adds e2e test.
* Can now specify which variables can be fed/fetched.
* Bugfix when the signature name contains slashes or starts with a digit.
* Prune input config entries from the tf2xla config when graph freezing removes
  an unused input feed.
* Fixed a bug where third_party/tensorflow/ wasn't properly renamed to tensorflow/
  in the open-source HOST build (identified by the new genrule test).
  Solution: bring back the hardcoded #include in codegen.cc; it is always correct.

NOTE: The bugfix to the #include line in the compiler/ codebase is a partial rollback
of the initial tfcompile + saved_model_cli CL, which moved from the hardcoded
include path to a parameterized value.  It turns out we don't need the complexity
of that approach, and it is incorrect in the host open-source build.

TESTED:

Includes a bona fide genrule test that runs saved_model_cli to generate the
header and object files, includes them in a C++ unit test, and ensures that
they compile and that the resulting object runs correctly.

PiperOrigin-RevId: 290655683
Change-Id: I4cfa2c595ebe56f8bdd47853f82371d97b92b081
2020-01-20 15:58:44 -08:00
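
The genrule test exercises the generated header the way any client would: the
header declares a small C++ class (named by --cpp_class) with typed argument
and result accessors. Below is a minimal sketch of such a unit test; the
include path, the class name MyComputation, and the argument/result shapes are
hypothetical and depend on the SavedModel signature being compiled.

#include <iostream>

// Hypothetical generated header; the real path and class name come from the
// --output_prefix and --cpp_class options passed to saved_model_cli.
#include "my_model/compiled_model.h"

int main() {
  // The generated class allocates its own argument and result buffers by
  // default.
  MyComputation computation;

  // arg0 corresponds to the first feed of the compiled signature; the rank
  // and dtype depend on the model, so these indices are illustrative only.
  computation.arg0(0, 0) = 1.0f;
  computation.arg0(0, 1) = 2.0f;

  // Run() executes the AOT-compiled function and returns false on error.
  if (!computation.Run()) {
    std::cerr << computation.error_msg() << std::endl;
    return 1;
  }

  // result0 is the first fetch of the compiled signature.
  std::cout << computation.result0(0, 0) << std::endl;
  return 0;
}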

/* Copyright 2017 The TensorFlow Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
#ifndef TENSORFLOW_COMPILER_AOT_COMPILE_H_
#define TENSORFLOW_COMPILER_AOT_COMPILE_H_

#include <memory>
#include <string>

#include "tensorflow/compiler/aot/flags.h"
#include "tensorflow/compiler/tf2xla/tf2xla.pb.h"
#include "tensorflow/compiler/xla/service/cpu/cpu_compiler.h"
#include "tensorflow/compiler/xla/xla_data.pb.h"
#include "tensorflow/core/framework/graph.pb.h"

namespace tensorflow {
namespace tfcompile {

// CompileResult describes the output of CompileGraph, where the object file
// data and meta-information is available in aot.
struct CompileResult {
  // Contains object file and meta-info.
  std::unique_ptr<xla::cpu::CpuAotCompilationResult> aot;
  xla::ProgramShapeProto program_shape;  // Static shape of args and results.
  string entry_point;                    // Name of generated function.
  int pointer_size = 0;                  // Size of a pointer in bytes.
};

// CompileGraph compiles the graph_def into an object file containing a
// function that performs the graph operations.
//
// The XLA compilation options are specified in the flags.
Status CompileGraph(GraphDef graph_def, const tf2xla::Config& config,
                    const MainFlags& flags, CompileResult* compile_result);

// The full compilation method, for reuse in a library setting.
Status Main(const MainFlags& flags);

}  // namespace tfcompile
}  // namespace tensorflow

#endif  // TENSORFLOW_COMPILER_AOT_COMPILE_H_
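
For library use (the "library setting" mentioned above), a caller builds a
tf2xla::Config describing the feeds and fetches, fills in MainFlags, and calls
CompileGraph directly. A minimal sketch follows; the feed/fetch node names
("input"/"output"), the target triple, and the wrapper CompileFrozenGraph are
assumptions for illustration, not part of this header.

#include "tensorflow/compiler/aot/compile.h"

#include "tensorflow/compiler/aot/flags.h"
#include "tensorflow/compiler/tf2xla/tf2xla.pb.h"
#include "tensorflow/core/framework/graph.pb.h"
#include "tensorflow/core/lib/core/errors.h"
#include "tensorflow/core/platform/logging.h"

namespace tensorflow {
namespace tfcompile {

// Sketch only: compile a frozen GraphDef ahead of time.  The feed/fetch node
// names and flag values are placeholders, not part of compile.h itself.
Status CompileFrozenGraph(const GraphDef& graph_def) {
  tf2xla::Config config;
  config.add_feed()->mutable_id()->set_node_name("input");    // graph argument
  config.add_fetch()->mutable_id()->set_node_name("output");  // graph result

  MainFlags flags;
  flags.entry_point = "entry";              // name of the generated function
  flags.target_triple = "x86_64-pc-linux";  // LLVM target triple to emit for

  CompileResult compile_result;
  TF_RETURN_IF_ERROR(CompileGraph(graph_def, config, flags, &compile_result));

  LOG(INFO) << "Compiled " << compile_result.entry_point << ": "
            << compile_result.aot->object_file_data().size()
            << " bytes of object code";
  return Status::OK();
}

}  // namespace tfcompile
}  // namespace tensorflow

Main(), by contrast, runs the same pipeline end to end for the tfcompile
binary: it reads the graph and tf2xla config from the paths given in
MainFlags, calls CompileGraph, and writes out the object file, metadata, and
generated header.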