STT-tensorflow/tensorflow/tools/api/golden/v1/tensorflow.test.pbtxt
Eugene Brevdo 9959c04433 [TF XLA] Add ability to convert SavedModel subgraphs to compiled [XLA CPU] objects via saved_model_cli.
You can now run, e.g.:

saved_model_cli aot_compile_cpu \
  --dir /path/to/saved_model \
  --tag_set serve \
  --signature_def_key action \
  --output_prefix /tmp/out \
  --cpp_class Serving::Action

This will create the files:
  /tmp/{out.h, out.o, out_metadata.o, out_makefile.inc}

where out.h defines something like:

namespace Serving {
  class Action {
    ...
  };
}

and out_makefile.inc provides the additional flags required to include the header
and link the object files into your build.

You can also point aot_compile_cpu at a newer set of checkpoints (weight values) via the optional argument --checkpoint_path.

Also added `tf.test.is_built_with_xla()`.
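
As an illustration (a minimal sketch, not from this change; it assumes a TF 1.x build where tf.test.TestCase and the new predicate are available), a test can skip itself when XLA support is absent:

import tensorflow as tf

class XlaOnlyTest(tf.test.TestCase):

  def test_requires_xla(self):
    # Skip unless the installed TensorFlow was built with XLA support.
    if not tf.test.is_built_with_xla():
      self.skipTest("TensorFlow was not built with XLA")
    # XLA-dependent assertions would go here.

if __name__ == "__main__":
  tf.test.main()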

TESTED:
* bazel test -c opt :saved_model_cli_test passes
* built and installed the pip wheel and tested in the bazel directory via:
  TEST_SRCDIR=/tmp/tfcompile/bazel-bin/tensorflow/python/tools/saved_model_cli_test.runfiles/ python saved_model_cli_test.py

and checked the output files to ensure the proper includes and header directories are
set in out_makefile.inc and out.h.

PiperOrigin-RevId: 290144104
Change-Id: If8eb6c3334b3042c4b9c24813b1b52c06d8fbc06
2020-01-16 14:26:12 -08:00

path: "tensorflow.test"
tf_module {
  member {
    name: "Benchmark"
    mtype: "<class \'tensorflow.python.platform.benchmark._BenchmarkRegistrar\'>"
  }
  member {
    name: "StubOutForTesting"
    mtype: "<type \'type\'>"
  }
  member {
    name: "TestCase"
    mtype: "<type \'type\'>"
  }
  member {
    name: "mock"
    mtype: "<type \'module\'>"
  }
  member_method {
    name: "assert_equal_graph_def"
    argspec: "args=[\'actual\', \'expected\', \'checkpoint_v2\', \'hash_table_shared_name\'], varargs=None, keywords=None, defaults=[\'False\', \'False\'], "
  }
  member_method {
    name: "benchmark_config"
    argspec: "args=[], varargs=None, keywords=None, defaults=None"
  }
  member_method {
    name: "compute_gradient"
    argspec: "args=[\'x\', \'x_shape\', \'y\', \'y_shape\', \'x_init_value\', \'delta\', \'init_targets\', \'extra_feed_dict\'], varargs=None, keywords=None, defaults=[\'None\', \'0.001\', \'None\', \'None\'], "
  }
  member_method {
    name: "compute_gradient_error"
    argspec: "args=[\'x\', \'x_shape\', \'y\', \'y_shape\', \'x_init_value\', \'delta\', \'init_targets\', \'extra_feed_dict\'], varargs=None, keywords=None, defaults=[\'None\', \'0.001\', \'None\', \'None\'], "
  }
  member_method {
    name: "create_local_cluster"
    argspec: "args=[\'num_workers\', \'num_ps\', \'protocol\', \'worker_config\', \'ps_config\'], varargs=None, keywords=None, defaults=[\'grpc\', \'None\', \'None\'], "
  }
  member_method {
    name: "get_temp_dir"
    argspec: "args=[], varargs=None, keywords=None, defaults=None"
  }
  member_method {
    name: "gpu_device_name"
    argspec: "args=[], varargs=None, keywords=None, defaults=None"
  }
  member_method {
    name: "is_built_with_cuda"
    argspec: "args=[], varargs=None, keywords=None, defaults=None"
  }
  member_method {
    name: "is_built_with_gpu_support"
    argspec: "args=[], varargs=None, keywords=None, defaults=None"
  }
  member_method {
    name: "is_built_with_rocm"
    argspec: "args=[], varargs=None, keywords=None, defaults=None"
  }
  member_method {
    name: "is_built_with_xla"
    argspec: "args=[], varargs=None, keywords=None, defaults=None"
  }
  member_method {
    name: "is_gpu_available"
    argspec: "args=[\'cuda_only\', \'min_cuda_compute_capability\'], varargs=None, keywords=None, defaults=[\'False\', \'None\'], "
  }
  member_method {
    name: "main"
    argspec: "args=[\'argv\'], varargs=None, keywords=None, defaults=[\'None\'], "
  }
  member_method {
    name: "test_src_dir_path"
    argspec: "args=[\'relative_path\'], varargs=None, keywords=None, defaults=None"
  }
}
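
For reference, a minimal usage sketch (illustrative only, not part of the golden file; it assumes a TF 1.x build matching this v1 API surface) showing how the gradient-checking helpers listed above are typically exercised from a tf.test.TestCase:

import tensorflow as tf

class SquareGradientTest(tf.test.TestCase):

  def test_square_gradient(self):
    # compute_gradient_error compares the analytic and numeric Jacobians
    # of y with respect to x and returns the maximum absolute difference.
    with self.session():
      x = tf.constant([1.0, 2.0, 3.0], dtype=tf.float64)
      y = tf.square(x)
      err = tf.test.compute_gradient_error(x, [3], y, [3])
    self.assertLess(err, 1e-4)

if __name__ == "__main__":
  tf.test.main()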