While at it, expose the associated header files from tensorflow/c/ in the pip package.
Note that we expose only the subset of the C API that doesn't require tensorflow/cc linkage:
specifically the core operations, excluding while-loop construction, gradient ops,
and the experimental API.
The experimental API can be added in the future by factoring it into
"core" and "non-core" targets; the same applies to the C eager API (a hypothetical consumer of the exposed headers is sketched below).
PiperOrigin-RevId: 301601988
Change-Id: I97eac79e684fc42ce90e67ee901cdcf6f7e91cbe
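To illustrate what the exposed subset covers, a hypothetical downstream consumer of these headers might look roughly like the sketch below; it uses only the core C API, so it needs no tensorflow/cc linkage. The include path and link flags depend on how TensorFlow is installed (e.g. linking against libtensorflow), so treat the build details as assumptions.

// consumer.cc -- hypothetical sketch, not part of this change.
#include <cstdio>

#include "tensorflow/c/c_api.h"

int main() {
  std::printf("TensorFlow C library version: %s\n", TF_Version());

  TF_Status* status = TF_NewStatus();
  TF_Graph* graph = TF_NewGraph();

  // ... add operations to `graph` via TF_NewOperation/TF_FinishOperation ...

  TF_DeleteGraph(graph);
  TF_DeleteStatus(status);
  return 0;
}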
While at it, expose the associated header files from tensorflow/c/ in the pip package.
Note that we expose only the subset of the C API that doesn't require tensorflow/cc linkage:
specifically the core operations, excluding while-loop construction, gradient ops,
and the experimental API.
The experimental API can be added in the future by factoring it into
"core" and "non-core" targets; the same applies to the C eager API.
PiperOrigin-RevId: 301430667
Change-Id: I5ae7f3cedfe9dc72184d39ef1147193450c3d92e
Fixes #33758
Downstream projects depending on TensorFlow: if bazel complains, please replace `@zlib_archive` with `@zlib` and `@grpc` with `@com_github_grpc_grpc` in your WORKSPACE.
PiperOrigin-RevId: 295824868
Change-Id: If2259d59e9d82543369e5670916b1398374c9889
This package prefix is used in open source Kokoro pip testing.
This fixes errors like this one:
target '//tensorflow/compiler/tests:xla_test' is not visible from target '//bazel_pip/tensorflow/compiler/tests:reshape_op_test_gpu'
PiperOrigin-RevId: 295754319
Change-Id: Id1d2f0c55df64a0505fa83db2961e06b33037323
You can now run, e.g.:
saved_model_cli aot_compile_cpu \
--dir /path/to/saved_model \
--tag_set serve \
--signature_def_key action \
--output_prefix /tmp/out \
--cpp_class Serving::Action
This will create the following files:
/tmp/{out.h, out.o, out_metadata.o, out_makefile.inc}
where out.h defines something like:
namespace Serving {
class Action {
  ...
};
}  // namespace Serving
and out_makefile.inc provides the additional flags required to incorporate the header
and object files into your build.
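As a rough, hypothetical usage sketch of the generated header (assuming, purely for illustration, a single 4-element float input and a single float output; the real buffer shapes, and any named accessors, depend on the SavedModel signature):

#include <cstdio>
#include <cstring>

#include "out.h"  // the include path comes from out_makefile.inc

int main() {
  Serving::Action computation;

  // Copy input values into the argument buffer owned by the compiled function
  // (a 4-element float input is assumed here for illustration).
  const float input[4] = {1.f, 2.f, 3.f, 4.f};
  std::memcpy(computation.arg_data(0), input, sizeof(input));

  if (!computation.Run()) {
    std::fprintf(stderr, "%s\n", computation.error_msg());
    return 1;
  }

  const float* output = static_cast<const float*>(computation.result_data(0));
  std::printf("first output value: %f\n", output[0]);
  return 0;
}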
You can also point aot_compile_cpu at a newer set of checkpoints (weight values) via the optional --checkpoint_path argument.
Also added `tf.test.is_built_with_xla()`.
TESTED:
* bazel test -c opt :saved_model_cli_test passes
* built and installed the pip wheel and tested in the bazel directory via:
TEST_SRCDIR=/tmp/tfcompile/bazel-bin/tensorflow/python/tools/saved_model_cli_test.runfiles/ python saved_model_cli_test.py
and checked the output files to ensure the proper include and header directories are
set in out_makefile.inc and out.h.
PiperOrigin-RevId: 290144104
Change-Id: If8eb6c3334b3042c4b9c24813b1b52c06d8fbc06
TensorFlow version 1 or version 2 API. Also, a minor change to the way the root_init_template flag is passed in (it should now be a location path instead of a file name).
PiperOrigin-RevId: 286729526
Change-Id: I55ebaa0cfe0fe3db3f4d1e699082b1f7b11df4da
This is part of the refactoring described in the TensorFlow Build Improvements RFC: https://github.com/tensorflow/community/pull/179
Subsequent changes will migrate targets from build_refactor.bzl into the new BUILD files.
PiperOrigin-RevId: 284712709
Change-Id: I650eb200ba0ea87e95b15263bad53b0243732ef5
Bazel's change to legacy_whole_archive behavior is not the cause of TF's linking issues with protobuf. Protobuf's implementation and runtime are correctly linked into TF here: da5765ebad/tensorflow/core/platform/default/build_config.bzl (L239) and da5765ebad/third_party/protobuf/protobuf.patch (L18), and I've confirmed via nm that protobuf symbols are still present in libtensorflow_framework.so.
After examining the linker flags that bazel passes to gcc (https://gist.github.com/bmzhao/f51bbdef50e9db9b24acd5b5acc95080), I discovered that the order of the linker flags was causing the undefined reference.
See https://eli.thegreenplace.net/2013/07/09/library-order-in-static-linking/ and https://stackoverflow.com/a/12272890. Basically, linkers discard objects they've been asked to link if those objects do not export any symbols that the linker is currently tracking as undefined.
To prove this was the issue, I was able to link successfully by moving the shared-object flag (-l:libtensorflow_framework.so.2) to the bottom of the flag order and manually invoking g++.
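As a sketch of the ordering effect with made-up names (not the actual TF flags or symbols):

// main.cc -- hypothetical reproduction of the link-order sensitivity.
// Suppose framework_version() is exported by libframework.so.2. With the flag
// order bazel originally emitted, the -l flag preceded the object files that
// reference the symbol and the link failed with an undefined reference;
// placing it after them, e.g.
//   g++ main.o -l:libframework.so.2
// resolved the symbol.
extern "C" const char* framework_version();  // provided by the shared library

int main() { return framework_version() != nullptr ? 0 : 1; }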
This change uses cc_import to link against a .so via the "deps" of tf_cc_binary, rather than via its "srcs". This technique was inspired by the comment here: 387c610d09/examples/windows/dll/windows_dll_library.bzl (L47-L48)
Successfully built on a vanilla Ubuntu 18.04 VM:
bmzhao@bmzhao-tf-build-failure-reproing:~/tf-fix/tf$ bazel build -c opt --config=cuda --config=v2 --host_force_python=PY3 //tensorflow/tools/pip_package:build_pip_package
Target //tensorflow/tools/pip_package:build_pip_package up-to-date:
bazel-bin/tensorflow/tools/pip_package/build_pip_package
INFO: Elapsed time: 2067.380s, Critical Path: 828.19s
INFO: 12942 processes: 51 remote cache hit, 12891 local.
INFO: Build completed successfully, 14877 total actions
The root cause might instead be https://github.com/bazelbuild/bazel/issues/7687, which is pending further investigation.
PiperOrigin-RevId: 281341817
Change-Id: Ia240eb050d9514ed5ac95b7b5fb7e0e98b7d1e83
This should help prevent cases where changes are committed to the codebase with
dangling references to things that don't exist.
PiperOrigin-RevId: 281200606
Change-Id: Ic298a1e051687099d9aa696cacb0dc2de062b600
simplify migration. Just importing compat.v2 and compat.v1 to create these modules would cause cycles. As a workaround, I add explicit __init__.py files under the compat/vN/compat/vK and compat/vN/compat/vK/compat modules.
PiperOrigin-RevId: 276154099
Change-Id: I3cae6f58759b63e9d1e9eb4ff18c59dcfff84b81