This change is a second attempt at #38964, which was rolled back because it was fragile.
First, cuda_configure.bzl templates a file with data it already pulled from get_cuda_config. gen_build_info loads that file to provide package
build information within TensorFlow:
from tensorflow.python.platform import build_info
print(build_info.build_info)
{'cuda_version': '10.2', 'cudnn_version': '7', ... }
The dictionary is also exposed through tf.sysconfig.get_build_info(), a public API change.
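A minimal sketch of consuming the new public API (illustrative; the exact keys present depend on how the wheel was built):

```python
# Sketch: querying build metadata through the public API.
# Keys such as 'cuda_version' only appear on builds that set them.
import tensorflow as tf

info = tf.sysconfig.get_build_info()
print(info.get("cuda_version"))   # e.g. '10.2' on a GPU build
print(info.get("cudnn_version"))  # e.g. '7'
```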
setup.py pulls build_info into package metadata. The wheel's
long description ends with:
TensorFlow 2.2.0 for NVIDIA GPUs was built with these platform
and library versions:
- NVIDIA CUDA 10.2
- NVIDIA cuDNN 7
- NVIDIA CUDA Compute Capabilities compute_30, compute_70 (etc.)
I set one of the new CUDA classifiers and add the same metadata to the wheel's "platform" tag:
>>> import pkginfo
>>> a = pkginfo.Wheel('./tf_nightly_gpu-2.1.0-cp36-cp36m-linux_x86_64.whl')
>>> a.platforms
['cuda_version:10.2', 'cudnn_version:7', ...]
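For illustration, a hedged sketch of how that metadata could be attached using standard setuptools fields (this is not the actual setup.py; the classifier string and values are examples):

```python
# Sketch only: attaching build_info to wheel metadata via standard setuptools fields.
# The real setup.py derives these values from the generated build_info module.
from setuptools import setup

from tensorflow.python.platform import build_info

setup(
    name="tf_nightly_gpu",
    # ... other arguments elided ...
    classifiers=[
        "Environment :: GPU :: NVIDIA CUDA :: 10.2",  # one of the new CUDA classifiers
    ],
    platforms=[
        "{}:{}".format(key, value) for key, value in build_info.build_info.items()
    ],
)
```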
I'm not 100% confident this is the best way to accomplish this. It
still seems odd to import build_info in setup.py like this, even though it works, even in
an environment that already has TensorFlow installed. This method is much better than the old one because it reuses data that was already gathered. It could be extended to gather tensorrt, nccl, etc. from other .bzl files, but I wanted to get feedback (and ensure this lands in 2.3) before designing something like that.
Currently tested only on Linux GPU (Remote Build) for Python 3.6. I'd
like to see more tests before merging.
The API is the same as the earlier change.
Resolves https://github.com/tensorflow/tensorflow/issues/38351.
PiperOrigin-RevId: 315018663
Change-Id: Idf68a8fe4d1585164d22b5870894c879537c280d
The local gen_build_info rule calls into find_cuda_config, which only works in the remote image.
This is additionally brittle: relying on TF_CUDA_VERSION being an action_env is poisoning our caches, and running find_cuda_config multiple times is bug-prone.
I think the better way to do this is to put the information from the repo_rule into a file template as part of the repo rule configuration (cuda_configure.bzl). Then we can just include that file, instead of trying to do that as part of the action.
PiperOrigin-RevId: 311148754
Change-Id: I80daa8652a85b2a1897c15117e6422bfd21cee6a
Since this module now generates a dictionary to expose in tf.config, it
doesn't make much sense to store only certain values in the build_info
dictionary and others as module variables. This obsoletes a lot of code
in gen_build_info.py and I've removed it.
I also updated all the in-code references I've found to the build_info
module. I think this may break whoever was using the build_info
library, but since it wasn't part of the public API, there was no guarantee
that it would continue to be available.
If pybind11 is installed on the system, its headers are already captured
by @local_config_python//:python_headers, so the system lib only needs
to depend on that.
When pybind11 is installed correctly, includes should be of the form
#include "pybind11/...". The bundled pybind11 is based on the source repo,
whose layout does not match the install paths. Use Bazel's strip_include_prefix
to align the bundled headers correctly.
Signed-off-by: Jason Zaman <jason@perfinion.com>
As of pypi nightly 20200215, the includes/ directory in the tensorflow{,_core}
site-packages is missing/incomplete.
This is due to the removal of the virtual tensorflow package pointing to the tensorflow_core
package without a corresponding update to sysconfig.py or setup.py/MANIFEST.in.
This CL fixes that.
PiperOrigin-RevId: 295761153
Change-Id: I51e21dbf40f4c9b54a98978cfa3e0b5fbcb4bc61
Exercise the code paths it triggers.
Disable it on Windows and in pip packages for now.
PiperOrigin-RevId: 293910406
Change-Id: Ie84a3a85ff1f471e3d7ed4b192701667481e8324
You can now run, e.g.:
saved_model_cli aot_compile_cpu \
--dir /path/to/saved_model \
--tag_set serve \
--signature_def_key action \
--output_prefix /tmp/out \
--cpp_class Serving::Action
Which will create the files:
/tmp/{out.h, out.o, out_metadata.o, out_makefile.inc}
where out.h defines something like:
namespace Serving {
class Action {
...
}
}
and out_makefile.inc provides the additional flags required to include the header
and object files into your build.
You can also point aot_compile_cpu at a newer set of checkpoints (weight values) by using the optional argument --checkpoint_path.
Also added `tf.test.is_built_with_xla()`.
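For example, the new predicate can gate XLA-dependent tests (a sketch; the test name and body are illustrative):

```python
# Sketch: skip an XLA-dependent test when TensorFlow was built without XLA support.
import unittest

import tensorflow as tf


class AotCompileTest(tf.test.TestCase):

  @unittest.skipUnless(tf.test.is_built_with_xla(),
                       "requires a TensorFlow build with XLA enabled")
  def test_aot_compile_cpu(self):
    # ... exercise saved_model_cli aot_compile_cpu here ...
    pass


if __name__ == "__main__":
  tf.test.main()
```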
TESTED:
* bazel test -c opt :saved_model_cli_test passes
* built and installed the pip wheel and tested in the bazel directory via:
TEST_SRCDIR=/tmp/tfcompile/bazel-bin/tensorflow/python/tools/saved_model_cli_test.runfiles/ python saved_model_cli_test.py
and checking the output files to ensure the proper includes and header directories are
set in out_makefile.inc and out.h.
PiperOrigin-RevId: 290144104
Change-Id: If8eb6c3334b3042c4b9c24813b1b52c06d8fbc06
This is part of a larger effort to deprecate SWIG and eventually, with
modularization, break pywrap_tensorflow into smaller components.
Please refer to https://github.com/tensorflow/community/blob/master/rfcs/20190208-pybind11.md
for more information.
PiperOrigin-RevId: 286474536
Change-Id: Ic942a4480b1c1a19bdc3d6b65d3272221e47537b
This is mostly the result of an internal cleanup and formatting pass.
PiperOrigin-RevId: 286318018
Change-Id: I8f9e2f7519070035da73f9f24d2fc90864abc51b
This is part of a larger effort to deprecate SWIG and eventually, with
modularization, break pywrap_tensorflow into smaller components.
Please refer to https://github.com/tensorflow/community/blob/master/rfcs/20190208-pybind11.md
for more information.
PiperOrigin-RevId: 286302183
Change-Id: I4baf4a2628d46d7bdf3aa2916fb6f980a3c99abe
This is part of a larger effort to deprecate SWIG and eventually, with
modularization, break pywrap_tensorflow into smaller components.
Please refer to https://github.com/tensorflow/community/blob/master/rfcs/20190208-pybind11.md
for more information.
PiperOrigin-RevId: 286181704
Change-Id: I06e92ec7bc945c4efd69de85ef8b9e4de8007bf4
Resolves https://github.com/tensorflow/tensorflow/issues/35036
For TensorFlow 2.1.0rc1, the TensorFlow team built Windows packages with Microsoft Visual Studio 2019 16.4, upgraded from Visual Studio 2017. As discovered in the issue linked above, this caused an import error for Windows TF Python whls, because the build upgrade pulled in an additional Visual C++ DLL dependency, `msvcp140_1.dll`, which can be found in the latest Visual C++ package for all Visual Studio releases since 2015 (https://support.microsoft.com/en-us/help/2977003/the-latest-supported-visual-c-downloads).
I discovered the missing DLL by unpacking the two wheels for rc0 and rc1 and separately running `dumpbin /DEPENDENTS tensorflow_core/python/_pywrap_tensorflow_internal.pyd` (thanks to @yifeif for help with this!).
In this change, I've updated the import-time checker to look for both `msvcp140_1.dll` and `msvcp140.dll` in a way that supports simple future additions to the list.
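A hedged sketch of the shape of such a check (illustrative only; the real checker lives in TensorFlow's import-time self-check, and the helper names here are hypothetical):

```python
# Sketch: verify required MSVC runtime DLLs are loadable before importing the
# native extension; the list can simply be extended for future additions.
import ctypes

REQUIRED_MSVCP_DLLS = ["msvcp140.dll", "msvcp140_1.dll"]


def _can_load(dll_name):
  try:
    ctypes.WinDLL(dll_name)  # Windows-only; raises OSError if the DLL is absent
    return True
  except OSError:
    return False


def check_msvcp_dlls():  # hypothetical helper name
  missing = [name for name in REQUIRED_MSVCP_DLLS if not _can_load(name)]
  if missing:
    raise ImportError(
        "Could not load: %s. Install the latest Visual C++ redistributable "
        "(see https://support.microsoft.com/en-us/help/2977003)." % ", ".join(missing))
```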
PiperOrigin-RevId: 285476568
Change-Id: Ia9727e50801a4ddad1ea30653a74478fb7aee4e8
This fix tries to address the issue raised in 33799,
where running TensorFlow on Python 3.8 (Ubuntu 18.04)
raised the following error:
```
TypeError: _logger_find_caller() takes from 0 to 1 positional arguments but 2 were given
```
The issue was that the signature of `Logger.findCaller` changed in Python 3.8: it now takes an additional `stacklevel` argument.
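A sketch of the kind of version guard involved (illustrative only; `_get_caller` is a stand-in for the real frame-walking helper in tf_logging.py):

```python
# Illustrative sketch: accept the extra `stacklevel` argument that
# Logger.findCaller gained in Python 3.8. `_get_caller` is a stand-in helper.
import sys


def _get_caller(stack_info=False):
  frame = sys._getframe(3)  # skip the logging machinery frames
  return (frame.f_code.co_filename, frame.f_lineno, frame.f_code.co_name, None)


if sys.version_info >= (3, 8):
  def _logger_find_caller(stack_info=False, stacklevel=1):
    return _get_caller(stack_info)
else:
  def _logger_find_caller(stack_info=False):
    return _get_caller(stack_info)
```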
This PR fixes the issue.
This PR fixes 33799
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
In addition, implemented a ParameterizedBenchmark class so that we no longer have
to write benchmark functions for every entry in the configuration matrix. For example,
tensorflow/python/eager/benchmarks_test.py is very bloated because of this.
PiperOrigin-RevId: 268315479
This fix tries to address the issue raised in 30633 where
`tf.sysconfig.get_link_flags` on macOS returned
'-l:libtensorflow_framework.1.dylib', which is not valid
for ld on macOS.
This fix changes the flag to `-ltensorflow_framework.1`.
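For reference, a hedged example of what the corrected flags look like when queried (the library path and minor version are illustrative):

```python
# Illustrative only: the exact path and version depend on the installed wheel.
import tensorflow as tf

print(tf.sysconfig.get_link_flags())
# Expected on macOS after this fix, something along the lines of:
#   ['-L/.../site-packages/tensorflow', '-ltensorflow_framework.1']
```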
This fix fixes 30633.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
This routine will be used in subsequent PRs to make ROCm-specific changes to certain unit tests. The nature of these changes will be primarily to disable certain subtests that exercise features not yet supported on the ROCm platform (e.g. complex data types, 3D pooling, etc.).
A `test.is_built_with_gpu_support` routine is also being added, which is essentially "is_built_with_cuda() or is_built_with_rocm()".
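A sketch of how the predicates might be used to gate subtests (the test names and bodies here are illustrative, not actual TensorFlow tests):

```python
# Sketch: gate ROCm-unsupported subtests and GPU-only tests.
import tensorflow as tf


class PoolingTest(tf.test.TestCase):

  def test_pool_3d(self):
    if tf.test.is_built_with_rocm():
      self.skipTest("3D pooling is not yet supported on the ROCm platform")
    # ... test body shared by CPU and CUDA builds ...

  def test_gpu_only_path(self):
    if not tf.test.is_built_with_gpu_support():
      self.skipTest("requires a GPU build (CUDA or ROCm)")
    # ... GPU-only test body ...


if __name__ == "__main__":
  tf.test.main()
```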
We have wrapped all CUDA calls with dlopen, and our shared library no longer explicitly depends on CUDA libraries. This is in preparation for consolidating our two pip packages into one.
PiperOrigin-RevId: 252697407
When making the virtual pip changes, moving code to a new directory meant that required headers and the shared object were no longer visible where they should be. This should fix that.
PiperOrigin-RevId: 252100366
This change allows users to depend on TF_Status and its associated constants
(e.g. TF_OK, TF_INVALID_ARGUMENT, etc) without bringing in the whole C API.
The first intended user is the C op definition API, which cannot depend on the
C API, because the C API itself depends on op definitions, causing circular
dependencies.
PiperOrigin-RevId: 248816896
This fix tries to address the issue raised in 27848, where
the frame numbers in tf_logging.warn are incorrect in Python 3.
The reason is that in Python 2, `warn = warning`, but in Python 3
an additional wrapper was added to deprecate `warn`. That wrapper
adds one extra stack frame, which caused incorrect output in the log.
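A small standalone illustration of why the wrapper matters (not the TensorFlow code): a plain alias adds no stack frame, while a wrapper adds one, so a frame-counting caller lookup must skip one more frame.

```python
# Illustrative only: an alias keeps the caller one frame away,
# a deprecation-style wrapper pushes it one frame further.
import sys


def warning(msg):
  caller = sys._getframe(1).f_code.co_name  # who called the logging function?
  print("warning from %s: %s" % (caller, msg))


warn_alias = warning            # Python 2 style: `warn = warning`


def warn_wrapper(msg):          # Python 3 style: extra wrapper around warning()
  return warning(msg)


def application_code():
  warn_alias("a")    # reports "application_code"
  warn_wrapper("b")  # reports "warn_wrapper" unless one extra frame is skipped


application_code()
```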
This fix fixes 27848.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>