This change is a second attempt at #38964, which was rolled back because it was fragile.
First, cuda_configure.bzl templates a file with data it has already pulled from get_cuda_config. gen_build_info loads that file to provide package
build information within TensorFlow:
>>> from tensorflow.python.platform import build_info
>>> build_info.build_info
{'cuda_version': '10.2', 'cudnn_version': '7', ...}
The data is also exposed through tf.sysconfig.get_build_info(), a new public API.
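As a minimal sketch of the mechanism (names, template, and file path are hypothetical; the real generation happens in gen_build_info at build time), the generated module is essentially a templated dict:

```python
# Hypothetical sketch of a gen_build_info-style generator: it writes a
# Python module exposing a single dict of build metadata. Keys mirror
# the example output above; values here are placeholders.

TEMPLATE = """\
# DO NOT EDIT: generated at build time.
build_info = {build_info!r}
"""

def write_build_info(path, cuda_version, cudnn_version):
    # Collect the versions already known to the build configuration
    # and render them into an importable module.
    info = {
        "cuda_version": cuda_version,
        "cudnn_version": cudnn_version,
    }
    with open(path, "w") as f:
        f.write(TEMPLATE.format(build_info=info))

write_build_info("build_info.py", "10.2", "7")
```

Importing the generated module then yields the dict directly, with no runtime probing of the CUDA installation.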
setup.py pulls build_info into package metadata. The wheel's
long description ends with:
TensorFlow 2.2.0 for NVIDIA GPUs was built with these platform
and library versions:
- NVIDIA CUDA 10.2
- NVIDIA cuDNN 7
- NVIDIA CUDA Compute Capabilities compute_30, compute_70 (etc.)
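The long-description suffix above could be assembled from the build_info dict roughly like this (a sketch with hypothetical names and placeholder values; the actual logic lives in setup.py):

```python
# Hypothetical sketch: rendering the GPU long-description suffix
# from build_info. Values are placeholders matching the example above.
build_info = {
    "cuda_version": "10.2",
    "cudnn_version": "7",
    "cuda_compute_capabilities": ["compute_30", "compute_70"],
}

def gpu_description_suffix(info, tf_version="2.2.0"):
    # Join the known versions into the human-readable block that
    # setup.py appends to the wheel's long description.
    return (
        f"TensorFlow {tf_version} for NVIDIA GPUs was built with these "
        "platform and library versions:\n"
        f"  - NVIDIA CUDA {info['cuda_version']}\n"
        f"  - NVIDIA cuDNN {info['cudnn_version']}\n"
        "  - NVIDIA CUDA Compute Capabilities "
        + ", ".join(info["cuda_compute_capabilities"])
    )

print(gpu_description_suffix(build_info))
```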
I set one of the new CUDA classifiers and add metadata to the "platform" tag:
>>> import pkginfo
>>> a = pkginfo.Wheel('./tf_nightly_gpu-2.1.0-cp36-cp36m-linux_x86_64.whl')
>>> a.platforms
['cuda_version:10.2', 'cudnn_version:7', ...]
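Deriving those platform tags from the build_info dict is a one-liner; the key:value format below matches the pkginfo output shown above (values are placeholders):

```python
# Hypothetical sketch: building the wheel "platform" tags from build_info.
build_info = {"cuda_version": "10.2", "cudnn_version": "7"}

platforms = [f"{key}:{value}" for key, value in build_info.items()]
print(platforms)  # ['cuda_version:10.2', 'cudnn_version:7']
```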
I'm not 100% confident this is the best way to accomplish this. It
still seems odd to import build_info like this in setup.py, even though it works, even in
an environment with TensorFlow installed. Still, this method is much better than the old one because it reuses data that was already gathered. It could be extended to gather TensorRT, NCCL, etc. from other .bzl files, but I wanted to get feedback (and ensure this lands in 2.3) before designing something like that.
Currently tested only on Linux GPU (Remote Build) for Python 3.6. I'd
like to see more tests before merging.
The API is the same as the earlier change.
Resolves https://github.com/tensorflow/tensorflow/issues/38351.
PiperOrigin-RevId: 315018663
Change-Id: Idf68a8fe4d1585164d22b5870894c879537c280d