Previously, TF_CUDA_CONFIG_REPO would point to a pregenerated, checked-in configuration. This change has it point to a remote repository instead, which generates the configuration during the build for the specific docker image. All supported configurations can be found in third_party/toolchains/remote_config/configs.bzl. Each tensorflow_rbe_config() macro creates a few remote repositories to which the TF_*_CONFIG_REPO environment variables can point. The remote repository names are prefixed with the macro's name. For example, tensorflow_rbe_config(name = "ubuntu") will create @ubuntu_config_python, @ubuntu_config_cuda, @ubuntu_config_nccl, etc.
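A minimal sketch of how a config entry and the generated repositories relate; the load path and the attribute comment are illustrative assumptions, not the exact macro signature:

    # third_party/toolchains/remote_config/configs.bzl (illustrative entry)
    load("//third_party/toolchains/remote_config:rbe_config.bzl", "tensorflow_rbe_config")

    tensorflow_rbe_config(
        name = "ubuntu",
        # compiler, python, CUDA, cuDNN, ... settings describing the docker image
    )

    # A build using this config would then point the environment variables at
    # the generated repositories, e.g.:
    #   TF_PYTHON_CONFIG_REPO="@ubuntu_config_python"
    #   TF_CUDA_CONFIG_REPO="@ubuntu_config_cuda"
    #   TF_NCCL_CONFIG_REPO="@ubuntu_config_nccl"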
This change also introduces the platform_configure rule. All it does is create a remote repository with a single platform target for the tensorflow_rbe_config() macro. This will make the platforms defined in //third_party/toolchains/BUILD obsolete once remote config is fully rolled out.
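A minimal sketch of what platform_configure boils down to; the constraint values and target name are illustrative assumptions:

    def _platform_configure_impl(repository_ctx):
        # Write a BUILD file with a single platform target into the
        # generated remote repository.
        repository_ctx.file("BUILD", """
    platform(
        name = "platform",
        constraint_values = [
            "@platforms//os:linux",
            "@platforms//cpu:x86_64",
        ],
    )
    """)

    platform_configure = repository_rule(
        implementation = _platform_configure_impl,
    )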
PiperOrigin-RevId: 296065144
Change-Id: Ia54beeb771b28846444e27a2023f70abbd9f6ad5
Currently, the TF_*_CONFIG_REPO environment variables point to checked-in preconfig packages. After migrating to remote config they will point to remote repositories. The "config_repo_label" function ensures that both forms continue to work.
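A sketch of the idea behind config_repo_label; the exact implementation may differ, and the example target name is hypothetical:

    def config_repo_label(config_repo, target):
        """Builds a label under config_repo for either form of the variable."""
        if config_repo.find("//") < 0:
            # Remote config case: a bare repository name such as
            # "@ubuntu_config_cuda"; the target lives in its root package.
            return Label(config_repo + "//" + target)

        # Preconfig case: a full checked-in package such as
        # "@org_tensorflow//third_party/toolchains/preconfig/...".
        return Label(config_repo + target)

    # Example (hypothetical target name):
    #   config_repo_label("@ubuntu_config_cuda", ":build_defs.bzl")
    #   -> Label("@ubuntu_config_cuda//:build_defs.bzl")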
PiperOrigin-RevId: 295990961
Change-Id: I7637ff5298893d4ee77354e9b48f87b8c328c301
This follows the same pattern as other repository rules. In a follow-up change I
will introduce remote_tensorrt_configure, which will use _create_local_tensorrt_repository
as its implementation function.
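A sketch of the pattern; the environ list is illustrative:

    tensorrt_configure = repository_rule(
        implementation = _create_local_tensorrt_repository,
        environ = [
            "TF_TENSORRT_VERSION",
            "TENSORRT_INSTALL_PATH",
        ],
    )

    # The planned follow-up would reuse the same implementation function:
    # remote_tensorrt_configure = repository_rule(
    #     implementation = _create_local_tensorrt_repository,
    #     ...
    # )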
PiperOrigin-RevId: 295797220
Change-Id: Idbb56df088caae114ce23a898464577573257feb
repository_ctx.execute() does not support uploading files from the source tree. I initially tried constructing a command that simply embeds the file's contents. However, that did not work on Windows because the file is larger than 8192 characters. So my best idea was to compress the file locally, embed the compressed contents in the command, and uncompress it remotely. This works but comes with the drawback that we need to compress the file first. That can't be done as part of the repository rule either, because within one repository rule every execute() runs either locally or remotely. I thus decided to check the compressed version into the source tree. It's very much a temporary measure, as I'll add the ability to upload files in a future version of Bazel.
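A sketch of the resulting repository rule logic, assuming the checked-in artifact holds the base64-encoded, zlib-compressed script; the helper name is illustrative:

    def _run_compressed_script(repository_ctx, python_bin, compressed_label, args):
        # The compressed payload is small enough to embed in a single command
        # line, even under the 8192-character limit of cmd.exe on Windows.
        payload = repository_ctx.read(Label(compressed_label)).rstrip()
        decompress_and_run = (
            "from zlib import decompress;" +
            "from base64 import b64decode;" +
            "from os import system;" +
            "script = decompress(b64decode('%s'));" % payload +
            "f = open('script.py', 'wb');" +
            "f.write(script);" +
            "f.close();" +
            "system('\"%s\" script.py %s')" % (python_bin, " ".join(args))
        )
        # The command itself runs locally or remotely, whichever the
        # repository rule is configured for.
        return repository_ctx.execute([python_bin, "-c", decompress_and_run])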
PiperOrigin-RevId: 295787408
Change-Id: I1545dd86cdec7e4b20cba43d6a134ad6d1a08109
This change is in preparation for rolling out remote config. It will
allow us to inject environment variables from repository rules as
well as from the shell environment.
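A sketch of the kind of helper this enables; the helper name, the "environ" attribute, and the precedence order are assumptions:

    def get_host_environ(repository_ctx, name, default = None):
        # Values injected via a repository rule attribute take precedence
        # in this sketch.
        if hasattr(repository_ctx.attr, "environ") and name in repository_ctx.attr.environ:
            return repository_ctx.attr.environ[name]

        # Fall back to the shell environment of the host.
        if name in repository_ctx.os.environ:
            return repository_ctx.os.environ[name]

        return default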
PiperOrigin-RevId: 295782466
Change-Id: I1eb61fca3556473e94f2f12c45ee5eb1fe51625b
Move get_cpu_value() to common.bzl and use it from cuda_configure and rocm_configure
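A sketch of the shared helper; this mirrors the usual host-OS detection, and the details in common.bzl may differ:

    def get_cpu_value(repository_ctx):
        """Returns the name of the host operating system."""
        os_name = repository_ctx.os.name.lower()
        if os_name.startswith("mac os"):
            return "Darwin"
        if os_name.find("windows") != -1:
            return "Windows"

        # On Linux and other Unixes, fall back to uname.
        result = repository_ctx.execute(["uname", "-s"])
        return result.stdout.strip()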
PiperOrigin-RevId: 293807189
Change-Id: I2eb0ef0ab27a64060a99985bcab9ae4706f57fc5
Use a single Python script (third_party/gpus/find_cuda_config.py) from configure.py and the various *_configure.bzl scripts to find the CUDA library and header paths based on a set of environment variables.
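A hedged sketch of how a *_configure.bzl rule could invoke the shared script; the label, the argument list, and the "key: value" output format are assumptions:

    def _find_cuda_config(repository_ctx, python_bin, libraries):
        script = repository_ctx.path(Label("//third_party/gpus:find_cuda_config.py"))
        result = repository_ctx.execute([python_bin, str(script)] + libraries)
        if result.return_code != 0:
            fail("find_cuda_config.py failed: %s" % result.stderr)

        # Parse "key: value" lines into a dict, e.g. {"cuda_version": "10.0"}.
        config = {}
        for line in result.stdout.splitlines():
            key, _, value = line.partition(": ")
            config[key] = value.strip()
        return config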
PiperOrigin-RevId: 243669844
Without this modification, TensorFlow does not compile under NVIDIA Jetpack 3.2: the else condition in the if-then-else block generates the path "%s/../include", which resolves to "/usr/lib/include", but it must be "/usr/include/aarch64-linux-gnu". This fix targets ARM architectures; with it, TensorFlow 1.6rc compiles fine with TensorRT support.
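An illustrative sketch of the path problem and the kind of fix; the helper is hypothetical and the exact .bzl logic differs:

    def _trt_include_candidates(library_dir):
        # On multiarch layouts the libraries live in e.g.
        # /usr/lib/aarch64-linux-gnu, so "%s/../include" becomes
        # /usr/lib/include instead of the real header location.
        candidates = [library_dir + "/../include"]
        if library_dir.endswith("-linux-gnu"):
            # e.g. /usr/lib/aarch64-linux-gnu -> /usr/include/aarch64-linux-gnu
            arch_triple = library_dir.split("/")[-1]
            candidates.append("/usr/include/" + arch_triple)
        return candidates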