Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/44471
PR https://github.com/tensorflow/tensorflow/pull/43636 is a pre-requisite for this PR.
For the time being, this PR includes commits from its pre-req as well. Once the pre-req PR is merged, I will rebase this PR to remove those commits.
--------------------------------------
/cc @cheshire @chsigg @nvining-work
Copybara import of the project:
--
3f0d378c14f55ac850ace17ac154e2333169329b by Deven Desai <deven.desai.amd@gmail.com>:
Adding #defines for ROCm / MIOpen / HIP Runtime version numbers
This PR/commit introduces the following #defines in the `rocm/rocm_config.h` file
```
#define TF_ROCM_VERSION <Version Number of ROCm install>
#define TF_MIOPEN_VERSION <Version Number of MIOpen in ROCm install>
#define TF_HIPRUNTIME_VERSION <Version Number of HIP Runtime in ROCm install>
```
These #defines should be used within TF code to add ROCm/MIOpen/HIP Runtime version-specific code.
Details on how we go about determining these version numbers can be found on the following wiki page:
https://github.com/ROCmSoftwarePlatform/tensorflow-internal/wiki/How-to-add-ROCm-version-specific-code-changes-in-the-TensorFlow-code%3F
A new script, `find_rocm_config.py`, is added by this commit. This script does all the work of determining the version number information, and it is easy to extend to query more information about the ROCm install.
The information collected by the script is available to `rocm_configure.bzl` and hence can be used to add version specific code in `rocm_configure.bzl` as well.
--
922e0e556c4f31f7ff8da1053f014964d01c0859 by Deven Desai <deven.desai.amd@gmail.com>:
Updating Dockerfile.rocm to use ROCm 3.9
--
cc0b4ae28218a83b3cc262ac83d0b2cf476939c8 by Deven Desai <deven.desai.amd@gmail.com>:
Changing CI scripts to use ROCm 3.9
--
fbfdb64c3375f79674a4f56433f944e1e4fd6b6e by Deven Desai <deven.desai.amd@gmail.com>:
Updating rocm_config.py to account for the new location of the rocblas version header file (in ROCm 3.8)
--
3f191faf8b8f2a0111bc386f41316079cad4aaaa by Deven Desai <deven.desai.amd@gmail.com>:
Removing references to TENSORFLOW_COMPILER_IS_HIP_CLANG
Now that we are well past the switch to ROCm 3.5 and above (i.e. hip-clang), the code within `#ifdef TENSORFLOW_COMPILER_IS_HIP_CLANG` blocks is always enabled, and the code within the corresponding `#else` blocks is dead.
This commit removes the references to `#ifdef TENSORFLOW_COMPILER_IS_HIP_CLANG` and their corresponding `#else` blocks.
--
9a4841c9bb8117e8228946be1f3752bdaea4a359 by Deven Desai <deven.desai.amd@gmail.com>:
Removing -DTENSORFLOW_COMPILER_IS_HIP_CLANG from the list of compile flags
--
745e2ad6db4282f5efcfef3155d9a46d9235dbf6 by Deven Desai <deven.desai.amd@gmail.com>:
Removing dead code for the ROCm platform within the third_party/gpus dir
--
c96dc03986636badce7dbd87fb85cf26dff7a43b by Deven Desai <deven.desai.amd@gmail.com>:
Updating XLA code to account for the device lib files location change in ROCm 3.9
The location of the ROCm device lib files is changing in ROCm 3.9
Current (ROCm 3.8 and before) location is $ROCM_PATH/lib
```
root@ixt-rack-04:/opt/rocm-3.8.0# find . -name *.bc
./lib/oclc_isa_version_701.amdgcn.bc
./lib/ocml.amdgcn.bc
./lib/oclc_daz_opt_on.amdgcn.bc
./lib/oclc_isa_version_700.amdgcn.bc
./lib/oclc_isa_version_810.amdgcn.bc
./lib/oclc_unsafe_math_off.amdgcn.bc
./lib/oclc_wavefrontsize64_off.amdgcn.bc
./lib/oclc_isa_version_803.amdgcn.bc
./lib/oclc_isa_version_1011.amdgcn.bc
./lib/oclc_isa_version_1012.amdgcn.bc
./lib/opencl.amdgcn.bc
./lib/oclc_unsafe_math_on.amdgcn.bc
./lib/oclc_isa_version_1010.amdgcn.bc
./lib/oclc_finite_only_off.amdgcn.bc
./lib/oclc_correctly_rounded_sqrt_on.amdgcn.bc
./lib/oclc_daz_opt_off.amdgcn.bc
./lib/oclc_isa_version_802.amdgcn.bc
./lib/ockl.amdgcn.bc
./lib/oclc_isa_version_906.amdgcn.bc
./lib/oclc_isa_version_1030.amdgcn.bc
./lib/oclc_correctly_rounded_sqrt_off.amdgcn.bc
./lib/hip.amdgcn.bc
./lib/oclc_isa_version_908.amdgcn.bc
./lib/oclc_isa_version_900.amdgcn.bc
./lib/oclc_isa_version_702.amdgcn.bc
./lib/oclc_wavefrontsize64_on.amdgcn.bc
./lib/hc.amdgcn.bc
./lib/oclc_isa_version_902.amdgcn.bc
./lib/oclc_isa_version_801.amdgcn.bc
./lib/oclc_finite_only_on.amdgcn.bc
./lib/oclc_isa_version_904.amdgcn.bc
```
New (ROCm 3.9 and above) location is $ROCM_PATH/amdgcn/bitcode
```
root@ixt-hq-99:/opt/rocm-3.9.0-3703# find -name *.bc
./amdgcn/bitcode/oclc_isa_version_700.bc
./amdgcn/bitcode/ocml.bc
./amdgcn/bitcode/oclc_isa_version_1030.bc
./amdgcn/bitcode/oclc_isa_version_1010.bc
./amdgcn/bitcode/oclc_isa_version_904.bc
./amdgcn/bitcode/hip.bc
./amdgcn/bitcode/hc.bc
./amdgcn/bitcode/oclc_daz_opt_off.bc
./amdgcn/bitcode/oclc_wavefrontsize64_off.bc
./amdgcn/bitcode/oclc_wavefrontsize64_on.bc
./amdgcn/bitcode/oclc_isa_version_900.bc
./amdgcn/bitcode/oclc_isa_version_1012.bc
./amdgcn/bitcode/oclc_isa_version_702.bc
./amdgcn/bitcode/oclc_daz_opt_on.bc
./amdgcn/bitcode/oclc_unsafe_math_off.bc
./amdgcn/bitcode/ockl.bc
./amdgcn/bitcode/oclc_isa_version_803.bc
./amdgcn/bitcode/oclc_isa_version_908.bc
./amdgcn/bitcode/oclc_isa_version_802.bc
./amdgcn/bitcode/oclc_correctly_rounded_sqrt_off.bc
./amdgcn/bitcode/oclc_finite_only_on.bc
./amdgcn/bitcode/oclc_isa_version_701.bc
./amdgcn/bitcode/oclc_unsafe_math_on.bc
./amdgcn/bitcode/oclc_isa_version_902.bc
./amdgcn/bitcode/oclc_finite_only_off.bc
./amdgcn/bitcode/opencl.bc
./amdgcn/bitcode/oclc_isa_version_906.bc
./amdgcn/bitcode/oclc_isa_version_810.bc
./amdgcn/bitcode/oclc_isa_version_801.bc
./amdgcn/bitcode/oclc_correctly_rounded_sqrt_on.bc
./amdgcn/bitcode/oclc_isa_version_1011.bc
```
Note also the change in the filenames: the `.amdgcn` infix is dropped (e.g. `ocml.amdgcn.bc` becomes `ocml.bc`).
This commit updates the XLA code that has the device lib path + filenames hardcoded, to account for the change in location and filenames.
--
6f981a91c8d8a349c88b450c2191df9c62b2b38b by Deven Desai <deven.desai.amd@gmail.com>:
Adding "-fcuda-flush-denormals-to-zero" as a default hipcc option
Prior to ROCm 3.8, hipcc (hip-clang) flushed denormal values to zero by default. Starting with ROCm 3.8, that is no longer true: denormal values are kept as-is.
TF expects denormals to be flushed to zero. On the CUDA side this is enforced by explicitly passing the "-fcuda-flush-denormals-to-zero" flag (see tensorflow.bzl). This commit does the same for the ROCm side.
Also removing the no_rocm tag from the corresponding unit test - //tensorflow/python/kernel_tests:denormal_test_gpu
--
74810439720e0692f81ffb0cc3b97dc6ed50876d by Deven Desai <deven.desai.amd@gmail.com>:
Fix for TF build failure with ROCm 3.9 (error: call to 'min' is ambiguous)
When building TF with ROCm 3.9, we are running into the following compile error
```
In file included from tensorflow/core/kernels/reduction_ops_half_mean_sum.cu.cc:20:
./tensorflow/core/kernels/reduction_gpu_kernels.cu.h:430:9: error: call to 'min' is ambiguous
min(blockDim.y, num_rows - blockIdx.y * blockDim.y);
^~~
/opt/rocm-3.9.0-3805/llvm/lib/clang/12.0.0/include/__clang_hip_math.h:1183:23: note: candidate function
__DEVICE__ inline int min(int __arg1, int __arg2) {
^
/opt/rocm-3.9.0-3805/llvm/lib/clang/12.0.0/include/__clang_hip_math.h:1197:14: note: candidate function
inline float min(float __x, float __y) { return fminf(__x, __y); }
^
/opt/rocm-3.9.0-3805/llvm/lib/clang/12.0.0/include/__clang_hip_math.h:1200:15: note: candidate function
inline double min(double __x, double __y) { return fmin(__x, __y); }
^
1 error generated when compiling for gfx803.
```
The build error seems to be because ROCm 3.9 uses llvm header files from `llvm/lib/clang/12.0.0/include` (ROCm 3.8 uses the `11.0.0` version). `12.0.0` has a new `__clang_hip_math.h` file, which is not present in `11.0.0`. This file has the `min` function overloaded for the `float` and `double` types.
The first argument in the call to `min` (which leads to the error) is `blockDim.y`, which has a `uint` type, so the compiler cannot decide which overload to resolve to. Previously (i.e. ROCm 3.8 and before) there was only one candidate (`int`); with ROCm 3.9 there are three (`int`, `float`, and `double`), hence the error.
The "fix" is to explicitly cast the first argument to `int` to remove the ambiguity (the second argument is already an `int` type).
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/tensorflow/pull/44471 from ROCmSoftwarePlatform:google_upstream_rocm_switch_to_rocm39 74810439720e0692f81ffb0cc3b97dc6ed50876d
PiperOrigin-RevId: 341569721
Change-Id: Ia614893881bf8db1ef8901034c35cc585a82dba8
With the switch to the new hipclang-vdi runtime (in ROCm 3.5), the new name for the HIP runtime library is libamdhip64.so.
For backwards compatibility, ROCm 3.5 and ROCm 3.6 include a "libhip_hcc.so" symlink, which points to libamdhip64.so. That symlink will be going away starting with ROCm 3.7(?).
This commit updates references to libhip_hcc.so (in the TF build) to use libamdhip64.so instead.
See following JIRA tickets for further details:
* http://ontrack-internal.amd.com/browse/SWDEV-244762
* http://ontrack-internal.amd.com/browse/SWDEV-238533
Prior to this commit, the AMD GPU targets (i.e. the `amdgpu_targets`), for which HSACO objects are created in the TF build, were determined as follows.
* No `--amdgpu-target=` option would be explicitly added to the `hipcc` command line (via `rocm_copts`)
* `hipcc` would, upon not seeing any `--amdgpu-target=` option specified, invoke the `$ROCM_PATH/bin/rocm_agent_enumerator` tool to determine the list of `amdgpu_targets`
This commit moves the determination of `amdgpu_targets` into `rocm_configure.bzl`. Instead of being invoked by `hipcc`, the `$ROCM_PATH/bin/rocm_agent_enumerator` tool will be invoked within `rocm_configure.bzl` to determine the list of `amdgpu_targets`. For each `target` in the `amdgpu_targets` list, a `--amdgpu-target=<target>` option will be added to the `hipcc` command line (via `rocm_copts()`).
This commit also
* allows overriding the way `amdgpu_targets` are determined, by setting the env var `TF_ROCM_AMDGPU_TARGETS` instead.
* creates a `rocm_gpu_architectures` routine in `@local_config_rocm/build_defs.bzl`, which returns the `amdgpu_targets` list.
* This will come in handy when determining the `amdgpu_targets` to build for when compiling MLIR-generated kernels using the XLA backend (in the non-XLA path)
The following commit (which switched Google's internal CI to use ROCm 3.5) breaks the ROCm CSB build (which still uses ROCm 3.3):
22def20bae
This PR/commit simply restores a couple of pieces of code that were removed by that commit, and makes them conditional on ROCm 3.5.
Note that the ROCm CSB build will switch to ROCm 3.5 or higher in the near future, at which point all code in the `true` block of `#if TENSORFLOW_COMPILER_IS_HIP_CLANG` will become the default, and the code in the `false` / `#else` block will be removed.
Fix list of cxx_builtin_include_directories. Only a few are needed, but those are more complicated (mix of symlinked and real paths).
Properly return error from crosstool wrapper.
PiperOrigin-RevId: 318788040
Change-Id: Ia66898e98a9a4d8fb479c7e75317f4114f6081e5
Previously, TF_CUDA_CONFIG_REPO would point to a pregenerated, checked-in configuration. This change has it point to a remote repository instead, which generates the configuration during the build for the specific docker image. All supported configurations can be found in third_party/toolchains/remote_config/configs.bzl. Each tensorflow_rbe_config() macro creates a few remote repositories to point the TF_*_CONFIG_REPO environment variables to. The remote repository names are prefixed with the macro's name. For example, tensorflow_rbe_config(name = "ubuntu") will create @ubuntu_config_python, @ubuntu_config_cuda, @ubuntu_config_nccl, etc.
This change also introduces the platform_configure rule. All this rule does is create a remote repository with a single platform target for the tensorflow_rbe_config(). This will make the platforms defined in //third_party/toolchains/BUILD obsolete once remote config is fully rolled out.
PiperOrigin-RevId: 296065144
Change-Id: Ia54beeb771b28846444e27a2023f70abbd9f6ad5
Currently TF_*_CONFIG_REPO environment variables point to checked in preconfig packages. After migrating to remote config they will point to remote repositories. The "config_repo_label" function ensures both ways continue to work.
PiperOrigin-RevId: 295990961
Change-Id: I7637ff5298893d4ee77354e9b48f87b8c328c301
This change is in preparation for rolling out remote config. It will
allow us to inject environment variables from repository rules as
well as from the shell environment.
PiperOrigin-RevId: 295782466
Change-Id: I1eb61fca3556473e94f2f12c45ee5eb1fe51625b
Any code path using exec_result.stderr won't work with RBE due to
a bug where the service returns stderr as stdout.
PiperOrigin-RevId: 295107492
Change-Id: I5738d46f7bb4cc049636a6f6625abc782d2d1e29
Caches the path to the bash interpreter in a variable instead of repeatedly asking the remote worker to look it up. This reduces the number of execute calls by one third, from 34 to 22, which translates to a one-third reduction in runtime.
PiperOrigin-RevId: 294630132
Change-Id: Iee60ba3c382b889393d36e51b3e0a3d735b1fd74
Batch all tests of whether a file/dir exists into a single command
in order to save round trips.
PiperOrigin-RevId: 294252998
Change-Id: I516cb9dadad47ef43a83d4b0340a2f3c04402052
Move get_cpu_value() to common.bzl and use it from cuda_configure and rocm_configure
PiperOrigin-RevId: 293807189
Change-Id: I2eb0ef0ab27a64060a99985bcab9ae4706f57fc5
This will fix Cuda and ROCm RBE build with Bazel 1.0
Related https://github.com/bazelbuild/bazel/issues/8531
Preconfigured toolchains are updated by:
tensorflow/third_party/toolchains/preconfig/generate/update.sh ubuntu16.04-py3-gcc7_manylinux2010-cuda10.1-cudnn7-tensorrt6.0
tensorflow/third_party/toolchains/preconfig/generate/update.sh ubuntu16.04-py3-gcc7_manylinux2010-cuda10.0-cudnn7-tensorrt5.1
tensorflow/third_party/toolchains/preconfig/generate/update.sh ubuntu16.04-py3_opt-gcc5-rocm
PiperOrigin-RevId: 279942421
Change-Id: Ic8538d49b970b074e35acebc1345482170d98847
1. Rewrote hipcc_cc_toolchain_config.bzl.tpl.oss based on third_party/bazel/tools/cpp/unix_cc_toolchain_config.bzl
2. Cleaned up non-Linux stuff in toolchain configuration
3. Added support for parameter file in the compiler wrapper script
4. Re-generated preconfigured toolchain by third_party/tensorflow/third_party/toolchains/preconfig/generate/update.sh ubuntu16.04-py3_opt-gcc5-rocm
5. Bumped min Bazel version to 0.27.1 because toolchain configure requires newer Bazel
6. Removed --noincompatible_do_not_split_linking_cmdline
PiperOrigin-RevId: 278844463
Change-Id: I477ec5b44e6c634db7c6d65d02b3e307f0be338b
Imported from GitHub PR #31485
Copybara import of the project:
- ba5748981bb02b9d0e91114cdc30eb64d1650a46 add ROCm RCCL support by Jeff Daily <jeff.daily@amd.com>
- 6f887a19731f030be58495ae4fea98b3ad1f1cc3 run buildifier against tensorflow/core/nccl/BUILD by Jeff Daily <jeff.daily@amd.com>
- 55ce583cf484953d90eb9b9310dc77cf63b4c0c9 Merge 6f887a19731f030be58495ae4fea98b3ad1f1cc3 into f9233... by Jeff Daily <jeff.daily@amd.com>
PiperOrigin-RevId: 264892468
This commit adds changes required for using "hipclang based hipcc" as the compiler when building TF with ROCm support.
The only visible (to TF Code) change in this commit is the introduction of a #define "TENSORFLOW_COMPILER_IS_HIP_CLANG" which will be defined (on the command line) when the compiler is "hipclang based hipcc".
TF code that needs to be special-cased when compiling with "hipclang based hipcc" will be put within "#if TENSORFLOW_COMPILER_IS_HIP_CLANG". This is expected to be a lightly used #define. As of now, there are only 4 instances of its use in our fork.
The changes to `hipcc_cc_toolchain_config.bzl.tpl` can be reviewed more easily by diffing them against the `cc_toolchain_config.bzl.tpl` file. That will highlight all the things we need to do differently to make ROCm work.
Note that `hipcc_cc_toolchain_config.bzl.tpl` was created using `cc_toolchain_config.bzl.tpl` as the base. The sections in `cc_toolchain_config.bzl.tpl` which deal with support for Mac (darwin) and Windows were left intact, even though ROCm currently does not support those platforms. This was done for two reasons
1. It allows us to use `diff` to highlight the ROCm-specific differences
2. Ideally we would like to get rid of the `hipcc_cc_toolchain_config.bzl.tpl` file, and have a single `cc_toolchain_config.bzl.tpl` if possible. So we want to keep the two files as similar as possible
This cl makes this toolchain forward-compatible with Bazel's incompatible flags:
* https://github.com/bazelbuild/bazel/issues/7008
* https://github.com/bazelbuild/bazel/issues/6861
* https://github.com/bazelbuild/bazel/issues/7320
The current change creates a drop-in replacement for the proto crosstool, with the exception that:
* all legacy fields are removed
* templated variables are replaced by rule attributes
* instead of empty paths in msvc toolchain we now use 'msvc_not_used' path (CcToolchainConfigInfo doesn't allow empty strings for paths).
* introduced a to_list_of_strings function so we can pass lists of Starlark strings around
The mechanical transformation makes the crosstool definition less readable than before - this will be addressed in a subsequent change.
This change was tested by:
1) running cuda_configure.bzl and retrieving generated BUILD and CROSSTOOL files
2) applying this change
3) running cuda_configure.bzl and retrieving generated BUILD and cc_toolchain_config.bzl files
4) Using [cc_toolchain_compare_test](https://github.com/bazelbuild/rules_cc/blob/master/tools/migration/ctoolchain_compare.bzl#L24) rule to verify both CROSSTOOL and cc_toolchain_configs configure the C++ toolchain identically
PiperOrigin-RevId: 248094053