Fix the error message printed on a configuration exception in the CUDA configuration
to end with a newline; without it, the two lines run into each other.
Before:
```
Please specify the comma-separated list of base paths to look for CUDA libraries and headers. [Leave empty to use the default]:
Inconsistent CUDA toolkit path: /usr vs /usr/libAsking for detailed CUDA configuration...
```
After:
```
Please specify the comma-separated list of base paths to look for CUDA libraries and headers. [Leave empty to use the default]:
Inconsistent CUDA toolkit path: /usr vs /usr/lib
Asking for detailed CUDA configuration...
```
Also update the corresponding compressed file.
Also comment out the lines which try to find configs that do not
work; the currently used compressed file already has these commented out.
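For illustration only, here is a minimal Python sketch of the idea behind the fix (the helper name is hypothetical; the actual change just appends the missing newline to the message emitted during CUDA configuration):
```
import sys

# Hypothetical helper, not the actual configuration code: make sure the error
# text ends with a newline so the next status line starts on its own line.
def print_config_error(message):
    if not message.endswith("\n"):
        message += "\n"
    sys.stderr.write(message)

print_config_error("Inconsistent CUDA toolkit path: /usr vs /usr/lib")
sys.stderr.write("Asking for detailed CUDA configuration...\n")
```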
PiperOrigin-RevId: 344387253
Change-Id: I880642a483b332ab97cdc96fe42379e131121d74
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/44471
PR https://github.com/tensorflow/tensorflow/pull/43636 is a prerequisite for this PR.
For the time being, this PR includes commits from its prerequisite as well. Once the prerequisite PR is merged, I will rebase this PR to remove those commits.
--------------------------------------
/cc @cheshire @chsigg @nvining-work
Copybara import of the project:
--
3f0d378c14f55ac850ace17ac154e2333169329b by Deven Desai <deven.desai.amd@gmail.com>:
Adding #defines for ROCm / MIOpen / HIP Runtime version numbers
This PR/commit introduces the following #defines in the `rocm/rocm_config.h` file
```
#define TF_ROCM_VERSION <Version Number of ROCm install>
#define TF_MIOPEN_VERSION <Version Number of MIOpen in ROCm install>
#define TF_HIPRUNTIME_VERSION <Version Number of HIP Runtime in ROCm install>
```
These #defines should be used within TF code to add ROCm / MIOpen / HIP Runtime version-specific code.
Details on how we go about determining these version numbers can be found on the following wiki page:
https://github.com/ROCmSoftwarePlatform/tensorflow-internal/wiki/How-to-add-ROCm-version-specific-code-changes-in-the-TensorFlow-code%3F
A new script, `find_rocm_config.py`, is added by this commit. This script does all the work of determining the version number information, and it is pretty easy to extend it to query more information about the ROCm install.
The information collected by the script is available to `rocm_configure.bzl` and hence can be used to add version specific code in `rocm_configure.bzl` as well.
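As a rough, hedged illustration of the kind of computation such a script might do (this is not the actual find_rocm_config.py; the packing scheme below is an assumption), here is a minimal Python sketch:
```
# Hypothetical sketch: pack a ROCm "major.minor.patch" version string into a
# single integer suitable for a TF_ROCM_VERSION-style value. The exact packing
# used by the real script may differ.
def rocm_version_number(version_str):
    major, minor, patch = (int(v) for v in version_str.split(".")[:3])
    return major * 10000 + minor * 100 + patch

assert rocm_version_number("3.9.0") == 30900
```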
--
922e0e556c4f31f7ff8da1053f014964d01c0859 by Deven Desai <deven.desai.amd@gmail.com>:
Updating Dockerfile.rocm to use ROCm 3.9
--
cc0b4ae28218a83b3cc262ac83d0b2cf476939c8 by Deven Desai <deven.desai.amd@gmail.com>:
Changing CI scripts to use ROCm 3.9
--
fbfdb64c3375f79674a4f56433f944e1e4fd6b6e by Deven Desai <deven.desai.amd@gmail.com>:
Updating rocm_config.py to account for the new location of the rocblas version header file (in ROCm 3.8)
--
3f191faf8b8f2a0111bc386f41316079cad4aaaa by Deven Desai <deven.desai.amd@gmail.com>:
Removing references to TENSORFLOW_COMPILER_IS_HIP_CLANG
Now that we are well past the switch to ROCm 3.5 and above (i.e. hip-clang), the code within `#ifdef TENSORFLOW_COMPILER_IS_HIP_CLANG` blocks is always enabled, and the code within the corresponding `#else` blocks is dead code.
This commit removes the references to `#ifdef TENSORFLOW_COMPILER_IS_HIP_CLANG` and their corresponding `#else` blocks.
--
9a4841c9bb8117e8228946be1f3752bdaea4a359 by Deven Desai <deven.desai.amd@gmail.com>:
Removing -DTENSORFLOW_COMPILER_IS_HIP_CLANG from the list of compile flags
--
745e2ad6db4282f5efcfef3155d9a46d9235dbf6 by Deven Desai <deven.desai.amd@gmail.com>:
Removing dead code for the ROCm platform within the third_party/gpus dir
--
c96dc03986636badce7dbd87fb85cf26dff7a43b by Deven Desai <deven.desai.amd@gmail.com>:
Updating XLA code to account for the device lib files location change in ROCm 3.9
The location of the ROCm device lib files is changing in ROCm 3.9
Current (ROCm 3.8 and before) location is $ROCM_PATH/lib
```
root@ixt-rack-04:/opt/rocm-3.8.0# find . -name *.bc
./lib/oclc_isa_version_701.amdgcn.bc
./lib/ocml.amdgcn.bc
./lib/oclc_daz_opt_on.amdgcn.bc
./lib/oclc_isa_version_700.amdgcn.bc
./lib/oclc_isa_version_810.amdgcn.bc
./lib/oclc_unsafe_math_off.amdgcn.bc
./lib/oclc_wavefrontsize64_off.amdgcn.bc
./lib/oclc_isa_version_803.amdgcn.bc
./lib/oclc_isa_version_1011.amdgcn.bc
./lib/oclc_isa_version_1012.amdgcn.bc
./lib/opencl.amdgcn.bc
./lib/oclc_unsafe_math_on.amdgcn.bc
./lib/oclc_isa_version_1010.amdgcn.bc
./lib/oclc_finite_only_off.amdgcn.bc
./lib/oclc_correctly_rounded_sqrt_on.amdgcn.bc
./lib/oclc_daz_opt_off.amdgcn.bc
./lib/oclc_isa_version_802.amdgcn.bc
./lib/ockl.amdgcn.bc
./lib/oclc_isa_version_906.amdgcn.bc
./lib/oclc_isa_version_1030.amdgcn.bc
./lib/oclc_correctly_rounded_sqrt_off.amdgcn.bc
./lib/hip.amdgcn.bc
./lib/oclc_isa_version_908.amdgcn.bc
./lib/oclc_isa_version_900.amdgcn.bc
./lib/oclc_isa_version_702.amdgcn.bc
./lib/oclc_wavefrontsize64_on.amdgcn.bc
./lib/hc.amdgcn.bc
./lib/oclc_isa_version_902.amdgcn.bc
./lib/oclc_isa_version_801.amdgcn.bc
./lib/oclc_finite_only_on.amdgcn.bc
./lib/oclc_isa_version_904.amdgcn.bc
```
New (ROCm 3.9 and above) location is $ROCM_PATH/amdgcn/bitcode
```
root@ixt-hq-99:/opt/rocm-3.9.0-3703# find -name *.bc
./amdgcn/bitcode/oclc_isa_version_700.bc
./amdgcn/bitcode/ocml.bc
./amdgcn/bitcode/oclc_isa_version_1030.bc
./amdgcn/bitcode/oclc_isa_version_1010.bc
./amdgcn/bitcode/oclc_isa_version_904.bc
./amdgcn/bitcode/hip.bc
./amdgcn/bitcode/hc.bc
./amdgcn/bitcode/oclc_daz_opt_off.bc
./amdgcn/bitcode/oclc_wavefrontsize64_off.bc
./amdgcn/bitcode/oclc_wavefrontsize64_on.bc
./amdgcn/bitcode/oclc_isa_version_900.bc
./amdgcn/bitcode/oclc_isa_version_1012.bc
./amdgcn/bitcode/oclc_isa_version_702.bc
./amdgcn/bitcode/oclc_daz_opt_on.bc
./amdgcn/bitcode/oclc_unsafe_math_off.bc
./amdgcn/bitcode/ockl.bc
./amdgcn/bitcode/oclc_isa_version_803.bc
./amdgcn/bitcode/oclc_isa_version_908.bc
./amdgcn/bitcode/oclc_isa_version_802.bc
./amdgcn/bitcode/oclc_correctly_rounded_sqrt_off.bc
./amdgcn/bitcode/oclc_finite_only_on.bc
./amdgcn/bitcode/oclc_isa_version_701.bc
./amdgcn/bitcode/oclc_unsafe_math_on.bc
./amdgcn/bitcode/oclc_isa_version_902.bc
./amdgcn/bitcode/oclc_finite_only_off.bc
./amdgcn/bitcode/opencl.bc
./amdgcn/bitcode/oclc_isa_version_906.bc
./amdgcn/bitcode/oclc_isa_version_810.bc
./amdgcn/bitcode/oclc_isa_version_801.bc
./amdgcn/bitcode/oclc_correctly_rounded_sqrt_on.bc
./amdgcn/bitcode/oclc_isa_version_1011.bc
```
Also note the change in the filename(s).
This commit updates the XLA code, which has the device lib path and filename(s) hardcoded, to account for the change in location and filenames.
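A minimal Python sketch of the location / filename selection described above (illustration only, not the actual XLA code; the integer version encoding, 30900 for ROCm 3.9.0, is an assumption):
```
import os

# Hypothetical helpers: choose the ROCm device-lib directory and bitcode
# filename based on the ROCm version.
def rocm_device_lib_dir(rocm_path, rocm_version_number):
    if rocm_version_number >= 30900:
        return os.path.join(rocm_path, "amdgcn", "bitcode")  # new layout
    return os.path.join(rocm_path, "lib")                    # old layout

def ocml_bitcode_name(rocm_version_number):
    # ROCm 3.9 also dropped the ".amdgcn" infix from the filenames.
    return "ocml.bc" if rocm_version_number >= 30900 else "ocml.amdgcn.bc"

print(rocm_device_lib_dir("/opt/rocm-3.9.0", 30900))  # .../amdgcn/bitcode
print(rocm_device_lib_dir("/opt/rocm-3.8.0", 30800))  # .../lib
```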
--
6f981a91c8d8a349c88b450c2191df9c62b2b38b by Deven Desai <deven.desai.amd@gmail.com>:
Adding "-fcuda-flush-denormals-to-zero" as a default hipcc option
Prior to ROCm 3.8, hipcc (hip-clang) flushed denormal values to zero by default. Starting with ROCm 3.8 that is no longer true; denormal values are kept as is.
TF expects denormals to be flushed to zero. This is enforced on the CUDA side by explicitly passing the "-fcuda-flush-denormals-to-zero" flag (see tensorflow.bzl). This commit does the same for the ROCm side.
Also removing the no_rocm tag from the corresponding unit test - //tensorflow/python/kernel_tests:denormal_test_gpu
--
74810439720e0692f81ffb0cc3b97dc6ed50876d by Deven Desai <deven.desai.amd@gmail.com>:
Fix for TF build failure with ROCm 3.9 (error: call to 'min' is ambiguous)
When building TF with ROCm 3.9, we are running into the following compile error
```
In file included from tensorflow/core/kernels/reduction_ops_half_mean_sum.cu.cc:20:
./tensorflow/core/kernels/reduction_gpu_kernels.cu.h:430:9: error: call to 'min' is ambiguous
min(blockDim.y, num_rows - blockIdx.y * blockDim.y);
^~~
/opt/rocm-3.9.0-3805/llvm/lib/clang/12.0.0/include/__clang_hip_math.h:1183:23: note: candidate function
__DEVICE__ inline int min(int __arg1, int __arg2) {
^
/opt/rocm-3.9.0-3805/llvm/lib/clang/12.0.0/include/__clang_hip_math.h:1197:14: note: candidate function
inline float min(float __x, float __y) { return fminf(__x, __y); }
^
/opt/rocm-3.9.0-3805/llvm/lib/clang/12.0.0/include/__clang_hip_math.h:1200:15: note: candidate function
inline double min(double __x, double __y) { return fmin(__x, __y); }
^
1 error generated when compiling for gfx803.
```
The build error seems to be because ROCm 3.9 uses llvm header files from `llvm/lib/clang/12.0.0/include` (ROCm 3.8 uses the `11.0.0` version). `12.0.0` has a new `__clang_hip_math.h` file, which is not present in `11.0.0`. This file has the `min` function overloaded for the `float` and `double` types.
The first argument in the call to `min` (which leads to the error) is `blockDim.y`, which has a `uint` type, and hence the compiler gets confused as to which overload to resolve to. Previously (i.e. ROCm 3.8 and before) there was only one option (`int`); with ROCm 3.9 there are three (`int`, `float`, and `double`), and hence the error.
The "fix" is to explicitly cast the first argument to `int` to remove the ambiguity (the second argument is already of type `int`).
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/tensorflow/pull/44471 from ROCmSoftwarePlatform:google_upstream_rocm_switch_to_rocm39 74810439720e0692f81ffb0cc3b97dc6ed50876d
PiperOrigin-RevId: 341569721
Change-Id: Ia614893881bf8db1ef8901034c35cc585a82dba8
This PR/commit introduces the following #defines in the `rocm/rocm_config.h` file
```
#define TF_ROCM_VERSION <Version Number of ROCm install>
#define TF_MIOPEN_VERSION <Version Number of MIOpen in ROCm install>
#define TF_HIPRUNTIME_VERSION <Version Number of HIP Runtime in ROCm install>
```
These #defines should be used within TF code to add ROCm / MIOpen / HIP Runtime version-specific code.
Details on how we go about determining these version numbers can be found on the following wiki page:
https://github.com/ROCmSoftwarePlatform/tensorflow-internal/wiki/How-to-add-ROCm-version-specific-code-changes-in-the-TensorFlow-code%3F
A new script, `find_rocm_config.py`, is added by this commit. This script does all the work of determining the version number information, and it is pretty easy to extend it to query more information about the ROCm install.
The information collected by the script is available to `rocm_configure.bzl` and hence can be used to add version specific code in `rocm_configure.bzl` as well.
If ctx.attr.linker_bin_path is empty (e.g. if should_download_clang is set),
the GPU build would add a lone `-B` to the command line, which swallows the next
argument and leads to broken builds.
Fixes #41856, fixes #42313
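A minimal sketch of the guard, written as Python for illustration (the real change lives in the GPU crosstool/configure logic, and the function name here is hypothetical):
```
# Hypothetical helper: only emit the -B flag when a linker bin path is actually
# configured, so an empty value cannot leave a lone "-B" that swallows the
# following argument.
def linker_bin_path_flags(linker_bin_path):
    if not linker_bin_path:
        return []
    return ["-B" + linker_bin_path]

assert linker_bin_path_flags("") == []
assert linker_bin_path_flags("/usr/bin") == ["-B/usr/bin"]
```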
With the switch to the new hipclang-vdi runtime (in ROCm 3.5), the new name for the HIP runtime library is libamdhip64.so.
For backwards compatibility, ROCm 3.5 and ROCm 3.6 include a "libhip_hcc.so" softlink, which points to libamdhip64.so. That softlink will be going away starting with ROCm 3.7(?).
This commit updates references to libhip_hcc.so (in the TF build) to use libamdhip64.so instead.
See following JIRA tickets for further details:
* http://ontrack-internal.amd.com/browse/SWDEV-244762
* http://ontrack-internal.amd.com/browse/SWDEV-238533
- Adds ThenBlasLtMatmul routines that behave similarly to
ThenBlasGemmWithAlgorithm but call into the cublasLt library and allow
separation of plan creation and execution.
- A list of heuristically-prioritized opaque algorithm objects can be
obtained via GetBlasLtMatmulAlgorithms.
- These routines are only supported when the CUDA version is >= 11.0.
Prior to this commit, the AMD GPU targets (i.e. the `amdgpu_targets`), for which HSACO objects are created in the TF build, were determined as follows.
* No `--amdgpu-target=` option would be explicitly added to the `hipcc` command line (via `rocm_copts`)
* `hipcc` would, upon not seeing any `--amdgpu-target=` option specified, invoke the `$ROCM_PATH/bin/rocm_agent_enumerator` tool to determine the list of `amdgpu_targets`
This commit moves the determination of `amdgpu_targets` into `rocm_configure.bzl`: the `$ROCM_PATH/bin/rocm_agent_enumerator` tool is now invoked within `rocm_configure.bzl` (instead of by `hipcc`) to determine the list of `amdgpu_targets`. For each `target` in the `amdgpu_targets` list, a `--amdgpu-target=<target>` option is added to the `hipcc` command line (via `rocm_copts()`).
This commit also
* allows overriding the way `amdgpu_targets` are determined, by setting the env var `TF_ROCM_AMDGPU_TARGETS` instead.
* creates a `rocm_gpu_architectures` routine in `@local_config_rocm/build_defs.bzl`, which returns the `amdgpu_targets` list.
* This will come in handy when determining the `amdgpu_targets` to build for when compiling MLIR-generated kernels using the XLA backend (in the non-XLA path).
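For illustration, a hedged Python sketch of the target-selection logic described above (the real implementation is Starlark in rocm_configure.bzl; the gfx000 filtering and the function names are assumptions made for the example):
```
import os
import subprocess

# Hypothetical sketch: honor the TF_ROCM_AMDGPU_TARGETS override if set,
# otherwise ask rocm_agent_enumerator for the GPU targets on this machine.
def get_amdgpu_targets(rocm_path="/opt/rocm"):
    override = os.environ.get("TF_ROCM_AMDGPU_TARGETS")
    if override:
        return override.split(",")
    enumerator = os.path.join(rocm_path, "bin", "rocm_agent_enumerator")
    targets = subprocess.check_output([enumerator]).decode().split()
    # Assumption: gfx000 denotes the host CPU agent, not a GPU target.
    return [t for t in targets if t != "gfx000"]

def amdgpu_target_copts(targets):
    # One --amdgpu-target=<target> option per target, as added to hipcc.
    return ["--amdgpu-target=%s" % t for t in targets]
```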
The following commit (which switched G's internal CI to use ROCm 3.5) breaks the ROCm CSB build (which still uses ROCm 3.3)
22def20bae
This PR/commit simply puts back a couple of pieces of code that were removed in the previous commit, and makes them conditional on ROCm 3.5.
Note that the ROCm CSB build will be switching to ROCm 3.5 or higher in the near future, at which point all code in the `true` block for `#if TENSORFLOW_COMPILER_IS_HIP_CLANG` will become the default, and the code in the `false` / `#else` block will be removed.
Prior to Bazel 3.1.0, repository_ctx.execute() did not support file uploads. We
worked around this limitation by pasting the contents of a file into the command
line string. In the case of find_cuda_config.py we would hit command-line length
limits, and worked around this by maintaining a separate gzip-compressed,
base64-encoded version of find_cuda_config.py.
Bazel 3.1.0 added support for file uploads [1]. In this change we remove the
hack and upload find_cuda_config.py as part of repository_ctx.execute().
[1] 54e9a0e7be
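For context, a small Python sketch of the mechanics behind the old workaround described above (illustration only; the helper names are hypothetical):
```
import base64
import gzip

# Hypothetical helpers: the old hack kept a gzip-compressed, base64-encoded
# copy of find_cuda_config.py so its contents could be pasted into a command
# line without hitting length limits, and expanded it before execution.
def compress_script(source_text):
    return base64.b64encode(gzip.compress(source_text.encode("utf-8"))).decode("ascii")

def decompress_script(encoded):
    return gzip.decompress(base64.b64decode(encoded)).decode("utf-8")

assert decompress_script(compress_script("print('hello')")) == "print('hello')"
```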
PiperOrigin-RevId: 321570043
Change-Id: Idaf86f1c4a3acf39ab75ebabd80a92b0a7e4b84f
With these changes, Bazel generates typical Windows names (e.g. it adds the .exe extension to binaries, .dll to shared libraries, etc.).
PiperOrigin-RevId: 321317292
Change-Id: I5d2f25cc918c81b3fdb7d924b93124ec9a5481b4
Fix the list of cxx_builtin_include_directories. Only a few are needed, but those are more complicated (a mix of symlinked and real paths).
Properly return errors from the crosstool wrapper.
PiperOrigin-RevId: 318788040
Change-Id: Ia66898e98a9a4d8fb479c7e75317f4114f6081e5
When I added this to the Linux BUILD template, I forgot to add it here, too.
Adjust cuda_configure.bzl.oss so that it copies the binaries with the .exe
extension on Windows.
PiperOrigin-RevId: 317601952
Change-Id: I0712bcd926372cb9d067ead7f92270d52883bfd9