TensorFlow Builds

This directory contains all the files and setup instructions to run all the important builds and tests. You can run them yourself!

Run It Yourself

You have two options for running TensorFlow tests locally on your machine. First, using Docker, you can run our Continuous Integration (CI) scripts on TensorFlow devel images. The other option is to install all TensorFlow dependencies on your machine and run the scripts natively on your system.

Run TensorFlow CI Scripts using Docker

  1. Install Docker following the instructions on the Docker website.

  2. Start a container with one of the devel images here: https://hub.docker.com/r/tensorflow/tensorflow/tags/.

  3. Based on your choice of image, pick one of the scripts under https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/ci_build/linux and run it from the TensorFlow repository root, as sketched below.
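
A minimal sketch of this flow, assuming a CPU devel image and a Linux CPU script (the image tag, mount point, and script name are illustrative; use whatever matches your choices in steps 2 and 3):

    # Pull a devel image and start a container with your TensorFlow
    # checkout mounted as the working directory.
    docker pull tensorflow/tensorflow:devel
    docker run -it -v "$PWD":/workspace -w /workspace \
        tensorflow/tensorflow:devel bash

    # Inside the container, from the repository root, run the CI script
    # that matches the image (example name; see ci_build/linux for the list).
    tensorflow/tools/ci_build/linux/cpu/run_py3_core.sh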

Run TensorFlow CI Scripts Natively on your Machine

  1. Follow the instructions at https://www.tensorflow.org/install/source, but stop when you get to the section "Configure the installation". You do not need to configure the installation to run the CI scripts.

  2. Pick the script under tensorflow/tools/ci_build/ that matches your OS and the Python version you have installed, and run it from the repository root (see the example below).
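
For example, once the dependencies are installed, a Linux CPU run could be started from the repository root like this (the script name is illustrative; substitute the one matching your OS and Python version):

    cd tensorflow    # root of your TensorFlow checkout
    tensorflow/tools/ci_build/linux/cpu/run_py3_core.sh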

TensorFlow Continuous Integration

To verify that new changes don't break TensorFlow, we run builds and tests on either Jenkins or a CI system internal to Google.

We can trigger builds and tests on updates to master or on each pull request. Contact one of the repository maintainers to trigger builds on your pull request.

View CI Results

The pull request will show whether the change passed or failed the checks.

From the pull request, click Show all checks to see the list of builds and tests. Click on Details to see the results from Jenkins or the internal CI system.

Results from Jenkins are displayed in the Jenkins UI. For more information, see the Jenkins documentation.

Results from the internal CI system are displayed in the Build Status UI. In this UI, to see the logs for a failed build:

  • Click on the INVOCATION LOG tab to see the invocation log.

  • Click on the ARTIFACTS tab to see a list of all artifacts, including logs.

  • Individual test logs may be available. To see these logs, from the TARGETS tab, click on the failed target. Then, click on the TARGET LOG tab to see its test log.

    If you're looking at a target that is sharded or a test that is flaky, then the build tool divided the target into multiple shards or ran the test multiple times. Each test log is specific to the shard, run, and attempt. To see a specific log:

    1. Click on the log icon that is on the right next to the shard, run, and attempt number.

    2. In the grid that appears on the right, click on the specific shard, run, and attempt to view its log. You can also type the desired shard, run, or attempt number in the field above its grid.

Third party TensorFlow CI

Mellanox TensorFlow CI

How to start CI
  • Submit a special pull request (PR) comment to trigger CI: bot:mlx:test
  • The test session is run automatically.
  • Test results and artifacts (log files) are reported via PR comments.

CI Steps

CI includes the following steps:

  • Build TensorFlow (GPU version)
  • Run TensorFlow tests:
    • TF CNN benchmarks (TensorFlow 1.13 and earlier)
    • TF models (TensorFlow 2.0): ResNet, synthetic data, NCCL, multi_worker_mirrored distributed strategy

Test Environment

CI is run in the Mellanox lab on a 2-node cluster with the following parameters:

  • Hardware
    • IB: 1x ConnectX-6 HCA (connected to Mellanox Quantum™ HDR switch)
    • GPU: 1x Nvidia Tesla K40m
  • Software
    • Ubuntu 16.04.6
    • Internal stable MLNX_OFED, HPC-X™ and SHARP™ versions

Support (Mellanox)

For any questions, suggestions, or issues, contact Artem Ryabov.