From 818c0112bf4282c5641883ac46e1307b3bf2dba5 Mon Sep 17 00:00:00 2001
From: Abdul Baseer Khan <54206941+AbdulBaseerMohammedKhan@users.noreply.github.com>
Date: Thu, 4 Jun 2020 06:34:41 +0530
Subject: [PATCH] Update RELEASE.md

---
 RELEASE.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/RELEASE.md b/RELEASE.md
index 6f3aa94c203..7fc620bc18a 100644
--- a/RELEASE.md
+++ b/RELEASE.md
@@ -164,7 +164,7 @@ Coinciding with this change, new releases of [TensorFlow's Docker images](https:
 * `saved_model_cli aot_compile_cpu` allows you to compile saved models to XLA header+object files and include them in your C++ programs.
 * Enable `Igamma`, `Igammac` for XLA.
 * Deterministic Op Functionality:
-  * XLA reduction emitter is deterministic when the environment variable `TF_DETERMINISTIC_OPS` is set to "true" or "1". This extends deterministic `tf.nn.bias_add` back-prop functionality (and therefore also deterministic back-prop of bias-addition in Keras layers) to include when XLA JIT complilation is enabled.
+  * XLA reduction emitter is deterministic when the environment variable `TF_DETERMINISTIC_OPS` is set to "true" or "1". This extends deterministic `tf.nn.bias_add` back-prop functionality (and therefore also deterministic back-prop of bias-addition in Keras layers) to include when XLA JIT compilation is enabled.
   * Fix problem, when running on a CUDA GPU and when either environment variable `TF_DETERMINSTIC_OPS` or environment variable `TF_CUDNN_DETERMINISTIC` is set to "true" or "1", in which some layer configurations led to an exception with the message "No algorithm worked!"
 * Tracing and Debugging:
   * Add source, destination name to `_send` traceme to allow easier debugging.
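
For context, the release note touched by this patch describes opting into deterministic ops via an environment variable. A minimal sketch of how a user would set it (assuming the variable must be set before TensorFlow initializes, per the release note; the `import tensorflow` line is left commented so the snippet stands alone):

```python
import os

# Request deterministic op implementations; the release note says
# TF_DETERMINISTIC_OPS accepts "true" or "1". Set it before TensorFlow
# is imported so the library reads it at initialization.
os.environ["TF_DETERMINISTIC_OPS"] = "1"

# import tensorflow as tf  # TensorFlow would now use deterministic kernels,
#                          # including the XLA reduction emitter noted above.
```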