Ignore use_locking attribute when decomposing resource operations to target XLA

This matches the behavior of the existing bridge and aligns the MLIR-based bridge
implementation with it. It also matches the original intent: the comment was explicit
about the intent, but the implementation didn't follow it.
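The change below removes the `ConstBoolAttrFalse` constraint on `$use_locking`, so the DRR patterns match regardless of the attribute's value. A minimal Python sketch of the matching semantics (a simplified, hypothetical model, not the real MLIR pattern-matching API) shows why the old constraint silently skipped ops with `use_locking = true`:

```python
# Hypothetical, simplified model of DRR pattern matching.
# An "op" is modeled as a dict of attribute names to values.

def matches_old(op):
    # Old pattern: ConstBoolAttrFalse:$use_locking only matches when the
    # attribute is the constant false, so locked ops are left undecomposed.
    return op["use_locking"] is False

def matches_new(op):
    # New pattern: $use_locking binds the attribute unconditionally,
    # so the op is decomposed and the attribute is simply ignored.
    return True

locked = {"name": "tf.ResourceApplyRMSProp", "use_locking": True}
unlocked = {"name": "tf.ResourceApplyRMSProp", "use_locking": False}

print(matches_old(locked))    # the locked op fell through undecomposed
print(matches_new(locked))    # now decomposed regardless of use_locking
```

Ignoring the attribute is safe for XLA because the decomposed reads and writes are already serialized by the compiler, so the locking hint carries no additional semantics there.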

PiperOrigin-RevId: 358107344
Change-Id: I789b5b97cd1dcab4099a7c0388d075d65cd13950
This commit is contained in:
Mehdi Amini 2021-02-17 22:28:10 -08:00 committed by TensorFlower Gardener
parent ff6efc701e
commit cc143617a7


@@ -357,7 +357,7 @@ def DecomposeResourceApplyCenteredRMSProp :
   Pattern<
     (TF_ResourceApplyCenteredRMSPropOp:$src_op
        $var_resource, $mg_resource, $ms_resource, $mom_resource, $lr, $rho, $momentum, $epsilon,
-       $grad, ConstBoolAttrFalse:$use_locking
+       $grad, $use_locking
     ),
     [(TF_ConstOp:$one (GetScalarOfType<1> $grad)),
      (CreateTFReadVariableOp $src_op, $grad, $ms_resource),
@@ -419,7 +419,7 @@ def DecomposeResourceApplyRMSProp :
   Pattern<
     (TF_ResourceApplyRMSPropOp:$src_op
        $var_resource, $ms_resource, $mom_resource, $lr, $rho, $momentum, $epsilon,
-       $grad, ConstBoolAttrFalse:$use_locking
+       $grad, $use_locking
     ),
     [(TF_ConstOp:$one (GetScalarOfType<1> $grad)),
      (CreateTFReadVariableOp $src_op, $grad, $ms_resource),
@@ -456,7 +456,7 @@ def DecomposeResourceApplyProximalAdagrad :
   Pattern<
     (TF_ResourceApplyProximalAdagradOp:$src_op
        $var_resource, $accum_resource, $lr, $l1, $l2, $grad,
-       ConstBoolAttrFalse:$use_locking
+       $use_locking
     ),
     [(TF_ConstOp:$one (GetScalarOfType<1> $grad)),
      (TF_ConstOp:$zero (GetScalarOfType<0> $grad)),