Commit Graph

9 Commits

Author SHA1 Message Date
Jonathan Hseu
25251f057c Fix go proto handling 2020-01-31 12:47:04 -08:00
Tim Shen
1ba157f535 Add convolution kind FORWARD_BIAS_ACTIVATION to StreamExecutor. Previously we thought that the combination of kind = FORWARD and ActivationMode != kNone was a necessary and sufficient condition for the call target to be cudnnConvolutionBiasActivationForward(), eliminating the need to create FORWARD_BIAS_ACTIVATION. It turns out it is not: one can call the bias-activation convolution with activation = kNone, indicating the identity activation function, but with a non-trivial bias.
PiperOrigin-RevId: 247473822
2019-05-09 14:23:39 -07:00
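
A minimal sketch of the distinction this commit draws, using simplified stand-ins for StreamExecutor's dnn::ConvolutionKind and dnn::ActivationMode enums (not the actual declarations):

```cpp
enum class ConvolutionKind { FORWARD, BACKWARD_DATA, BACKWARD_FILTER,
                             FORWARD_BIAS_ACTIVATION };
enum class ActivationMode { kNone, kRelu };

// Old heuristic: route to cudnnConvolutionBiasActivationForward() only when
// an activation is present. This misses the identity-activation,
// non-trivial-bias case described above.
bool UseBiasActivationPathOld(ConvolutionKind kind, ActivationMode activation) {
  return kind == ConvolutionKind::FORWARD &&
         activation != ActivationMode::kNone;
}

// New scheme: the caller states its intent explicitly through the kind, so
// activation = kNone with a non-trivial bias still reaches the right target.
bool UseBiasActivationPathNew(ConvolutionKind kind) {
  return kind == ConvolutionKind::FORWARD_BIAS_ACTIVATION;
}
```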
A. Unique TensorFlower
4bc6009daa Add layer name to ConvolutionDescriptor.
PiperOrigin-RevId: 241637565
2019-04-02 17:40:34 -07:00
Tim Shen
94be8f012a Roll-forward:
Log convolutions during TensorFlow GPU conv autotuning. Also remove the same functionality from StreamExecutor.

We decided to move the logging from SE to TF and XLA for several reasons:
* Proto formats already exist in TF and XLA that are suitable for logging. No need to create a third proto.
* In the TF and XLA autotuning stage, we also do (or plan to do) correctness checking. We want to log the checking results.
* We are considering simplifying SE, so we prefer to keep SE simple for now.

The original patch failed on Windows because the Windows linker crashes when it links an object file generated from an empty source file. In this CL, that empty source file is gpu_utils.cc, in the case where everything is #ifdef'ed out by GOOGLE_CUDA. To work around it, simply don't compile the empty file at all for non-CUDA builds.

PiperOrigin-RevId: 237284443
2019-03-07 11:21:33 -08:00
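
A sketch of the failure mode this CL works around, with hypothetical file contents (the real gpu_utils.cc differs): when GOOGLE_CUDA is undefined, the preprocessor strips everything, the compiler emits an object file with no symbols, and the Windows linker crashes on it. Excluding the file from non-CUDA builds avoids producing that empty object at all.

```cpp
// gpu_utils.cc (hypothetical shape of the file)
#if GOOGLE_CUDA

// ... CUDA-only helpers live here; with GOOGLE_CUDA undefined, nothing
// survives preprocessing and the resulting object file is empty ...

#endif  // GOOGLE_CUDA
```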
Jeremy Lau
1ded777480 Automated rollback of commit 5aefc4e922
PiperOrigin-RevId: 236950480
2019-03-05 17:38:55 -08:00
Tim Shen
5aefc4e922 Log convolutions during TensorFlow GPU conv autotuning. Also remove the same functionality from StreamExecutor.
We decided to move the logging from SE to TF and XLA for several reasons:
* Proto formats already exist in TF and XLA that are suitable for logging. No need to create a third proto.
* In the TF and XLA autotuning stage, we also do (or plan to do) correctness checking. We want to log the checking results.
* We are considering simplifying SE, so we prefer to keep SE simple for now.

PiperOrigin-RevId: 236889526
2019-03-05 12:22:21 -08:00
Tim Shen
45db95c021 Log convolution calls to the custom logger, if profiling is on.
PiperOrigin-RevId: 226958458
2018-12-26 15:16:03 -08:00
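
A hedged sketch of the pattern this commit describes; the type and function names below are hypothetical, not the actual StreamExecutor API:

```cpp
#include <string>

// Stand-in for the serialized parameters of one convolution call.
struct ConvolutionCallRecord {
  std::string parameters;  // shapes, algorithm choice, etc.
};

class CustomLogger {
 public:
  virtual ~CustomLogger() = default;
  virtual void LogConvolution(const ConvolutionCallRecord& record) = 0;
};

// Log only when profiling is on, so the common path pays nothing.
void MaybeLogConvolution(bool profiling_enabled, CustomLogger* logger,
                         const ConvolutionCallRecord& record) {
  if (profiling_enabled && logger != nullptr) {
    logger->LogConvolution(record);
  }
}
```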
Tim Shen
1bd82e0959 Simplify some cuda_dnn logic:
* De-duplicate the accumulator type logic. Currently there are two ways to specify it: "AccumulatorType" as a template argument, or GetConvComputeType. Removed GetConvComputeType.
* Simplified GetCudnnDataType.

PiperOrigin-RevId: 222144108
2018-11-19 15:05:56 -08:00
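
A sketch of what a single, trait-based accumulator type scheme can look like (a hypothetical simplification; the actual cuda_dnn templates differ):

```cpp
#include <cstdint>
#include <type_traits>

// One source of truth: map each input element type to its accumulator type.
template <typename T>
struct ConvAccumulatorType {
  using type = T;  // default: accumulate in the input type
};

struct HalfT;  // stand-in for a half-precision type such as Eigen::half

// Half-precision inputs accumulate in float to preserve precision.
template <>
struct ConvAccumulatorType<HalfT> {
  using type = float;
};

// int8 inputs accumulate in int32.
template <>
struct ConvAccumulatorType<int8_t> {
  using type = int32_t;
};

static_assert(
    std::is_same<ConvAccumulatorType<int8_t>::type, int32_t>::value,
    "int8 convolutions accumulate in int32");
```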
Tim Shen
219bb5d8dc Use protos for dnn.h data structure implementations.
This is the first of a series of patches that log StreamExecutor
convolution calls. This patch introduces structured (proto) logging,
suitable for serialization and, potentially, deserialization.

PiperOrigin-RevId: 221667516
2018-11-15 12:42:20 -08:00
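
A minimal sketch of the approach, with a toy stand-in for the generated proto class (the real dnn.h descriptors and protos are much richer): the descriptor keeps its state in a proto, so logging a call reduces to serializing the underlying message.

```cpp
#include <string>

// Toy stand-in for a generated proto message; a real protobuf class
// provides SerializeAsString() and setters/getters like these.
class ConvolutionDescriptorProto {
 public:
  void set_pad_height(int v) { pad_height_ = v; }
  int pad_height() const { return pad_height_; }
  std::string SerializeAsString() const { return std::to_string(pad_height_); }

 private:
  int pad_height_ = 0;
};

// The descriptor is implemented on top of the proto instead of loose
// fields, so the exact parameters of a call can be logged and later
// re-parsed.
class ConvolutionDescriptor {
 public:
  ConvolutionDescriptor& set_pad_height(int v) {
    proto_.set_pad_height(v);
    return *this;
  }
  const ConvolutionDescriptorProto& ToProto() const { return proto_; }

 private:
  ConvolutionDescriptorProto proto_;
};
```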