Honor auto_pad attribute in ConvTranspose #4271
Conversation
} // namespace

TEST(ConvTransposeTest, ConvTranspose_1D_AsymmetricPads) {
There are 3 new tests associated with this change starting here. The rest are just formatting changes.
This issue has been automatically marked as stale due to inactivity and will be closed in 7 days if no further activity occurs. If further support is needed, please provide an update and/or more details.
This issue has been automatically closed due to inactivity. Please reactivate if further support is needed. |
// That said, we pad more on tail when total_pad is odd.
*pad_head = total_pad / 2;
*pad_tail = total_pad - total_pad / 2;
}
Could we merge lines 193 to 202 and lines 214 to 223 so they share the same code?
Sure, seems reasonable. I'll do that.
A little confused about the logic:
// symmetric padding.
// TODO: Remove this after we have supported asymmetric padding in the CUDA ConvTranspose kernel
if (auto_pad_attr == "SAME_UPPER" || auto_pad_attr == "SAME_LOWER") {
  return true;
Drop the ConvTranspose node down to CPU if the auto_pad attribute is in play, because the dynamically computed pads may or may not be symmetric; it is better to be safe than sorry. This shouldn't cause a perf regression in shipped prod models, as auto_pad was never previously supported in ConvTranspose.
jiafatom left a comment

Mainly checked the logic part; LGTM.
Description: Honor auto_pad attributes (if set) in ConvTranspose
Motivation and Context
Resolve #4086