
Honor auto_pad attribute in ConvTranspose #4271

Merged: hariharans29 merged 21 commits into master from convTranspose on Dec 12, 2020
Conversation

@hariharans29 (Member)

Description: Honor the auto_pad attribute (if set) in ConvTranspose

Motivation and Context
Resolve #4086

@hariharans29 hariharans29 requested a review from a team as a code owner June 18, 2020 06:24
@hariharans29 hariharans29 changed the title WIP: Honor auto_pad attribute in ConvTranspose Honor auto_pad attribute in ConvTranspose Jun 20, 2020

} // namespace

TEST(ConvTransposeTest, ConvTranspose_1D_AsymmetricPads) {
Member Author
There are 3 new tests associated with this change starting here. The rest are just formatting changes.

@stale

stale bot commented Aug 19, 2020

This issue has been automatically marked as stale due to inactivity and will be closed in 7 days if no further activity occurs. If further support is needed, please provide an update and/or more details.

@stale stale bot added the stale issues that have not been addressed in a while; categorized by a bot label Aug 19, 2020
@stale

stale bot commented Aug 29, 2020

This issue has been automatically closed due to inactivity. Please reactivate if further support is needed.

// That said, we pad more on tail when total_pad is odd.
*pad_head = total_pad / 2;
*pad_tail = total_pad - total_pad / 2;
}
Contributor
Could we merge lines 193-202 and lines 214-223 into shared code?

Member Author
Sure, that seems reasonable. I'll do that.

Member Author
Fixed

@jiafatom
Contributor

jiafatom commented Dec 9, 2020

A little confused about the logic: ComputeTotalPad takes strides into account, so the output shape should also depend on the strides. Why is the output shape the same as the input shape at the end? Can we add a unit test with stride > 1?

@hariharans29 hariharans29 reopened this Dec 10, 2020
@stale stale bot removed the stale issues that have not been addressed in a while; categorized by a bot label Dec 10, 2020
// symmetric padding.
// TODO: Remove this after we have supported asymmetric padding in the CUDA ConvTranspose kernel
if (auto_pad_attr == "SAME_UPPER" || auto_pad_attr == "SAME_LOWER") {
return true;
Member Author
Drop the ConvTranspose node down to CPU if the auto_pad attribute is in play, because the dynamically computed pads may or may not be symmetric; better to be safe than sorry. This shouldn't cause a perf regression in shipped production models, since auto_pad was never previously supported in ConvTranspose.

@jiafatom (Contributor) previously approved these changes Dec 11, 2020

@jiafatom left a comment:
Mainly checked the logic part. LGTM.

@hariharans29 hariharans29 merged commit c755ca0 into master Dec 12, 2020
@hariharans29 hariharans29 deleted the convTranspose branch December 12, 2020 06:30

Development

Successfully merging this pull request may close these issues.

ConvTranspose auto_pad=SAME_UPPER/SAME_LOWER not working properly

4 participants