
Conversation

@lly-zero-one (Contributor)

This is actually a bug in both the test and the average pool implementation.
In the test, we used the quantized values as the float input and failed to pad them with the zero_point.
In the op implementation, the size used for averaging is incorrect in the padded case when count_include_pad is true.
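
To make the failure mode concrete, here is a minimal sketch of the test-side issue (illustrative only, not the actual test code; the tensor shape, scale, and zero_point are arbitrary assumptions). When the float reference is computed on the integer representation of the quantized tensor, the padded cells must hold zero_point rather than 0, since zero_point is the integer value that dequantizes to 0.0:

```python
import torch
import torch.nn.functional as F

# Illustrative sketch of the test-side bug (not the actual PyTorch test code).
# scale, zero_point, and the tensor shape are arbitrary assumptions.
scale, zero_point = 0.1, 128
x = torch.randn(1, 2, 6, 6)
qx = torch.quantize_per_tensor(x, scale, zero_point, torch.quint8)

# Buggy reference: pooling the integer representation lets avg_pool2d pad
# with 0, but in the quantized integer domain 0.0 is represented by zero_point.
wrong_ref = F.avg_pool2d(qx.int_repr().float(), kernel_size=3, stride=2,
                         padding=1, count_include_pad=True)

# Fixed reference: pad explicitly with zero_point, then pool without padding,
# so the divisor is still the full 3x3 window (count_include_pad semantics).
padded = F.pad(qx.int_repr().float(), (1, 1, 1, 1), value=float(zero_point))
right_ref = F.avg_pool2d(padded, kernel_size=3, stride=2, padding=0)

# The two references differ only where a window overlaps the padding border.
print((wrong_ref - right_ref).abs().max())
```

The op-side half of the fix is the analogous point inside the kernel: with count_include_pad set, the divisor must be the full kernel area, padded cells included, rather than only the count of non-padded elements.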

@lly-zero-one added the `oncall: quantization` (Quantization support in PyTorch) label on Oct 17, 2019.
@lly-zero-one (Contributor, Author)

I think it is ready to go. Can you stamp it? @supriyar @raghuramank100

@facebook-github-bot (Contributor) left a comment:

@lly-zero-one has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@supriyar (Contributor) left a comment:

Please address the comment. Otherwise looks good!

@facebook-github-bot (Contributor) left a comment:

@lly-zero-one is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

zdevito pushed a commit to zdevito/ATen that referenced this pull request on Oct 21, 2019:
Summary:
This is actually a bug in both the test and the average pool implementation.
In the test, we used the quantized values as the float input and failed to pad them with the zero_point.
In the op implementation, the size used for averaging is incorrect in the padded case when count_include_pad is true.
Pull Request resolved: pytorch/pytorch#28260

Differential Revision: D18039960

Pulled By: lly-zero-one

fbshipit-source-id: 7b5d34498b60f5d574a276a22798c9f576944734
@facebook-github-bot (Contributor)

@lly-zero-one merged this pull request in 4d9c017.

thiagocrepaldi pushed a commit to thiagocrepaldi/pytorch that referenced this pull request on Feb 4, 2020:
Summary:
This is actually a bug in both the test and the average pool implementation.
In the test, we used the quantized values as the float input and failed to pad them with the zero_point.
In the op implementation, the size used for averaging is incorrect in the padded case when count_include_pad is true.
Pull Request resolved: pytorch#28260

Differential Revision: D18039960

Pulled By: lly-zero-one

fbshipit-source-id: 7b5d34498b60f5d574a276a22798c9f576944734

Labels: Merged, oncall: quantization (Quantization support in PyTorch)

4 participants