Refactor qconv_prepack and qconv_unpack to support conv3d (#146) #28481

Conversation
This pull request was exported from Phabricator. Differential Revision: D18023651

link to #28480

link to pytorch/FBGEMM#146
jamesr66a left a comment
Generally looks OK. I'd like to defer to Vitaly to verify the memory layout changes
Hmm, I don't really like the naming inconsistency here (conv_prepack vs conv3d_prepack; I'd prefer conv2d_prepack). It would be BC-breaking, but quantization is experimental so it's probably okay.
The problem is that the conv_prepack op is already there, and if we change the API name there may be backward-compatibility issues.
Same comment on naming inconsistency as above
Same as the comments above: because of backward-compatibility tests, we cannot change the existing op names and args.
What I'm saying above is that we probably should break backwards compatibility; otherwise we're in a state that doesn't make sense. We can defer this to another PR for now, though.
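For readers following the thread, here is a minimal sketch of the naming split under discussion. The op names come from this conversation, but the call signatures are assumptions and not verified against the diff:

```python
import torch

# Hypothetical illustration of the inconsistency discussed above: the 2d path
# keeps the generic name while the new 3d path carries an explicit suffix.
# Signatures are assumed (weight, bias, stride, padding, dilation, groups).
w2d = torch.quantize_per_tensor(torch.randn(8, 4, 3, 3), 0.1, 0, torch.qint8)
packed_2d = torch.ops.quantized.conv_prepack(      # generic name, 2d only
    w2d, None, [1, 1], [0, 0], [1, 1], 1)

w3d = torch.quantize_per_tensor(torch.randn(8, 4, 3, 3, 3), 0.1, 0, torch.qint8)
packed_3d = torch.ops.quantized.conv3d_prepack(    # dimension-suffixed name
    w3d, None, [1, 1, 1], [0, 0, 0], [1, 1, 1], 1)
```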
test/test_quantized.py
Naming the function arguments qconv_prepack and qconv_unpack can be confusing for readers. Can you name them something like unpack_fn and pack_fn to distinguish them from the ops?
Done.
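As a sketch of what the rename might look like in the test (helper and argument names here are hypothetical, not the actual test code):

```python
import torch

# Hypothetical test helper: the prepack/unpack ops are passed in as arguments,
# so naming the parameters pack_fn/unpack_fn makes clear they are callables
# rather than the ops themselves.
def check_qconv_pack_unpack(pack_fn, unpack_fn, weight, bias,
                            stride, padding, dilation, groups):
    packed = pack_fn(weight, bias, stride, padding, dilation, groups)
    unpacked_weight, unpacked_bias = unpack_fn(packed)
    # Round-tripping through prepack/unpack should preserve the weight.
    assert torch.equal(weight.int_repr(), unpacked_weight.int_repr())
```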
c10/core/MemoryFormat.h
cc @VitalyFedyunin does this look OK?
:'( any plan to support it in the future?
I guess there is? Actually I am not quite sure about that part.
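For context, the 3d channels-last format discussed here did arrive later: recent PyTorch releases expose it as torch.channels_last_3d for 5d (NCDHW) tensors. A quick illustration using that later public API:

```python
import torch

# ChannelsLast3d was not available at the time of this PR; later releases
# expose it as torch.channels_last_3d for 5d (NCDHW) tensors.
x = torch.randn(1, 4, 2, 8, 8)                  # contiguous NCDHW layout
y = x.to(memory_format=torch.channels_last_3d)  # NDHWC order in memory
print(x.stride())  # (512, 128, 64, 8, 1)
print(y.stride())  # (512, 1, 256, 32, 4)
```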
For this conv_param_t invocation and the one below, can we get param-name comments? (e.g. like `/*packA=*/packA,`)
Done.
VitalyFedyunin left a comment
LGTM in terms of memory formats. Took a couple of important notes for myself on how to implement 3d channels last.
This is a nice candidate to be a public function, similar to empty_strided.
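For reference, empty_strided is the existing public function the suggestion points to: it allocates an uninitialized tensor with caller-chosen strides. A quick illustration:

```python
import torch

# torch.empty_strided allocates an uninitialized tensor with explicit strides;
# the suggestion is to expose the PR's internal layout helper the same way.
t = torch.empty_strided((2, 3), (1, 2))  # column-major (Fortran-order) 2x3
print(t.shape)     # torch.Size([2, 3])
print(t.stride())  # (1, 2)
```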
No need to check is_contiguous; src.contiguous() provides the same functionality internally.
Done.
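The point here: contiguous() is already a no-op for a tensor that is contiguous, so an explicit is_contiguous() guard is redundant. A small demonstration (variable names are illustrative):

```python
import torch

# Tensor.contiguous() returns the tensor itself when it is already
# contiguous, so wrapping the call in an is_contiguous() check adds nothing.
src = torch.randn(2, 3)
assert src.is_contiguous()
assert src.contiguous() is src    # no copy for an already-contiguous tensor

t = src.t()                       # transposed view, not contiguous
assert not t.is_contiguous()
assert t.contiguous() is not t    # a copy happens only when needed
```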
Summary: Pull Request resolved: pytorch#28481
Refactor qconv_prepack and qconv_unpack to support conv3d
Test Plan: buck test mode/dev-nosan caffe2/test:quantized -- "Conv"
Reviewed By: dskhudia
Differential Revision: D18023651
fbshipit-source-id: 4c6be0e3ebf9d995e442c5be63d4316944fa9440
LGTM! @dskhudia has made comments and approved in the internal tools. Please make sure all OSS tests pass.
This pull request has been merged in d5afd97.
Summary: Pull Request resolved: pytorch/pytorch#28481
Refactor qconv_prepack and qconv_unpack to support conv3d
Test Plan: buck test mode/dev-nosan caffe2/test:quantized -- "Conv"
Reviewed By: dskhudia
Differential Revision: D18023651
fbshipit-source-id: 8cbc9fe68f93bc4b247a4f41423c6d8c30a5ef90
Summary:
Changelog:
- Re-enable multinomial sampling when the probability tensor has `dtype == torch.half`. It seems to have been missed in #28481.
Fixes #29211
Pull Request resolved: #29266
Differential Revision: D18619105
Pulled By: ezyang
fbshipit-source-id: 1f87e5183e75de5c5e0ffde862fc72d040b32864
Summary:
Pull Request resolved: pytorch/FBGEMM#146
Refactor qconv_prepack and qconv_unpack to support conv3d
Test Plan: buck test mode/dev-nosan caffe2/test:quantized -- "Conv"
Reviewed By: dskhudia
Differential Revision: D18023651