Conversation

@xiaomengy
Contributor

Summary:
Pull Request resolved: pytorch/FBGEMM#146

Refactor qconv_prepack and qconv_unpack to support conv3d

Test Plan: buck test mode/dev-nosan caffe2/test:quantized -- "Conv"

Reviewed By: dskhudia

Differential Revision: D18023651
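
The refactor described in the summary can be sketched as follows (my illustration, not the PR's actual code): templating the prepack path over the number of spatial dimensions lets a single implementation serve both conv2d and conv3d. The type and function names below are hypothetical.

```cpp
#include <array>
#include <cstdint>

// Hypothetical sketch: one prepack path templated over the number of
// spatial dimensions (2 for conv2d, 3 for conv3d).
template <int kSpatialDim>
struct PackedConvWeight {
  std::array<int64_t, kSpatialDim> kernel;  // {kH, kW} or {kT, kH, kW}
  int64_t output_channels;
};

template <int kSpatialDim>
PackedConvWeight<kSpatialDim> qconv_prepack(
    const std::array<int64_t, kSpatialDim>& kernel,
    int64_t output_channels) {
  // Real code would reorder the weights into FBGEMM's packed layout here;
  // this sketch only records the shape metadata.
  return {kernel, output_channels};
}
```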

@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D18023651

@xiaomengy xiaomengy self-assigned this Oct 23, 2019
@xiaomengy
Contributor Author

link to #28480

@xiaomengy
Contributor Author

link to pytorch/FBGEMM#146


@jamesr66a jamesr66a left a comment

Generally looks OK. I'd like to defer to Vitaly to verify the memory layout changes.

Collaborator

Hmm, I don't really like the naming inconsistency here (conv_prepack vs. conv3d_prepack; I'd prefer conv2d_prepack). It would be BC-breaking, but quantization is experimental, so it's probably okay.

Contributor Author

The problem is that the conv_prepack op already exists; if we change the API name, there may be backward-compatibility issues.

Collaborator

Same comment on naming inconsistency as above.

Contributor Author

Same as the comments above: because of backward-compatibility tests, we cannot change the existing op names and arguments.

Collaborator

What I'm saying above is that we probably should break backwards compatibility; otherwise we're in a state that doesn't make sense. We can defer this to another PR for now, though.

Collaborator

Naming the function arguments qconv_prepack and qconv_unpack can be confusing for readers. Can you name them something like unpack_fn and pack_fn to distinguish them from the ops?
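
The suggestion above, sketched with a hypothetical helper (the `Weights` type and `roundtrip_ok` name are my illustration): the callable parameters are named `pack_fn` / `unpack_fn`, so they cannot be mistaken for the registered `quantized::conv_prepack` / `quantized::conv_unpack` ops themselves.

```cpp
#include <functional>
#include <vector>

using Weights = std::vector<int>;

// Hypothetical test helper illustrating the naming suggestion: the
// parameters are pack_fn / unpack_fn, distinct from the op names.
bool roundtrip_ok(
    const std::function<Weights(const Weights&)>& pack_fn,
    const std::function<Weights(const Weights&)>& unpack_fn,
    const Weights& w) {
  // Prepack followed by unpack should reproduce the original weights.
  return unpack_fn(pack_fn(w)) == w;
}
```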

Contributor Author

Done.

Collaborator

cc @VitalyFedyunin does this look OK?

Collaborator

Any plans to support it in the future? :'(

Contributor Author

I guess there is? Actually, I'm not quite sure about that part.

Collaborator

For this conv_param_t invocation and the one below, can we get param-name comments?
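
What the reviewer is asking for, sketched with a stand-in type (the real `fbgemm::conv_param_t` lives in FBGEMM's `ConvUtils.h`; the fields and values here are illustrative): annotate each positional constructor argument with a `/*name=*/` comment so readers can tell which value is which.

```cpp
#include <array>

// Stand-in for fbgemm::conv_param_t<kSpatialDim> (illustrative fields only).
template <int kSpatialDim>
struct ConvParam {
  int mb, ic, oc, groups;
  std::array<int, kSpatialDim> in_dims, kernel, stride;
};

// With /*name=*/ comments, each positional argument is self-describing.
inline ConvParam<3> make_example_conv_param() {
  return ConvParam<3>{
      /*MB=*/1,
      /*IC=*/4,
      /*OC=*/8,
      /*G=*/1,
      /*IN_DIM=*/{8, 16, 16},
      /*K=*/{3, 3, 3},
      /*STRIDE=*/{1, 1, 1}};
}
```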

Contributor Author

Done.



@VitalyFedyunin VitalyFedyunin left a comment

LGTM in terms of memory formats. Took a couple of important notes on how to implement 3d channels last.
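
The "3d channels last" layout mentioned above can be sketched like this (my illustration of the layout, not code from this PR): a 5-d tensor with sizes (N, C, D, H, W) stored in NDHWC order has the element strides computed below.

```cpp
#include <array>
#include <cstdint>

// Channels-last-3d strides (NDHWC storage order) for sizes (N, C, D, H, W).
// Illustrative helper, not part of this PR.
inline std::array<int64_t, 5> channels_last_3d_strides(
    const std::array<int64_t, 5>& sizes) {
  const int64_t C = sizes[1], D = sizes[2], H = sizes[3], W = sizes[4];
  return {/*N=*/C * D * H * W,
          /*C=*/1,
          /*D=*/C * H * W,
          /*H=*/C * W,
          /*W=*/C};
}
```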

Contributor

This is a nice candidate for a public function, similar to empty_strided.

Contributor

No need to check is_contiguous; src.contiguous() provides the same functionality internally.

Contributor Author

Done.


Summary:
Pull Request resolved: pytorch#28481

Refactor qconv_prepack and qconv_unpack to support conv3d

Test Plan: buck test mode/dev-nosan caffe2/test:quantized -- "Conv"

Reviewed By: dskhudia

Differential Revision: D18023651

fbshipit-source-id: 4c6be0e3ebf9d995e442c5be63d4316944fa9440


@jianyuh jianyuh left a comment

LGTM! @dskhudia has made comments and approved in the internal tools. Please make sure all OSS tests pass.

@facebook-github-bot
Contributor

This pull request has been merged in d5afd97.

@xiaomengy xiaomengy deleted the export-D18023651 branch October 27, 2019 22:36
zdevito pushed a commit to zdevito/ATen that referenced this pull request Oct 27, 2019
Summary:
Pull Request resolved: pytorch/pytorch#28481

Refactor qconv_prepack and qconv_unpack to support conv3d

Test Plan: buck test mode/dev-nosan caffe2/test:quantized -- "Conv"

Reviewed By: dskhudia

Differential Revision: D18023651

fbshipit-source-id: 8cbc9fe68f93bc4b247a4f41423c6d8c30a5ef90
facebook-github-bot pushed a commit that referenced this pull request Nov 20, 2019
Summary:
Changelog:
- Re-enable multinomial sampling when the probability tensor has `dtype == torch.half`.

It seems to have been missed in #28481.

Fixes #29211
Pull Request resolved: #29266

Differential Revision: D18619105

Pulled By: ezyang

fbshipit-source-id: 1f87e5183e75de5c5e0ffde862fc72d040b32864