[quant] Quantized Tensor support deepcopy #28612
Conversation
gchanan left a comment:
can you remind me why quantized tensors use qint8 storages and not int8 storages?
I don't remember exactly, but one reason I can think of right now is that when we inspect the storage we want to be able to distinguish between qint8 and int8 storage.
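For illustration, a minimal sketch (not part of this PR) of the distinction described above: the quantized tensor reports a quantized dtype, while the raw integer values are exposed separately via int_repr(). Scale and zero point values here are illustrative.

```python
import torch

# Sketch only: per-tensor affine quantization with illustrative parameters.
x = torch.randn(4)
qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.qint8)

print(qx.dtype)             # torch.qint8 -- the quantized dtype
print(qx.int_repr().dtype)  # torch.int8  -- the underlying integer data
print(qx.q_scale(), qx.q_zero_point())
```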
gchanan left a comment:
does this support copy_, i.e. a.copy_(b)?
Yes it does, a test has been added.
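For reference, a minimal sketch of the behavior under test, assuming per-tensor affine quantization with illustrative parameters; the actual coverage lives in test/test_quantized_tensor.py.

```python
import copy
import torch

# Illustrative values; the real test is in test/test_quantized_tensor.py.
x = torch.randn(3)
qx = torch.quantize_per_tensor(x, scale=0.05, zero_point=2, dtype=torch.qint8)

# deepcopy yields an independent tensor with the same quantizer parameters.
qy = copy.deepcopy(qx)
assert qy.q_scale() == qx.q_scale()
assert qy.q_zero_point() == qx.q_zero_point()
assert torch.equal(qy.int_repr(), qx.int_repr())

# copy_ between quantized tensors is also supported; per the discussion below,
# the destination's quantizer is overwritten by the source's.
qz = torch.quantize_per_tensor(torch.zeros(3), scale=0.1, zero_point=0,
                               dtype=torch.qint8)
qz.copy_(qx)
print(qz.q_scale())  # expected to equal qx.q_scale()
```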
gchanan left a comment:
This, in and of itself, looks fine for deepcopy.
Why was the existing copy behavior (of clobbering the quantizer of the destination) chosen? That seems surprising, and I don't know why you even want to copy in that case.
It's because of pytorch/torch/nn/modules/module.py line 771 (at e5d6b75).
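Roughly, loading a state dict copies each loaded tensor into the existing parameter or buffer in place, so copy_ between quantized tensors with different quantizers has to work. A hedged sketch (not from the PR) of the resulting semantics discussed here:

```python
import torch

# Sketch only: the destination's quantizer is expected to be overwritten by
# the source's, matching the "clobbering" behavior discussed above.
src = torch.quantize_per_tensor(torch.randn(2), scale=0.5, zero_point=1,
                                dtype=torch.qint8)
dst = torch.quantize_per_tensor(torch.zeros(2), scale=0.1, zero_point=0,
                                dtype=torch.qint8)
dst.copy_(src)
print(dst.q_scale(), dst.q_zero_point())  # expected: src's scale and zero_point
```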
This pull request has been merged in 23193c1.
Summary:
Pull Request resolved: pytorch/pytorch#28612
att
Test Plan: python test/test_quantized_tensor.py
Imported from OSS
Differential Revision: D18255247
fbshipit-source-id: 814b12640fdf9d79b27482ee642ce430dbaeea68
Stack from ghstack:
Summary:
att
Test Plan:
python test/test_quantized_tensor.py
Reviewers:
gchanan
Subscribers:
Tasks:
Tags:
Differential Revision: D18255247