Conversation

@supriyar supriyar commented Sep 4, 2019

Stack from ghstack:

Add bias as an optional parameter in the packed conv weight struct.

Differential Revision: D17177723
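
For readers skimming the change, here is a minimal sketch of its shape; the field names below are illustrative, not the exact ones from this diff:

```cpp
#include <vector>
#include <ATen/ATen.h>
#include <c10/util/Optional.h>

// Sketch of a packed conv weight struct with the bias folded in as an
// optional member (illustrative field names, not the exact diff).
struct PackedConvWeight {
  at::Tensor packed_weight;        // prepacked quantized weights
  c10::optional<at::Tensor> bias;  // c10::nullopt when the conv has no bias
  std::vector<double> w_scale;     // per-output-channel weight scales
  std::vector<int32_t> w_zp;       // per-output-channel weight zero points
};
```

Making the bias optional lets the same packed struct serve convolutions created with bias=False, instead of requiring a separate code path.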

@supriyar supriyar requested a review from apaszke as a code owner September 4, 2019 02:41
@pytorchbot pytorchbot added the module: nn, module: operators, and oncall: quantization labels Sep 4, 2019
supriyar added a commit that referenced this pull request Sep 4, 2019
Add bias as an optional parameter in the packed conv weight struct.

Differential Revision: [D17177723](https://our.internmc.facebook.com/intern/diff/D17177723/)

ghstack-source-id: 89450557
Pull Request resolved: #25626
supriyar added a commit that referenced this pull request Sep 4, 2019
Pull Request resolved: #25626

Add bias as an optional parameter in the packed conv weight struct.
ghstack-source-id: 89452640

Differential Revision: [D17177723](https://our.internmc.facebook.com/intern/diff/D17177723/)
supriyar added a commit that referenced this pull request Sep 4, 2019
Pull Request resolved: #25626

Add bias as an optional parameter in the packed conv weight struct.
ghstack-source-id: 89454242

Differential Revision: [D17177723](https://our.internmc.facebook.com/intern/diff/D17177723/)
// The bias scale for each output channel is the product of that channel's
// weight scale and the activation scale.
for (int i = 0; i < K; ++i) {
  bias_scale.data_ptr<double>()[i] = pack_ptr.w_scale[i] * act_scale;
}
qbias = quantize_linear_per_channel_cpu(

Contributor:

quantize_linear_per_channel instead of quantize_linear_per_channel_cpu

Contributor Author:

Using the quantize_linear_per_channel function causes the following error:

RuntimeError: Expected object of type Variable but found type CPUFloatType for argument #1 'scales' (checked_cast_variable at caffe2/torch/csrc/autograd/VariableTypeManual.cpp:38)

Spoke to @yf225 offline and he said the error suggests that we are missing an at::AutoNonVariableTypeMode thread-local guard somewhere in the call path of the QLinearInt8 operator().
quantize_linear_per_channel dispatches to the VariableType dispatcher, which expects the input tensor to be a Variable.
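
For reference, a minimal sketch of how such a guard is typically scoped; the wrapper function and its arguments are hypothetical, not code from this PR:

```cpp
#include <ATen/ATen.h>
#include <ATen/core/LegacyTypeDispatch.h>

// Hypothetical helper standing in for the bias-quantization step inside
// QLinearInt8::operator(). While the guard below is alive on this thread,
// dispatch bypasses VariableType, so plain (non-Variable) tensors are accepted.
at::Tensor quantize_bias_per_channel(
    const at::Tensor& bias,
    const at::Tensor& scales,
    const at::Tensor& zero_points) {
  at::AutoNonVariableTypeMode non_var_type_mode(true);
  return at::quantize_linear_per_channel(
      bias, scales, zero_points, /*axis=*/{0}, at::kQInt32);
}
```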

Contributor:

Do you know why we are missing this guard?

Contributor Author:

I am not sure. @smessmer, any thoughts on whether this guard should be added?

Contributor:

Is this issue resolved?

Contributor Author:

Not yet, I will file a separate issue for this.

stride=stride, padding=padding, dilation=dilation,
groups=groups, bias=bias, padding_mode=padding_mode)

def weight(self):

Contributor:

Are these two methods being removed?

Contributor Author:

Yes, I don't see why they are needed, since the parent module already defines them. It looks like linear_relu.py doesn't have them either.

@supriyar supriyar requested a review from dskhudia September 4, 2019 21:05
supriyar added a commit that referenced this pull request Sep 5, 2019
Pull Request resolved: #25626

Add bias as an optional parameter in the packed conv weight struct.
ghstack-source-id: 89519225

Differential Revision: [D17177723](https://our.internmc.facebook.com/intern/diff/D17177723/)
supriyar added a commit that referenced this pull request Sep 5, 2019
Pull Request resolved: #25626

Add bias as an optional parameter in the packed conv weight struct.
ghstack-source-id: 89559766

Differential Revision: [D17177723](https://our.internmc.facebook.com/intern/diff/D17177723/)
supriyar added a commit that referenced this pull request Sep 6, 2019
Pull Request resolved: #25626

Add bias as an optional parameter in the packed conv weight struct.
ghstack-source-id: 89601358

Differential Revision: [D17177723](https://our.internmc.facebook.com/intern/diff/D17177723/)
supriyar added a commit that referenced this pull request Sep 6, 2019
Pull Request resolved: #25626

Add bias as an optional parameter in the packed conv weight struct.
ghstack-source-id: 89630144

Differential Revision: [D17177723](https://our.internmc.facebook.com/intern/diff/D17177723/)
@raghuramank100 raghuramank100 (Contributor) left a comment

Looks great, please address comments prior to submitting.

supriyar added a commit that referenced this pull request Sep 10, 2019
Pull Request resolved: #25626

Add bias as an optional parameter in the packed conv weight struct.
ghstack-source-id: 89780639

Differential Revision: [D17177723](https://our.internmc.facebook.com/intern/diff/D17177723/)
@facebook-github-bot

This pull request has been merged in c60dddb.

zdevito pushed a commit to zdevito/ATen that referenced this pull request Sep 10, 2019
Summary:
Pull Request resolved: pytorch/pytorch#25626

Add bias as an optional parameter in the packed conv weight struct.
ghstack-source-id: 89780639

Test Plan: python test/run_test.py --exclude nn --verbose --bring-to-front quantization quantized quantized_tensor quantized_nn_mods quantizer

Reviewed By: raghuramank100

Differential Revision: D17177723

fbshipit-source-id: e502f2196cb1c002db8b691124db740368944c92
jerryzh168 added a commit that referenced this pull request Sep 18, 2019
Summary:
Fix the patterns after changes to prepack functions (#25626)

Test Plan:
python test/test_jit.py 'TestJit.test_quant_fusion'

Reviewers:
mvz

Subscribers:

Tasks:

Tags:

[ghstack-poisoned]
jerryzh168 added four more commits with the same message that referenced this pull request Sep 18, 2019
jerryzh168 added a commit that referenced this pull request Sep 19, 2019
…rns"

Summary:
Fix the patterns after changes to prepack functions (#25626)

Test Plan:
python test/test_jit.py 'TestJit.test_quant_fusion'

Reviewers:
mvz

Subscribers:

Tasks:

Tags:

Differential Revision: [D17465553](https://our.internmc.facebook.com/intern/diff/D17465553)

[ghstack-poisoned]
facebook-github-bot pushed a commit that referenced this pull request Sep 19, 2019
Summary:
Pull Request resolved: #26414

Fix the patterns after changes to prepack functions (#25626)

Test Plan:
python test/test_jit.py 'TestJit.test_quant_fusion'

Imported from OSS

Differential Revision: D17465553

fbshipit-source-id: 7df6a6aa8389bb4a7a370c65ade4c2585b45b882
@facebook-github-bot facebook-github-bot deleted the gh/supriyar/9/head branch October 28, 2019 22:21