🐛 Bug
Currently, quantized LSTM supports only non-packed sequences; it should support packed sequences as well.
To Reproduce
import torch
from torch.nn.utils.rnn import pack_sequence

# Quantized APIs require a container, not a top-level module - thus the Sequential wrapper.
m = torch.nn.Sequential(torch.nn.LSTM(2, 5, num_layers=2))
x = pack_sequence([torch.rand(4, 2), torch.rand(3, 2), torch.rand(2, 2)])
m(x)  # works

qm = torch.quantization.quantize_dynamic(m, dtype=torch.qint8)
qm(x)
Today the quantized call fails with an AssertionError raised by assert batch_sizes is None.
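Until packed-sequence support lands, a minimal workaround sketch is to unpad the input before calling the quantized module, assuming the dynamically quantized LSTM accepts ordinary padded tensors (the batch_sizes is None path). Note the caveat in the comments: the padded timesteps get processed too, so per-sequence final hidden states can differ from the true packed-sequence result.

import torch
from torch.nn.utils.rnn import pack_sequence, pad_packed_sequence

m = torch.nn.Sequential(torch.nn.LSTM(2, 5, num_layers=2))
qm = torch.quantization.quantize_dynamic(m, dtype=torch.qint8)

x = pack_sequence([torch.rand(4, 2), torch.rand(3, 2), torch.rand(2, 2)])

# Unpack into a (seq_len, batch, input_size) padded tensor plus per-sequence lengths.
padded, lengths = pad_packed_sequence(x)

# Runs on the quantized LSTM, but padding timesteps of the shorter
# sequences are computed as well, which affects their final (h, c).
out, (h, c) = qm(padded)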