
Conversation

@ssnl (Collaborator) commented Jul 26, 2018

PackedSequence is never supposed to be created by users, but unfortunately some community repos are already doing this (e.g., here). A change we made broke the calling pattern `PackedSequence(data=x, batch_sizes=y)`. This patch adds back support for it.
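
For reference, a minimal sketch of the keyword calling pattern this patch keeps working; the shapes and values below are illustrative only, not taken from the linked repo:

```python
import torch
from torch.nn.utils.rnn import PackedSequence

# `data` holds the flattened time-major steps; `batch_sizes[t]` is the number
# of sequences still active at time step t, so sum(batch_sizes) == data.size(0).
data = torch.randn(6, 4)
batch_sizes = torch.tensor([3, 2, 1])

# The keyword-argument form that downstream code relies on:
packed = PackedSequence(data=data, batch_sizes=batch_sizes)
```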

@facebook-github-bot (Contributor) left a comment


@ssnl has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.


@weiyangfb (Contributor) commented:

test failure looks legit:

pytorch/test/test_jit.py

Lines 2228 to 2253 in 372d1d6

def test_pack_padded_pad_packed_trace(self):
    from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
    T, B, C = 3, 5, 7

    class PadPackedWrapper(torch.nn.Module):
        def __init__(self):
            super(PadPackedWrapper, self).__init__()

        def forward(self, x, seq_lens):
            x = pack_padded_sequence(x, seq_lens)
            x, _ = pad_packed_sequence(x)
            return x

    x = np.ones((T, B, C))
    seq_lens = np.array([3, 3, 2, 2, 1], dtype=np.int32)
    # set padding value so we can test equivalence
    for b in range(B):
        if seq_lens[b] < T:
            x[seq_lens[b]:, b, :] = 0
    seq_lens = torch.from_numpy(seq_lens)
    x = torch.autograd.Variable(torch.from_numpy(x), requires_grad=True)

    m = PadPackedWrapper()
    m_traced = torch.jit.trace(x, seq_lens)(m)
    y = m(x, seq_lens)

@facebook-github-bot (Contributor) left a comment


SsnL has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

# Using `*args` and an `if` statement on `len(args)` breaks BC of the
# calling pattern `PackedSequence(data=..., batch_sizes=...)`, so we
# have to provide two named arguments, `data` and `batch_sizes`.
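
To illustrate the point of the comment above, here is a standalone sketch (hypothetical class names, not the actual PyTorch source) of why an `*args`-based constructor rejects the keyword calling pattern while named parameters accept it:

```python
# Illustration only; PackedSequenceBase, PackedWithArgs and PackedWithNames
# are hypothetical names, not part of torch.nn.utils.rnn.
from collections import namedtuple

PackedSequenceBase = namedtuple('PackedSequenceBase', ['data', 'batch_sizes'])

class PackedWithArgs(PackedSequenceBase):
    # *args only captures positional arguments, so the keyword call
    # PackedWithArgs(data=..., batch_sizes=...) raises a TypeError.
    def __new__(cls, *args):
        if len(args) != 2:
            raise ValueError("expected (data, batch_sizes)")
        return super(PackedWithArgs, cls).__new__(cls, *args)

class PackedWithNames(PackedSequenceBase):
    # Named parameters keep both positional and keyword calls working.
    def __new__(cls, data, batch_sizes=None):
        return super(PackedWithNames, cls).__new__(cls, data, batch_sizes)

x, lens = [1, 2, 3], [3, 2, 1]
PackedWithNames(x, lens)                      # works
PackedWithNames(data=x, batch_sizes=lens)     # works
try:
    PackedWithArgs(data=x, batch_sizes=lens)  # TypeError: unexpected keyword argument
except TypeError as err:
    print(err)
```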


@ailzhang added the ready for review (this tag is deprecated) label Aug 14, 2018
@facebook-github-bot (Contributor) left a comment


SsnL has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@ssnl (Collaborator, Author) commented Aug 21, 2018

@apaszke does this look good now?

@ssnl deleted the bc_ps branch August 27, 2018 15:51
petrex pushed a commit to petrex/pytorch that referenced this pull request Aug 27, 2018
* upstream/master: (89 commits)
  move HeatmapMaxKeypointOp unittest to oss
  fix xfails involving literals (pytorch#10905)
  Bag of Distributions doc fixes (pytorch#10894)
  Remove FIXME_zerol() from test_jit.py (pytorch#10900)
  Increase BC for PackedSequence ctor (pytorch#9864)
  Remove ability of Scalars to hold Tensors.
  Begin a bestiary of MSVC/NVCC bugs. (pytorch#10883)
  Prevent JIT from overspecializing to every single size configuration (pytorch#10844)
  Handling failing test on ROCm.
  Update mobile predictor caller's interface
  Cache isContiguous and numel
  Create class constant for string literal 'blob_names'
  Conv BN fusion for 3D conv (pytorch#10239)
  Stop using symbolic override for tracing RNNs (pytorch#10638)
  Add registry to pybind_state (pytorch#10759)
  Remove the nanopb submodule
  Create at::linear (pytorch#10799)
  Refactor THCNumerics and add common math functions for at::Half (pytorch#10301)
  Remove Tensor constructor of Scalar. (pytorch#10852)
  Revert D9492561: [pytorch][PR] Moving the operator argument to the front for kernelPointwiseApply.
  ...
PenghuiCheng pushed a commit to PenghuiCheng/pytorch that referenced this pull request Sep 11, 2018
Summary:
PackedSequence is never supposed to be created by users, but unfortunately some community repos are already doing this (e.g., [here](https://github.com/huggingface/torchMoji/blob/7c191048ce906fc0404fe156827d97cb990ebecb/torchmoji/model_def.py#L218-L229)). A change we made broke the calling pattern `PackedSequence(data=x, batch_sizes=y)`. This patch adds back support for it.
Pull Request resolved: pytorch#9864

Differential Revision: D9011739

Pulled By: SsnL

fbshipit-source-id: 0e2012655d7f4863ec54803550df30874ec35d75
