Increase BC for PackedSequence ctor #9864
Conversation
facebook-github-bot left a comment:
@ssnl has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Test failure looks legit: lines 2228 to 2253 in 372d1d6.
facebook-github-bot left a comment:
SsnL has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
# Using an `*args` and an if statement on `len(args)` breaks BC of the
# calling pattern `PackedSequence(data=..., batch_sizes=...)`, so we
# have to provide two arguments with names `data` and `batch_sizes`.
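A minimal sketch, not the actual torch/nn/utils/rnn.py implementation, of why the constructor needs explicitly named `data` and `batch_sizes` parameters: a `*args` signature with a `len(args)` check rejects keyword calls, while named parameters accept both the positional and the keyword pattern. The class name `PackedSequenceSketch` is hypothetical.

```python
from collections import namedtuple

# Hypothetical stand-in for the namedtuple base used by PackedSequence.
_Base = namedtuple('PackedSequence', ['data', 'batch_sizes'])


class PackedSequenceSketch(_Base):
    # Named parameters keep both calling patterns working:
    #   PackedSequenceSketch(data, batch_sizes)            positional
    #   PackedSequenceSketch(data=..., batch_sizes=...)     keyword (used by external code)
    def __new__(cls, data, batch_sizes=None):
        return super(PackedSequenceSketch, cls).__new__(cls, data, batch_sizes)


# A `*args`-based signature such as
#     def __new__(cls, *args): ...
# would raise "TypeError: __new__() got an unexpected keyword argument 'data'"
# for the keyword form, breaking backward compatibility.
```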
facebook-github-bot left a comment:
SsnL has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@apaszke does this look good now?
* upstream/master: (89 commits)
  move HeatmapMaxKeypointOp unittest to oss
  fix xfails involving literals (pytorch#10905)
  Bag of Distributions doc fixes (pytorch#10894)
  Remove FIXME_zerol() from test_jit.py (pytorch#10900)
  Increase BC for PackedSequence ctor (pytorch#9864)
  Remove ability of Scalars to hold Tensors.
  Begin a bestiary of MSVC/NVCC bugs. (pytorch#10883)
  Prevent JIT from overspecializing to every single size configuration (pytorch#10844)
  Handling failing test on ROCm.
  Update mobile predictor caller's interface
  Cache isContiguous and numel
  Create class constant for string literal 'blob_names'
  Conv BN fusion for 3D conv (pytorch#10239)
  Stop using symbolic override for tracing RNNs (pytorch#10638)
  Add registry to pybind_state (pytorch#10759)
  Remove the nanopb submodule
  Create at::linear (pytorch#10799)
  Refactor THCNumerics and add common math functions for at::Half (pytorch#10301)
  Remove Tensor constructor of Scalar. (pytorch#10852)
  Revert D9492561: [pytorch][PR] Moving the operator argument to the front for kernelPointwiseApply.
  ...
Summary: PackedSequence is never supposed to be created by the user, but unfortunately some community repos already do this (e.g., [here](https://github.com/huggingface/torchMoji/blob/7c191048ce906fc0404fe156827d97cb990ebecb/torchmoji/model_def.py#L218-L229)). A change we made broke the calling pattern `PackedSequence(data=x, batch_sizes=y)`. This patch adds back support for that.
Pull Request resolved: pytorch#9864
Differential Revision: D9011739
Pulled By: SsnL
fbshipit-source-id: 0e2012655d7f4863ec54803550df30874ec35d75
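A short sketch of the two calling patterns involved, assuming a recent PyTorch install: `pack_padded_sequence` is the supported way to obtain a `PackedSequence`, while the keyword-argument construction below is the external pattern this patch keeps working. The tensor shapes and lengths are illustrative only.

```python
import torch
from torch.nn.utils.rnn import PackedSequence, pack_padded_sequence

# Supported way to obtain a PackedSequence: let PyTorch build it.
padded = torch.randn(4, 2, 8)   # (seq_len, batch, features), batch_first=False
lengths = [4, 2]                # per-sequence lengths, sorted in decreasing order
packed = pack_padded_sequence(padded, lengths)

# Calling pattern used by some community code (e.g. torchMoji) that this
# patch keeps working: constructing the object directly with keyword arguments.
packed_manual = PackedSequence(data=packed.data, batch_sizes=packed.batch_sizes)
assert torch.equal(packed_manual.data, packed.data)
```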