Closed
Labels
high priority, module: crash (Problem manifests as a hard crash, as opposed to a RuntimeError), module: nn (Related to torch.nn), module: rnn (Issues related to RNN support (LSTM, GRU, etc)), small (We think this is a small issue to fix. Consider knocking off high priority small issues), triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
Description
Passing an empty tensor (i.e. one with shape (0, n, m)) to pack_padded_sequence causes a "segmentation fault (core dumped)".
To reproduce the error:
import torch
from torch.nn.utils.rnn import pack_padded_sequence

# bs, max_sequence_len, emb_dim
x = torch.zeros([16, 10, 20])
length = torch.zeros([16])
mask = length != 0           # all lengths are zero, so the mask is all False
masked_x = x[mask]           # shape (0, 10, 20): an empty batch
masked_length = length[mask]
print(masked_x.shape, masked_length.shape)
pack_padded_sequence(masked_x, masked_length, batch_first=True)  # segfaults here
Expected behavior:
An exception should be raised stating that the input tensor must not be empty.
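Until the crash is fixed, callers can guard against the empty batch themselves. Below is a minimal sketch of such a check; check_nonempty_batch is a hypothetical helper name, not part of torch, and it only inspects the shape so it runs without touching pack_padded_sequence:

```python
def check_nonempty_batch(shape, batch_first=True):
    """Hypothetical caller-side guard: raise a ValueError instead of
    letting pack_padded_sequence hit the segfault on an empty batch.

    shape: the input tensor's shape, e.g. (batch, seq_len, emb_dim)
           when batch_first=True, (seq_len, batch, emb_dim) otherwise.
    """
    batch_dim = 0 if batch_first else 1
    if shape[batch_dim] == 0:
        raise ValueError(
            "pack_padded_sequence: input batch must not be empty "
            f"(got shape {tuple(shape)})"
        )

# The masked tensor from the repro above has shape (0, 10, 20):
check_nonempty_batch((16, 10, 20))      # full batch, no exception
try:
    check_nonempty_batch((0, 10, 20))   # empty batch, raises
except ValueError as e:
    print("caught:", e)
```

Calling this before pack_padded_sequence turns the hard crash into an ordinary, catchable exception.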
Environments reproduced on:
- PyTorch 1.0, Windows (installed via conda), CUDA 9.0
- PyTorch 1.0, Linux (installed via pip), CUDA 9.0.176
Thanks,
Jamie