Description
As we move from images to videos, it becomes necessary to feed sequential data into common image layers. However, how to handle sequential data with such layers is not clear in PyTorch. I suggest adding something like time-distributed layers that wrap ordinary layers and apply them across the time dimension of sequential inputs and outputs. I think this will become an increasingly important pattern to have in PyTorch. I hope the problem is clear.
Recently, I solved this problem for 1D feature vectors as follows; I welcome comments on how to generalize it if necessary.
import torch
import torch.nn as nn

class TimeDistributed(nn.Module):
    """Applies the wrapped module to every time step of a (time, batch, features) input."""

    def __init__(self, module):
        super(TimeDistributed, self).__init__()
        self.module = module

    def forward(self, x):
        # Inputs without a time dimension are passed through unchanged.
        if x.dim() <= 2:
            return self.module(x)
        t, n = x.size(0), x.size(1)
        # Merge the time and batch dimensions so the wrapped module sees a plain batch.
        x_reshape = x.contiguous().view(t * n, x.size(2))
        y = self.module(x_reshape)
        # Split the merged dimension back into (time, batch, features).
        y = y.contiguous().view(t, n, y.size(1))
        return y
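
For illustration, here is a minimal usage sketch of the wrapper above, assuming an input of shape (time, batch, features) and a plain nn.Linear as the wrapped module; the sizes and variable names are hypothetical, not part of the original proposal:

import torch
import torch.nn as nn

# Hypothetical sizes for the sketch.
time_steps, batch_size, in_features, out_features = 10, 4, 32, 16

# Wrap a linear layer so it is applied independently at every time step,
# using the TimeDistributed class defined above.
td_linear = TimeDistributed(nn.Linear(in_features, out_features))

x = torch.randn(time_steps, batch_size, in_features)  # (time, batch, features)
y = td_linear(x)

print(y.shape)  # expected: torch.Size([10, 4, 16])

The wrapped nn.Linear only ever sees a 2D batch of size time_steps * batch_size, which is why the same trick should work for any module whose input is a flat feature vector; modules that expect spatial dimensions (e.g. Conv2d) would need the reshape generalized to keep the trailing dimensions.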