
[feature request] time-distributed layers for application of normal layers to sequence data #1927

@erogol

Description


As we move from images to videos, it becomes important to feed sequential data into common image layers. However, it is not clear in PyTorch how to apply such layers to sequential data. I suggest adding something like time-distributed layers that wrap normal layers and align them with sequential inputs and outputs. I think this will become an increasingly important pattern in PyTorch over time. I hope I have explained the problem clearly.

I recently solved this problem for 1D feature vectors as follows; I would welcome comments before generalizing it.

class TimeDistributed(nn.Module):
    def __init__(self, module):
        super(TimeDistributed, self).__init__()
        self.module = module

    def forward(self, x):
        # No time dimension: apply the wrapped module directly.
        if x.dim() <= 2:
            return self.module(x)
        t, n = x.size(0), x.size(1)
        # Merge the time and batch dimensions: (t, n, f) -> (t * n, f).
        x_reshape = x.contiguous().view(t * n, x.size(2))
        y = self.module(x_reshape)
        # Split the time and batch dimensions back out: (t * n, f') -> (t, n, f').
        y = y.view(t, n, y.size(1))
        return y
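For illustration, here is a self-contained sketch of how such a wrapper could be used, together with a possible generalization to inputs that carry more than one trailing feature dimension (e.g. video frames fed to a conv layer). The class name `TimeDistributed` matches the snippet above; the generalized `forward` using `x.shape[2:]` is my own suggestion for the generalization asked about, not part of the original code.

```python
import torch
import torch.nn as nn

class TimeDistributed(nn.Module):
    """Applies `module` independently at every time step of a
    (time, batch, *features) input by folding time into the batch."""
    def __init__(self, module):
        super().__init__()
        self.module = module

    def forward(self, x):
        if x.dim() <= 2:
            return self.module(x)
        t, n = x.size(0), x.size(1)
        # Fold time into the batch dimension, keeping all trailing
        # feature dimensions intact: (t, n, *feat) -> (t * n, *feat).
        y = self.module(x.contiguous().view(t * n, *x.shape[2:]))
        # Unfold back to (t, n, *out_feat).
        return y.view(t, n, *y.shape[1:])

# 1D features: a Linear layer applied at each time step.
layer = TimeDistributed(nn.Linear(16, 8))
x = torch.randn(10, 4, 16)               # (time=10, batch=4, features=16)
print(layer(x).shape)                    # torch.Size([10, 4, 8])

# Image-like features: a Conv2d applied to every frame of a video batch.
conv = TimeDistributed(nn.Conv2d(3, 6, kernel_size=3, padding=1))
frames = torch.randn(10, 4, 3, 32, 32)   # (time, batch, C, H, W)
print(conv(frames).shape)                # torch.Size([10, 4, 6, 32, 32])
```

The only design decision is where the time axis lives; the sketch assumes time-first layout `(time, batch, ...)` as in the original snippet, so a batch-first input would need a transpose first.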


Labels

module: nn — Related to torch.nn
triaged — This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
