
Require every Tensor to have an associated Storage #130

@colesbury

Currently, empty Tensors may not have a Storage:

>>> x = torch.Tensor()
>>> x.storage() is None
True

I propose that we make every Tensor, even an empty one, have a Storage. I don't think this will require any changes to TH/THC (just PyTorch).

This has some advantages:

  • We often have code that assumes a valid Storage. Bugs from this assumption are hard to catch because they only occur with empty tensors.
  • Every CUDA tensor will have an associated device. Currently, empty CUDA tensors may not be on any device.
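The invariant proposed above can be sketched in plain Python (no torch dependency; the `Storage` and `Tensor` classes here are hypothetical stand-ins, not PyTorch's actual implementation): even a zero-element tensor owns a zero-length storage that carries a device, so callers never need a `None` check.

```python
class Storage:
    """Minimal stand-in for a TH storage: a buffer plus a device."""
    def __init__(self, size=0, device="cpu"):
        self.data = [0.0] * size
        self.device = device

    def __len__(self):
        return len(self.data)


class Tensor:
    """Under the proposal, every tensor owns a Storage, even when empty."""
    def __init__(self, size=0, device="cpu"):
        # Instead of leaving the storage as None when size == 0,
        # allocate a zero-length Storage so it always has a device.
        self._storage = Storage(size, device)

    def storage(self):
        return self._storage


x = Tensor()                        # empty tensor
assert x.storage() is not None      # never None under the proposal
assert x.storage().device == "cpu"  # device is always defined
assert len(x.storage()) == 0        # but the buffer holds no elements
```

With this invariant, code that reads `x.storage().device` works uniformly, rather than special-casing empty tensors.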
