documentation for sparse tensor creation after 0.4.0 update #7995

@juniorrojas

Description

From the 0.4.0 migration guide (https://pytorch.org/2018/04/22/0_4_0-migration-guide.html):

dtypes, devices and NumPy-style creation functions

In previous versions of PyTorch, we used to specify data type (e.g. float vs double), device type (cpu vs cuda) and layout (dense vs sparse) together as a “tensor type”. For example, torch.cuda.sparse.DoubleTensor was the Tensor type representing the double data type, living on CUDA devices, and with COO sparse tensor layout.

In this release, we introduce torch.dtype, torch.device and torch.layout classes to allow better management of these properties via NumPy-style creation functions.
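(As an aside, here is a minimal sketch of how those three classes appear in the 0.4.0-style API; the specific dtype, device and layout values below are purely illustrative.)

```python
import torch

# dtype, device and layout are now standalone objects instead of being
# encoded in the tensor type name (e.g. torch.cuda.sparse.DoubleTensor).
dtype = torch.float64                                                   # the "Double" part
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")  # the "cuda" part
layout = torch.sparse_coo                                               # the "sparse" part

x = torch.zeros(2, 3, dtype=dtype, device=device, layout=layout)
print(x.dtype, x.device, x.layout)
```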

Although this introduction to the dtype, device and layout features mentions sparse tensors, there are no examples of sparse tensor creation in the 0.4.0 style. All references I could find online still use things like torch.sparse.FloatTensor rather than torch.tensor, even in the official torch.sparse documentation (https://pytorch.org/docs/master/sparse.html). Is there a way to create a sparse tensor using torch.tensor? The migration guide gives examples like torch.tensor([[1], [2], [3]], dtype=torch.half, device=cuda) for dense tensors, but it's not obvious how this extends to sparse tensors, since they need indices, values and a size rather than just an array of numbers.
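For what it's worth, here is a hedged sketch of what new-style sparse creation might look like. It uses torch.sparse_coo_tensor, which exists in current PyTorch; whether that (or something like it) is the intended 0.4.0-style answer is exactly what this issue asks to have documented.

```python
import torch

# A 2x3 sparse matrix in COO format:
# indices is a 2 x nnz integer tensor (one row per dimension),
# values is a 1-D tensor with one entry per stored element.
indices = torch.tensor([[0, 1, 1],
                        [2, 0, 2]])
values = torch.tensor([3.0, 4.0, 5.0])

# Old style (pre-0.4.0): the tensor type name encodes dtype + layout, e.g.
#   torch.sparse.FloatTensor(indices, values, torch.Size([2, 3]))

# NumPy-style creation function with explicit dtype/device keywords
# (the sparse COO layout is implied by the constructor).
new = torch.sparse_coo_tensor(indices, values, (2, 3),
                              dtype=torch.float32, device="cpu")

print(new.to_dense())
```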

    Labels

    high priority, todo
