Auto-convert GPU arrays that support the __cuda_array_interface__ protocol #15601

@mrocklin

Description

A few projects now implement the __cuda_array_interface__ protocol, which was originally designed by the Numba developers to enable clean operation on arrays managed by other projects. I believe that today both PyTorch and CuPy have implemented this protocol, and that Numba is the only consumer.

However, libraries like PyTorch and CuPy could also identify this protocol on input objects and use it within their functions. Probably the first use case would be to allow users to easily convert between different GPU array libraries. For example, ideally the following would work.

import cupy
x = cupy.random.random((1000, 1000))  # array allocated on the GPU

import torch
t = torch.tensor(x)  # would ideally recognize __cuda_array_interface__ on x

Ideally the check within torch.tensor would be something like the following:

if hasattr(x, '__cuda_array_interface__'):
    ...

rather than the following:

if isinstance(x, cupy.ndarray):
    ...

which would generalize to future GPU libraries without special-casing each one.
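To make the proposed check concrete, here is a minimal sketch of the duck-typed approach. The class and function names are hypothetical, and the device pointer is fabricated (no real GPU allocation happens), but the dict keys (`shape`, `typestr`, `data`, `version`) follow the protocol as specified by Numba:

```python
class FakeDeviceArray:
    """Hypothetical stand-in for a GPU array (e.g. a cupy.ndarray).

    Exposes __cuda_array_interface__ with the keys the protocol
    defines; the device pointer below is fake, for illustration only.
    """
    def __init__(self, shape, typestr):
        self.shape = shape
        self.typestr = typestr

    @property
    def __cuda_array_interface__(self):
        return {
            'shape': self.shape,              # tuple of ints
            'typestr': self.typestr,          # NumPy-style type string, e.g. '<f8'
            'data': (0x7F00DEAD0000, False),  # (device pointer, read-only flag); fake
            'version': 2,
        }


def as_device_view(obj):
    """Duck-typed consumer: accept anything exposing the protocol,
    instead of isinstance-checking against one library's array class."""
    iface = getattr(obj, '__cuda_array_interface__', None)
    if iface is None:
        raise TypeError("object does not implement __cuda_array_interface__")
    ptr, readonly = iface['data']
    return iface['shape'], iface['typestr'], ptr


shape, typestr, ptr = as_device_view(FakeDeviceArray((1000, 1000), '<f8'))
```

A consumer written this way accepts CuPy arrays, Numba device arrays, or any future library's arrays identically, which is the point of the proposal.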

xref #11914

Labels

feature (A request for a proper, new feature)
module: cuda (Related to torch.cuda, and CUDA support in general)
triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
