Description
A few projects now implement the `__cuda_array_interface__` protocol, which was originally designed by the Numba developers to enable clean operation on arrays managed by other projects. I believe that today both PyTorch and CuPy have implemented this protocol, and that Numba is the only consumer.
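For reference, the protocol itself is just a property returning a dict that describes the device buffer. A minimal sketch of a producer, assuming version 2 of the interface as documented by Numba (the `DeviceBuffer` wrapper here is hypothetical):

```python
class DeviceBuffer:
    """Hypothetical wrapper around a raw CUDA allocation."""

    def __init__(self, ptr, shape, typestr="<f8"):
        self._ptr = ptr        # device pointer as a Python int
        self._shape = shape
        self._typestr = typestr

    @property
    def __cuda_array_interface__(self):
        # Version 2 of the interface: shape, typestr, data, and version
        # are required; strides=None means C-contiguous.
        return {
            "shape": self._shape,
            "typestr": self._typestr,
            "data": (self._ptr, False),  # (device pointer, read-only flag)
            "strides": None,
            "version": 2,
        }
```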
However, libraries like PyTorch and CuPy could also identify this protocol on input objects and use it within their functions. Probably the first use case would be to allow users to easily convert between different GPU array libraries. For example, ideally the following would work.
```python
import cupy
x = cupy.random.random((1000, 1000))

import torch
t = torch.tensor(x)
```

Ideally the check within `torch.tensor` would be something like the following:
```python
if hasattr(x, '__cuda_array_interface__'):
    ...
```

rather than the following:
```python
if isinstance(x, cupy.ndarray):
    ...
```

which would be useful for supporting future GPU libraries.
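As a sketch of how such duck typing could look on the consuming side (the `describe_gpu_array` helper below is hypothetical, not part of PyTorch or CuPy), a library only needs to read the dict; the raw device pointer can then be wrapped without copying:

```python
import numpy as np

def describe_gpu_array(x):
    """Hypothetical consumer: accept any object exposing the protocol,
    regardless of which library allocated it."""
    if not hasattr(x, "__cuda_array_interface__"):
        raise TypeError("expected an object implementing __cuda_array_interface__")
    iface = x.__cuda_array_interface__
    ptr, readonly = iface["data"]
    return {
        "device_ptr": ptr,
        "readonly": readonly,
        "shape": iface["shape"],
        "dtype": np.dtype(iface["typestr"]),
        # strides is optional; None (or absent) means C-contiguous
        "strides": iface.get("strides"),
    }
```

Any array produced by CuPy, Numba, or any future GPU library that exposes the attribute would pass through such a check unchanged.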
xref #11914