PyTorch version:

```
torch.__version__: 1.3.0
```

CuPy config:

```
CuPy Version         : 7.0.0rc1
CUDA Root            : /usr/local/cuda-10.0
CUDA Build Version   : 10000
CUDA Driver Version  : 10020
CUDA Runtime Version : 10000
cuDNN Build Version  : None
cuDNN Version        : None
NCCL Build Version   : None
NCCL Runtime Version : None
```
With PyTorch now accepting `__cuda_array_interface__`, I'd expect to be able to hand a CuPy-allocated CUDA array to PyTorch directly, without going through DLPack.
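For reference, the `__cuda_array_interface__` a producer like CuPy exposes is (roughly) a plain dict; this is a sketch for a 1-D float64 array like the one in the repro below, with a made-up placeholder in place of the real device pointer:

```python
# Rough sketch of a __cuda_array_interface__ dict for a (10000,) float64
# array. The pointer value is a made-up placeholder; a real producer
# fills in the actual device pointer.
iface = {
    "shape": (10000,),
    "typestr": "<f8",                 # little-endian 8-byte float
    "data": (0x7F0000000000, False),  # (device pointer, read-only flag) -- placeholder
    "strides": None,                  # None signals a C-contiguous layout
    "version": 2,
}
print(iface["typestr"], iface["strides"])
```

When `strides` is not `None`, the values are in bytes, not elements, which is where the mismatch below seems to arise.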
```python
import cupy as cp
import torch

a = cp.random.rand(10000)
b = torch.as_tensor(a)
```
This throws:

```
ValueError: given array strides not a multiple of the element byte size. Make a copy of the array to reallocate the memory.
```
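The error message suggests the consumer converts the interface's byte strides to element strides and rejects any stride that isn't an exact multiple of the item size. A minimal pure-Python sketch of that check (the function name `element_strides` is mine, not PyTorch's):

```python
def element_strides(byte_strides, itemsize):
    """Convert byte strides to element strides, as a consumer of the
    array interface must; reject strides that don't divide evenly."""
    for s in byte_strides:
        if s % itemsize != 0:
            raise ValueError(
                "given array strides not a multiple of the element byte size. "
                "Make a copy of the array to reallocate the memory."
            )
    return tuple(s // itemsize for s in byte_strides)

# A contiguous 10x10 float64 matrix has byte strides (80, 8):
print(element_strides((80, 8), 8))  # -> (10, 1)
```

So if either side reports strides in the wrong unit for a contiguous array, this check would fire exactly as above.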
If this is on PyTorch's side, please let me know, and I'll file an issue there. Thanks!