torch.bmm Segmentation Fault with mixed CPU / GPU #12406

@jcjohnson

Description

Most torch functions raise a user-friendly exception when they encounter tensors on incompatible devices; torch.bmm, however, segfaults instead.

To Reproduce

In an interactive shell:

>>> import torch
>>> x = torch.randn(3, 3).cpu()
>>> x.mm(x.cuda())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: Expected object of backend CPU but got backend CUDA for argument #2 'mat2'
>>> x[None].bmm(x[None].cuda())
Segmentation fault

Expected behavior

torch.bmm should raise an exception, as torch.mm does, rather than segfault.
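
Until the check is added, one possible user-side workaround is to move both operands onto a common device before calling torch.bmm, so the mixed CPU/CUDA call is never made. This is a minimal sketch; safe_bmm is a hypothetical helper written for this report, not a PyTorch API:

import torch

def safe_bmm(a, b):
    # Hypothetical helper: align devices before calling bmm so the
    # mixed CPU/CUDA call that currently segfaults is never reached.
    if a.device != b.device:
        b = b.to(a.device)
    return torch.bmm(a, b)

x = torch.randn(3, 3)
y = safe_bmm(x[None], x[None].cuda())  # computed on CPU, no segfault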

Environment

PyTorch version: 1.0.0.dev20181005
Is debug build: No
CUDA used to build PyTorch: 9.0.176

OS: Ubuntu 16.04.4 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609
CMake version: version 3.5.1

Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 9.0.176
GPU models and configuration:
GPU 0: Quadro GP100
GPU 1: Quadro GP100

Nvidia driver version: 396.26
cuDNN version: Could not collect

Versions of relevant libraries:
[pip] numpy (1.15.2)
[pip] torch (1.0.0.dev20181005)
[conda] pytorch-nightly 1.0.0.dev20181005 py3.7_cuda9.0.176_cudnn7.1.2_0 pytorch
