
Need GPU implementation of dirichlet_grad (originally: Reparameterized gradient on GPU for beta / Dirichlet) #15773

@vmoens

Description

🐛 Bug

Reparameterized gradient computation fails when Dirichlet (and hence Beta) parameters are dispatched on the GPU.

This works:

import torch

a = torch.ones(3, 4, 1, 5, requires_grad=True)
b = torch.ones(3, 4, 1, 5, requires_grad=True)
s = torch.distributions.beta.Beta(a, b).rsample(torch.Size((10,)))
torch.sum(s).backward()
a.grad  # gradient is populated as expected

but this doesn't:

a = torch.ones(3, 4, 1, 5, requires_grad=True, device='cuda')
b = torch.ones(3, 4, 1, 5, requires_grad=True, device='cuda')
s = torch.distributions.beta.Beta(a, b).rsample(torch.Size((10,)))
torch.sum(s).backward()  # fails: dirichlet_grad has no CUDA implementation
a.grad
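
Since Beta.rsample is implemented on top of the Dirichlet sampler (hence the dirichlet_grad in the title), the same failure can presumably be reproduced directly with Dirichlet on CUDA. A minimal sketch, with an arbitrary concentration shape:

import torch

# Concentration on the GPU; requires_grad makes rsample take the
# reparameterized path whose backward needs dirichlet_grad.
c = torch.ones(3, 4, 5, requires_grad=True, device='cuda')
s = torch.distributions.dirichlet.Dirichlet(c).rsample(torch.Size((10,)))
torch.sum(s).backward()  # expected to fail the same way as the Beta case
c.grad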

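Until dirichlet_grad gets a CUDA implementation, one possible workaround (a sketch, not part of the original report) is to sample on the CPU and move the result back; .cpu() and .cuda() are differentiable, so the gradient still reaches the CUDA leaf tensors:

import torch

a = torch.ones(3, 4, 1, 5, requires_grad=True, device='cuda')
b = torch.ones(3, 4, 1, 5, requires_grad=True, device='cuda')

# Sample on the CPU, where the reparameterized backward is implemented,
# then move the sample back to the GPU for the rest of the graph.
s = torch.distributions.beta.Beta(a.cpu(), b.cpu()).rsample(torch.Size((10,))).cuda()
torch.sum(s).backward()
a.grad  # populated, and lives on the GPU
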
Environment

PyTorch version: 1.0.0
Is debug build: No
CUDA used to build PyTorch: 9.0.176

OS: Ubuntu 16.04.5 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609
CMake version: version 3.12.2

Python version: 3.5
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: TITAN X (Pascal)
...

Nvidia driver version: 384.130
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_static_v7.a

Versions of relevant libraries:
[pip] Could not collect
[conda] blas 1.0 mkl
[conda] magma-cuda92 2.4.0 1 pytorch
[conda] mkl 2018.0.3 1
[conda] mkl-include 2019.1 144
[conda] mkl_fft 1.0.6 py37h7dd41cf_0
[conda] mkl_random 1.0.1 py37h4414c95_1

Metadata
    Labels

    feature: A request for a proper, new feature.
    module: bootcamp: We plan to do a full writeup on the issue, and then get someone to do it for onboarding.
    module: cuda: Related to torch.cuda, and CUDA support in general.
    triaged: This issue has been looked at by a team member, and triaged and prioritized into an appropriate module.
