🐛 Bug
When a Binomial instance is initialized with a large logit, which effectively sets the distribution's mean to 1, log_prob(1) returns -inf when it should return 0.
This is likely to occur in practice when a neural net outputs the logit and pushes the probability toward 1.
To Reproduce
Steps to reproduce the behavior:
import torch

dist = torch.distributions.Binomial(logits=torch.Tensor([90.5229]))
dist.sample()
>> tensor([1.])
dist.log_prob(dist.sample())
>> tensor([-inf])
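
I suspect (an assumption on my part, not verified against the source) that the log-normalizer is being evaluated as log(1 + exp(logits)), which overflows float32 once the logit exceeds roughly 88, so the final subtraction yields -inf. A minimal sketch of that overflow:

import torch

logit = torch.Tensor([90.5229])
# exp() overflows float32 above ~88.7 (float32 max is ~3.4e38), so the
# naive normalizer log(1 + exp(logit)) becomes inf and poisons log_prob:
print(torch.log1p(logit.exp()))             # tensor([inf])
# A softplus-style evaluation of the same quantity stays finite:
print(torch.nn.functional.softplus(logit))  # tensor([90.5229])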
Expected behavior
The behavior should match what happens when initializing with a smaller logit, for example:
dist = torch.distributions.Binomial(logits=torch.Tensor([15]))
dist.sample()
>> tensor([1.])
dist.log_prob(dist.sample())
>> tensor([0.])
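
As a workaround (a sketch assuming total_count=1, i.e. the Bernoulli case), the log-probability can be computed directly from the logit with logsigmoid, which stays finite for large logits:

import torch
import torch.nn.functional as F

logit = torch.Tensor([90.5229])
k = torch.Tensor([1.])  # the observed value, in {0, 1}

# log p(k) = k * log(sigmoid(logit)) + (1 - k) * log(1 - sigmoid(logit)),
# evaluated stably via log(sigmoid(x)) = logsigmoid(x) and
# log(1 - sigmoid(x)) = logsigmoid(-x):
log_prob = k * F.logsigmoid(logit) + (1 - k) * F.logsigmoid(-logit)
print(log_prob)  # approximately 0 (about -4.9e-40), not -inf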
Environment
PyTorch version: 0.4.1
Is debug build: No
CUDA used to build PyTorch: None
OS: Microsoft Windows 10 Pro
GCC version: Could not collect
CMake version: Could not collect
Python version: 3.6
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Versions of relevant libraries:
[pip3] numpy==1.15.1
[pip3] torch==0.4.1
[pip3] torchvision==0.2.1
[conda] Could not collect