Binomial.log_prob returns -inf when actual probability is 1 if logit is large #17843

@HenDriess

Description

🐛 Bug

When a Binomial instance is initialized with a large logit, which effectively sets the success probability to 1, log_prob(1) returns -inf where it should return 0.

This is likely to occur when a neural net outputs the logit and pushes the probability toward 1.

To Reproduce

Steps to reproduce the behavior:

dist = torch.distributions.Binomial(logits=torch.Tensor([90.5229]))

dist.sample() 
>> tensor([1.])

dist.log_prob(dist.sample())
>> tensor([-inf])
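A plausible cause (a guess based on the symptoms, not a reading of the PyTorch 0.4.1 source) is that the log-normalizer log(1 + e^logit) overflows to inf in float32, where exp overflows for arguments above roughly 88.7, so an expression like value * logit - total_count * log1p(exp(logit)) evaluates to 90.5 - inf = -inf. A minimal pure-Python sketch of the standard stable identity that avoids the overflow (softplus_stable and log_prob_all_successes are hypothetical helper names, not PyTorch APIs):

```python
import math

def softplus_stable(x):
    """log(1 + e^x) computed without overflow: max(x, 0) + log1p(e^-|x|)."""
    return max(x, 0.0) + math.log1p(math.exp(-abs(x)))

def log_prob_all_successes(n, logit):
    # log P(X = n) for Binomial(n, sigmoid(logit)) in the logit
    # parameterization: n*logit - n*log(1 + e^logit), with the
    # normalizer computed via the stable identity above.
    return n * logit - n * softplus_stable(logit)

print(log_prob_all_successes(1, 90.5229))  # 0.0, not -inf
```

With the stable form, the tiny log1p(exp(-90.5229)) term is absorbed by floating-point rounding and the result is exactly 0.0.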

Expected behavior

The expected behavior matches what happens when initializing with a smaller logit, such as:

dist = torch.distributions.Binomial(logits=torch.Tensor([15]))

dist.sample() 
>> tensor([1.])

dist.log_prob(dist.sample())
>> tensor([0.])
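As a possible workaround (an assumption on my part, not an official fix), constructing the distribution with probs=torch.sigmoid(logits) instead of logits should sidestep the overflow, because sigmoid saturates to exactly 1.0 for large logits and log(1.0) == 0.0. The saturation can be checked in pure Python:

```python
import math

logit = 90.5229
# sigmoid saturates: exp(-90.5229) ~ 5e-40 is far below float64
# epsilon, so 1/(1 + exp(-logit)) rounds to exactly 1.0
p = 1.0 / (1.0 + math.exp(-logit))
print(p)            # 1.0
print(math.log(p))  # 0.0 -- the probs-parameterized log-prob of a success
```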

Environment

PyTorch version: 0.4.1
Is debug build: No
CUDA used to build PyTorch: None

OS: Microsoft Windows 10 Pro
GCC version: Could not collect
CMake version: Could not collect

Python version: 3.6
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA

Versions of relevant libraries:
[pip3] numpy==1.15.1
[pip3] torch==0.4.1
[pip3] torchvision==0.2.1
[conda] Could not collect
