I'm using the Gumbel-Softmax trick, so I need to compute:
softmax(-log(-log(uniform(0, 1))))
In TensorFlow,
running tf.nn.softmax(-tf.log(-tf.log(tf.random_uniform(...)))) is fine.
But in PyTorch,
if you run F.softmax(-torch.log(-torch.log(torch.rand(...))))
you will get "nan".
I think the reason is that rand() samples from [0, 1) and will sometimes return exactly 0, so the inner log(0) produces -inf. In TensorFlow, exp(-inf) inside the softmax evaluates to 0, but in PyTorch the result is "nan".
Is there an option for PyTorch to allow this kind of "infinity math"?
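A common workaround is to keep the argument of each log strictly positive by adding a small epsilon, so the Gumbel noise stays finite even when the uniform sample is exactly 0. Below is a minimal NumPy sketch of this (the function name `gumbel_softmax_sample`, the `eps` value, and the use of NumPy instead of PyTorch are my own choices for illustration, not part of the original report):

```python
import numpy as np

def gumbel_softmax_sample(logits, tau=1.0, eps=1e-20):
    """Sample from softmax((logits + Gumbel noise) / tau), avoiding log(0)."""
    # np.random.uniform draws from [0, 1), so u can be exactly 0;
    # adding eps keeps both log() calls finite.
    u = np.random.uniform(size=np.shape(logits))
    gumbel = -np.log(-np.log(u + eps) + eps)
    y = (np.asarray(logits) + gumbel) / tau
    # numerically stable softmax: subtract the max before exponentiating
    y = y - y.max(axis=-1, keepdims=True)
    e = np.exp(y)
    return e / e.sum(axis=-1, keepdims=True)
```

Even in the worst case u = 0, the noise is -log(-log(eps) + eps), which is a finite number, so the softmax never sees -inf.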