[Feature Request] Make nn layers accept empty batch size #12013
Closed
Labels
function request — A request for a new function or the addition of new arguments/modes to an existing function.
high priority
module: nn — Related to torch.nn
triaged — This issue has been looked at by a team member, and triaged and prioritized into an appropriate module.
Description
Now that we have support for tensors with zero in their sizes, I believe it would be very handy for the nn.functional functions to accept batches of size 0.
A (non-exhaustive) list of functions that it would be good to support:
- conv{1-2-3}d
- conv_transpose{1-2-3}d
- batch_norm
- interpolate
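To make the expected semantics concrete, here is a small sketch (not PyTorch's implementation) of what "accepting an empty batch" would mean for conv2d: the output shape follows the usual convolution formula, and a batch dimension of 0 simply propagates to the output. The function name and parameters below are hypothetical, chosen for illustration.

```python
# Sketch of the expected output shape of a 2d convolution for an input
# of shape (N, C, H, W), including the empty-batch case N == 0.
# Assumes a square kernel and symmetric padding for simplicity.
def conv2d_out_shape(in_shape, out_channels, kernel, stride=1, padding=0):
    n, _, h, w = in_shape
    h_out = (h + 2 * padding - kernel) // stride + 1
    w_out = (w + 2 * padding - kernel) // stride + 1
    # N == 0 needs no special case: the zero batch dimension carries through.
    return (n, out_channels, h_out, w_out)

# An empty batch of 3-channel 32x32 images through a padded 3x3 conv:
print(conv2d_out_shape((0, 3, 32, 32), out_channels=8, kernel=3, padding=1))
# -> (0, 8, 32, 32)
```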
Handling the losses is a bit trickier, because they generally involve computing a .mean(), which results in NaN due to 0 / 0 division. I'd expect a loss of 0 for empty batches to make sense, but that's debatable, so it might be worth postponing this decision.
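The 0 / 0 problem above can be sketched in plain Python: a mean is sum / count, which is undefined for an empty batch (IEEE float division yields NaN; plain Python raises ZeroDivisionError). The helper names below are hypothetical, and the zero-loss convention is exactly the debatable choice mentioned above, not established PyTorch behavior.

```python
# A naive mean reduction: fails (0 / 0) when the batch is empty.
def mean_loss(per_sample_losses):
    # For an empty list this is 0 / 0: ZeroDivisionError in plain Python,
    # NaN under IEEE floating-point tensor semantics.
    return sum(per_sample_losses) / len(per_sample_losses)

# One possible (debatable) convention: define the loss of an empty batch as 0.
def mean_loss_safe(per_sample_losses):
    if len(per_sample_losses) == 0:
        return 0.0
    return sum(per_sample_losses) / len(per_sample_losses)

print(mean_loss_safe([]))          # 0.0 under this convention
print(mean_loss_safe([1.0, 3.0]))  # 2.0, the ordinary mean otherwise
```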