Labels: module: TensorIterator, module: bc-breaking, module: deprecation, module: numpy, module: reductions, triaged
Description
🐛 Bug
`torch.sum(tensor, dim=())` performs a full reduction, while `np.sum(arr, axis=())` performs no reduction.
To Reproduce
Steps to reproduce the behavior:
```python
import torch
import numpy as np

arr = np.array([1, 2, 3])
tensor = torch.from_numpy(arr)
tensor.sum(())  # gives tensor(6): full reduction
arr.sum(())     # gives array([1, 2, 3]): no reduction
```
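For reference, NumPy's behavior follows from treating `axis` as a tuple of axes to reduce over: an empty tuple reduces over zero axes and is a no-op, while a full reduction is spelled `axis=None`. A minimal NumPy-only demonstration:

```python
import numpy as np

arr = np.array([1, 2, 3])

# axis=() reduces over zero axes, leaving the array unchanged.
assert np.array_equal(arr.sum(axis=()), arr)

# A full reduction in NumPy is spelled axis=None (the default).
assert arr.sum(axis=None) == 6
```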
Expected behavior
These should probably behave the same. Also, I don't think it makes sense for `tensor.sum(dim=())` to perform a full reduction. This special-casing caused the bug in #28993, and I suspect the gradients of ops like `torch.var` may be incorrect right now.
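Until the behaviors are aligned, callers who want NumPy's interpretation can normalize the empty-tuple case themselves. The helper below is a hypothetical sketch (`sum_numpy_style` is my name, not a PyTorch API):

```python
import torch

def sum_numpy_style(tensor, dim=None):
    # Hypothetical helper: treat dim=() as reducing over zero axes
    # (identity), matching NumPy, instead of PyTorch's full reduction.
    if dim == ():
        return tensor.clone()
    if dim is None:
        return tensor.sum()   # full reduction, like NumPy's axis=None
    return tensor.sum(dim)
```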
Environment
PyTorch master
Additional context
Some people who have been involved in multidim support and may be interested: @t-vi, @ssnl, @umanwizard, @gchanan. Please let me know your opinions on this subject.