Tensor.new_tensor now preserves input tensor device, making it inconsistent with documentation #73838

@ezyang

Description

🐛 Describe the bug

>>> import torch
>>> torch.zeros(2, device='cuda:1').new_tensor(torch.zeros(2, device='cpu'))
__main__:1: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than tensor.new_tensor(sourceTensor).
tensor([0., 0.])
>>> torch.__version__
'1.10.2'

However, docs say:

Tensor.new_tensor(data, dtype=None, device=None, requires_grad=False) → Tensor
Returns a new Tensor with data as the tensor data. By default, the returned Tensor has the same torch.dtype and torch.device as this tensor.
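For comparison, a minimal sketch of the discrepancy and of how the documented placement can still be obtained by passing device explicitly (assumes a machine with at least two CUDA devices; observed on 1.10.2):

>>> base = torch.zeros(2, device='cuda:1')
>>> data = torch.zeros(2, device='cpu')
>>> base.new_tensor(data).device                       # follows data's device, contrary to the docs
device(type='cpu')
>>> base.new_tensor(data, device=base.device).device   # explicit device gives the documented placement
device(type='cuda', index=1)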

The regression was caused by #41984 (cc @gchanan @mruberry), which was intended to change the behavior of non-method constructors only, but accidentally affected method constructors as well.

Please don't actually try to fix this, I'm in the middle of a big refactor at #73824

Versions

master

Metadata

Assignees

No one assigned

    Labels

    module: tensor creation
    triaged: This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

    Type

    No type

    Projects

    No projects

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests