Make sure data_ptr for non-zero-size input tensors stays the same after the VariableType dispatch #16589

@yf225

Description

In VariableType.cpp, #16305 adds checks to make sure we are still using the same input TensorImpl and Storage after calling the non-autograd in-place function. However, the in-place function can still change the data_ptr of an input tensor without changing its TensorImpl or Storage (e.g. by calling tensor.resize_({bigger_than_original})). If an input tensor has zero size, this behavior is expected and all is well; if an input tensor has non-zero size but is smaller than needed, we want to tell the user to resize their input tensors to the correct size before passing them into the VariableType function. This prevents tensor.resize_({size_needed}) from allocating a new data_ptr and ensures that the buffer they pass in is the buffer that contains the returned data.
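To illustrate the invariant, here is a minimal Python sketch (not the actual VariableType.cpp code; FakeTensor and checked_inplace_call are hypothetical stand-ins): record the data_ptr before the in-place call, and afterwards allow it to have changed only if the input started out zero-sized.

```python
class FakeTensor:
    """Hypothetical stand-in for a tensor backed by a growable buffer."""

    def __init__(self, size):
        self._buf = bytearray(size)

    def numel(self):
        return len(self._buf)

    def data_ptr(self):
        # id() of the backing buffer stands in for the real data pointer.
        return id(self._buf)

    def resize_(self, new_size):
        if new_size > len(self._buf):
            # Growing past the current allocation reallocates,
            # which changes data_ptr -- the situation described above.
            self._buf = bytearray(new_size)
        return self


def checked_inplace_call(tensor, inplace_fn):
    """Run an in-place fn; reject reallocation of non-zero-size inputs."""
    was_zero_size = tensor.numel() == 0
    ptr_before = tensor.data_ptr()
    inplace_fn(tensor)
    if tensor.data_ptr() != ptr_before and not was_zero_size:
        raise RuntimeError(
            "input tensor was reallocated during the in-place call; "
            "resize it to the required size before passing it in")
    return tensor


# Zero-size input: reallocation is expected and allowed.
checked_inplace_call(FakeTensor(0), lambda t: t.resize_(16))

# Non-zero but too-small input: reallocation triggers the error
# we would want to surface to the user.
try:
    checked_inplace_call(FakeTensor(4), lambda t: t.resize_(16))
except RuntimeError as e:
    print("caught:", e)
```

The key design point is that the check distinguishes the two cases by the input's size before the call, so zero-size outputs (a common "allocate for me" idiom) keep working while silently-dropped user buffers become a hard error.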

cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @lezcano @Varal7

Metadata

    Labels

    module: assert failure - The issue involves an assert failure
    module: autograd - Related to torch.autograd, and the autograd engine in general
    module: molly-guard - Features which help prevent users from committing common mistakes
    triaged - This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
