Closed
Labels
module: custom-operators, module: pt2-dispatcher, oncall: pt2
Description
Repro: the following code behaves differently between PyTorch 2.5 and PyTorch 2.6: it succeeds in PyTorch 2.5 but errors in PyTorch 2.6.
```python
import torch

with torch.library._scoped_library("mylib", "DEF") as lib:
    lib.define(
        "copy_(Tensor(a!) dst, Tensor src) -> ()",
        # tags=torch.Tag.needs_fixed_stride_order,
    )

    @torch.library.impl(lib, "copy_", "Meta")
    def _(dst, src):
        return None

    @torch.library.impl(lib, "copy_", "CompositeExplicitAutograd")
    def _(dst, src):
        if src.is_contiguous():
            dst.copy_(src + 1)
        else:
            dst.copy_(src)

    def f(x):
        full_default_3 = torch.full([3, 3], 7.0, device="cpu")
        chunk_cat_default_1 = torch.ops.mylib.copy_.default(full_default_3, x)
        mul_out = torch.mul(full_default_3, full_default_3)
        return mul_out

    # Non-contiguous input: transpose, make contiguous, transpose back.
    x = torch.arange(9, dtype=torch.float, device="cpu").view(3, 3).t().contiguous().t()
    eager_out = f(x)

    compiled_inductor_f = torch.compile(f, backend="inductor", fullgraph=True)
    compiled_inductor_out = compiled_inductor_f(x)
    assert torch.allclose(compiled_inductor_out, eager_out)
```