Fix another item memo loss location + bool specialization bug #139587
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/139587
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 73bdc16 with merge base e6ff07f.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This fix was a bit more involved: 1) It fixes a place where item_memo was lost. 2) It fixes a bug where we would specialize bools without guarding correctly. 3) It updates tensorify to specialize in more places now that the aforementioned bug is fixed. Fixes `PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=6 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCPU.test_comprehensive_linalg_norm_cpu_float16` when `specialize_float=False`, while ensuring `python test/dynamo/test_dynamic_shapes.py DynamicShapesMiscTests.test_runtime_assert_replacement_dynamic_shapes` doesn't regress. cc ezyang SherlockNoMad EikanWang jgong5 wenzhe-nrv voznesenskym penguinwu Guobing-Chen XiaobingSuper zhuhaozhe blzheng jiayisunx chenyang78 kadeng chauhang amjames
```python
# would result in replacements[u0] = 3, which would end up
# resulting in guards not being added during specialization.
# See https://github.com/pytorch/pytorch/pull/138868#discussion_r1823076611
# for more information.
```
Do we need to talk about this more? Isn't it better to just fix this problem?
Yes, I think it'd be good to sync on this problem. Here's my perspective:
These constant replacements seem unsound. Let's take a look at this test case:
```python
@torch._dynamo.config.patch(capture_scalar_outputs=True)
def test_runtime_assert_replacement(self):
    @torch.compile(backend="aot_eager")
    def fn(x, y):
        z = y.item()
        torch._check(z == 3)
        return x + z

    fn(torch.randn(4), torch.tensor([3]))
    self.assertRaises(RuntimeError, lambda: fn(torch.randn(4), torch.tensor([4])))
```
The `torch._check(z == 3)` results in calling `defer_runtime_assert` on `Eq(u0, 3)`,
which subsequently calls `self._maybe_guard_rel` with `Eq(u0, 3)`,
which subsequently calls `self._set_replacement(lhs, self._find(rhs), "trivial_lhs")` where `lhs` is `u0` and `rhs` is `3`.
Without early returning when the rhs has no free symbols, we end up setting a replacement of `u0 => 3` WITHOUT adding a guard.
This makes it such that the SymBool `Eq(u0, 3)` gets simplified to `Eq(3, 3)`, which gets simplified to `True`.
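Concretely, this is just symbolic substitution. Here's a minimal standalone sketch (plain sympy, outside of ShapeEnv) of why an unguarded replacement erases the condition: once `u0 -> 3` is applied, the boolean collapses to a constant with no free symbols left for any later pass to guard on.

```python
import sympy

u0 = sympy.Symbol("u0", integer=True)
cond = sympy.Eq(u0, 3)

# Applying the replacement u0 -> 3 (what an unguarded _set_replacement
# effectively does during simplification) folds the condition away.
simplified = cond.subs({u0: 3})
print(simplified)               # True
print(simplified.free_symbols)  # set() -- nothing left to guard on
```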
Now when we get to our tensorify pass, we specialize:

```python
if isinstance(
    (val := node.meta.get("val")),
    (torch.SymFloat, torch.SymInt, torch.SymBool),
):
    if all(
        symbol_is_type(s, SymT.FLOAT) for s in val.node.expr.free_symbols
    ):
        # If all symbols are backed symfloats, we can just specialize the
        # whole node and get more precise guards. eg.
        #
        # zf = a.item()
        # zf2 = zf // 2
        # op(.. zf2 ..)
        #
        # It's better to guard on zf // 2 == 2.0 than zf == 5.0
        node.replace_all_uses_with(guard_scalar(val))
        graph.erase_node(node)
```
We don't actually end up guarding on `u0 == 3` as we expect, since the eq SymBool's meta val is simply `True`, with no free symbols.
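(A minimal sketch of the hazard in the check above: `all(...)` over an empty iterable is vacuously true, so once the SymBool has been constant-folded to `True`, its empty `free_symbols` set sails through the "all backed symfloats" test and the node gets specialized without any guard relating `u0` to `3` ever being recorded.)

```python
# free_symbols of a constant-folded expression is empty, so the check passes
# vacuously; the lambda stands in for symbol_is_type(s, SymT.FLOAT).
free_symbols = set()
is_backed_symfloat = lambda s: False

print(all(is_backed_symfloat(s) for s in free_symbols))  # True, vacuously
```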
So, I'm not exactly sure what the bug fix is, but let me describe how it should work.
We must distinguish between two cases for the output value of `item()`. If `item()` returns an unbacked SymInt, it can never be truly "eliminated". Even if we set up a replacement (`u0 == 3`), we still know that this `item()` is the binding site for `u0` (via `unbacked_bindings`), and we are obligated to generate a replacement assert for it when `u0` comes into scope. Now, I could believe this is not currently done. Here is where assertions are codegenned:
```python
def make_assert(expr: SympyBoolean, msg: str) -> None:
    assert_op = ir.AssertScalar(expr, msg)
    self.register_buffer(assert_op, set_name=True)
    self.register_operation(assert_op)

for i0 in new_unbacked_defs:
    ras = self.ras_by_symbol.pop(i0, [])
    # NB: size-like not needed, we won't retrace
    vr = shape_env.var_to_range[i0]
    if not shape_env._default_unspecified_value_range().issubset(vr):

        def is_convertible(s: Expr) -> bool:
            if s in (int_oo, -int_oo):
                return False
            try:
                int(s)
                return True
            except TypeError:
                return False

        if is_convertible(vr.lower):
            make_assert(i0 >= vr.lower, f"{i0} >= {vr.lower}")
        if is_convertible(vr.upper):
            make_assert(i0 <= vr.upper, f"{i0} <= {vr.upper}")

    for ra in ras:
        fvs = free_unbacked_symbols(ra.expr)
        missing = fvs - self.bound_unbacked_symbols
        if missing:
            i1 = min(missing, key=str)
            self.ras_by_symbol.setdefault(i1, []).append(ra)
        else:
            make_assert(ra.expr, f"{ra.expr}")
```
I don't see handling for replacements, so it seems plausible this is just an oopsie that needs to be fixed here.
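(For illustration only, a hypothetical sketch of what the missing handling might look like, reusing `i0`, `shape_env`, and `make_assert` from the loop above; this is an assumption, not code from the PR: whenever an unbacked symbol has been replaced by a constant, its binding site would also emit an equality assert so the runtime value still gets validated.)

```python
import sympy

# Hypothetical: consult ShapeEnv's replacement table at the binding site
# and turn a constant replacement back into a runtime assert.
replacement = shape_env.replacements.get(i0)
if replacement is not None and replacement.is_number:
    make_assert(sympy.Eq(i0, replacement), f"{i0} == {replacement}")
```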
…bug" This fix was a bit more involved: 1) It fixes a item_memo loss place. 2) It fixes a bug where we would specialize bools without guarding correctly. 3) It updates tensorify to specialize more places now that the aforementioned bug is fixed. Fixes `PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=6 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCPU.test_comprehensive_linalg_norm_cpu_float16` when `specialize_float=False` while ensuring `python test/dynamo/test_dynamic_shapes.py DynamicShapesMiscTests.test_runtime_assert_replacement_dynamic_shapes` doesn't regress cc ezyang SherlockNoMad EikanWang jgong5 wenzhe-nrv voznesenskym penguinwu Guobing-Chen XiaobingSuper zhuhaozhe blzheng jiayisunx chenyang78 kadeng chauhang amjames [ghstack-poisoned]
…bug" This fix was a bit more involved: 1) It fixes a item_memo loss place. 2) It fixes a bug where we would specialize bools without guarding correctly. 3) It updates tensorify to specialize more places now that the aforementioned bug is fixed. Fixes `PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=6 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCPU.test_comprehensive_linalg_norm_cpu_float16` when `specialize_float=False` while ensuring `python test/dynamo/test_dynamic_shapes.py DynamicShapesMiscTests.test_runtime_assert_replacement_dynamic_shapes` doesn't regress cc ezyang SherlockNoMad EikanWang jgong5 wenzhe-nrv voznesenskym penguinwu Guobing-Chen XiaobingSuper zhuhaozhe blzheng jiayisunx chenyang78 kadeng chauhang amjames [ghstack-poisoned]
…bug" This fix was a bit more involved: 1) It fixes a item_memo loss place. 2) It fixes a bug where we would specialize bools without guarding correctly. 3) It updates tensorify to specialize more places now that the aforementioned bug is fixed. Fixes `PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=6 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCPU.test_comprehensive_linalg_norm_cpu_float16` when `specialize_float=False` while ensuring `python test/dynamo/test_dynamic_shapes.py DynamicShapesMiscTests.test_runtime_assert_replacement_dynamic_shapes` doesn't regress cc ezyang SherlockNoMad EikanWang jgong5 wenzhe-nrv voznesenskym penguinwu Guobing-Chen XiaobingSuper zhuhaozhe blzheng jiayisunx chenyang78 kadeng chauhang amjames [ghstack-poisoned]
…bug" This fix was a bit more involved: 1) It fixes a item_memo loss place. 2) It fixes a bug where we would specialize bools without guarding correctly. 3) It updates tensorify to specialize more places now that the aforementioned bug is fixed. Fixes `PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=6 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCPU.test_comprehensive_linalg_norm_cpu_float16` when `specialize_float=False` while ensuring `python test/dynamo/test_dynamic_shapes.py DynamicShapesMiscTests.test_runtime_assert_replacement_dynamic_shapes` doesn't regress cc ezyang SherlockNoMad EikanWang jgong5 wenzhe-nrv voznesenskym penguinwu Guobing-Chen XiaobingSuper zhuhaozhe blzheng jiayisunx chenyang78 kadeng chauhang amjames [ghstack-poisoned]
…bug" This fix was a bit more involved: 1) It fixes a item_memo loss place. 2) It updates a test to be eager instead of aot_eager since it reveals a very obscure bug related to replacements that's not worth solving since in practice inductor will regenerate the runtime asserts anyways 3) It updates tensorify to specialize more places now that the aforementioned bug is fixed. Fixes `PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=6 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCPU.test_comprehensive_linalg_norm_cpu_float16` when `specialize_float=False` while ensuring `python test/dynamo/test_dynamic_shapes.py DynamicShapesMiscTests.test_runtime_assert_replacement_dynamic_shapes` doesn't regress cc ezyang SherlockNoMad EikanWang jgong5 wenzhe-nrv voznesenskym penguinwu Guobing-Chen XiaobingSuper zhuhaozhe blzheng jiayisunx chenyang78 kadeng chauhang amjames [ghstack-poisoned]
ezyang left a comment:
bwahaha, lol
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Fixes `PYTORCH_TEST_WITH_INDUCTOR=1 tlp python test/test_torch.py TestTorchDeviceTypeCUDA.test_cauchy_cuda_float64` when `specialize_float=False`. Pull Request resolved: #139583. Approved by: https://github.com/ezyang. ghstack dependencies: #139569, #139457, #139568, #139572, #139846, #139454, #139896, #139935, #139587
…h#139587) This fix was a bit more involved: 1) It fixes a place where item_memo was lost. 2) It updates a test to use eager instead of aot_eager, since aot_eager reveals a very obscure bug related to replacements that's not worth solving, because in practice Inductor will regenerate the runtime asserts anyway. 3) It updates tensorify to specialize in more places now that the aforementioned bug is fixed. Fixes `PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=6 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCPU.test_comprehensive_linalg_norm_cpu_float16` when `specialize_float=False`, while ensuring `python test/dynamo/test_dynamic_shapes.py DynamicShapesMiscTests.test_runtime_assert_replacement_dynamic_shapes` doesn't regress. Pull Request resolved: pytorch#139587. Approved by: https://github.com/ezyang. ghstack dependencies: pytorch#139569, pytorch#139457, pytorch#139568, pytorch#139572, pytorch#139846, pytorch#139454, pytorch#139896, pytorch#139935
Stack from ghstack (oldest at bottom):
This fix was a bit more involved:
1) It fixes a place where item_memo was lost.
2) It updates a test to use eager instead of aot_eager, since aot_eager reveals a very obscure bug related to replacements that's not worth solving, because in practice Inductor will regenerate the runtime asserts anyway.
3) It updates tensorify to specialize in more places now that the aforementioned bug is fixed.

Fixes `PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=6 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCPU.test_comprehensive_linalg_norm_cpu_float16` when `specialize_float=False`, while ensuring `python test/dynamo/test_dynamic_shapes.py DynamicShapesMiscTests.test_runtime_assert_replacement_dynamic_shapes` doesn't regress.

cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov