[inductor] Relax the conditions for loop split #135335
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/135335. Note: links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 3c82af4 with merge base bf68e16. This comment was automatically generated by Dr. CI and updates every 15 minutes.
```python
for div_expr in expr.find(FloorDiv):
    if any(div_expr.has(var) for var in original_body.iter_vars):
        num_div += 1
if num_div > 1:
```
Is it possible that the div variable and the divider are the same even though there are multiple divs?
While I have not encountered this situation, it is possible and I have further relaxed the conditions to support it. Thanks!
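For illustration only, here is a minimal sketch of the check under discussion: counting how many floor-division sub-expressions in an indexing expression involve the loop's iteration variables. It uses plain sympy, with a hypothetical `FloorDiv` class standing in for Inductor's own `FloorDiv`, and `iter_vars` playing the role of `original_body.iter_vars`:

```python
# Minimal, self-contained sketch (assumptions: FloorDiv here is only a
# stand-in for Inductor's FloorDiv, and iter_vars mimics
# original_body.iter_vars from the real code).
import sympy


class FloorDiv(sympy.Function):
    """Stand-in for Inductor's FloorDiv: FloorDiv(a, b) ~ a // b."""
    nargs = (2,)


z0, z3 = sympy.symbols("z0 z3", integer=True)
iter_vars = [z0, z3]

# 'index1' from the example in the PR description: 32*z0 + (z3 // 30)
expr = 32 * z0 + FloorDiv(z3, 30)

num_div = 0
for div_expr in expr.find(FloorDiv):
    if any(div_expr.has(var) for var in iter_vars):
        num_div += 1

# Only one FloorDiv over an iteration variable is found here.
print(num_div)  # 1
```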
@pytorchbot merge
Merge started: your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Revert "[PT2][Inductor][Optmus] fix test_pad_mm_bf16 and reland to fix long computation kernel (#136349)" This reverts commit e184391. Revert "Fix clang-tidy warnings in torch/csrc/lazy (#134655)" This reverts commit 0287146. Revert "Remove duplicate line (#136383)" This reverts commit 0b91e7e. Revert "[TF32] Account for TF32 in `test_conv_double_backward` (#135716)" This reverts commit 29f7b8d. Revert "Fix `Vectorized<double>::next_after` SVE compilation (#136388)" This reverts commit 7936584. Revert "Upgrade pybind11 API calls for 3.13t (#136370)" This reverts commit 067d203. Revert "[AOTI][Tooling] Filter out kernels based off lowercase names (#135395)" This reverts commit 1a10751. Revert "Add decomps for max_unpool (#133146)" This reverts commit 0c936c3. Revert "add TORCH_CUDA_CPP_API for AutoNcclGroup (#130012)" This reverts commit 293fccf. Revert "Use cpython declaration of _PyWeakref_ClearRef (#136300)" This reverts commit d2455b9. Revert "fix mypi in utils/_sympy/functions.py (#136339)" This reverts commit 7f9c064. Revert "[Inductor] Fix test_profiler_mark_wrapper_call_cuda_cuda_wrapper (#136356)" This reverts commit f53a0f9. Revert "Add more distributed examples (#130427)" This reverts commit 5997354. Revert "return instead of using skipTest (#136244)" This reverts commit 29affa6. Reapply "[PT2/Profiler] Add Context Info to Torch-Compiled Regions (#132765)" This reverts commit 783c5ba. Revert "Enable torch build with SLEEF on ARM by default (#133339)" This reverts commit 4842f0f. Revert "[inductor] Relax the conditions for loop split (#135335)" This reverts commit 687e5cf. [ghstack-poisoned]
Revert "[PT2][Inductor][Optmus] fix test_pad_mm_bf16 and reland to fix long computation kernel (#136349)" This reverts commit e184391. Revert "Fix clang-tidy warnings in torch/csrc/lazy (#134655)" This reverts commit 0287146. Revert "Remove duplicate line (#136383)" This reverts commit 0b91e7e. Revert "[TF32] Account for TF32 in `test_conv_double_backward` (#135716)" This reverts commit 29f7b8d. Revert "Fix `Vectorized<double>::next_after` SVE compilation (#136388)" This reverts commit 7936584. Revert "Upgrade pybind11 API calls for 3.13t (#136370)" This reverts commit 067d203. Revert "[AOTI][Tooling] Filter out kernels based off lowercase names (#135395)" This reverts commit 1a10751. Revert "Add decomps for max_unpool (#133146)" This reverts commit 0c936c3. Revert "add TORCH_CUDA_CPP_API for AutoNcclGroup (#130012)" This reverts commit 293fccf. Revert "Use cpython declaration of _PyWeakref_ClearRef (#136300)" This reverts commit d2455b9. Revert "fix mypi in utils/_sympy/functions.py (#136339)" This reverts commit 7f9c064. Revert "[Inductor] Fix test_profiler_mark_wrapper_call_cuda_cuda_wrapper (#136356)" This reverts commit f53a0f9. Revert "Add more distributed examples (#130427)" This reverts commit 5997354. Revert "return instead of using skipTest (#136244)" This reverts commit 29affa6. Reapply "[PT2/Profiler] Add Context Info to Torch-Compiled Regions (#132765)" This reverts commit 783c5ba. Revert "Enable torch build with SLEEF on ARM by default (#133339)" This reverts commit 4842f0f. Revert "[inductor] Relax the conditions for loop split (#135335)" This reverts commit 687e5cf. ghstack-source-id: b0fb91e Pull Request resolved: #136668
Stack from ghstack (oldest at bottom):
Summary
This PR relaxes the conditions for the loop split optimization so that it also supports dynamic shape cases.
The conditions that need to be met to apply the loop split optimization are as follows:
Example:
Before the loop split, the node's `var_ranges` is `{z0: s0, z1: s2, z2: s2, z3: 960}` and its `indexing_exprs` is `{'index0': 960*s2**2*z0 + 960*s2*z1 + 960*z2 + z3, 'index1': 32*z0 + (z3//30), 'index2': 30*s2**2, 'index3': z3, 'index4': 960*s2*z0*((s2**2//s2)) + 960*z1*((s2**2//s2)) + 960*z2 + z3}`.
After the loop split, `z3` is split into `30*z3 + z4`, so the `var_ranges` becomes `{z0: s0, z1: s2, z2: s2, z3: 32, z4: 30}` and the `indexing_exprs` becomes `{'index0': 960*s2**2*z0 + 960*s2*z1 + 960*z2 + 30*z3 + z4, 'index1': 32*z0 + z3, 'index2': 30*s2**2, 'index3': 30*z3 + z4, 'index4': 960*s2*z0*((s2**2//s2)) + 960*z1*((s2**2//s2)) + 960*z2 + 30*z3 + z4}`.
Generated code:
After:
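As a rough sketch of the index rewrite described above (plain sympy only, with `sympy.floor` standing in for Inductor's `FloorDiv`; this is not the kernel code Inductor actually generates):

```python
# Minimal sketch of the loop-split substitution from the example above:
# the old z3 (range 960) is replaced by 30*z3 + z4, where the new outer z3
# has range 32 and the new inner z4 has range 30.
import sympy

z0, z1, z2, z3, z4, s2 = sympy.symbols(
    "z0 z1 z2 z3 z4 s2", integer=True, nonnegative=True
)

old_index0 = 960 * s2**2 * z0 + 960 * s2 * z1 + 960 * z2 + z3
old_index1 = 32 * z0 + sympy.floor(z3 / 30)  # stands in for 32*z0 + (z3//30)

split = {z3: 30 * z3 + z4}  # reuse z3 as the new outer variable

new_index0 = old_index0.subs(split)
new_index1 = old_index1.subs(split)

print(new_index0)  # 960*s2**2*z0 + 960*s2*z1 + 960*z2 + 30*z3 + z4
print(new_index1)  # 32*z0 + z3 + floor(z4/30)
```

With `0 <= z4 < 30`, `floor(z4/30)` is 0, which recovers the `'index1': 32*z0 + z3` shown above: after the split the floor division over the original `z3` no longer appears in the inner loop.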
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang