Add Inductor config for default stride behavior #135238
Conversation
By default, Inductor is allowed to manipulate the layout (strides + storage offset) of input tensors to custom operators. We want to change the default so that Inductor respects the stride order of input tensors to custom operators. This PR adds a config to toggle the behavior; in the next PR up, we'll change the default. We also make the following change:
- We add a new operator tag (flexible_layout), which means that Inductor is allowed to manipulate the layout. When we flip the default, users can specify that they want the old behavior by using this tag.

This is a reland of #126986, which was previously reverted due to silent incorrectness. We've since fixed the silent incorrectness (#133639).

Test Plan:
- new test

[ghstack-poisoned]
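To make the "stride order" constraint concrete, here is a toy illustration in plain Python (not a torch API; the `stride_order` helper is hypothetical). The stride order of a tensor is the permutation of its dimensions sorted by decreasing stride, i.e. which dimension is outermost in memory. Respecting stride order means a custom op sees inputs whose dimensions keep the same relative memory ordering as in eager mode, even if the exact stride values differ:

```python
def stride_order(strides):
    """Return dims sorted from outermost (largest stride) to innermost.

    Toy helper for illustration; this is not a PyTorch function.
    """
    return tuple(sorted(range(len(strides)), key=lambda d: strides[d], reverse=True))

# Contiguous layout for shape (2, 3, 4, 5): strides (60, 20, 5, 1)
print(stride_order((60, 20, 5, 1)))   # (0, 1, 2, 3)

# Channels-last layout for the same shape: strides (60, 1, 15, 3);
# dim 1 (channels) is now innermost, so the order changes.
print(stride_order((60, 1, 15, 3)))  # (0, 2, 3, 1)
```

Two layouts can satisfy the same stride-order constraint while having different stride values, which is why this is a weaker guarantee than pinning exact strides.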
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/135238
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 New Failure: as of commit c4f4d44 with merge base c7328df, the following job has failed:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
    desc: |
      This tag indicates that the operator should be passed Tensors following
      the same stride permutation as observed in eager when compiled in inductor.
      Only one of {needs_fixed_stride_order, flexible_layout} can apply; if
Does this PR only focus on fixed_stride_order? Do we need another needs_exact_stride?
We need to add a needs_exact_stride at some point; it would be better for the default to be needs_exact_stride. That is a larger project, though, and needs some things we don't have yet.
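The distinction being discussed can be sketched in plain Python (illustrative only; `same_stride_order` is not a torch API): needs_fixed_stride_order only pins the *relative* ordering of strides, while a needs_exact_stride constraint would pin the stride values themselves.

```python
def same_stride_order(a, b):
    """True if two stride tuples order their dimensions the same way (toy helper)."""
    order = lambda s: tuple(sorted(range(len(s)), key=lambda d: s[d], reverse=True))
    return order(a) == order(b)

eager_strides  = (60, 20, 5, 1)  # contiguous strides for shape (2, 3, 4, 5)
padded_strides = (72, 24, 6, 1)  # same dim ordering, but rows are padded

# The stride-order constraint is satisfied even though the values differ...
print(same_stride_order(eager_strides, padded_strides))  # True

# ...but an exact-stride constraint would not be.
print(eager_strides == padded_strides)  # False
```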
It makes sense.
albanD left a comment:
SGTM
@pytorchbot merge -f "unrelated failure"
Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
…ps (#135391)

Fixes two things:
- For regular PyTorch ops, the default layout constraint tag is always flexible_layout. This was a bug with #135238.
- Mark the new quantized _wrapped_linear_prepack ops as flexible_layout. The metas for these are incorrect; I didn't want to fix them (and changing the default requires the metas actually be correct).

Test Plan:
- The next PR up in the stack. The PRs are split because the next one is riskier.

Pull Request resolved: #135391
Approved by: https://github.com/albanD
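For users following along, the toggle this stack introduces looks roughly like the fragment below. The config name is taken from this PR's discussion and may differ across torch versions; treat this as a hedged sketch and check `torch._inductor.config` in your installed version before relying on it.

```python
import torch._inductor.config as inductor_config

# Old behavior: Inductor may freely re-layout custom-op inputs.
inductor_config.custom_op_default_layout_constraint = "flexible_layout"

# Behavior this stack moves toward as the default: respect the
# eager-mode stride order of custom-op inputs.
inductor_config.custom_op_default_layout_constraint = "needs_fixed_stride_order"
```

Per-op overrides via the flexible_layout tag take precedence over this default, which is how users can opt back into the old behavior once the default flips.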