[aoti] Fix constant inputs passed to aoti #131594
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/131594
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 16f315b with merge base 980bb54.
This comment was automatically generated by Dr. CI and updates every 15 minutes.

@angelayi has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
in_spec = pytree.treespec_loads(call_spec[0])
out_spec = pytree.treespec_loads(call_spec[1])
flat_inputs = pytree.tree_flatten((args, reorder_kwargs(kwargs, in_spec)))[0]
flat_inputs = [x for x in flat_inputs if isinstance(x, torch.Tensor)]
What about the C++ side? Is there a way to extend the Runner interface?
Hmm, right now it results in a type error if you pass non-tensors to the runner because the C++ interface takes a List[Tensor]. Does this mean we should convert it to a union of input types?
@pytorchbot merge -f 'Landed internally' (Initiating merge automatically since Phabricator Diff has merged, using force because this PR might not pass merge_rules.json but landed internally)
Merge started
Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
…#137032)
Summary:
Pull Request resolved: pytorch#137032

# Why
The arguments are filtered out because they are just constants in the compiled graph, but the debug logger still expects a non-None type.

# What
When a filtered-out argument (None) is passed to the debug logger, just log that it is a filtered-out argument instead of throwing a TypeError. Also adds a unit test to capture this behavior going forward.

# Background
pytorch#131594

Test Plan:
- execute the repro from pytorch#135584 (comment) with and without the edits

Reviewed By: angelayi
Differential Revision: D63652564
Approved by: https://github.com/angelayi
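Not from the PR itself, but a minimal Python sketch of the logger behavior described in that follow-up: a None entry in the flattened inputs is treated as a filtered-out constant argument rather than raising a TypeError. The helper name `log_aoti_inputs` and the log messages here are hypothetical.

```python
import logging

import torch

logger = logging.getLogger(__name__)


# Hypothetical helper illustrating the described fix: constant inputs are
# filtered out of the flat input list (leaving None), so the debug logger
# must tolerate None instead of assuming every entry is a Tensor.
def log_aoti_inputs(flat_inputs):
    for idx, arg in enumerate(flat_inputs):
        if arg is None:
            # Previously this path raised a TypeError; now we just note it.
            logger.debug("arg %d: filtered out (constant embedded in graph)", idx)
        elif isinstance(arg, torch.Tensor):
            logger.debug("arg %d: Tensor shape=%s dtype=%s", idx, tuple(arg.shape), arg.dtype)
        else:
            logger.debug("arg %d: %r", idx, arg)


# Example: a tensor input plus a constant that export specialized away.
log_aoti_inputs([torch.randn(2, 3), None])
```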
In cases where the program takes in a constant, export will specialize on the constant and embed it into the graph, leaving a placeholder node with no users. However, Inductor errors further down because, as in typical torch.compile usage, these constants don't show up as inputs. Since the constants are already embedded in the graph, we simply ignore these inputs while compiling with AOTI and filter out the non-tensor inputs at runtime.
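As a rough sketch of that runtime filtering (an assumption about the shape of the change, not the exact AOTI internals), the wrapper below flattens the inputs with pytree and forwards only the Tensor entries to the compiled runner, mirroring the diff hunk earlier in this thread; `call_aoti_runner` and `runner.run` are stand-in names.

```python
import torch
import torch.utils._pytree as pytree


# Hypothetical wrapper: constants were already specialized into the
# compiled graph, so only Tensor inputs are forwarded to the runner,
# whose C++ interface expects a list of Tensors.
def call_aoti_runner(runner, *args, **kwargs):
    flat_inputs = pytree.tree_flatten((args, kwargs))[0]
    # Drop non-tensor (constant) inputs; export embedded them in the graph.
    flat_tensor_inputs = [x for x in flat_inputs if isinstance(x, torch.Tensor)]
    return runner.run(flat_tensor_inputs)  # "run" is a stand-in method name
```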
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang