[EASY] remove guard_size_oblivious from is_nonzero proxy call check #154164
Conversation
[ghstack-poisoned]
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/154164
Note: Links to docs will display an error until the doc builds have been completed.
✅ No Failures as of commit 2a7ba24 with merge base 413664b.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
```diff
-        raise RuntimeError(
-            "Boolean value of Tensor with more than one value is ambiguous"
-        )
+        torch._check(args[0].numel() == 1, lambda: "Boolean value of Tensor with more than one value is ambiguous")  # type: ignore[attr-defined]
```
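For context, a rough pure-Python model of the check being introduced (toy `_check` helper and `tensor_bool` wrapper, both hypothetical, not the real torch API; in real PyTorch an unbacked condition becomes a deferred runtime assert instead of raising at trace time):

```python
class AmbiguousBoolError(RuntimeError):
    pass

def _check(cond, msg_fn):
    # Toy stand-in for torch._check: raise if the condition is concretely
    # False. The real torch._check turns a symbolic/unbacked condition into
    # a runtime assert rather than failing during tracing.
    if not cond:
        raise AmbiguousBoolError(msg_fn())

def tensor_bool(numel):
    # Mirrors the new check: only a single-element tensor has a valid bool.
    _check(
        numel == 1,
        lambda: "Boolean value of Tensor with more than one value is ambiguous",
    )
    return True
```

Calling `tensor_bool(1)` passes, while any `numel > 1` raises the ambiguity error, matching the previous `raise RuntimeError` behavior for concrete inputs.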
What does torch._check() do with unbacked? Does it just treat it as false?
It adds an assert that is deferred to, and checked at, runtime.
So is that the desired behavior? It seems like previously unbacked would always raise the runtime error (because of 0/1 specialization?).
Starting merge as part of PR stack under #154234
…nt (#154167) This is a short circuit that we should not fail on. Before this PR we would not fail on u0 or u0+u1, but only if they are size-like; we would still fail on u0-u1, etc., for no reason. guard_or_false seems appropriate for that reason. This was added in #122145; there were no unit tests for me to verify why it was added, and I could not repro it using the associated issue (the example does not work). Pull Request resolved: #154167 Approved by: https://github.com/bobrenjc93 ghstack dependencies: #154154, #154164
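A rough pure-Python model of the `guard_or_false` behavior described above (toy encoding, hypothetical; the real helper lives in PyTorch's symbolic-shapes machinery). A statically known condition behaves like a normal guard; an unknown (unbacked) one, encoded as `None` here, falls back to `False` instead of raising a data-dependent error:

```python
def guard_or_false(expr):
    # Toy model: a statically known condition (a plain bool) is returned
    # as-is; an unbacked/unknown condition (encoded as None) falls back
    # to False instead of raising a data-dependent guard error.
    if isinstance(expr, bool):
        return expr
    return False
```

So an expression like `u0 - u1 == 0` over unbacked symints, which the old code would fail on, simply takes the `False` branch of the short circuit.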
…ce (#154172) This was added in #119562. The idea in this loop seems to be the following:

```cpp
if (TORCH_GUARD_SIZE_OBLIVIOUS(size.sym_eq(1))) {
  // NB: we could short circuit this once needs_reduce is true but there's
  // no point since the reduction function will guard on this anyway
  if (!c10::guard_or_false(size.sym_eq(target), __FILE__, __LINE__)) {
    needs_reduce = true;
  }
} else {
  if (!size.sym_eq(target).expect_true(__FILE__, __LINE__)) {
    fail();
  }
}
```

1. If we know size == 1:
1.1: if we know for sure that size == target, no reduce is needed.
1.2: if we know for sure that size != target, we do a reduction.
1.3: if we cannot tell whether size == target, we do a reduction.
2. If we do not know whether size == 1, we add a runtime assertion that size == target, and we fail at runtime if size is not equal to target.

We could have simplified 1.1 and always done the reduction under case 1, since doing 1.3 without runtime checks implies that it is safe; I suspect the reason is perf, but I am not sure. Anyway, using TORCH_GUARD_OR_FALSE instead of TORCH_GUARD_SIZE_OBLIVIOUS here is appropriate: there is no clear reason for size-oblivious reasoning, or for this logic not to apply when size is not size-like. size is always >= 0 anyway, but bad reasoning can leave us unable to infer that even though we know it is true here.

python test/dynamo/test_misc.py -k test_validate_outputs_unbacked

Pull Request resolved: #154172 Approved by: https://github.com/bobrenjc93 ghstack dependencies: #154154, #154164, #154167
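The decision table above can be sketched in Python (toy three-valued conditions: `True`/`False`/`None` for known-true/known-false/unknown; the function name is hypothetical and only models the control flow):

```python
def decide(size_is_one, size_eq_target):
    # size_is_one / size_eq_target: True, False, or None (unknown/unbacked).
    if size_is_one is True:                 # case 1: size known to be 1
        if size_eq_target is True:          # 1.1: known equal -> no reduce
            return "no_reduce"
        return "reduce"                     # 1.2 / 1.3: unequal or unknown
    # case 2: size not known to be 1 -> defer to a runtime assert
    return "runtime_assert(size == target)"
```

Note that with `guard_or_false` on the outer condition, an unknown `size == 1` now takes case 2 (the runtime assert) rather than erroring out during tracing.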
This was added in #141659. The current change keeps the same intention: "I do not want to fail here if I cannot tell whether the size is zero or not". I am not familiar enough with this code to know whether we need a runtime check here, but looking at the current implementation it seems that guard_or_false is appropriate to match the current behaviour and have the same effect as guard_size_oblivious here. Pull Request resolved: #154234 Approved by: https://github.com/bobrenjc93 ghstack dependencies: #154154, #154164, #154167, #154172
Stack from ghstack (oldest at bottom):
This was added in #149637.
torch._check can handle unbacked symints, so there is no need for size-oblivious reasoning here.
Note that this does not make is_nonzero unbacked-friendly, but that is a different story.
I ran the test added in #149637 for verification.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv