Conversation

@laithsakka (Contributor) commented May 22, 2025

@pytorch-bot bot commented May 22, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/154164

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 2a7ba24 with merge base 413664b:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

pytorch-bot bot added the ciflow/inductor and release notes: fx labels May 22, 2025
laithsakka added a commit that referenced this pull request May 22, 2025
laithsakka changed the title from "remove guard_size_oblivious from is_no_zero proxy call check" to "[EASY] remove guard_size_oblivious from is_no_zero proxy call check" May 22, 2025
laithsakka changed the title from "[EASY] remove guard_size_oblivious from is_no_zero proxy call check" to "[EASY] remove guard_size_oblivious from is_nonzero proxy call check" May 22, 2025
```diff
-    raise RuntimeError(
-        "Boolean value of Tensor with more than one value is ambiguous"
-    )
+    torch._check(args[0].numel() == 1, lambda: "Boolean value of Tensor with more than one value is ambiguous")  # type: ignore[attr-defined]
```
Contributor:

What does torch._check() do with unbacked? Does it just treat it as false?

Contributor:

adds a runtime assert that's checked at runtime

Contributor:

So is that the desired behavior? It seems like previously unbacked would always raise the runtime error (because of 0/1 specialization?).
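
To make the behaviour under discussion concrete, here is a minimal sketch (an editor's illustration, not code from this PR) of what torch._check does with an unbacked SymInt: the condition is not evaluated at trace time, so tracing does not raise; a deferred runtime assert is recorded instead and fires only if the check fails when the compiled function actually runs.

```python
import torch
import torch._dynamo

# Allow ops like nonzero(), whose output shape is data-dependent, to be
# captured as unbacked SymInts instead of graph-breaking.
torch._dynamo.config.capture_dynamic_output_shape_ops = True

@torch.compile(fullgraph=True)
def f(x):
    n = x.nonzero().shape[0]  # n is an unbacked SymInt (u0) at trace time
    torch._check(n == 1)      # recorded as a deferred runtime assert, not a trace-time guard
    return torch.zeros(n)

print(f(torch.tensor([0.0, 3.0, 0.0])))  # passes: exactly one nonzero element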

@pytorchmergebot (Collaborator)

Starting merge as part of PR stack under #154234

2 similar comments

pytorchmergebot pushed a commit that referenced this pull request May 26, 2025
…nt (#154167)

This is a short circuit that we should not fail on. Before this PR we would not fail on u0 or u0+u1 (but only if they are size-like), yet we would fail on expressions like u0-u1 for no reason. guard_or_false seems appropriate for that reason.

The check was added in #122145. There were no unit tests for me to verify why it was added, and I could not repro it using the associated issue; the example there does not work.

Pull Request resolved: #154167
Approved by: https://github.com/bobrenjc93
ghstack dependencies: #154154, #154164
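
For context, a hedged sketch of the semantic difference this stack relies on (my paraphrase, not code from the PR; both helpers live in torch.fx.experimental.symbolic_shapes):

```python
from torch.fx.experimental.symbolic_shapes import guard_or_false

def try_short_circuit(cond) -> bool:
    # cond is a bool or SymBool, e.g. one built from (u0 - u1).
    # guard_size_oblivious(cond) raises a data-dependent error when cond
    # cannot be decided even size-obliviously, which is what happened for
    # non-size-like expressions such as u0 - u1.
    # guard_or_false(cond) answers False for anything it cannot prove, so
    # an undecidable unbacked expression simply skips the short circuit
    # instead of failing compilation.
    return guard_or_false(cond)
```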
pytorchmergebot pushed a commit that referenced this pull request May 26, 2025
…ce (#154172)

This was added in #119562. The idea in this loop seems to be the following:
```
    if (TORCH_GUARD_SIZE_OBLIVIOUS(size.sym_eq(1))) {
      // NB: we could short circuit this once needs_reduce is true but there's
      // no point since the reduction function will guard on this anyway
      if (!c10::guard_or_false(size.sym_eq(target), __FILE__, __LINE__)) {
        needs_reduce = true;
      }
    } else {
      if (!size.sym_eq(target).expect_true(__FILE__, __LINE__)) {
        fail();
      }
    }
```
1. If we know size == 1:
   1.1 If we know for sure that size == target, no reduce is needed.
   1.2 If we know for sure that size != target, we do a reduction.
   1.3 If we cannot tell whether size == target, we do a reduction.
2. If we do not know whether size == 1:
   we add a runtime assertion that size == target, and we fail at runtime if size is not equal to target.

We could have simplified away check 1.1 and always done the reduction under case 1, since doing 1.3 without runtime checks implies that reducing is safe; but I suspect the reason we don't is performance (the decision logic is restated as a Python sketch after this commit message).

Anyway, using TORCH_GUARD_OR_FALSE instead of TORCH_GUARD_SIZE_OBLIVIOUS here is appropriate. There is really no clear reason for size-oblivious reasoning, or for this logic not to apply when the size is not size-like; a size is always >= 0 anyway, but bad reasoning can leave us unable to infer that even though we know it is true here.

 python test/dynamo/test_misc.py -k test_validate_outputs_unbacked

Pull Request resolved: #154172
Approved by: https://github.com/bobrenjc93
ghstack dependencies: #154154, #154164, #154167
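
The decision logic above, restated as a hedged Python sketch (hypothetical helper name; torch._check stands in for the C++ expect_true runtime assert):

```python
import torch
from torch.fx.experimental.symbolic_shapes import guard_or_false

def decide(size, target) -> str:
    if guard_or_false(size == 1):           # case 1: size provably 1
        if guard_or_false(size == target):  # 1.1: provably equal, no reduce
            return "no_reduce"
        return "reduce"                     # 1.2 / 1.3: unequal or undecidable
    # case 2: cannot prove size == 1; assert equality at runtime instead
    torch._check(size == target)
    return "runtime_assert"
```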
pytorchmergebot pushed a commit that referenced this pull request May 26, 2025
This was added in #141659. The current change keeps the same intention: "I do not want to fail here if I can't tell whether the size is zero or not." I am not familiar enough with this code to know whether a runtime check is needed here, but looking at the current implementation, guard_or_false seems appropriate: it matches the current behaviour and has the same effect as guard_size_oblivious here.
Pull Request resolved: #154234
Approved by: https://github.com/bobrenjc93
ghstack dependencies: #154154, #154164, #154167, #154172
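
A hedged restatement of the pattern this commit preserves (illustrative name, not the actual call site):

```python
from torch.fx.experimental.symbolic_shapes import guard_or_false

def definitely_zero(size) -> bool:
    # True only when size == 0 is provable. On an unbacked SymInt this
    # returns False instead of raising, so we "do not fail if we can't tell
    # whether the size is zero", with the same effect guard_size_oblivious
    # had on the sizes this code actually sees.
    return guard_or_false(size == 0)
```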
github-actions bot deleted the gh/laithsakka/184/head branch June 27, 2025 02:20