
Conversation

@pytorch-bot

pytorch-bot bot commented Sep 20, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/163386

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit a869842 with merge base 51152ef:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@vadimkantorov
Contributor

btw, do uint8/uint16 also work here? Or are there no impls for unsigned?

@jansel
Copy link
Contributor Author

jansel commented Sep 20, 2025

Not sure if all backends implement those types.

@Skylion007
Copy link
Collaborator

> Not sure if all backends implement those types.

All the more reason to factor this out into its own utility. It would be good to have a robust check here so we could grab the smallest dtype that is supported on the target device.
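A rough sketch of the utility being suggested here: pick the smallest integer dtype that both fits the value and is supported by the target backend. The per-backend support table, the dtype names as strings, and the function name are all assumptions for illustration, not PyTorch's actual API.

```python
# Max representable value per candidate dtype, smallest first.
DTYPE_MAX = [
    ("uint8", 2**8 - 1),
    ("uint16", 2**16 - 1),
    ("int32", 2**31 - 1),
    ("int64", 2**63 - 1),
]

# Hypothetical support table: some backends lack unsigned impls.
BACKEND_SUPPORT = {
    "cuda": {"uint8", "int32", "int64"},          # e.g. no uint16 kernels
    "cpu": {"uint8", "uint16", "int32", "int64"},
}

def min_supported_dtype(val: int, backend: str) -> str:
    """Smallest dtype that holds `val` and is implemented on `backend`."""
    supported = BACKEND_SUPPORT[backend]
    for name, max_val in DTYPE_MAX:
        if name in supported and val <= max_val:
            return name
    raise ValueError(f"{val} does not fit any supported dtype on {backend}")
```

With these assumed tables, `min_supported_dtype(300, "cpu")` picks `uint16`, while `min_supported_dtype(300, "cuda")` falls through to `int32` because uint16 is marked unsupported there.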

Collaborator

@Skylion007 Skylion007 left a comment


This is subtly complicated to do, and we should factor it out into its own function.

@jansel
Contributor Author

jansel commented Sep 21, 2025

The dtype thing wasn't the main reason for this change; it matters close to zero for performance and was just an incidental change. (These values will be registers in a memory-bound kernel.)

The real improvement here was removing the torch.where call from the common path which was causing issues for complex numbers.
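A loose pure-Python analogy (not the kernel code itself) for why a comparison-driven `torch.where` in the common path is a problem: complex numbers have no ordering, so any path that branches on `<`/`>` breaks as soon as complex values flow through it.

```python
def clamp_via_compare(x, hi):
    # Fine for real numbers...
    return hi if x > hi else x

assert clamp_via_compare(5.0, 3.0) == 3.0

# ...but ordering a complex value raises TypeError, which is why keeping
# the compare/where out of the common path matters for complex support.
try:
    clamp_via_compare(1 + 2j, 3.0)
    raised = False
except TypeError:
    raised = True
assert raised
```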


def _strength_reduce_integer(val: int) -> torch.dtype:
    for possible_dtype in (torch.uint8, torch.uint16, torch.int32):
        if val <= torch.iinfo(possible_dtype).max:
            return possible_dtype
    return torch.int64
Contributor


nit: fix for unbacked. If adding guards here is intentional, use `guard_or_False`; otherwise, use `statically_known_true(val <= torch.iinfo(possible_dtype).max)`.

Contributor Author


This is only called for `int`; for `SymInt` we use int64.
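The selection logic above can be mirrored in plain Python, with the `torch.iinfo` maxima hard-coded so it runs without torch. This is an illustrative sketch, not the PR's code; the int64 fallback follows the note that `SymInt` values just use int64.

```python
# Hard-coded torch.iinfo(...).max values for the candidate dtypes.
IINFO_MAX = {"uint8": 255, "uint16": 65535, "int32": 2**31 - 1}

def strength_reduce_integer(val: int) -> str:
    """Return the name of the smallest dtype whose max can hold `val`."""
    for name in ("uint8", "uint16", "int32"):
        if val <= IINFO_MAX[name]:
            return name
    return "int64"
```

For example, a bound of 200 reduces to uint8, 70000 to int32, and anything past 2**31 - 1 keeps the default int64.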

@pytorchmergebot
Collaborator

Starting merge as part of PR stack under #163434

@pytorchmergebot
Collaborator

Starting merge as part of PR stack under #163415

pytorchmergebot pushed a commit that referenced this pull request Sep 24, 2025
jainapurva pushed a commit that referenced this pull request Sep 29, 2025
@github-actions github-actions bot deleted the gh/jansel/534/head branch October 23, 2025 02:13
Khanaksahu pushed a commit to Khanaksahu/pytorch that referenced this pull request Nov 17, 2025
