[inductor] Add typing to ir.py #140912
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/140912
Note: Links to docs will display an error until the docs builds have been completed.
❗ 1 Active SEV: there is 1 currently active SEV. If your PR is affected, please view it below.
✅ You can merge normally! (1 unrelated failure.) As of commit 02ab2d8 with merge base 260d1dc:
FLAKY: the following job failed but was likely due to flakiness present on trunk.

This comment was automatically generated by Dr. CI and updates every 15 minutes.
```diff
 @classmethod
-def create(cls, *args, **kwargs):  # type: ignore[no-untyped-def]
+def create(cls, *args: Any, **kwargs: Any) -> TensorBox:
```
Reviewer:

nit: Without specifying the actual arg types here, callers lose their type safety. But `Any` is better than `type: ignore`, so it's better than it was...
Author:

I agree it's not ideal, but better than it was.
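For illustration, here is a minimal sketch of the trade-off being discussed. The `IRNode` stub and the `create_typed` variant are hypothetical stand-ins, not code from this PR:

```python
from typing import Any


class IRNode:
    """Stand-in for the real inductor IRNode."""


class TensorBox:
    @classmethod
    def create(cls, *args: Any, **kwargs: Any) -> "TensorBox":
        # As in the PR: the signature type-checks, but call sites are
        # unchecked -- TensorBox.create("oops", bogus=1) passes mypy.
        return cls()

    @classmethod
    def create_typed(cls, data: IRNode) -> "TensorBox":
        # Hypothetical fully-typed variant: mypy would reject
        # TensorBox.create_typed("oops") and check the return type.
        return cls()
```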
torch/_prims_common/__init__.py (Outdated)
```python
def __rmul__(self, other: Any) -> typing.Self:
    ...


_T = TypeVar("_T", bound=_WorksWithInt)
```
Reviewer:

nit: In general I don't like adding bounds to a generic `_T` like this (because usually `_T` would just be an unbound generic). I would prefer a name like `_WorksWithIntT` so the use sites are clearer.
Author:

In this case the caller needs to preserve their types (either `sympy.Expr` when used in inductor, or `SymInt` when used in primtorch), but the code in this file needs the protocol. I couldn't get it to type cleanly without this.
Reviewer:

I'm just bikeshedding on the name `_T`. When I see a raw `_T`, my assumption is that it's unbound. Naming it something like `_WorksWithIntT` would (maybe) make it clear when reading the use sites that it's a generic, but also make it clear that it's not an unbound generic.

To me:

```python
def make_channels_last_1d_strides_for(
    shape: Sequence[_T],
) -> Tuple[Union[_T, int], ...]:
```

says this can take a `Sequence` of any type and returns a tuple with that same type (and `int`), but

```python
def make_channels_last_1d_strides_for(
    shape: Sequence[_WorksWithIntT],
) -> Tuple[Union[_WorksWithIntT, int], ...]:
```

says that this takes a `Sequence` of a generic type which is bound by `_WorksWithInt` and returns a tuple with that same type (and `int`).
Author:

I'll rename it to `_IntLikeT`.
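To make the pattern concrete, here is a minimal self-contained sketch. The body of `_WorksWithInt` is an assumption reconstructed from the `__rmul__` snippet above (the real protocol lives in torch/_prims_common/__init__.py), and the function body is a simplified stand-in:

```python
from typing import Any, Protocol, Sequence, Tuple, TypeVar, Union


class _WorksWithInt(Protocol):
    # Assumed protocol body: anything supporting arithmetic with ints
    # (sympy.Expr, SymInt, and int itself all qualify structurally).
    def __add__(self, other: Any) -> "_WorksWithInt": ...
    def __rmul__(self, other: Any) -> "_WorksWithInt": ...


# Bound TypeVar, renamed _IntLikeT per the review: callers keep their
# concrete type (sympy.Expr in inductor, SymInt in primtorch), while
# this file may only use the protocol's operations.
_IntLikeT = TypeVar("_IntLikeT", bound=_WorksWithInt)


def make_channels_last_1d_strides_for(
    shape: Sequence[_IntLikeT],
) -> Tuple[Union[_IntLikeT, int], ...]:
    # Simplified stand-in body; the real function computes
    # channels-last strides from the shape.
    return tuple(shape)
```

With this, a caller passing sympy expressions gets their element type preserved in the inferred return type instead of having it collapse to the protocol type.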
@pytorchbot merge

Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Pull Request resolved: pytorch/pytorch#140912
Approved by: https://github.com/aorenste
ghstack dependencies: pytorch#140895, pytorch#140910
ghstack-source-id: 953d1c7
Stack from ghstack (oldest at bottom):
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov