
Conversation

@Kiyosora
Contributor

@Kiyosora Kiyosora commented Jun 24, 2021

Porting torch.isposinf & torch.isneginf to structured kernel
Related #55070

@facebook-github-bot
Contributor

facebook-github-bot commented Jun 24, 2021

💊 CI failures summary and remediations

As of commit a5cd52d (more details on the Dr. CI page and at hud.pytorch.org/pr/60633):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


Preview docs built from this PR


@Kiyosora Kiyosora changed the title [WIP] Port isposinf & isneginf kernel to structured kernels. [WIP] Port isposinf & isneginf kernel to structured kernels. Jun 24, 2021
@Kiyosora Kiyosora force-pushed the structured_isposinf_isneginf_2 branch from 4625aed to 2ce2c61 Compare June 24, 2021 08:14
@Kiyosora Kiyosora force-pushed the structured_isposinf_isneginf_2 branch 2 times, most recently from 3b0b1b2 to f9fc14b Compare June 25, 2021 08:06
@codecov

codecov bot commented Jun 25, 2021

Codecov Report

Merging #60633 (a5cd52d) into master (5b6818f) will increase coverage by 0.08%.
The diff coverage is 89.28%.

@@            Coverage Diff             @@
##           master   #60633      +/-   ##
==========================================
+ Coverage   76.12%   76.21%   +0.08%     
==========================================
  Files        2062     2062              
  Lines      205577   205579       +2     
==========================================
+ Hits       156501   156681     +180     
+ Misses      49076    48898     -178     

@Kiyosora Kiyosora force-pushed the structured_isposinf_isneginf_2 branch 2 times, most recently from 82d6ceb to ad37106 Compare June 26, 2021 03:41
@Kiyosora Kiyosora changed the title from "[WIP] Port isposinf & isneginf kernel to structured kernels." to "Port isposinf & isneginf kernel to structured kernels" Jun 26, 2021
@Kiyosora Kiyosora marked this pull request as ready for review June 26, 2021 09:11
@Kiyosora Kiyosora requested a review from ezyang as a code owner June 26, 2021 09:11
@Kiyosora Kiyosora force-pushed the structured_isposinf_isneginf_2 branch from ad37106 to ebd6d39 Compare June 26, 2021 09:13
@jbschlosser jbschlosser requested a review from bdhirsh June 28, 2021 16:27
@jbschlosser jbschlosser added the triaged label (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) Jun 28, 2021
Contributor

@bdhirsh bdhirsh Jun 28, 2021


Have you tried inheriting from TensorIteratorBase? You can do that with structured_inherits: TensorIteratorBase.

It looks like right now, you don't use TensorIteratorBase at all in the meta functions of isneginf and isposinf. Two reasons that would be useful are:

  • We want to make sure that the meta function does all of the type promotion / broadcasting / handles other meta-like edge cases. If you use TensorIterator in the actual impl() function but not the meta() function, there's a risk of forgetting to do some checking in the meta function that's normally handled by TensorIterator.
  • (More long-term) We eventually want to make all TensorIterator ops structured and use TensorIteratorBase everywhere, instead of TensorIterator.

Let me know if you run into issues with that - we've had some instances of it being hard to directly use TensorIteratorBase, mostly because TensorIterator doesn't let you modify the output tensor before calling into it. I think you'll have to use .declare_static_dtype_and_device(kBool, self.device()) in the config to force TensorIterator to treat the output tensor as a bool.

cc @ysiraichi in case you uncovered any other issues - I think you started looking into porting a few binary comparison ops.
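The meta/impl split discussed above can be illustrated with a small, hedged Python sketch (not PyTorch's actual C++ API). The point of the review comment is that the meta() step must compute the same output metadata (shape, dtype) that the impl() step's TensorIterator would, otherwise the two can silently diverge. The function names here are illustrative only.

```python
import math

def meta_isposinf(shape, dtype):
    """Meta step: compute output metadata only, touching no data.
    Mirrors what TensorIterator would infer: an elementwise op, so the
    output shape equals the input shape, and the output dtype is bool."""
    return shape, "bool"

def impl_isposinf(values):
    """Impl step: the actual elementwise computation, which must agree
    with the metadata the meta step promised."""
    return [v == math.inf for v in values]

# The meta step runs first (e.g. for shape inference on meta tensors),
# then the impl step produces data matching that metadata.
out_shape, out_dtype = meta_isposinf((3,), "float64")
result = impl_isposinf([1.0, math.inf, -math.inf])  # → [False, True, False]
```

If the impl used TensorIterator's inference but the meta function did its own ad-hoc checks, an edge case handled by one could be missed by the other, which is exactly the risk the first bullet describes.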

Contributor


@bdhirsh it seems like the intention here was to avoid constructing the TensorIterator entirely in the case that we short circuit to fill_.

But I do agree that TI is better, because as currently written the meta function is wrong, as it doesn't preserve layout.
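The short circuit to fill_ mentioned above has a simple rationale: integral and bool tensors can never contain +inf, so the output can be filled with False without iterating over elements at all. A hedged pure-Python sketch of that fast path (illustrative only, not the real kernel):

```python
import math

INTEGRAL_DTYPES = {"bool", "int8", "int16", "int32", "int64"}

def isposinf_sketch(values, dtype):
    """Sketch of the short-circuit: non-floating inputs can never hold
    +inf, so skip building an iterator and fill the output with False."""
    if dtype in INTEGRAL_DTYPES:
        return [False] * len(values)           # fast path: fill_(False)
    return [v == math.inf for v in values]     # general elementwise path

isposinf_sketch([1, 2, 3], "int64")        # → [False, False, False]
isposinf_sketch([1.0, math.inf], "float32")  # → [False, True]
```

The trade-off the reviewers weigh: the fast path avoids constructing the iterator, but a meta function written without TensorIterator must then reproduce all of its metadata handling (including layout preservation) by hand.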

@ezyang ezyang removed their request for review June 28, 2021 19:59
@ezyang
Contributor

ezyang commented Jun 28, 2021

@bdhirsh I'm going to let you shepherd this one

@Kiyosora Kiyosora force-pushed the structured_isposinf_isneginf_2 branch 2 times, most recently from 59c4342 to 5499816 Compare June 29, 2021 09:01
@Kiyosora
Contributor Author

Kiyosora commented Jun 29, 2021

Hey, @bdhirsh @ezyang! Thank you very much for reviewing this PR and for the hint about .declare_static_dtype_and_device. In fact, I was confused about output dtype casting in TensorIterator. Thanks to your suggestion, it seems solvable now. Let me try to fix it!

Contributor Author


One point to discuss: after considering similar operators (like torch.signbit) that also require a boolean output, I wrote unary_boolean_op for code reuse. I am not sure whether that is appropriate, or whether I should build the TensorIterator directly in the meta function?

Contributor


hmm, one reason not to is that it looks like not all "unary boolean" ops follow this same config.

E.g. looking at the logical_not kernel - it's pretty similar, in that torch.logical_not(a) will return a boolean tensor, but it looks like it doesn't enforce that the out= tensor has to have boolean dtype. So you can't unconditionally use .declare_static_dtype_and_device(at::kBool, a.device()) - you only want to do it when the output is currently not defined (in the functional version of the op).

If you leave this as a TensorIterator API, can you change the name (maybe unary_force_boolean_op?), and leave a comment mentioning that this API should only be used when the output tensor must have boolean type?
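The distinction being drawn can be sketched in a few lines of hedged pseudocode (the config dict and helper name below are illustrative, not PyTorch's real API): the static bool dtype should only be declared when no out= tensor was supplied, because ops like logical_not accept a caller-chosen out= dtype.

```python
def build_iter_config(out_dtype=None):
    """Sketch of the proposed unary_force_boolean_op behavior: force a
    bool output dtype only for the functional form (no out= tensor);
    an out= form like logical_not's must respect the caller's dtype."""
    config = {}
    if out_dtype is None:
        # Functional form: output is not yet defined, so its dtype is
        # statically bool (the declare_static_dtype case discussed above).
        config["static_dtype"] = "bool"
    else:
        # out= form: keep the user-supplied dtype instead of forcing bool.
        config["static_dtype"] = out_dtype
    return config

build_iter_config()         # → {'static_dtype': 'bool'}
build_iter_config("int32")  # → {'static_dtype': 'int32'}
```

This is why the reviewer asks for a name like unary_force_boolean_op plus a comment: the helper is only correct for ops whose out= tensor must be bool, and should not be reused for ops with logical_not's looser contract.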

Contributor Author


That sounds very reasonable, thanks for such a detailed explanation! Let me change the name and leave a comment describing that.

@Kiyosora Kiyosora force-pushed the structured_isposinf_isneginf_2 branch from 5499816 to a5cd52d Compare June 29, 2021 15:08
@Kiyosora Kiyosora requested a review from bdhirsh June 30, 2021 01:02
@facebook-github-bot
Contributor

@bdhirsh has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@ezyang ezyang requested a review from SplitInfinity July 6, 2021 16:39
@facebook-github-bot
Contributor

@bdhirsh merged this pull request in 45ce26c.

@Kiyosora Kiyosora deleted the structured_isposinf_isneginf_2 branch July 8, 2021 02:28

Labels: cla signed · Merged · open source · triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

6 participants