Changing as_strided_scatter to deterministic inputs #85583
Conversation
[ghstack-poisoned]
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/85583
Note: links to docs will display an error until the docs builds have been completed. ❌ 2 Failures as of commit 0b103b0: the following jobs have failed:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
When I updated the inputs to be deterministic I would have expected the error to go away, but I still get this error with the input: … I'm going to continue investigating this.
Is the gradient of this function deterministic for all the inputs? (It should be, if the function is differentiable.)
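To unpack why the gradient can be non-deterministic: the backward pass scatter-adds into the strided region, and when the layout is overlapping, several contributions land on the same memory location in whatever order the parallel kernel happens to run. Since floating-point addition is not associative, that order changes the result. A stdlib-only sketch of the order sensitivity (the values are illustrative, not taken from this PR):

```python
# Floating-point addition is not associative, so when overlapping
# writes are accumulated in a nondeterministic order (as a parallel
# scatter-add may do), the result can differ between runs.
def accumulate(values):
    total = 0.0
    for v in values:
        total += v
    return total

left_to_right = accumulate([1e16, 1.0, -1e16])  # 1.0 is absorbed by 1e16: result 0.0
reordered = accumulate([1e16, -1e16, 1.0])      # large terms cancel first: result 1.0

print(left_to_right, reordered)
```

The two orderings of the same three addends disagree by exactly the amount that was rounded away, which is the failure mode deterministic test inputs are meant to avoid.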
/easycla As part of the transition to the PyTorch Foundation, this project now requires contributions be covered under the new CLA. See #85559 for additional details. This comment will trigger a new check of this PR. If you are already covered, you will simply see a new "EasyCLA" check that passes. If you are not covered, a bot will leave a new comment with a link to sign.
```python
def error_inputs_as_strided_scatter(op_info, device, **kwargs):
    make_arg = partial(make_tensor, device=device, dtype=torch.float32, requires_grad=False)

    input_t = make_arg([4, 4])
```
Add a comment describing what error condition is being tested
Added.
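For context on the kind of error condition such a test might exercise (this is my illustration, not the actual check from the diff): `as_strided_scatter` rejects a `src` tensor whose shape does not match the strided view it is scattered into. A stdlib-only sketch of that validation, with `check_src_shape` as a hypothetical stand-in for the real operator's check:

```python
# Hypothetical stand-in for the shape validation an error-inputs
# test for as_strided_scatter might trigger: `src` must have
# exactly the shape of the strided view it is scattered into.
def check_src_shape(src_shape, view_size):
    if tuple(src_shape) != tuple(view_size):
        raise ValueError(
            f"expected src shape {tuple(view_size)}, got {tuple(src_shape)}"
        )
    return True

check_src_shape((2, 2), (2, 2))  # matching shapes pass silently
```

An error-inputs helper would then assert that the mismatched case raises, with a comment stating which condition is under test, as requested above.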
```python
((16,), (2, 2, 2, 2), (1, 1, 1, 1), 0),
((16,), (2, 1, 1, 2), (1, 7, 7, 1), 0),
((3, 3), (2, 2), (2, 1), 0),
((16,), (2, 2, 2, 2), (8, 4, 2, 1), 0),
```
What's the change here? Should the test cases have a comment describing the property they assume?
Added a comment above. The removed code was causing non-deterministic behaviour, so I modified the strides.
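To make "non-deterministic strides" concrete: a (size, stride) layout is unsafe to scatter into exactly when two index tuples map to the same linear offset, so overlapping writes race. A brute-force, stdlib-only sketch (independent of PyTorch's own internal overlap check) that classifies the stride tuples in the test cases above:

```python
from itertools import product

def is_overlapping(size, stride):
    # Enumerate the linear offset of every element of the strided view;
    # the layout self-overlaps iff two index tuples share an offset.
    # (Brute force -- fine for the tiny shapes in these test cases.)
    offsets = [
        sum(i * s for i, s in zip(idx, stride))
        for idx in product(*(range(n) for n in size))
    ]
    return len(offsets) != len(set(offsets))

print(is_overlapping((2, 2, 2, 2), (1, 1, 1, 1)))  # True: many indices collide
print(is_overlapping((2, 1, 1, 2), (1, 7, 7, 1)))  # True: (1,0,0,0) and (0,0,0,1) both hit offset 1
print(is_overlapping((2, 2), (2, 1)))              # False: contiguous 2x2 block
print(is_overlapping((2, 2, 2, 2), (8, 4, 2, 1)))  # False: contiguous layout
```

Overlapping layouts are precisely where a backward scatter-add accumulates several gradient contributions into one element, which is the non-determinism this PR's input changes avoid.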
```python
DecorateInfo(unittest.skip('Only fails for LAZY, passes on everything else'), 'TestCompositeCompliance', 'test_backward'),  # noqa: B950
DecorateInfo(unittest.skip('Passes on complex64 and float32 only'), 'TestJit', 'test_variant_consistency_jit'),
DecorateInfo(unittest.skip('Fails on cuda + rocm'), 'TestCommon', 'test_complex_half_reference_testing'),
# https://github.com/pytorch/pytorch/issues/88105
```
Nice pointer to the issue
mruberry left a comment:
Overall this looks good @srossross, but I left some suggestions for comments to help future developers understand what's happening.
mruberry left a comment:
Cool; cc @kurtamohler
I believe the backward errors should be fixed with #88342.
I'm not sure, but could something still be wrong, given that now the tests for …
I think it's fine to merge this one as is. Then we can follow up with a different PR fixing that issue.
@pytorchbot merge -f "unrelated failures"
Merge started: your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Stack from ghstack (oldest at bottom):