Handle Scalars Better #17875
Conversation
Operator(
    "prim::Int(Scalar a) -> float",
    [](Stack& stack) {
      IValue scalar;
Popping these off as an at::Scalar instead would add the run-time assert that it has the right IValue tag
Edit: that would convert it from an int/float to a scalar and back again unnecessarily
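For readers following the thread, here is a minimal sketch contrasting the two styles discussed above. It is not the PR's actual code: the schema string, the int branch, and the commented alternative are illustrative, using the JIT stack helpers (`pop`/`push`) and `IValue` accessors.

Operator(
    "prim::Int(Scalar a) -> int",
    [](Stack& stack) {
      // Style 1: pop the raw IValue and branch on its tag directly,
      // avoiding an int/float -> at::Scalar -> int round trip.
      IValue scalar = pop(stack);
      if (scalar.isInt()) {
        push(stack, scalar.toInt());
      } else {
        push(stack, static_cast<int64_t>(scalar.toDouble()));
      }
      // Style 2 (the suggestion above): popping as at::Scalar asserts at
      // run time that the IValue tag is numeric, at the cost of converting
      // through at::Scalar and back:
      //   at::Scalar s = pop(stack).toScalar();
      //   push(stack, s.toLong());
      return 0;
    }),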
facebook-github-bot
left a comment
@eellison has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
facebook-github-bot
left a comment
@eellison is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary: This PR allows Scalars to be castable with `int()` and `float()`, allows scalars to match with float arguments, and prints out a better error message if `x.item()` is used as an int. Scalars are a very uncommon case, and I don't think we want to add the maintenance burden of building out op coverage for it. It's more maintainable to better handle converting it to int/float.

Fix pytorch/pytorch#17652
Also note: pytorch/pytorch#16849

Pull Request resolved: pytorch/pytorch#17875

Differential Revision: D14411138

Pulled By: eellison

fbshipit-source-id: a4e957cefb0ffd10ddb234d92f6d1558cfce8751
* upstream/master: (87 commits)
  Make Variable::set_data non-const; cosmetic fixes.
  remove warning for upsample code (pytorch#17921)
  Optimize TileOp (pytorch#17290)
  Optimize channel_stats_op (pytorch#16243)
  enable shape inference for elementwise operators (pytorch#17885)
  Remove remaining test jit expects redux (pytorch#17924)
  Handle Scalars Better (pytorch#17875)
  Fixed a formatting issue in doc comments (pytorch#17505)
  Add nbytes, itemsize, element_size to at::Tensor. (pytorch#17810)
  Fix lint in test_distributions.py
  Fix lint in test_jit.py
  Fix lint errors in test_autograd
  Added a few extra python bindings to help with walking the IR graph from Python (pytorch#17822)
  kthvalue consistency with sort in the presence of NaN (pytorch#17824)
  Fix minor grammatical mistakes in torch/nn/modules/loss.py (pytorch#17892)
  Remove (almost all) TensorOptions from native_functions.yaml (pytorch#17385)
  Restore full Windows tests (pytorch#17102)
  Prevent VS2017 from emitting ambiguous symbol errors (second time)
  Fix windows test hang (pytorch#17778)
  torch.btrifact for tensors with greater than 3 dimensions (pytorch#14964)
  ...
      return 0;
    }),
Operator(
    "prim::Int(Scalar a) -> float",
Shouldn't this return int?
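If the declared return type is indeed a typo, the fix would presumably be the one-token change below; this is a hedged guess mirroring the hunk above, not the landed code.

Operator(
    "prim::Int(Scalar a) -> int",
    [](Stack& stack) {
      // Convert the popped Scalar to an integer and push it back.
      push(stack, pop(stack).toScalar().toLong());
      return 0;
    }),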
    DeviceObjType::get()->isSubtypeOf(concrete_type)) {
  return graph.insert(aten::device, {value}, {}, loc);
}
if (concrete_type == FloatType::get() &&
Looking at this PR now because of the bug James commented about. But this line is suspect as well because it broadens semantics. Why do we allow implicit conversions from Number -> float but not from Number -> int? Both of those are lossy conversions.
@driazati - changes to implicit conversions should always have high scrutiny. We need to make sure there is a discussion before a change is landed.
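To make the asymmetry concrete, here is a hedged sketch of the matching branch in question; the `prim::Float`/`prim::Int` symbols and exact predicates are assumptions rather than the PR's verbatim code.

// The branch under review: a NumberType value implicitly narrows when a
// float parameter is expected...
if (concrete_type == FloatType::get() &&
    value->type()->isSubtypeOf(NumberType::get())) {
  return graph.insert(prim::Float, {value}, {}, loc);
}
// ...but no symmetric branch exists for int parameters, even though
// Number -> int is no lossier than Number -> float:
// if (concrete_type == IntType::get() &&
//     value->type()->isSubtypeOf(NumberType::get())) {
//   return graph.insert(prim::Int, {value}, {}, loc);
// }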
This PR allows Scalars to be castable with `int()` and `float()`, allows scalars to match with float arguments, and prints out a better error message if `x.item()` is used as an int. Scalars are a very uncommon case, and I don't think we want to add the maintenance burden of building out op coverage for it. It's more maintainable to better handle converting it to int/float.
Fix #17652
Also note: #16849