
Conversation

@eellison (Contributor) commented Mar 11, 2019

This PR allows Scalars to be cast with `int()` and `float()`, allows Scalars to match with float arguments, and prints a better error message if `x.item()` is used as an int.

Scalars are a very uncommon case, and I don't think we want to take on the maintenance burden of building out op coverage for them. It's more maintainable to handle converting them to int/float well.

Fix #17652

Also note: #16849
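For illustration, a rough sketch of the kind of casting op this adds at the JIT level, written as a fragment in the style of the operator registrations quoted later in this thread. The schema string and stack helpers (`pop`, `push`, `IValue`) follow the JIT's conventions of the time, but the body here is an assumption, not the actual diff:

```cpp
// Hypothetical sketch (not the exact diff): a prim::Float op that casts a
// Scalar IValue to a float, mirroring what `float(x)` would lower to.
Operator(
    "prim::Float(Scalar a) -> float",
    [](Stack& stack) {
      IValue scalar;
      pop(stack, scalar);
      if (scalar.isDouble()) {
        push(stack, scalar);  // already holds a double, pass through
      } else {
        push(stack, static_cast<double>(scalar.toInt()));  // int -> float
      }
      return 0;
    }),
```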

@eellison eellison requested review from suo and wanchaol and removed request for suo March 11, 2019 18:30
@facebook-github-bot facebook-github-bot added the oncall: jit label Mar 11, 2019
@eellison eellison requested a review from driazati March 11, 2019 18:42
Operator(
    "prim::Int(Scalar a) -> float",
    [](Stack& stack) {
      IValue scalar;
A contributor commented on this hunk:

Popping these off as an `at::Scalar` instead would add a run-time assert that the IValue has the right tag.

@eellison (Author) replied Mar 11, 2019:

Edit: that would convert it from an int/float to a Scalar and back again unnecessarily.
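A rough sketch of the two approaches being weighed, assuming the JIT's `IValue`/`Stack` helpers of the time; the surrounding op body is omitted and the exact code is an assumption:

```cpp
// Reviewer's suggestion: popping as at::Scalar asserts the IValue tag at
// run time, but materializes an at::Scalar from the stored int/double...
at::Scalar s = pop(stack).toScalar();
push(stack, s.toDouble());  // ...only to convert it straight back

// Author's point: branching on the IValue tag directly avoids the round trip.
IValue v = pop(stack);
if (v.isDouble()) {
  push(stack, v.toDouble());
} else {
  push(stack, static_cast<double>(v.toInt()));
}
```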

@facebook-github-bot commented:

@eellison has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot commented:

@eellison is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

zdevito pushed a commit to zdevito/ATen that referenced this pull request Mar 12, 2019
Summary:
This PR allows Scalars to be cast with `int()` and `float()`, allows Scalars to match with float arguments, and prints a better error message if `x.item()` is used as an int.

Scalars are a very uncommon case, and I don't think we want to take on the maintenance burden of building out op coverage for them. It's more maintainable to handle converting them to int/float well.

Fix pytorch/pytorch#17652

Also note: pytorch/pytorch#16849
Pull Request resolved: pytorch/pytorch#17875

Differential Revision: D14411138

Pulled By: eellison

fbshipit-source-id: a4e957cefb0ffd10ddb234d92f6d1558cfce8751
petrex pushed a commit to petrex/pytorch that referenced this pull request Mar 14, 2019
* upstream/master: (87 commits)
  Make Variable::set_data non-const; cosmetic fixes.
  remove warning for upsample code (pytorch#17921)
  Optimize TileOp (pytorch#17290)
  Optimize channel_stats_op (pytorch#16243)
  enable shape inference for elementwise operators (pytorch#17885)
  Remove remaining test jit expects redux (pytorch#17924)
  Handle Scalars Better (pytorch#17875)
  Fixed a formatting issue in doc comments (pytorch#17505)
  Add nbytes, itemsize, element_size to at::Tensor. (pytorch#17810)
  Fix lint in test_distributions.py
  Fix lint in test_jit.py
  Fix lint errors in test_autograd
  Added a few extra python bindings to help with walking the IR graph from Python (pytorch#17822)
  kthvalue consistency with sort in the presence of NaN (pytorch#17824)
  Fix minor grammatical mistakes in torch/nn/modules/loss.py (pytorch#17892)
  Remove (almost all) TensorOptions from native_functions.yaml (pytorch#17385)
  Restore full Windows tests (pytorch#17102)
  Prevent VS2017 from emitting ambiguous symbol errors (second time)
  Fix windows test hang (pytorch#17778)
  torch.btrifact for tensors with greater than 3 dimensions (pytorch#14964)
  ...
      return 0;
    }),
Operator(
    "prim::Int(Scalar a) -> float",
A collaborator commented on this hunk:

Shouldn't this return `int`?
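Presumably the intended registration declares an `int` return to match the op's name. A hedged sketch of the assumed fix, with a body that mirrors the casting style above; this is an illustration, not the actual follow-up patch:

```cpp
// Assumed correction (not taken from the diff): prim::Int declared to
// return an int, matching what the cast actually produces.
Operator(
    "prim::Int(Scalar a) -> int",
    [](Stack& stack) {
      IValue scalar;
      pop(stack, scalar);
      if (scalar.isInt()) {
        push(stack, scalar);  // already holds an int, pass through
      } else {
        // float -> int truncates toward zero
        push(stack, static_cast<int64_t>(scalar.toDouble()));
      }
      return 0;
    }),
```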

    DeviceObjType::get()->isSubtypeOf(concrete_type)) {
  return graph.insert(aten::device, {value}, {}, loc);
}
if (concrete_type == FloatType::get() &&
A contributor commented on this hunk:

Looking at this PR now because of the bug James commented about. But this line is suspect as well, because it broadens semantics: why do we allow implicit conversions from Number -> float but not from Number -> int? Both of those are lossy conversions.

@driazati - changes to implicit conversions should always get high scrutiny. We need to make sure there is a discussion before a change like this is landed.
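For context, a simplified sketch of what the questioned branch plausibly does, reusing the names from the quoted hunk (`concrete_type`, `value`, `loc`) and assuming the cast is materialized as a `prim::Float` node; this is an illustration, not the exact source:

```cpp
// Simplified sketch (assumption, not the exact diff): when a float is
// expected, any Number-typed value is implicitly converted by inserting a
// prim::Float cast into the graph. Note there is no mirror-image
// Number -> int branch, which is what the comment above questions.
if (concrete_type == FloatType::get() &&
    value->type()->isSubtypeOf(NumberType::get())) {
  return graph.insert(prim::Float, {value}, {}, loc);
}
```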


Labels

oncall: jit (Add this issue/PR to JIT oncall triage queue)


Development

Successfully merging this pull request may close these issues.

Unable to jit.script a[0].item() == 1

6 participants