[jit] math module support #19115
Conversation
Ideally we'd be able to bind the …

torch/csrc/jit/register_prim_ops.cpp (outdated)

```cpp
    }),
    Operator(
        "aten::ceil(float a) -> int",
```
This should return a float (and could you change aten::floor above to also return a float?). Casting a float to an int may truncate the value if it's bigger than std::numeric_limits<int64_t>::max(). It's safer to make the user explicitly agree to truncate.
```python
@torch.jit.script
def double_floor(x):
    # type: (float) -> float
    return x - x % 1

@torch.jit.script
def int_floor(x):
    # type: (float) -> int
    return int(x - x % 1)

print(double_floor(1.2))
print(int_floor(1.2))
print(double_floor(float(5e29)))
print(int_floor(float(5e29)))  # incorrect in torchscript
```
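The reviewer's point about large floats can be seen in plain Python (a sketch, run without TorchScript): 5e29 is far above the int64 range, so a C++-style cast to int64_t cannot preserve it, while Python's built-in `int()` is arbitrary-precision and stays exact.

```python
# Sketch: why truncating a large float to int64 is lossy.
x = float(5e29)
int64_max = 2**63 - 1  # largest value an int64_t can hold

# 5e29 exceeds the int64 range, so a fixed-width cast would
# not preserve the value; Python's int() has no such limit.
print(x > int64_max)  # True
print(int(x))         # exact arbitrary-precision result
```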
Looks good, just one more comment before we merge this. The CI failures are probably unrelated.
test/test_jit.py (outdated)

```python
    return math.pow(2.0, 2.0)

def test_pow_int():
    return math.pow(2.0, 2)
```
For all these tests could you refactor them to take in an input instead of using a literal? This will let us test different values more easily.
facebook-github-bot left a comment
@driazati has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary: Fixes: #19253 Fixing pow(Tensor, float) is straightforward. The breakage for pow(float, Tensor) is a bit more subtle to trigger, and fixing needs `torch.log` (`math.log` didn't work) from the newly merged #19115 (Thanks ngimel for pointing out this has landed.) Pull Request resolved: #19324 Differential Revision: D15003531 Pulled By: ailzhang fbshipit-source-id: 8b22138fa27a43806b82886fb3a7b557bbb5a865
Summary: This PR refers to issue [pytorch#19026](pytorch#19026) Pull Request resolved: pytorch#19115 Differential Revision: D14936053 Pulled By: driazati fbshipit-source-id: 68d5f33ced085fcb8c10ff953bc7e99df055eccc
Summary: Not much to say. Fixes implementation introduced here: pytorch#19115 Pull Request resolved: pytorch#21041 Differential Revision: D15528801 Pulled By: Chillee fbshipit-source-id: bacd709eb711ca00156bd70480d6051b437517ed
This PR refers to issue #19026
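In plain Python, the kind of code this PR teaches TorchScript to compile looks like the following (a sketch run without torch installed; under TorchScript the function would additionally carry the @torch.jit.script decorator):

```python
import math

def fn(x):
    # type: (float) -> float
    # Mixes several math-module calls of the sort this PR binds
    # as TorchScript operators.
    return math.sqrt(x) + math.floor(x) + math.pow(x, 2.0)

print(fn(2.0))  # sqrt(2) + 2.0 + 4.0
```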