[C++ API] L1Loss module #25902
Conversation
@ShahriarSS Not keeping

    s.backward();

    ASSERT_EQ(output.sizes(), torch::IntArrayRef());
    ASSERT_EQ(input.sizes(), input.grad().sizes());
Is there a way we can also check the correctness of output value after loss->forward?
I based this on Python's L1Loss test, and I don't think it checked the output value. What is the expected value here?
Got it, I think we can rely on the parity test to make sure the output value is in sync with the Python API.
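For reference, the expected L1Loss output can be computed by hand as the mean absolute difference between input and target. A minimal sketch in plain Python (no torch dependency; the function name `l1_loss` is hypothetical, chosen to mirror the module's semantics):

```python
def l1_loss(input, target, reduction="mean"):
    """Elementwise absolute error, reduced like torch.nn.L1Loss."""
    diffs = [abs(x - y) for x, y in zip(input, target)]
    if reduction == "mean":
        return sum(diffs) / len(diffs)
    if reduction == "sum":
        return sum(diffs)
    return diffs  # reduction="none" keeps the per-element losses

# Example: mean(|1-0|, |2-0|) = 1.5
print(l1_loss([1.0, 2.0], [0.0, 0.0]))  # -> 1.5
```

A fixed-value check like this could complement the shape assertions, though the parity test against the Python API covers the same ground.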
I'll also work on enabling parity test for

I will rename this PR to

@pytorchbot rebase this please
Force-pushed from 0db0f10 to 25c1855
@pytorchbot rebase this please

Force-pushed from fc699ab to fa19561
yf225 left a comment:
Thanks a lot @ShahriarSS !
facebook-github-bot left a comment:
@yf225 is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@yf225 This is the L1Loss module. I don't think that `_Loss` and `_WeightedLoss` as base Python classes do much: the first one only sets the reduction type and also takes in the deprecated `reduce` parameter, and the second one only registers the `weight` parameter. I don't think we should keep this structure. What do you think?