Use generators with all/any in torch/optim#78142
Conversation
🔗 Helpful links
✅ 1 base failure as of commit 0fc1580 (more details on the Dr. CI page). None of the CI failures appear to be your fault 💚
🚧 1 fixed upstream failure: this was probably caused by an upstream breakage that was already fixed. Please rebase on the
Force-pushed 95b4b07 to 483b655.
@kit1980 has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
malfet left a comment:
Looks good to me, but please explain why test_scriptable is getting removed
    with parametrize.cached():
        traced_model = torch.jit.trace_module(model, {'forward': x})

    def test_scriptable(self):
Hmm, why are you deleting this test?
As I mentioned in the PR description, the test was bad: it had a TODO to actually implement it, and it only tested that UnsupportedNodeError is thrown; with GeneratorExp support, a different error would be thrown.
This test was here on purpose to make sure that trying to script this will raise a nice error message and not crash in an obscure way. I think we want to keep it and make sure the new error message is good.
@albanD, any suggestions on how best to reconcile this? It was just an accident that GeneratorExp was not supported, and the test produced a somewhat related-looking UnsupportedNodeError.
In that case, I guess we can add a check if it is scripting and, if so, raise a nice error?
> In that case, I guess we can add a check if it is scripting and, if so, raise a nice error?

Hmm, add a check where exactly?
I'm not sure how TorchScript fails here, so it's hard to say.
If register_parametrization does fail, then in that function.
If it fails at runtime, I guess in pytorch/torch/nn/utils/parametrize.py, line 257 (at efdb419). Or within TorchScript, if that's possible?
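The check being discussed might look something like the following sketch. This is an illustration only, not the PR's actual diff: `is_scripting` here stands in for `torch.jit.is_scripting()`, the module and registration logic are stubs, and the real check may live elsewhere in parametrize.py.

```python
def is_scripting():
    """Stand-in for torch.jit.is_scripting(); always False in eager mode."""
    return False

def register_parametrization(module, tensor_name, parametrization):
    # Fail fast with a clear message instead of letting TorchScript
    # crash in an obscure way later.
    if is_scripting():
        raise RuntimeError(
            "register_parametrization is not supported when scripting a module"
        )
    # ... actual registration logic would go here (stubbed out) ...
    return module

class DummyModule:
    """Hypothetical minimal module stand-in for the sketch."""
    pass

registered = register_parametrization(DummyModule(), "weight", lambda x: x)
```

In eager mode the call goes through unchanged; under scripting, the user would see one clear RuntimeError instead of an unrelated frontend error.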
@albanD I've updated the PR, please take another look.
Force-pushed 6d373da to f83a466.
Force-pushed f83a466 to c48be80.
Force-pushed c48be80 to 2ede2a6.
Force-pushed 2ede2a6 to 32cd2bf.
Force-pushed 32cd2bf to 4dec7e2.
Force-pushed 4dec7e2 to efce3b9.
Force-pushed fcbdb34 to bfd9ce5.
Force-pushed bfd9ce5 to d8b6304.
@pytorchbot rebase
@pytorchbot successfully started a rebase job. Check the current status here.
Successfully rebased.
Force-pushed d8b6304 to 0fc1580.
@pytorchbot merge
@pytorchbot successfully started a merge job. Check the current status here.
Hey @kit1980.
Summary:
Generator comprehensions with any/all are less verbose and can save memory and CPU: https://eklitzke.org/generator-comprehensions-and-using-any-and-all-in-python

To make JIT work with this change, I added code to convert GeneratorExp to ListComp, so the whole PR is essentially a no-op for JIT but a potential memory and speed improvement for eager mode.

I also removed a test from test/jit/test_parametrization.py. The test was bad: it had a TODO to actually implement it, and it only tested that UnsupportedNodeError is thrown; with GeneratorExp support, a different error would be thrown.

Pull Request resolved: #78142
Approved by: https://github.com/malfet, https://github.com/albanD
Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/de7219e8a7f3b2f8e20162f3b5af338df2e110c1
Reviewed By: malfet, albanD
Differential Revision: D36653959
Pulled By: kit1980
fbshipit-source-id: 26b68aa46229f59e21019ec1fd1cfeff81657e58
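The memory/CPU point in the summary can be illustrated with a quick sketch (not code from the PR; the function names are made up for the example): a list comprehension inside `any()` materializes every element before `any()` runs, while a generator expression lets `any()` short-circuit at the first True.

```python
def has_nan_list(values):
    # Builds the whole intermediate list first, even if the first
    # element already answers the question.
    return any([v != v for v in values])  # v != v is True only for NaN

def has_nan_gen(values):
    # Generator expression: any() pulls elements one at a time and
    # stops at the first NaN it sees.
    return any(v != v for v in values)

nums = [float("nan")] + [0.0] * 1_000_000
# Same answer, but the generator version never allocates a
# million-element temporary list.
assert has_nan_list(nums) and has_nan_gen(nums)
```

Both return the same boolean; the generator form just avoids the temporary list and can stop early, which is exactly the win the PR applies across torch/optim.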
Apparently #78142 made TorchScript accept simple generator expressions, which allows us to enable rules that replace unnecessary list comprehensions with generators in any/all. This was originally part of #99280, but I split it off into this PR so that it can be easily reverted should anything break.

Pull Request resolved: #99890
Approved by: https://github.com/justinchuby, https://github.com/kit1980, https://github.com/malfet
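The GeneratorExp-to-ListComp conversion mentioned in the summary can be sketched in plain Python with the standard `ast` module. This is an illustration of the idea only, not TorchScript's actual frontend code, which does the rewrite in its own IR:

```python
import ast

class GenExpToListComp(ast.NodeTransformer):
    """Rewrite (x for x in xs) into [x for x in xs] — the same shape of
    transformation the PR describes, done here on Python's own AST."""

    def visit_GeneratorExp(self, node):
        self.generic_visit(node)  # handle any nested comprehensions first
        # ListComp and GeneratorExp share the same fields (elt, generators),
        # so the rewrite is a direct node swap.
        return ast.copy_location(
            ast.ListComp(elt=node.elt, generators=node.generators), node
        )

tree = ast.parse("any(v > 0 for v in vs)", mode="eval")
tree = ast.fix_missing_locations(GenExpToListComp().visit(tree))
print(ast.unparse(tree))  # → any([v > 0 for v in vs])
```

Because `any(listcomp)` and `any(genexp)` are semantically equivalent, such a rewrite keeps JIT behavior unchanged while letting eager mode keep the lazier generator form.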