update to use torch.optim.lr_scheduler.LRScheduler #4709
Closed
wat3rBro wants to merge 2 commits into facebookresearch:main from
Conversation
Summary: X-link: facebookresearch/fvcore#118. Pull Request resolved: facebookresearch#4650

D35519683 (facebookresearch@0ad20f1) changed the default `matching_heuristics` from `False` to `True`, which may cause some issues:
- potential bug report: https://fb.workplace.com/groups/2240361332735959/posts/5264606250311437/?comment_id=5265167493588646
- bloated logging: facebookresearch#4364

This diff reverts the default value back to `False` and introduces a way to specify this option via the filename URL.

Differential Revision: https://internalfb.com/D41145857
fbshipit-source-id: 1daa202d9cf48dddaee6d00d84306ef9dbc7e2a9
Summary: pytorch/pytorch#88503 introduces the public class `LRScheduler`; however, `isinstance(self.scheduler, torch.optim.lr_scheduler._LRScheduler)` no longer works for schedulers that inherit from the public version, because of https://github.com/pytorch/pytorch/blob/1ea11ecb2eea99eb552603b7cf5fbfc59659832d/torch/optim/lr_scheduler.py#L166-L169.

It's a bit tricky to keep this backward compatible for torch versions <= 1.13. V1 of this diff uses a try/except block to import `LRScheduler` and make it available in `detectron2.solver`, so that the whole of D2 (facebookresearch@11528ce) uses this version of `LRScheduler`. There are two drawbacks though:
- it adds a little mental burden to figure out what D2's (facebookresearch@11528ce083dc9ff83ee3a8f9086a1ef54d2a402f) `LRScheduler` is; previously it was clear that `LRScheduler`/`_LRScheduler` came from `torch`.
- it has a name collision with `hooks.LRScheduler`; e.g. in `hooks.py` I have to write `LRScheduler as _LRScheduler`.

But I couldn't find a better solution; maybe use a try/except block in every file?

Differential Revision: D42111273
fbshipit-source-id: 613cd35a1e0bdb9b8edd852042df047aba021d87
Contributor
This pull request was exported from Phabricator. Differential Revision: D42111273
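A minimal sketch of the guarded import described in the summary (the re-export location in `detectron2.solver` is the diff's approach; the final `except` branch is purely a demo fallback so the sketch is self-contained even without torch installed):

```python
# Guarded import: prefer the public class, fall back to the old private
# name on torch <= 1.13.
try:
    from torch.optim.lr_scheduler import LRScheduler            # newer torch
except ImportError:
    try:
        # older torch only guarantees the underscored class
        from torch.optim.lr_scheduler import _LRScheduler as LRScheduler
    except ImportError:
        # demo-only stand-in for environments without torch at all
        class LRScheduler:
            pass
```

With this in one place (e.g. `detectron2.solver`), the rest of the codebase can import `LRScheduler` from there instead of guessing which torch version is installed.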
ppwwyyxx
reviewed
Dec 16, 2022
Comment on lines 135 to 137
    # MultiStepLR with WarmupLR but the current LRScheduler design doesn't allow it.

    class WarmupMultiStepLR(torch.optim.lr_scheduler._LRScheduler):
Contributor
Now it should be fine to remove everything below this line. It looks like they are not used anymore.
Contributor
Author
@ppwwyyxx There are actually many usages internally (including `_get_warmup_factor_at_iter`), so it's a bit hard to remove them.
Contributor
|
This pull request has been merged in 2108097.
wat3rBro pushed a commit to wat3rBro/d2go that referenced this pull request on Jan 4, 2023
Summary: Pull Request resolved: facebookresearch#453

Previous diffs updated `LRScheduler` to the public version (e.g. facebookresearch/detectron2#4709), which also requires a newer version of pytorch-lightning. This diff upgrades the lightning version to 1.8.6 and fixes some call sites deprecated by old lightning versions:
- `deepcopy` seems to be supported now, so `_deepcopy` is removed (it is now not allowed to access the `trainer` attribute when it is `None`)
- `dataloader_idx` is removed from `on_train_batch_start`
- stop using `_accelerator_connector` (the AcceleratorConnector doesn't have those attributes anymore)
- deprecated `on_pretrain_routine_end` -> `on_fit_start`

Differential Revision: D42319019
fbshipit-source-id: 3e5da0601e5f3a3fb482c84bb34bdc9f67b0ddfc
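The hook changes above can be sketched as a plain class (the hook names `on_train_batch_start` and `on_fit_start` are real pytorch-lightning callback hooks; `MyCallback` and its bodies are hypothetical, and no lightning import is needed for this illustration):

```python
# Sketch of a callback updated for lightning 1.8 (hypothetical class).
class MyCallback:
    # Before 1.8 the signature carried a trailing dataloader_idx argument:
    #   def on_train_batch_start(self, trainer, pl_module, batch,
    #                            batch_idx, dataloader_idx): ...
    def on_train_batch_start(self, trainer, pl_module, batch, batch_idx):
        pass  # dataloader_idx is gone from the new signature

    # The deprecated on_pretrain_routine_end hook's setup work moves here:
    def on_fit_start(self, trainer, pl_module):
        pass
```

The migration is mostly mechanical: delete the removed argument and rename the deprecated hook.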
facebook-github-bot pushed a commit to facebookresearch/d2go that referenced this pull request on Jan 4, 2023
Summary: Pull Request resolved: #453

Previous diffs updated `LRScheduler` to the public version (e.g. facebookresearch/detectron2#4709), which also requires a newer version of pytorch-lightning. This diff upgrades the lightning version to 1.8.6 and fixes some call sites deprecated by old lightning versions:
- `deepcopy` seems to be supported now, so `_deepcopy` is removed (it is now not allowed to access the `trainer` attribute when it is `None`)
- `dataloader_idx` is removed from `on_train_batch_start`
- stop using `_accelerator_connector` (the AcceleratorConnector doesn't have those attributes anymore)
- deprecated `on_pretrain_routine_end` -> `on_fit_start`

Reviewed By: YanjunChen329

Differential Revision: D42319019
fbshipit-source-id: ba46abbd98da96783e15d187a361fda47dc7d4d6