1014 learning rate finder #1454
Conversation
Signed-off-by: Richard Brown <[email protected]>
@wyli @Nic-Ma @ericspod this part of the PR saves the state of the network and optimiser to disk or memory, so that they can be restored at the end. Does this functionality live somewhere else in MONAI?
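For reference, the save/restore pattern being discussed can be sketched roughly as below. This is a stdlib-only illustration under assumptions: the StateCacher name and its interface are hypothetical, and the real code would presumably snapshot the network's and optimiser's state_dicts rather than arbitrary Python objects.

```python
import copy
import os
import pickle
import tempfile


class StateCacher:
    """Illustrative sketch: cache state in memory or on disk so it can be
    restored later (e.g. restoring a model/optimiser after an LR sweep).
    Hypothetical class, not the PR's actual implementation."""

    def __init__(self, in_memory=True, cache_dir=None):
        self.in_memory = in_memory
        self.cache_dir = cache_dir or tempfile.gettempdir()
        self._cache = {}

    def store(self, key, state):
        if self.in_memory:
            # Deep-copy so later mutations of `state` don't corrupt the snapshot.
            self._cache[key] = copy.deepcopy(state)
        else:
            path = os.path.join(self.cache_dir, "state_%s_%d.pkl" % (key, id(self)))
            with open(path, "wb") as f:
                pickle.dump(state, f)
            self._cache[key] = path

    def retrieve(self, key):
        if self.in_memory:
            return self._cache[key]
        with open(self._cache[key], "rb") as f:
            return pickle.load(f)
```

The in-memory path trades RAM for speed; the on-disk path is useful when the model is too large to hold two copies in memory.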
We had that kind of functionality as part of an Ignite handler for saving checkpoints, but that's not quite how you're doing things here. |
Thanks @rijobro, this is very useful!
Perhaps the way the training loss is computed/accumulated (self._train_batch) should be decoupled from the lr_finder. Intuitively, the LR finder should work fine as long as the user provides a black box that takes an lr/step as input and returns a total_loss (e.g. self._train_batch and self._validate). Do you want to refactor this PR to decouple those? Otherwise we could file a ticket and have another iteration.
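The decoupling being suggested might look something like the sketch below, where the finder only ever calls a user-supplied black box. The names lr_range_test and loss_fn are hypothetical, not the PR's actual API:

```python
def lr_range_test(loss_fn, lrs):
    """Sweep learning rates and record the loss for each.

    `loss_fn(lr)` is any user-supplied callable that sets the learning
    rate, runs one training step (and optionally validation), and returns
    a scalar loss. The finder never touches model or data internals.
    """
    history = {"lr": [], "loss": []}
    for lr in lrs:
        history["lr"].append(lr)
        history["loss"].append(loss_fn(lr))
    return history
```

With this shape, a user training a plain classifier, a segmentation model, or even a GAN can plug in whatever single-step logic they like, as long as it returns one number.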
Please also see some minor suggestions inline; they are mostly optional.
@Can-Zhao would be great to have your comments as well!
@wyli thanks, I'll get to it!
@wyli this is ready if you want to review again, thanks!
wyli left a comment
Thanks, it looks good, apart from some minor warnings about inheriting from (object): https://deepsource.io/gh/Project-MONAI/MONAI/run/a53a8313-ab56-4aea-8a8b-e6fd4941f978/python/PYL-R0205. This new feature needs another iteration to decouple the actual training/validation logic (_train_batch and _validate) from the LearningRateFinder class.
I am having trouble using this class for GANs. Is there any way to do something similar to fastai's GANLearner?
Fixes #1014.
Description
Implements calculation of an optimal learning rate, based on https://github.com/davidtvs/pytorch-lr-finder.
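As a rough sketch of the underlying range-test idea (hypothetical helper names; the referenced library's actual API differs): learning rates are swept exponentially between two bounds while the loss is recorded, and one common heuristic suggests the rate where the loss falls fastest on a log scale.

```python
import math


def exponential_lrs(start_lr, end_lr, num_iter):
    """Learning rates spaced exponentially (evenly in log space)
    between start_lr and end_lr, as in the classic LR range test."""
    ratio = end_lr / start_lr
    return [start_lr * ratio ** (i / (num_iter - 1)) for i in range(num_iter)]


def steepest_lr(lrs, losses):
    """Suggest the lr where the loss decreases fastest, i.e. the most
    negative gradient of loss w.r.t. log(lr). One common heuristic,
    not the only possible choice."""
    best_i, best_slope = 0, float("inf")
    for i in range(1, len(lrs)):
        slope = (losses[i] - losses[i - 1]) / (math.log(lrs[i]) - math.log(lrs[i - 1]))
        if slope < best_slope:
            best_slope, best_i = slope, i
    return lrs[best_i]
```

In practice the recorded losses are usually smoothed (e.g. with an exponential moving average) before applying a heuristic like this, since per-batch losses are noisy.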
Status
Ready
Types of changes
- Integration tests passed locally by running ./runtests.sh --codeformat --coverage.
- Quick tests passed locally by running ./runtests.sh --quick.
- Documentation updated, tested with the make html command in the docs/ folder.