Closed
Labels
module: optimizer — Related to torch.optim
triaged — This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
Description
🐛 Bug
ReduceLROnPlateau fails when a new parameter group is added to the optimizer: the _reduce_lr function raises a list index out of range error.
To Reproduce
Steps to reproduce the behavior:
- Initialize the optimizer:
  optimizer = Adam(filter(lambda p: p.requires_grad, model.parameters()), args.lr)
- Initialize the scheduler:
  scheduler = ReduceLROnPlateau(optimizer, patience=1, factor=0.1, verbose=True, mode='max')
- Add a parameter group to the optimizer:
  optimizer.add_param_group({'params': unfreezed_params, 'lr': lr})
- An error is raised when the learning rate should be changed.
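The steps above can be condensed into a self-contained script. This is a minimal reproduction sketch, not the reporter's code: the two-layer model, the frozen/unfrozen split, and the concrete learning rates and metric values are all illustrative assumptions.

```python
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau

# Illustrative model: start with the second layer frozen.
model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.Linear(4, 2))
for p in model[1].parameters():
    p.requires_grad = False

optimizer = Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=0.01)
scheduler = ReduceLROnPlateau(optimizer, patience=1, factor=0.1, mode='max')

# min_lrs is sized for the param groups present at construction time,
# so it has one entry here ...
print(len(scheduler.min_lrs))

# ... and adding a group later leaves it out of sync with param_groups.
unfrozen_params = list(model[1].parameters())
for p in unfrozen_params:
    p.requires_grad = True
optimizer.add_param_group({'params': unfrozen_params, 'lr': 0.001})

# Feed a non-improving metric until the scheduler tries to reduce the LR:
# _reduce_lr indexes self.min_lrs[i] for every param group and hits
# IndexError: list index out of range on the new group.
try:
    for metric in [1.0, 0.9, 0.8]:
        scheduler.step(metric)
except IndexError as e:
    print('IndexError:', e)
```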
Expected behavior
When a parameter group is added to the optimizer, the scheduler's min_lrs attribute should be updated to avoid this error.
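Until the scheduler handles this itself, a hedged workaround is to extend min_lrs by hand right after calling add_param_group. The sketch below assumes the default min_lr of 0 for the new group (append whatever min_lr you actually want instead); the model and hyperparameters are illustrative.

```python
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau

model = torch.nn.Linear(4, 2)
extra = torch.nn.Linear(2, 1)

optimizer = Adam(model.parameters(), lr=0.01)
scheduler = ReduceLROnPlateau(optimizer, patience=1, factor=0.1, mode='max')

optimizer.add_param_group({'params': extra.parameters(), 'lr': 0.001})
# Workaround: keep one min_lrs entry per param group (default min_lr is 0).
scheduler.min_lrs.append(0)

# The scheduler can now reduce both groups without an IndexError.
for metric in [1.0, 0.9, 0.8]:
    scheduler.step(metric)
print([g['lr'] for g in optimizer.param_groups])
```

Both learning rates end up reduced by the factor of 0.1 once the metric stops improving past the patience window.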
Environment
- PyTorch version: 1.1.0