Problem with constraints
• Penalty method (exterior penalty)
• Augmented Lagrangian method
Penalty method
Penalty formulation
$$\min_x f_P(x, c) = \min_x \left\{ f(x) + \frac{c}{2}\,[h(x)]^2 + \frac{c}{2}\,\max[0,\, g(x)]^2 \right\}$$
• c is a penalty parameter
• equality and inequality constraints can have different penalty parameters
Gradient
$$\nabla_x f_P(x, c) = \nabla f(x) + c\, h(x)\, \nabla h(x) + c\, \max[0,\, g(x)]\, \nabla g(x)$$
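As a concrete illustration, the snippet below sketches the penalized objective and its gradient in Python for a hypothetical toy problem with one equality and one inequality constraint; the particular f, h and g (and all variable names) are illustrative assumptions, not taken from the text.

```python
# A minimal sketch of the penalized objective f_P and its gradient for a single
# equality constraint h(x) = 0 and a single inequality constraint g(x) <= 0.
# The toy problem (f, h, g) is a hypothetical example, not taken from the text.
import numpy as np

def f(x):      return x[0]**2 + 2.0 * x[1]**2        # objective
def grad_f(x): return np.array([2.0 * x[0], 4.0 * x[1]])

def h(x):      return x[0] + x[1] - 1.0              # equality: h(x) = 0
def grad_h(x): return np.array([1.0, 1.0])

def g(x):      return -x[0]                          # inequality: g(x) <= 0
def grad_g(x): return np.array([-1.0, 0.0])

def f_P(x, c):
    """f(x) + c/2 [h(x)]^2 + c/2 max[0, g(x)]^2"""
    return f(x) + 0.5 * c * h(x)**2 + 0.5 * c * max(0.0, g(x))**2

def grad_f_P(x, c):
    """grad f + c h grad h + c max[0, g] grad g"""
    return grad_f(x) + c * h(x) * grad_h(x) + c * max(0.0, g(x)) * grad_g(x)
```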
Penalty algorithm
1) set $k = 1$, choose initial guess $x_1^{ini}$, penalty factor $c_1$, $\beta > 1$ and tolerance $\varepsilon$
2) at iteration $k$, solve the penalized problem $k$,
   $\min_x f_P(x, c_k)$ (e.g., with CG or DFP) to obtain solution $x_k^*$
3) check convergence, that is, if $\| x_k^* - x_{k-1}^* \| < \varepsilon$ then STOP,
   otherwise update the penalty factor $c_{k+1} = \beta c_k$
4) set as initial guess of the penalized problem $k+1$ the solution of
   the penalized problem $k$, $x_{k+1}^{ini} = x_k^*$
5) set $k = k + 1$ and go to 2)
Note: In practice, the penalized problem in step 2 is often not solved to full accuracy in the intermediate iterations; it is replaced by a limited number of iterations of the inner optimizer.
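A minimal sketch of the outer penalty loop is given below, reusing f_P and grad_f_P (and the toy problem) from the previous snippet; the inner problem is solved with SciPy's CG, one of the options named in step 2, and the values of c1, beta and eps are illustrative choices, not prescribed by the algorithm.

```python
# A sketch of the penalty algorithm above, reusing f_P and grad_f_P from the
# previous snippet. c1, beta and eps are illustrative choices.
import numpy as np
from scipy.optimize import minimize

def penalty_method(x_ini, c1=1.0, beta=10.0, eps=1e-6, max_outer=30):
    x_prev = np.asarray(x_ini, dtype=float)
    c = c1
    for k in range(max_outer):
        # step 2: solve the penalized problem k, warm-started at x_prev
        x_star = minimize(f_P, x_prev, args=(c,), jac=grad_f_P, method="CG").x
        # step 3: stop when successive solutions no longer move
        if k > 0 and np.linalg.norm(x_star - x_prev) < eps:
            return x_star
        # steps 3-5: increase the penalty factor and warm-start problem k+1
        x_prev, c = x_star, beta * c
    return x_prev

print(penalty_method([0.0, 0.0]))  # tends to the constrained minimizer as c grows
```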
Augmented Lagrangian
Augmented Lagrangian formulation
$$\phi(\lambda, \mu) = \min_x f_{ALAG}(x, \lambda, \mu) = \min_x \left\{ f(x) + \lambda\, h(x) + \frac{c}{2}\,[h(x)]^2 + \frac{c}{2}\,\max\!\left[0,\, \frac{\mu}{c} + g(x)\right]^2 - \frac{\mu^2}{2c} \right\}$$

or

$$\phi(\lambda, \mu) = \min_x f_{ALAG}(x, \lambda, \mu) = \min_x \left\{ f(x) + \lambda\, h(x) + \frac{c}{2}\,[h(x)]^2 + \frac{1}{2c}\left( \max[0,\, \mu + c\, g(x)]^2 - \mu^2 \right) \right\}$$
Gradient
$$\nabla_x f_{ALAG}(x, \lambda, \mu) = \nabla f(x) + \left[\lambda + c\, h(x)\right] \nabla h(x) + \max[0,\, \mu + c\, g(x)]\, \nabla g(x)$$
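The second form of f_ALAG and its gradient translate directly to code; the sketch below reuses the toy f, h, g and their gradients from the penalty snippet above.

```python
# A minimal sketch of the second form of f_ALAG and its gradient, reusing the
# toy problem (f, h, g and gradients) from the penalty snippet.
def f_ALAG(x, lam, mu, c):
    """f + lam h + c/2 [h]^2 + 1/(2c) { max[0, mu + c g]^2 - mu^2 }"""
    return (f(x) + lam * h(x) + 0.5 * c * h(x)**2
            + (max(0.0, mu + c * g(x))**2 - mu**2) / (2.0 * c))

def grad_f_ALAG(x, lam, mu, c):
    """grad f + [lam + c h] grad h + max[0, mu + c g] grad g"""
    return (grad_f(x) + (lam + c * h(x)) * grad_h(x)
            + max(0.0, mu + c * g(x)) * grad_g(x))
```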
Augmented Lagrangian algorithm
1) set $k = 1$, choose initial guess $x_1^{ini}$, initial Lagrange multipliers $\lambda_1$, $\mu_1$,
   penalty factor $c$ and tolerance $\varepsilon$
2) at iteration $k$, with $\lambda_k$, $\mu_k$ fixed, solve the problem for $x$ (inner min),
   $\min_x f_{ALAG}(x, \lambda_k, \mu_k)$ (e.g., with CG or DFP) to obtain solution $x_k^*$
3) at iteration $k$, with $x$ fixed, update $\lambda_{k+1} = \lambda_k + c\, h(x_k^*)$, $\mu_{k+1} = \max[0,\, \mu_k + c\, g(x_k^*)]$
4) check convergence, that is, if $|\lambda_{k+1} - \lambda_k| < \varepsilon$ and/or $|\mu_{k+1} - \mu_k| < \varepsilon$ then STOP
5) set $k = k + 1$ and go to 2)
Note: In practice, the inner minimization in step 2 is often not solved to full accuracy in the intermediate iterations; it is replaced by a limited number of iterations of the inner optimizer. In the limit, if the minimization in step 2 is reduced to a single iteration, each outer iteration k amounts to one update of the variable x followed by one update of the multipliers λ and µ.
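Putting the steps together, the sketch below shows the outer augmented Lagrangian loop, reusing f_ALAG, grad_f_ALAG and the toy problem from the previous snippets; the initial multipliers and the values of c and eps are illustrative assumptions.

```python
# A sketch of the augmented Lagrangian algorithm above, reusing f_ALAG and
# grad_f_ALAG. Initial multipliers, c and eps are illustrative choices.
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(x_ini, lam=0.0, mu=0.0, c=10.0, eps=1e-6, max_outer=30):
    x = np.asarray(x_ini, dtype=float)
    for k in range(max_outer):
        # step 2: inner minimization in x with the multipliers held fixed
        x = minimize(f_ALAG, x, args=(lam, mu, c), jac=grad_f_ALAG, method="CG").x
        # step 3: first-order multiplier updates
        lam_new = lam + c * h(x)
        mu_new = max(0.0, mu + c * g(x))
        # step 4: stop once the multipliers have settled
        if abs(lam_new - lam) < eps and abs(mu_new - mu) < eps:
            return x, lam_new, mu_new
        lam, mu = lam_new, mu_new
    return x, lam, mu

x_opt, lam_opt, mu_opt = augmented_lagrangian([0.0, 0.0])
print(x_opt, lam_opt, mu_opt)
```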