Introduction to
Optimal Control
Yifan Hou
11/14/2018
• Rocket Landing (height h, mass m, thrust f, gravity g)
• Thrust ~ fuel consumption
• Dynamics: $\ddot{h} = \frac{f}{m} - g$, $\dot{m} = -k f$ (a simulation sketch follows these examples)
• Find control f to minimize fuel consumption
• Dynamic walking
• Stabilize the inverted pendulum
• Find COM trajectory to move to the goal as fast as possible, while remaining stable
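A minimal simulation sketch of the rocket-landing dynamics above, assuming made-up values for the initial mass, the fuel-rate constant k, the thrust profile, and the helper name simulate_rocket; it only shows how thrust relates to fuel use, not how to choose the optimal f.

import numpy as np

def simulate_rocket(thrust, h0=100.0, v0=0.0, m0=10.0, k=0.001, g=9.81,
                    dt=0.01, t_end=20.0):
    """Forward-Euler rollout of hddot = f/m - g and mdot = -k*f."""
    h, v, m, fuel = h0, v0, m0, 0.0
    for t in np.arange(0.0, t_end, dt):
        f = thrust(t)
        h += v * dt
        v += (f / m - g) * dt
        m -= k * f * dt           # burning fuel reduces the mass
        fuel += k * f * dt        # total fuel consumed so far
        if h <= 0.0:              # reached the ground
            break
    return h, v, fuel

# constant thrust below hover, so the rocket descends and eventually lands
h, v, fuel = simulate_rocket(lambda t: 80.0)
print(f"h = {h:.1f} m, v = {v:.1f} m/s, fuel = {fuel:.3f} kg")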
The Optimal Control Problem
• System dynamics: $\dot{x} = f(x, u, t)$
• Cost function: $J = h(x(t_f), t_f) + \int_{t_0}^{t_f} g(x(t), u(t), t)\, dt$ (see the sketch after this list)
• Boundary conditions (optional)
Advantages: cost design is intuitive!
Disadvantages: (usually) no closed-form solution
• Special case: LQR
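A hedged sketch of how these ingredients look in code: roll the dynamics forward under one candidate control and accumulate the cost. The double-integrator dynamics, the weights, and the helper name evaluate_cost are placeholders of my own, not from the slides; without a closed-form solution, an optimizer would have to search over candidate controls using exactly this kind of evaluation.

import numpy as np

def evaluate_cost(dynamics, running_cost, terminal_cost, policy,
                  x0, t0=0.0, tf=5.0, dt=0.01):
    """Roll out xdot = f(x, u, t) under a candidate policy and accumulate
    J = h(x(tf)) + integral of g(x, u, t) dt (forward-Euler quadrature)."""
    x, J = np.asarray(x0, dtype=float), 0.0
    for t in np.arange(t0, tf, dt):
        u = policy(x, t)
        J += running_cost(x, u, t) * dt
        x = x + dynamics(x, u, t) * dt
    return J + terminal_cost(x)

# placeholder problem: double integrator, quadratic running cost,
# terminal cost on the distance to the origin
f = lambda x, u, t: np.array([x[1], u])
g = lambda x, u, t: x[0]**2 + 0.1 * u**2
h = lambda x: 10.0 * (x[0]**2 + x[1]**2)
print(evaluate_cost(f, g, h, policy=lambda x, t: -x[0] - x[1], x0=[1.0, 0.0]))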
Linear-Quadratic Problem (Basic form)
• System dynamics: $\dot{x} = A(t)\, x + B(t)\, u$
• Cost function: $J = \frac{1}{2} x(t_f)^T F\, x(t_f) + \frac{1}{2} \int_{t_0}^{t_f} \left( x^T Q\, x + u^T R\, u \right) dt$
• Boundary conditions: $x(t_0) = x_0$, final time $t_f$ fixed
• F and Q are positive semi-definite
• R is positive-definite
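As a concrete instance (my own choice, not from the slides), the sketches below use a double-integrator cart with the following assumed weights.

import numpy as np

# double integrator: state x = [position, velocity], control u = acceleration
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

Q = np.diag([1.0, 0.1])    # penalize position error more than velocity
R = np.array([[0.01]])     # control is cheap
F = np.diag([1.0, 1.0])    # terminal penalty on the final state

# verify the definiteness assumptions: Q, F >= 0 and R > 0
print(np.all(np.linalg.eigvalsh(Q) >= 0),
      np.all(np.linalg.eigvalsh(F) >= 0),
      np.all(np.linalg.eigvalsh(R) > 0))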
Solution: finite horizon, TV-LQR
• The necessary and sufficient condition for optimal control is
  $u^*(t) = -R^{-1} B^T P(t)\, x(t)$,
  where $P(t)$ is the solution to the following Riccati Equation:
  $-\dot{P} = A^T P + P A - P B R^{-1} B^T P + Q$,
  subject to the boundary condition $P(t_f) = F$.
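A minimal sketch of this finite-horizon solution: integrate the Riccati differential equation backward in time from P(t_f) = F with explicit Euler steps and store the time-varying gain K(t). The matrices, horizon, step size, and the helper name tvlqr are assumptions of mine; a careful implementation would use a proper ODE integrator.

import numpy as np

def tvlqr(A, B, Q, R, F, tf=5.0, dt=0.001):
    """Sweep -Pdot = A'P + PA - PBR^{-1}B'P + Q backward from P(tf) = F.
    Returns the gains K(t) on a time grid, with u*(t) = -K(t) x(t)."""
    Rinv = np.linalg.inv(R)
    P = F.copy()
    gains = []
    for _ in range(int(round(tf / dt))):
        gains.append(Rinv @ B.T @ P)
        Pdot = -(A.T @ P + P @ A - P @ B @ Rinv @ B.T @ P + Q)
        P = P - Pdot * dt          # one Euler step backward in time
    gains.reverse()                # gains[0] is now the gain near t = 0
    return gains

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q, R, F = np.diag([1.0, 0.1]), np.array([[0.01]]), np.diag([1.0, 1.0])
print("gain at t = 0:", tvlqr(A, B, Q, R, F)[0])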
Extension I: infinite horizon
• System dynamics: $\dot{x} = A x + B u$ (time-invariant)
• Cost function: $J = \frac{1}{2} \int_{0}^{\infty} \left( x^T Q\, x + u^T R\, u \right) dt$
• Boundary conditions: $x(0) = x_0$ (no terminal cost)
• Q is positive semi-definite
• R is positive-definite
If the system $(A, B)$ is controllable, then a solution exists for the optimal control problem.
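The controllability condition can be checked numerically. A small sketch with the assumed double-integrator matrices and a helper name of my own, is_controllable:

import numpy as np

def is_controllable(A, B):
    """Rank test on the controllability matrix [B, AB, ..., A^(n-1)B]."""
    n = A.shape[0]
    ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
    return np.linalg.matrix_rank(ctrb) == n

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
print(is_controllable(A, B))   # True: the double integrator is controllable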
Solution: infinite horizon TI-LQR
• The necessary and sufficient condition for optimal control is
  $u^*(t) = -R^{-1} B^T P\, x(t)$,
  where the constant matrix $P$ is the solution to the following Algebraic Riccati Equation:
  $A^T P + P A - P B R^{-1} B^T P + Q = 0$
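A sketch of computing the infinite-horizon gain with SciPy's continuous-time ARE solver on the assumed double-integrator problem; scipy.linalg.solve_continuous_are solves the equation above for P.

import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([1.0, 0.1])
R = np.array([[0.01]])

P = solve_continuous_are(A, B, Q, R)   # algebraic Riccati equation
K = np.linalg.inv(R) @ B.T @ P         # u*(t) = -K x(t)
print("K =", K)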
Stability of the closed-loop system
• Optimality does not imply stability!
• Theorem: Decompose $Q = C^T C$. If $(A, C)$ is observable, then the closed-loop system under the optimal control law $u^* = -R^{-1} B^T P\, x$ is asymptotically stable.
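A sketch of checking the theorem's conditions for the same assumed problem: factor Q = C^T C, test observability of (A, C), and confirm that the closed-loop matrix A - BK has eigenvalues with negative real parts.

import numpy as np
from scipy.linalg import solve_continuous_are, cholesky

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([1.0, 0.1])
R = np.array([[0.01]])

# Q is positive definite here, so a Cholesky factor gives Q = C^T C;
# for a semi-definite Q any factor with Q = C^T C works
C = cholesky(Q)
n = A.shape[0]
obsv = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])
print("observable:", np.linalg.matrix_rank(obsv) == n)

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))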
Extension II: Affine system
• System dynamics: $\dot{x} = A x + B u + c$
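The slide does not spell out the solution; one common way (an assumption here, not necessarily the author's) to reuse LQR for the affine case is to pick an equilibrium $(x_e, u_e)$ with $A x_e + B u_e + c = 0$ and regulate the deviation from it. A sketch with an assumed gravity-loaded double integrator:

import numpy as np
from scipy.linalg import solve_continuous_are

# double integrator with gravity as the constant offset c
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
c = np.array([0.0, -9.81])

# equilibrium (x_e, u_e) satisfying A x_e + B u_e + c = 0
x_e = np.array([0.0, 0.0])   # hover at the origin with zero velocity
u_e = np.array([9.81])       # feedforward input cancelling gravity

Q, R = np.diag([1.0, 0.1]), np.array([[0.01]])
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P

def control(x):
    # LQR feedback on the deviation, plus the offset-cancelling feedforward
    return u_e - K @ (x - x_e)

print(control(np.array([1.0, 0.0])))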
Extension III: Tracking Problem
• Goal trajectory: $x^*(t)$, $u^*(t)$.
• Cost function: denote $\bar{x} = x - x^*$, $\bar{u} = u - u^*$; penalize the deviations quadratically, e.g. $J = \frac{1}{2} \int \left( \bar{x}^T Q\, \bar{x} + \bar{u}^T R\, \bar{u} \right) dt$.
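A sketch of the resulting tracking controller under the assumption that the deviation is regulated with an LQR gain, i.e. u(t) = u*(t) - K (x(t) - x*(t)); the constant-velocity reference and the matrices are placeholders of mine.

import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.diag([1.0, 0.1]), np.array([[0.01]])
K = np.linalg.inv(R) @ B.T @ solve_continuous_are(A, B, Q, R)

# placeholder reference: a constant-velocity ramp in position
x_ref = lambda t: np.array([0.5 * t, 0.5])
u_ref = lambda t: np.array([0.0])     # the ramp needs no acceleration

def tracking_control(x, t):
    # feedforward u* plus feedback on the deviation x - x*
    return u_ref(t) - K @ (x - x_ref(t))

print(tracking_control(np.array([0.0, 0.0]), t=1.0))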