Lecture 3
The following theorem is similar to Theorem 3 given in Lecture 2, with the difference that it gives conditions for uniform stability, rather than uniform asymptotic stability, for non-autonomous systems.

Theorem 1 (Uniform stability for non-autonomous systems). Let x = 0 be an equilibrium point of a system described by dx/dt = f(x, t), and let U ⊂ Rⁿ be a domain containing it. Let V : U × [0, ∞) → R be a continuously differentiable function that satisfies

    W1(x) ≤ V(x, t) ≤ W2(x)
    dV/dt = ∂V/∂t + (∂V/∂x) f(x, t) ≤ 0

for all t ≥ 0 and all x in U, where W1(x) and W2(x) are continuous positive definite functions on U. Then x = 0 is uniformly stable, and V is called a Lyapunov function.

Proof. The proof of this theorem can be found in Khalil (2003). This theorem will be used later in this lecture.
MODEL REFERENCE ADAPTIVE CONTROL
[Block diagram of an MRAS: the command signal uc drives both the Model (model output ym) and the inner loop of Controller and Process (control signal u, process output y); the Adjustment mechanism compares ym and y and updates the controller parameters.]
This is an adaptive control technique where the performance specifications are given in terms of a model. The model represents the ideal response of the process to a command signal. The controller has two loops:

The inner loop, which is an ordinary feedback loop consisting of the process and the controller.
The outer loop, which adjusts the controller parameters in such a way that the error e = y - ym is small (not a trivial task).
Based on ideas by Whitaker (1958), from MIT.
Approaches: Gradient approach (MIT Rule) and Lyapunov direct approach.
THE MIT RULE
Tracking error: e = y - ym. Introduce the cost function

    J(θ) = (1/2) e²

where θ is a vector of controller parameters. Change the parameters in the direction of the negative gradient of J:

    dθ/dt = -γ ∂J/∂θ = -γ e ∂e/∂θ

where ∂e/∂θ is called the sensitivity derivative. It indicates how the error is influenced by the adjustable parameters θ.
Example 3.1: Adjustment of a feedforward gain
Process: y = k G(p) u, where G(p) is known, p = d/dt is the differentiation operator, and k is an unknown gain.
Desired response: ym = k0 G(p) uc, where k0 is a given constant.
Controller: u = θ uc

where u is the control signal and uc is the command signal.
Error:

    e = y - ym = k G(p)(θ uc) - k0 G(p) uc

Sensitivity derivative:

    ∂e/∂θ = k G(p) uc = (k/k0) ym

MIT rule:

    dθ/dt = -γ' e ∂e/∂θ = -γ' (k/k0) ym e = -γ ym e

where γ = γ' k/k0 has been introduced instead of γ'. Note that to have the correct sign of γ it is necessary to know the sign of k.
Block diagram: the command signal uc feeds the model k0 G(s) (output ym) and, through the adjustable gain θ, the process k G(s) (output y); the error e = y - ym is multiplied by ym and integrated through the block -γ/s, implementing

    dθ/dt = -γ ym e
Simulation results for

    G(s) = 1 / (s + 1)

Input uc: sinusoid with frequency 1 rad/s. Parameter values: k = 1, k0 = 2.
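The setup above can be reproduced with a short Euler-integration sketch. The adaptation gain γ = 1 is an assumption (the notes do not fix it); everything else follows the example: G(s) = 1/(s+1), k = 1, k0 = 2, uc = sin(t).

```python
import math

# MIT-rule adaptation of a feedforward gain (Example 3.1):
# process y = k G(p) u, model ym = k0 G(p) uc, controller u = theta*uc.
k, k0, gamma = 1.0, 2.0, 1.0      # gamma = 1 is an assumed adaptation gain
dt, T = 0.001, 100.0
y = ym = theta = 0.0
t = 0.0
while t < T:
    uc = math.sin(t)                 # command signal, 1 rad/s sinusoid
    u = theta * uc                   # adjustable feedforward controller
    e = y - ym                       # tracking error
    y += dt * (-y + k * u)           # process: dy/dt  = -y  + k*u
    ym += dt * (-ym + k0 * uc)       # model:   dym/dt = -ym + k0*uc
    theta += dt * (-gamma * ym * e)  # MIT rule: dtheta/dt = -gamma*ym*e
    t += dt
print(theta)   # theta should approach the ideal feedforward gain k0/k = 2
```

With these values the adaptation is well behaved and θ settles near k0/k; the rate of approach scales with γ and with the signal level, which is exactly the sensitivity explored in Example 3.3 below.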
Example 3.2: MRAS of a first order system
Process:

    dy/dt = -a y + b u

Model:

    dym/dt = -am ym + bm uc

Controller:

    u = θ1 uc - θ2 y

Closed-loop system:

    dy/dt = -a y + b u = -a y + b(θ1 uc - θ2 y) = -(a + b θ2) y + b θ1 uc

Ideal controller parameters for perfect model-following:

    θ1⁰ = bm / b,   θ2⁰ = (am - a) / b
Derivation of the adaptive law. Error: e = y - ym, where

    y = (b θ1 / (p + a + b θ2)) uc

and p = d/dt is the differentiation operator. Sensitivity derivatives:

    ∂e/∂θ1 = (b / (p + a + b θ2)) uc
    ∂e/∂θ2 = -(b² θ1 / (p + a + b θ2)²) uc = -(b / (p + a + b θ2)) y
Approximate p + a + b θ2 ≈ p + am; then the MIT rule dθ/dt = -γ' e ∂e/∂θ may be written as follows:

    dθ1/dt = -γ' (b / (p + am)) uc e = -γ ((am / (p + am)) uc) e
    dθ2/dt = γ' (b / (p + am)) y e = γ ((am / (p + am)) y) e

where γ = γ' b / am.
Block diagram: the model Gm(s) produces ym; the process G(s), driven by u = θ1 uc - θ2 y, produces y; the filtered signals (am/(s + am)) uc and (am/(s + am)) y are multiplied by the error e = y - ym and integrated through the blocks -γ/s and γ/s to give θ1 and θ2.
Simulation: a = 1, b = 0.5, am = bm = 2. uc: square wave with period 20 s and values 1 and -1.
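A minimal Euler-integration sketch of this simulation follows. The adaptation gain γ = 1 and the symmetric ±1 square wave are assumptions consistent with the figures in Åström and Wittenmark; the filtered sensitivity signals are realized as two extra first-order states.

```python
# MIT-rule MRAS for the first-order process of Example 3.2.
a, b, am, bm, gamma = 1.0, 0.5, 2.0, 2.0, 1.0   # gamma assumed
dt, T = 0.001, 500.0
y = ym = uf = yf = th1 = th2 = 0.0
errs = []
n = int(T / dt)
for i in range(n):
    t = i * dt
    uc = 1.0 if (t % 20.0) < 10.0 else -1.0      # period-20 square wave
    e = y - ym
    errs.append(abs(e))
    y  += dt * (-a * y + b * (th1 * uc - th2 * y))   # process
    ym += dt * (-am * ym + bm * uc)                  # model
    uf += dt * (-am * uf + am * uc)   # filtered command: (am/(p+am)) uc
    yf += dt * (-am * yf + am * y)    # filtered output:  (am/(p+am)) y
    th1 += dt * (-gamma * uf * e)     # MIT rule with approximated sensitivities
    th2 += dt * ( gamma * yf * e)
m = n // 10
early, late = sum(errs[:m]) / m, sum(errs[-m:]) / m
print(early, late, th1, th2)   # ideal parameters would be 4 and 2
```

The mean tracking error over the final tenth of the run should be much smaller than over the first tenth, while the parameters drift slowly toward (but do not necessarily reach) the ideal values bm/b = 4 and (am - a)/b = 2.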
Example 3.3 Performance for different input signal levels
Consider the problem of adaptation of a feedforward gain considered in Example 3.1, and let the transfer function G be given by:

    G(s) = 1 / (s² + a1 s + a2)

Simulation results are shown for k = a1 = a2 = 1, and for a square wave input signal uc with period 40 s and values (0.1, -0.1), (1, -1) and (3.5, -3.5).
Normalized updating rule

The MIT adjustment rule in its basic form, dθ/dt = -γ e ∂e/∂θ, has the disadvantage that the adjustment rate depends on the magnitude of the command signal. This can be avoided by using the normalized form:

    dθ/dt = -γ (∂e/∂θ) e / (α + (∂e/∂θ)ᵀ (∂e/∂θ))

where α > 0 has been introduced to avoid difficulties when ∂e/∂θ is close to 0.
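For a scalar parameter, one normalized update step can be sketched as follows; the numeric values of γ, α and dt are arbitrary illustrations.

```python
# One Euler step of the normalized MIT rule (scalar parameter):
#   dtheta/dt = -gamma * phi * e / (alpha + phi*phi),  phi = de/dtheta.
# Since |phi|/(alpha + phi^2) <= 1/(2*sqrt(alpha)), the step size is
# bounded regardless of how large the sensitivity signal phi becomes.
def normalized_step(theta, e, phi, gamma=1.0, alpha=0.001, dt=0.01):
    return theta - dt * gamma * phi * e / (alpha + phi * phi)

small = normalized_step(0.0, 1.0, 0.01)   # small sensitivity signal
large = normalized_step(0.0, 1.0, 100.0)  # large sensitivity signal
print(small, large)
```

Note that the unnormalized rule would take a step proportional to phi itself (a step of 1.0 for phi = 100 here), whereas both normalized steps stay below dt·γ/(2√α).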
Lyapunov approach to stable adaptation

In MRAC we want the error e = y - ym to go to zero. We will now attempt to find a Lyapunov function and an adaptation mechanism that achieve this.
Example 3.4: A first order system
Process:

    dy/dt = -a y + b u

Model:

    dym/dt = -am ym + bm uc

where am > 0 and the reference uc is bounded.
Controller:

    u = θ1 uc - θ2 y

Error: e = y - ym, so that

    de/dt = -a y + b(θ1 uc - θ2 y) + am ym - bm uc
          = -(a + b θ2) y + (b θ1 - bm) uc + am ym
          = -am e - (b θ2 + a - am) y + (b θ1 - bm) uc
Note: perfect model-following occurs when the parameters have their ideal values θ1 = bm/b and θ2 = (am - a)/b. Define

    θ̄1 = b θ1 - bm,   θ̄2 = b θ2 + a - am

so that when the parameters have ideal values, θ̄1 = θ̄2 = 0. With that definition, we have

    de/dt = -am e - θ̄2 y + θ̄1 uc
Candidate Lyapunov function:

    V(e, θ1, θ2) = (1/2) (e² + (1/(b γ)) θ̄1² + (1/(b γ)) θ̄2²)
Note: V = 0 when e = 0 and the parameters θ1, θ2 have their ideal values. Derivative of the candidate Lyapunov function:

    dV/dt = e de/dt + (1/(b γ)) θ̄1 dθ̄1/dt + (1/(b γ)) θ̄2 dθ̄2/dt
          = -am e² - θ̄2 y e + θ̄1 uc e + (1/(b γ)) θ̄1 dθ̄1/dt + (1/(b γ)) θ̄2 dθ̄2/dt
          = -am e² + (1/(b γ)) θ̄1 (dθ̄1/dt + b γ uc e) + (1/(b γ)) θ̄2 (dθ̄2/dt - b γ y e)
If we choose the following adaptation laws

    dθ̄1/dt = -γ b uc e
    dθ̄2/dt = γ b y e

then

    dV/dt = -am e²
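The cancellation that gives dV/dt = -am e² can be checked numerically by evaluating the derivative of the Lyapunov candidate at random states; the parameter values below are arbitrary.

```python
import random

# Numerical check of the Lyapunov derivative in Example 3.4: with the
# chosen adaptation laws, dV/dt equals -am*e^2 at every state.
random.seed(0)
am, b, g = 2.0, 0.5, 1.5          # am > 0, b != 0, gamma > 0 (arbitrary)
for _ in range(100):
    e, tb1, tb2, y, uc = (random.uniform(-5, 5) for _ in range(5))
    edot   = -am * e - tb2 * y + tb1 * uc   # error dynamics
    tb1dot = -g * b * uc * e                # adaptation law for theta-bar-1
    tb2dot =  g * b * y * e                 # adaptation law for theta-bar-2
    dVdt = e * edot + (tb1 * tb1dot + tb2 * tb2dot) / (b * g)
    assert abs(dVdt - (-am * e * e)) < 1e-9
print("dV/dt = -am*e^2 verified at 100 random states")
```

The cross terms -θ̄2 y e and +θ̄1 uc e from the error dynamics are cancelled exactly by the parameter-derivative terms, leaving only the negative semi-definite -am e².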
Notice that am > 0. Notice also that the system described by the state equations

    de/dt = -am e - θ̄2 y + θ̄1 uc
    dθ̄1/dt = -γ b uc e
    dθ̄2/dt = γ b y e

has an equilibrium point at x = (e, θ̄1, θ̄2)ᵀ = (0, 0, 0)ᵀ.
Since V(e, θ1, θ2) is positive definite and

    dV/dt = -am e² ≤ 0

then, according to Theorem 1, the equilibrium point x = 0 is uniformly stable. But will e → 0, and will the parameters θ1 and θ2 converge to their ideal values, as t → ∞?

The answer to this question is as follows. Since dV/dt = -am e² is negative semi-definite, V(t) ≤ V(0), and thus e, θ̄1 and θ̄2 must be bounded. Since am > 0 and uc is bounded, ym is bounded, and therefore y = e + ym is bounded as well. From the fact that all these variables are bounded, it is possible to conclude, using some extra mathematical arguments (see Astrom and Wittenmark, 1995), that dV/dt → 0 as t → ∞, which implies that the error e → 0 as t → ∞. But the parameters θ1 and θ2 will not necessarily converge to their ideal values. Notice that the adaptation laws found can be written in terms of the original controller parameters as follows:
    dθ1/dt = -γ uc e,   i.e.  θ1 = -(γ/p) uc e
    dθ2/dt = γ y e,     i.e.  θ2 = (γ/p) y e
Block diagram: the model Gm(s) produces ym; the process G(s), driven by u = θ1 uc - θ2 y, produces y; the error e = y - ym multiplies uc and y directly, and the products are integrated through the blocks -γ/s and γ/s to give θ1 and θ2.
The adjustment rule obtained by Lyapunov theory is simpler than the MIT rule because it does not require filtering of the signals. Arbitrarily large values of the adaptation gain γ can be used with the Lyapunov approach, since it provides guaranteed stability. Simulation results are based on a = 1, b = 0.5, am = 2 and bm = 2. The reference uc is a square wave with values (1, -1).
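The Lyapunov-rule simulation can be sketched the same way as the MIT-rule one, now without sensitivity filters. The gain γ = 1 and the 20 s square-wave period are assumptions (the notes give the amplitude but not the period or gain).

```python
# Euler simulation of the Lyapunov-rule MRAS of Example 3.4.
a, b, am, bm, gamma = 1.0, 0.5, 2.0, 2.0, 1.0   # gamma assumed
dt, T = 0.001, 500.0
y = ym = th1 = th2 = 0.0
errs = []
n = int(T / dt)
for i in range(n):
    t = i * dt
    uc = 1.0 if (t % 20.0) < 10.0 else -1.0      # +/-1 square wave, period assumed
    e = y - ym
    errs.append(abs(e))
    y  += dt * (-a * y + b * (th1 * uc - th2 * y))   # process
    ym += dt * (-am * ym + bm * uc)                  # model
    th1 += dt * (-gamma * uc * e)   # dtheta1/dt = -gamma*uc*e (no filtering)
    th2 += dt * ( gamma * y * e)    # dtheta2/dt =  gamma*y*e
m = n // 10
early, late = sum(errs[:m]) / m, sum(errs[-m:]) / m
print(early, late, th1, th2)   # ideal parameters: bm/b = 4, (am-a)/b = 2
```

In line with the theory above, the tracking error decays toward zero while the parameters stay bounded; they approach, but need not exactly reach, the ideal values 4 and 2.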
Suggested Reading

Åström, K. J. and Wittenmark, B. (1995). Adaptive Control, 2nd Edition, Prentice-Hall, Chapter 3.