
Stability of Dynamical Systems

Introduction
Classical Control
Stability of a system is of paramount importance. In general, an unstable system is both
useless and dangerous. When a system is unstable, state and/or output variables become
unbounded in magnitude over time, at least theoretically. In practice, at some point in time,
electronic amplifiers will saturate and mechanical components will reach their physical limits of
motion. In any event, the system is no longer operating in a well-behaved manner.
In courses in classical control theory, the systems being considered are generally linear and
time-invariant, and stability is generally analyzed in terms of the locations of the poles of a transfer
function, that is, the zeros of the denominator polynomial. In order to be stable, the poles of the
transfer function for a continuous-time system must all lie strictly in the left half of the complex
s-plane. The poles must all have strictly negative real parts. For a discrete-time system to be
stable, the poles of the transfer function must lie in the interior of a circle of unit radius centered
at the origin of the complex z-plane. The magnitude of the poles must all be less than one.
The specific type of stability that is described by these requirements on pole locations is known
as Bounded-Input, Bounded-Output (BIBO) stability. This and other types of stability will be
defined in a later section. For a system to be BIBO stable, any input signal u(t) applied to the
system that is bounded in magnitude (‖u(t)‖ < ∞) must produce an output that also remains
bounded for all time.
To illustrate the effects of pole locations on stability, consider the following three transfer
functions. Only H1(s) is BIBO stable; all of its poles are in the left half of the s-plane. H2(s) has
poles on the jω axis, and H3(s) has a pole in the right half of the s-plane. Therefore, neither of
them is BIBO stable. Figure 1 shows the responses of these three systems to a sinusoidal input
signal u(t) = sin(ωt) whose frequency is ω = √11 rad/sec, the same as the imaginary parts of the
complex poles in H2(s).

    H1(s) = Y1(s)/U(s) = 30 / [(s + 4)(s + 2 + j2)(s + 2 − j2)]                    (1)

    H2(s) = Y2(s)/U(s) = 60 / [(s + 6)(s² + 11)],   H3(s) = Y3(s)/U(s) = 5 / [(s − 1)(s + 2)(s + 3)]   (2)
It is clear from the plots in Fig. 1 that the outputs of transfer functions H2(s) and H3(s) grow
without bound, while the output of H1(s) remains within a finite bound. The output y2(t) grows
linearly with time due to a t sin(ωt) term (the input frequency coincides with the poles of H2(s) on
the jω axis), and y3(t) grows exponentially due to an e^t term.
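As a quick numerical check (a sketch, not part of the original notes, assuming NumPy is available), the pole locations can be computed directly from the factored denominators in Eqns. (1) and (2):

```python
import numpy as np

# Poles of H1(s), H2(s), H3(s), built from the factored denominators
# in Eqns. (1) and (2).  A small tolerance guards against rounding for
# poles that sit exactly on the j-omega axis.
poles = {
    "H1": np.roots(np.poly([-4, -2 + 2j, -2 - 2j])),
    "H2": np.roots(np.poly([-6, 1j * np.sqrt(11), -1j * np.sqrt(11)])),
    "H3": np.roots(np.poly([1, -2, -3])),
}
for name, p in poles.items():
    verdict = "BIBO stable" if np.all(p.real < -1e-9) else "not BIBO stable"
    print(f"{name}: poles = {np.round(p, 3)} -> {verdict}")
```

Only H1(s) passes the strict left-half-plane test; H2(s) fails because of the poles at ±j√11, and H3(s) fails because of the pole at s = 1.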
Discrete-time transfer functions with stability properties similar to those for the continuous-time
systems in Eqns. (1) and (2) are shown below.

    H1(z) = Y1(z)/U(z) = 3 / [(z + 0.4)(z − 0.2 + j0.2)(z − 0.2 − j0.2)]           (3)

    H2(z) = Y2(z)/U(z) = 6 / [(z − 0.6)(z² − z + 1)],   H3(z) = Y3(z)/U(z) = 5 / [(z − 2)(z + 0.2)(z − 0.3)]   (4)
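A matching check for the discrete-time case (again a sketch assuming NumPy) computes pole magnitudes from the factored denominators in Eqns. (3) and (4):

```python
import numpy as np

# Pole magnitudes for H1(z), H2(z), H3(z) from Eqns. (3) and (4).
# A small tolerance guards against rounding for poles on the unit circle.
zpoles = {
    "H1": np.array([-0.4, 0.2 + 0.2j, 0.2 - 0.2j]),
    "H2": np.concatenate([[0.6], np.roots([1, -1, 1])]),  # (z-0.6)(z^2-z+1)
    "H3": np.array([2.0, -0.2, 0.3]),
}
for name, p in zpoles.items():
    mags = np.abs(p)
    verdict = "BIBO stable" if np.all(mags < 1 - 1e-9) else "not BIBO stable"
    print(f"{name}: |poles| = {np.round(mags, 3)} -> {verdict}")
```

The roots of z² − z + 1 are 0.5 ± j0.866, which have magnitude exactly 1, so H2(z) sits on the stability boundary, while H3(z) has a pole at z = 2 outside the unit circle.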

[Figure 1: Responses of three systems to a sinusoidal input. Panels: input u(t); outputs y1(t), y2(t), y3(t); time axis 0–20 s.]
State Space Control
As discussed above, stability in the classical transfer-function sense is determined by the
locations of the poles of the system transfer function. In state space models for linear time-
invariant systems, stability is determined by the locations of the eigenvalues of the A matrix in
ẋ(t) = Ax(t) + Bu(t) or x(k + 1) = Ax(k) + Bu(k). For the transfer functions of continuous-time
systems in (1) and (2), possible A matrices are
         ⎡   0    1    0 ⎤        ⎡   0    1    0 ⎤        ⎡  0    1    0 ⎤
    A1 = ⎢   0    0    1 ⎥ , A2 = ⎢   0    0    1 ⎥ , A3 = ⎢  0    0    1 ⎥       (5)
         ⎣ −32  −24   −8 ⎦        ⎣ −66  −11   −6 ⎦        ⎣  6   −1   −4 ⎦

The eigenvalues for these matrices are λ1 = {−4, −2 ± j2}, λ2 = {−6, ±j√11}, λ3 = {1, −2, −3}.
These are the same values as the poles of the three transfer functions. This will always be the case
when there are no pole-zero cancellations in the transfer functions. For the transfer functions of
discrete-time systems in Eqns. (3) and (4), the corresponding A matrices could be

         ⎡    0      1    0 ⎤        ⎡   0     1    0 ⎤        ⎡    0      1     0 ⎤
    A1 = ⎢    0      0    1 ⎥ , A2 = ⎢   0     0    1 ⎥ , A3 = ⎢    0      0     1 ⎥   (6)
         ⎣ −0.032  0.08   0 ⎦        ⎣ 0.6  −1.6  1.6 ⎦        ⎣ −0.12  −0.14  2.1 ⎦

The eigenvalues for these matrices are λ1 = {−0.4, 0.2 ± j0.2} , λ2 = {0.6, 0.5 ± j0.866} , λ3 =
{2, −0.2, 0.3} . As before, these eigenvalues are the same as the poles of the respective transfer
functions.
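This equivalence is easy to verify numerically. The sketch below (assuming NumPy) checks one continuous-time and one discrete-time companion matrix from Eqns. (5) and (6):

```python
import numpy as np

# Companion matrices A1 from Eqn. (5) (continuous) and Eqn. (6) (discrete);
# their eigenvalues should equal the corresponding transfer-function poles.
A1_ct = np.array([[0, 1, 0], [0, 0, 1], [-32, -24, -8]])
A1_dt = np.array([[0, 1, 0], [0, 0, 1], [-0.032, 0.08, 0]])

eig_ct = np.sort_complex(np.linalg.eigvals(A1_ct))
eig_dt = np.sort_complex(np.linalg.eigvals(A1_dt))
print(eig_ct)   # poles of H1(s): -4, -2 ± j2
print(eig_dt)   # poles of H1(z): -0.4, 0.2 ± j0.2
```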

Equilibrium Points
Concepts
Stability analysis for linear time-invariant (LTI) systems is fairly simple. A system that is linear
and time-invariant is either stable or unstable. There are several different definitions of stability
(defined in the next section), but stability for this type of system does not depend on time or on
the present location of the state vector in the n-dimensional state space. Stability analysis for
time-varying linear systems and for nonlinear systems is more complicated.
An equilibrium point xe in state space Σ is a point at which a system will remain in the absence
of external inputs or disturbances. Therefore, for a continuous-time system described by the general
state equation ẋ(t) = f[x(t), u(t), t], the point xe is an equilibrium point if

    ẋe(t) = 0 = f[xe(t), 0, t]                                                     (7)

If a system is at an equilibrium point at time t = t0, and no external forces act on the system, it
will remain at the equilibrium point for all t ≥ t0. For linear time-invariant systems, (7) becomes

    ẋe(t) = 0 = Axe                                                                (8)
An equilibrium point xe for a discrete-time system is defined in a similar manner. Since the
system state remains at an equilibrium point when there are no external inputs applied, it follows
that x(k + 1) = x(k) when x(k) = xe . For the general nonlinear discrete-time system, the state
equation x(k + 1) = f [x(k), u(k), k] at an equilibrium point is

xe = f (xe , 0, k) (9)
and for a linear time-invariant, discrete-time system, the state equation is xe = Axe .

The analysis of system stability is closely connected with the concept of equilibrium points. As
illustrated in (7)—(9), a system whose state is at an equilibrium point will remain there unless an
external input acts on the system. The question about stability then becomes what happens to
the system if a momentary external input–intentionally applied or not–does perturb the system
away from the equilibrium point. There are three possible answers to this question:

1. The system state returns to the equilibrium point.

2. The system state does not return to the equilibrium point but does remain “close” to that
point.

3. The system state diverges from the equilibrium point.

Examples of Equilibrium Points

Example 1 Consider an unforced, discrete-time linear system described by x(k + 1) = Ax(k). The
eigenvalues of the A matrix are λ = {1, 0.6, −0.5} , and the A matrix in companion form is

        ⎡   0     1    0  ⎤
    A = ⎢   0     0    1  ⎥                                                        (10)
        ⎣ −0.3   0.2  1.1 ⎦
For a point in Σ to be an equilibrium point, xe = Axe or (In − A) xe = 0. The matrix (In − A)
and its Row-Reduced Echelon (RRE) form are
             ⎡  1    −1     0  ⎤     ⎡ 1  0  −1 ⎤
    I3 − A = ⎢  0     1    −1  ⎥  ⇒  ⎢ 0  1  −1 ⎥                                  (11)
             ⎣ 0.3  −0.2  −0.1 ⎦     ⎣ 0  0   0 ⎦
The first row of the RRE form of the matrix shows that x1 = x3 , the second row shows that
x2 = x3 , and the third row shows that one element of xe is arbitrary, which in this case must be x3 .
Therefore, this system has an infinite number of equilibrium points given by xe = [α α α]ᵀ,
with α being arbitrary. The origin of state space is one of the equilibrium points, with α = 0. The
reason for the infinite number of equilibrium points is the fact that A has an eigenvalue at z = 1,
which represents pure integration in discrete time. ¨
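The equilibrium set in Example 1 can be computed as the null space of (I − A). A sketch using NumPy's SVD (a common way to obtain a null-space basis):

```python
import numpy as np

# Equilibrium points of x(k+1) = Ax(k) satisfy (I - A) xe = 0, so they
# form the null space of (I - A).  A is the matrix of Eqn. (10).
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-0.3, 0.2, 1.1]])
U, s, Vt = np.linalg.svd(np.eye(3) - A)
basis = Vt[s < 1e-10].T         # columns spanning the null space
xe = basis[:, 0] / basis[0, 0]  # normalize the first entry to 1
print(np.round(xe, 6))          # expect [1, 1, 1]: xe = [a, a, a]^T
```

One singular value of (I − A) is (numerically) zero, so the null space is one-dimensional and every equilibrium point is a multiple of [1 1 1]ᵀ, matching the row-reduction argument above.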
Example 2 Now consider an unforced, continuous-time linear system described by ẋ(t) = Ax(t).
The eigenvalues of the A matrix are λ = {0, −2, −3} . The A matrix in companion form and its
RRE equivalent (since Axe = 0) are
        ⎡ 0   1    0 ⎤     ⎡ 0  1  0 ⎤
    A = ⎢ 0   0    1 ⎥  ⇒  ⎢ 0  0  1 ⎥                                             (12)
        ⎣ 0  −6   −5 ⎦     ⎣ 0  0  0 ⎦
The first row of the RRE matrix shows that x2 = 0, the second row shows that x3 = 0, and
the last row shows that there is one arbitrary variable, which obviously for this system must be
x1. Therefore, for this system, there is an infinite number of equilibrium points, given by
xe = [α 0 0]ᵀ. The pole at s = 0 represents a pure integrator in continuous time, which is the
reason for the infinite number of xe values. ¨

Example 3 For this last example, consider a linear continuous-time system without any integra-
tors. The eigenvalues are λ = {−1, −2, −4} . The A matrix in companion form and its RRE
equivalent are
        ⎡  0    1    0 ⎤     ⎡ 1  0  0 ⎤
    A = ⎢  0    0    1 ⎥  ⇒  ⎢ 0  1  0 ⎥ = I3                                      (13)
        ⎣ −8  −14   −7 ⎦     ⎣ 0  0  1 ⎦
The first row of the RRE matrix shows that x1 = 0, the second row shows that x2 = 0, and the
last row shows that x3 = 0. Therefore, the only equilibrium point for this system is the origin of
state space, xe = 0. ¨

These three examples have shown that, for linear systems, if the system A matrix contains an
eigenvalue that corresponds to pure integration (λ = 1 in discrete time or λ = 0 in continuous time),
then there are an infinite number of equilibrium points. However, if there are no such eigenvalues
in the A matrix, then the unique equilibrium point is the origin, xe = 0.

Stability Definitions
Several definitions of stability are presented here, following the material in the course textbook¹.
The first two definitions are for unforced systems, that is, u(t) = 0 for all t.

Definition 4 An equilibrium point xe is stable if for any given value ε > 0 there
exists a number δ(ε) > 0 such that if ‖x(t0) − xe‖ < δ, then the state vector satisfies ‖x(t) − xe‖ < ε
for all t > t0. The relationship δ ≤ ε is required. This type of stability is also known as stable
in the sense of Lyapunov (i.s.L.). Therefore, a system that is stable in the sense of Lyapunov
remains close to the equilibrium point following a perturbation.

For a linear time-invariant system to be stable in the sense of this definition, all eigenvalues
of the A matrix must be in the region of stability (open left-half of the s-plane for continuous-
time systems or interior of the unit circle in the z-plane for discrete-time systems) except for the
possibility of unrepeated eigenvalues on the boundary of stability (jω axis for continuous-time
systems or |z| = 1 for discrete-time systems).

Definition 5 An equilibrium point xe is asymptotically stable if it (a) is stable as in the previous
definition and (b) in addition there exists a number δ′ > 0 such that if ‖x(t0) − xe‖ < δ′, the state
vector satisfies limt→∞ ‖x(t) − xe‖ = 0. Thus, for asymptotic stability, the state returns to the
equilibrium point.

For a linear time-invariant system to be asymptotically stable, all eigenvalues of the A matrix
must lie strictly in the region of stability (open left-half of the s-plane or interior of the unit circle
in the z-plane). In that case, xe = 0 is the only equilibrium point. If a linear time-invariant system
is asymptotically stable, then it is also globally asymptotically stable since any initial state would
converge to the origin if the system had no external input.

Example 6 Consider three unforced continuous-time LTI systems with the same initial condition
x(0) = [1 0 1]ᵀ. The system matrices are

         ⎡  0    1    0 ⎤        ⎡ 0    1    0 ⎤        ⎡  0    1    0 ⎤
    A1 = ⎢  0    0    1 ⎥ , A2 = ⎢ 0    0    1 ⎥ , A3 = ⎢  0    0    1 ⎥           (14)
         ⎣ −4   −4   −1 ⎦        ⎣ 0   −2   −3 ⎦        ⎣ −5   −7   −3 ⎦

¹Modern Control Theory, 3rd Edition, William Brogan, Prentice Hall, 1991

The eigenvalues are λ1 = {−1, j2, −j2}, λ2 = {−1, −2, 0}, and λ3 = {−1, −1 + j2, −1 − j2}.
From the stability definitions, the first two systems are stable but not asymptotically stable, while the
third system is asymptotically stable. Figure 2 shows the state responses for each of these systems.
It is clear from the plots that for the first two systems all the states stay near the origin but not
all the states converge to the origin. The states in the first system continue to oscillate due to the
complex conjugate poles. Two of the states in the second system do converge to the origin, but the
other state (x1 ) converges to a constant non-zero value due to the eigenvalue λ = 0. For the third
system all the states converge to the origin. ¨
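The three behaviors in Example 6 can be reproduced numerically. A sketch using SciPy's matrix exponential (an assumed dependency) evaluates x(t) = e^{At} x(0) at a late time:

```python
import numpy as np
from scipy.linalg import expm

# x(t) = expm(A t) x(0) for the three systems of Eqn. (14).
x0 = np.array([1.0, 0.0, 1.0])
systems = {
    "A1": np.array([[0, 1, 0], [0, 0, 1], [-4, -4, -1]]),  # stable i.s.L.
    "A2": np.array([[0, 1, 0], [0, 0, 1], [0, -2, -3]]),   # stable i.s.L.
    "A3": np.array([[0, 1, 0], [0, 0, 1], [-5, -7, -3]]),  # asymptotically stable
}
for name, A in systems.items():
    x_late = expm(A * 50.0) @ x0
    print(name, np.round(x_late, 4))
```

For A1 the oscillatory mode persists (bounded but nonzero), for A2 the state x1 settles at a nonzero constant because of the eigenvalue at zero, and for A3 the state decays to the origin.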

When an input signal is applied to the system, two additional types of stability can be defined.
Each definition assumes that the input signal u(t) or u(k) is bounded in norm for all time.
Some examples of bounded inputs include step functions, sinusoidal signals, decaying exponentials,
and combinations of these. Examples of unbounded inputs include ramp or parabolic functions and
growing exponentials.

Definition 7 An input is said to be bounded if there exists a constant K > 0 such that ‖u‖ ≤ K <
∞ for all time. A system is said to be bounded-input, bounded-state (BIBS) stable if there
exists a constant δ > 0, which may depend on K and on x(0), such that ‖x‖ ≤ δ for any bounded
input and any initial condition.

Definition 8 Let the input u be bounded, with Km being the least upper bound (the smallest number
such that ‖u‖ ≤ Km). Then a system is said to be bounded-input, bounded-output (BIBO)
stable if there exists a constant α > 0 such that ‖y‖ ≤ αKm for all time. This is the type of stability
most often considered in a classical controls course.

Stability Analysis for Linear Systems

Time-Varying Systems
Here it is assumed that the systems are described by linear, time-varying, continuous-time state
and output equations in the standard form

    ẋ(t) = A(t)x(t) + B(t)u(t),   y(t) = C(t)x(t) + D(t)u(t)                       (15)
When u(t) = 0, the solution to (15) is

x(t) = Φ (t, t0 ) x (t0 ) (16)


where Φ (t, t0 ) is the state transition matrix. Since the input signal u(t) is zero, the types of
stability that can be investigated are stable in the sense of Lyapunov (stable i.s.L.) and asymptotic
stability. For the system to be stable i.s.L., the state must remain in some neighborhood of an
equilibrium point after a perturbation. If the A matrix does not have an eigenvalue λ = 0, then
xe = 0 is the only equilibrium point. If A does have an eigenvalue λ = 0, then there are an infinite
number of equilibrium points. To cover both possibilities with the minimum of notation, define the
perturbation of the state away from any equilibrium point by ∆x (t) = x(t) − xe .
The norm of the distance of the state x(t) from the equilibrium point xe can be used to determine
the constraints on the transition matrix in order for the system to be stable i.s.L.

[Figure 2: Examples of stability in the sense of Lyapunov and asymptotic stability. Panels: state responses for Systems 1, 2, and 3 of Example 6; time axis 0–10 s.]
    ‖Δx(t)‖ = ‖Φ(t, t0) · Δx(t0)‖ ≤ ‖Φ(t, t0)‖ · ‖Δx(t0)‖                          (17)

where the last form of the expression in (17) comes directly from the definition of the norm for a
square matrix, which is ‖W‖ = sup [‖Wx‖ / ‖x‖] over all x such that ‖x‖ ≠ 0.
Based on Definition 4, if the norm of the transition matrix is bounded, such that ‖Φ(t, t0)‖ ≤ N
for all t ≥ t0, then the system will be stable in the sense of Lyapunov for any value of ε > 0 by
choosing the perturbation at the initial time to satisfy δ(ε) ≤ ε/N. This is a necessary and sufficient
condition for the system to be stable i.s.L.
For a system to be asymptotically stable, the state response must both stay within a neighborhood
of the equilibrium point over time and also converge to that point as t → ∞. The constraints
on the transition matrix for asymptotic stability are

    ‖Φ(t, t0)‖ ≤ N for all t ≥ t0   and   ‖Φ(t, t0)‖ → 0 as t → ∞                  (18)


If the input u(t) is non-zero, then Bounded-Input, Bounded-State and Bounded-Input, Bounded-
Output stability may be studied. The first requirement for both BIBS and BIBO stability is that
the system be stable i.s.L. The solution to Eqn. (15) when u(t) ≠ 0, in terms of a deviation from
the equilibrium point, is
    Δx(t) = Φ(t, t0) · Δx(t0) + ∫_{t0}^{t} Φ(t, τ) B(τ) u(τ) dτ                    (19)

When the norm is taken of both sides of the previous equation, and the triangle property of norms
is used (twice), constraints for BIBS stability are obtained.

    ‖Δx(t)‖ = ‖Φ(t, t0) · Δx(t0) + ∫_{t0}^{t} Φ(t, τ) B(τ) u(τ) dτ‖               (20)

    ‖Δx(t)‖ ≤ ‖Φ(t, t0) · Δx(t0)‖ + ‖∫_{t0}^{t} Φ(t, τ) B(τ) u(τ) dτ‖             (21)

    ‖Δx(t)‖ ≤ ‖Φ(t, t0)‖ · ‖Δx(t0)‖ + ∫_{t0}^{t} ‖Φ(t, τ) B(τ) u(τ)‖ dτ           (22)

From (22), the constraints for BIBS stability that must be satisfied for x(t) to remain bounded for
all initial conditions and all bounded input signals are
    ‖Φ(t, t0)‖ ≤ N for all t ≥ t0   and   ∫_{t0}^{t} ‖Φ(t, τ) B(τ)‖ dτ ≤ N1 for all t ≥ t0   (23)

To determine the constraints for BIBO stability we will look at the solution to the output
equation y(t) = C(t)x(t) + D(t)u(t). From (19) the output signal is
    y(t) = C(t) Φ(t, t0) x(t0) + ∫_{t0}^{t} C(t) Φ(t, τ) B(τ) u(τ) dτ + D(t)u(t)   (24)

The previous equation can be simplified in two ways. First, we may assume that the initial condition
x(t0 ) came about from some input having been applied to the system in the time interval from
t = −∞ to t = t0 . That eliminates the first term and changes the lower limit on the integration
from t0 to −∞. The second step is to use the sifting property of the impulse function in reverse to
take the term D(t)u(t) inside the integral. The result of these two steps gives us the following.

    y(t) = ∫_{−∞}^{t} [C(t) Φ(t, τ) B(τ) + δ(t − τ) D(τ)] u(τ) dτ = ∫_{−∞}^{t} W(t, τ) u(τ) dτ   (25)

The m × r matrix W(t, τ) = C(t) Φ(t, τ) B(τ) + δ(t − τ) D(τ) is known as the weighting matrix.
It is the impulse response matrix for the system. The (i, j) element of W(t, τ) is the response at
time t at the ith output terminal due to an impulse function applied at time τ at the jth input
terminal, the inputs at the other terminals being identically zero.² Since the input is assumed to
be bounded, ‖u(t)‖ ≤ Km for all t, the requirement that must be satisfied by the system for BIBO
stability is that there exists a constant M > 0 such that

    ∫_{−∞}^{t} ‖W(t, τ)‖ dτ ≤ M for all t                                          (26)
It should be obvious from this discussion that a matrix norm plays a large part in the conditions
for each of the types of stability that have been discussed. Various definitions of matrix norms exist.
One common and convenient definition for a matrix norm is the largest singular value of the matrix.
The matrix does not have to be square for this. For any matrix S, the largest singular value σ is
the square root of the maximum eigenvalue of SᵀS, that is

    σ = √( max λ(SᵀS) )                                                            (27)
For example, if the 4 × 6 matrix S is given by

        ⎡  9    0    10    0   20   2.8  ⎤
    S = ⎢  0  −0.3   −9    0    0  −3.13 ⎥                                         (28)
        ⎢  0    1   −0.3   0    0   0    ⎥
        ⎣ −1    0     0   −1    0   0    ⎦

the eigenvalues of SᵀS are {607.8615, 72.0994, 1.8304, 1.0256, 0, 0}. The maximum singular value
of S is σ = ‖S‖ = √607.8615 = 24.6548.
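This computation can be checked directly. A sketch with NumPy, comparing the eigenvalue route of Eqn. (27) against the built-in spectral norm:

```python
import numpy as np

# The 4x6 matrix S of Eqn. (28).
S = np.array([
    [ 9.0,  0.0,  10.0,  0.0, 20.0,  2.80],
    [ 0.0, -0.3,  -9.0,  0.0,  0.0, -3.13],
    [ 0.0,  1.0,  -0.3,  0.0,  0.0,  0.00],
    [-1.0,  0.0,   0.0, -1.0,  0.0,  0.00],
])
# Largest singular value via Eqn. (27) ...
sigma_eig = np.sqrt(np.max(np.linalg.eigvalsh(S.T @ S)))
# ... and via the spectral norm, which is the largest singular value.
sigma_norm = np.linalg.norm(S, ord=2)
print(round(sigma_eig, 4), round(sigma_norm, 4))
```

Both routes give the same number, matching the value quoted above to the printed precision.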
Time-Invariant Systems
Stability of linear time-invariant (LTI) systems has already been discussed, at least from the
BIBO perspective. System stability depends entirely on the locations of the eigenvalues of the
A matrix. The state transition matrix for continuous-time systems is Φ(t, τ) = e^{A(t−τ)}, and for
discrete-time systems, Φ(k, 0) = A^k. Assume that A has n linearly independent eigenvectors (no
generalized eigenvectors). Then Λ = M^{−1}AM is the diagonal matrix of eigenvalues λi of A, with
M = [v1 v2 ... vn] being the modal matrix whose columns are the eigenvectors of A. The
transition matrices can be written as

    e^{A(t−τ)} = M e^{Λ(t−τ)} M^{−1},   A^k = M Λ^k M^{−1}                         (29)


With a change of basis defined by x = M q, and letting t0 = 0 for convenience, the solutions to the
state equations for the homogeneous case are

    x(t) = e^{At} x(0)  →  x(t) = M e^{Λt} M^{−1} x(0)  →  q(t) = e^{Λt} q(0)      (30)
    x(k) = A^k x(0)  →  x(k) = M Λ^k M^{−1} x(0)  →  q(k) = Λ^k q(0)               (31)

Therefore, qi(t) = e^{λi t} qi(0) for the continuous-time system, and qi(k) = λi^k qi(0) in discrete time.
Letting λi = σi + jωi, the continuous-time solution can be written as qi(t) = e^{σi t} e^{jωi t} qi(0). From
these expressions it can be seen that system stability is determined by the real parts of the eigenvalues
σi in continuous time, and by the magnitudes of the eigenvalues |λi| in discrete time, since
e^{σi t} and λi^k must be bounded for all t and k, respectively.

²Linear System Theory and Design, C.T. Chen, Holt, Rinehart, and Winston, NY, 1970, p. 76.
Although the above discussion on LTI stability was for a system with a full set of linearly
independent eigenvectors (systems having either all distinct eigenvalues or full degeneracy for
any repeated eigenvalues), the same conclusions may be drawn concerning stability when there
are generalized eigenvectors as well. Stability requirements for linear, time-invariant systems may
be summarized as follows.

1. Continuous-Time Systems

(a) Unstable: If any eigenvalue λi has its real part σ i > 0, the system is unstable since there
will be a growing exponential in the response.
(b) Stable in the sense of Lyapunov: All repeated eigenvalues must have their real parts
strictly negative, σ i < 0, since the response associated with a repeated eigenvalue will
be multiplied by t. Distinct eigenvalues may have their real parts either negative or zero,
σ i ≤ 0. The terms associated with distinct eigenvalues with σ = 0 will not decay to zero,
but they will not grow without bound.
(c) Asymptotic stability: For a system to be asymptotically stable, all eigenvalues must have
strictly negative real parts, σ i < 0, since each term must have a decaying exponential in
it. A system that is asymptotically stable is also BIBS and BIBO stable.

2. Discrete-Time Systems

(a) Unstable: If any eigenvalue λi lies outside the unit circle centered at the origin of the
complex plane, |λi | > 1, the system is unstable since there will be a growing term in the
response.
(b) Stable in the sense of Lyapunov: All repeated eigenvalues must lie strictly inside the
unit circle, |λi | < 1, since the response associated with a repeated eigenvalue will be
multiplied by k. Distinct eigenvalues may lie inside or on the boundary of the unit circle,
|λi | ≤ 1. The terms associated with distinct eigenvalues with |λi | = 1 will not decay to
zero, but they will not grow without bound.
(c) Asymptotic stability: For a system to be asymptotically stable, all eigenvalues must lie
strictly inside the unit circle, |λi | < 1, since each term must decay to zero with increasing
time. A system that is asymptotically stable is also BIBS and BIBO stable.

The matrices A1 , A2 , and A3 shown in Eqn. (5) for continuous-time systems are asymptotically
stable (A1 ), stable i.s.L. (A2 ), and unstable (A3 ). The discrete-time matrices in Eqn. (6) are
also asymptotically stable (A1 ), stable i.s.L. (A2 ), and unstable (A3 ). This is easily verified by
inspection of the eigenvalues.
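The summary rules above translate directly into a small classifier. A sketch (assuming NumPy, and ignoring the repeated-boundary-eigenvalue subtlety, which none of these examples triggers):

```python
import numpy as np

def classify_ct(A, tol=1e-7):
    """Classify a continuous-time LTI system by eigenvalue real parts."""
    re = np.linalg.eigvals(A).real
    if np.any(re > tol):
        return "unstable"
    return "asymptotically stable" if np.all(re < -tol) else "stable i.s.L."

def classify_dt(A, tol=1e-7):
    """Classify a discrete-time LTI system by eigenvalue magnitudes."""
    mag = np.abs(np.linalg.eigvals(A))
    if np.any(mag > 1 + tol):
        return "unstable"
    return "asymptotically stable" if np.all(mag < 1 - tol) else "stable i.s.L."

# A1, A2, A3 of Eqn. (5) (continuous) and Eqn. (6) (discrete).
ct = [np.array([[0, 1, 0], [0, 0, 1], r]) for r in
      ([-32, -24, -8], [-66, -11, -6], [6, -1, -4])]
dt = [np.array([[0, 1, 0], [0, 0, 1], r]) for r in
      ([-0.032, 0.08, 0], [0.6, -1.6, 1.6], [-0.12, -0.14, 2.1])]
print([classify_ct(A) for A in ct])
print([classify_dt(A) for A in dt])
```

Both lists come out as asymptotically stable, stable i.s.L., and unstable, in agreement with the discussion above.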

Eigenvalues and Stability for Time-Varying Systems


Since stability for linear time-invariant systems is totally a function of the locations of the eigen-
values of the A matrix, we might ask if we can use the same results for linear time-varying (LTV)
systems. The answer is sometimes yes and sometimes no, but we always have to be cautious in
doing so.
Time-varying systems can come about in many ways. Nonlinear system models are often linearized
about a nominal trajectory. As the linear model is evaluated at different points in state
space, the coefficients in the matrices change. Controller coefficients for aircraft are intentionally
varied depending on flight conditions, and controller coefficients for ships are often a function of
speed. Each of these scenarios causes the elements in the linear system matrices to vary over time,
even if they are not explicit functions of time.
If the elements in the matrices do not vary “too rapidly” and if the eigenvalues are all well
within the region of stability at each set of values, then in many cases the system will remain
stable. However, there is no guarantee of that in general. Even when the eigenvalues themselves
are stable and constant for a time-varying system, there is no guarantee of stability, as the following
example shows.
The model used here is the one in Example 10.5 on page 359 of the text. It illustrates the fact
that the eigenvalues of the system matrix in an LTV model do not necessarily define stability. The
state space model with u(t) = 0 is

    ẋ(t) = ⎡ −1 + α cos²(t)        1 − α sin(t) cos(t) ⎤ x(t) = A(t) x(t)          (32)
           ⎣ −1 − α sin(t) cos(t)  −1 + α sin²(t)      ⎦
When the eigenvalues of A(t) are computed through |λI − A(t)| = 0, all the terms involving sin(t)
and cos(t) cancel out, leaving the following characteristic polynomial.

    Δ(λ) = 0 = λ² + (2 − α)λ + (2 − α)                                             (33)
The eigenvalues depend on the parameter α but do not depend at all on time t. Therefore, for a
fixed value of α, the eigenvalues of A(t) are constants. They are the solutions to
    λ = −(2 − α)/2 ± (1/2)√((2 − α)² − 4(2 − α)) = (α − 2)/2 ± (1/2)√((α − 2)(α + 2))   (34)
The top graph in Fig. 3 shows the eigenvalue locations for −4 ≤ α ≤ 6. This is the root locus
plot of −α(λ + 1)/(λ² + 2λ + 2). If the parameter α is any real value greater than 2, then the
eigenvalues are both real, with one being positive and the other negative. For linear time-invariant
systems, this fact would mean that the system is unstable. However, if the parameter is less than
2, then the eigenvalues have strictly negative real parts, being complex conjugates for −2 < α < 2
and real for α ≤ −2. For LTI systems, this fact would guarantee asymptotic stability. To see what
it means for this time-varying system, the state transition matrix needs to be investigated.
The solution for the state vector for the system defined in Eqn. (32) is x(t) = Φ (t, 0) x(0), with
the transition matrix being
    Φ(t, 0) = ⎡  e^{(α−1)t} cos(t)   e^{−t} sin(t) ⎤                               (35)
              ⎣ −e^{(α−1)t} sin(t)   e^{−t} cos(t) ⎦
It is clear from the elements in the first column of Φ (t, 0) that for α > 1 the transition matrix
contains exponential terms that grow without bound with increasing time. Therefore, the system
is unstable for α > 1 even though all the eigenvalues of A(t) have strictly negative real parts for
all α < 2. The bottom left graph in Fig. 3 shows the response of the state vector to the initial
condition x(0) = [1 1]ᵀ when α = 1.5. It is clear from this graph that the system is unstable
with that value for α. When the parameter value is changed to α = 0.5, the state response to the
same initial condition is shown in the bottom right graph, which is seen to be an asymptotically
stable response.
Thus, this system is stable i.s.L. for α = 1, unstable for α > 1, and asymptotically stable
for α < 1. This example illustrates the fact that for linear time-varying systems, the location
of eigenvalues may provide some useful information, but in general more complete information is
needed in order to make a final decision on stability.
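The conclusion can be confirmed numerically from the closed-form transition matrix in Eqn. (35). A sketch (assuming NumPy):

```python
import numpy as np

# Transition matrix of Eqn. (35): the eigenvalues of A(t) are constant,
# yet the response grows or decays depending on the e^{(alpha-1)t} terms.
def phi(t, alpha):
    ea, em = np.exp((alpha - 1.0) * t), np.exp(-t)
    return np.array([[ ea * np.cos(t), em * np.sin(t)],
                     [-ea * np.sin(t), em * np.cos(t)]])

x0 = np.array([1.0, 1.0])
for alpha in (1.5, 0.5):
    print(f"alpha = {alpha}: ||x(10)|| = {np.linalg.norm(phi(10.0, alpha) @ x0):.4f}")
```

With α = 1.5 the norm has grown by roughly two orders of magnitude by t = 10, while with α = 0.5 it has decayed nearly to zero, even though in both cases the eigenvalues of A(t) are constant with strictly negative real parts.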

[Figure 3: Eigenvalues of the time-varying A(t) matrix and state responses for α = 1.5 and α = 0.5. Top panel: eigenvalue locations for −4 ≤ α ≤ 6, e.g. α = 0: λ = −1 ± j1; α = 2: λ = 0, 0; α = 6: λ = −0.828, 4.828; α = −2: λ = −2, −2; α = −4: λ = −4.73, −1.27. Bottom panels: state responses with α = 1.5 and α = 0.5; time axis 0–10 s.]
