ECE 801: Linear State Space Systems
Dr. Darren Dawson
320 Engineering Innovation Building
(864) 656 - 5924
FAX: (864) 656 - 7220
EMAIL: [email protected]
CLASS TIME: See the Schedule
OFFICE HOURS: To be announced.
Coverage of text:
✓ Chapter 1: Background, Modeling, Intro. to State Variables
✓ Chapter 2: Vector Spaces
✓ Chapter 3: Linear Operators
✓ Chapter 4: Eigenvalues and Eigenvectors
✓ Chapter 5: Functions of Vectors and Matrices
✓ Chapter 6: Solutions to State Equations
✓ Chapter 7: Stability
✓ Chapter 8: Controllability and Observability
✓ Chapter 9: Realizations
✓ Chapter 10: Feedback and Observers
✗ Chapter 11: Optimal Control and Estimation
Other Suggested Texts:
The MathWorks, Inc., "The Student Edition of MATLAB," Version 5, Prentice-Hall, 1997.
Brogan, W. R., "Modern Control Theory," 3rd Ed., Prentice-Hall, 1991.
Chen, C.-T., "Linear System Theory and Design," Holt, Rinehart and Winston, 1984.
Wonham, W. M., "Linear Multivariable Control: A Geometric Approach," 3rd Ed., Springer-Verlag, 1985.
Kailath, T., "Linear Systems," Prentice-Hall, 1980.
Chapter 1 The Concept of "STATE"
First, an intuitive example:
[Figure: mass M attached to a spring K and damper B, displaced by x(t) under an applied force F(t)]
Differential Equation (from F = Ma):

    M \ddot{x}(t) + B \dot{x}(t) + K x(t) = F(t)

Define "state variables and control":

    x_1 = x(t), \quad x_2 = \dot{x}(t), \quad u(t) = F(t)

Now take derivatives:

    \dot{x}_1 = \dot{x} = x_2
    \dot{x}_2 = \ddot{x} = \frac{1}{M} u - \frac{B}{M} x_2 - \frac{K}{M} x_1

Use vector-matrix form: X = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = "State Vector"

"State Equations":

    \dot{X} = \begin{bmatrix} 0 & 1 \\ -K/M & -B/M \end{bmatrix} X + \begin{bmatrix} 0 \\ 1/M \end{bmatrix} u
"Output equation": y(t) = x = x_1, or

    y = \begin{bmatrix} 1 & 0 \end{bmatrix} X
"State-Space Form":

    \dot{X} = AX + Bu
    Y = CX + Du

For this example,

    A = \begin{bmatrix} 0 & 1 \\ -K/M & -B/M \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 1/M \end{bmatrix}

    y = \begin{bmatrix} 1 & 0 \end{bmatrix} X + 0 \cdot u, \quad \text{so} \quad C = \begin{bmatrix} 1 & 0 \end{bmatrix}, \quad D = 0
These equations, along with the initial conditions of the system (two of them!), are two first-order linear differential equations which provide exactly the same information as the original 2nd-order linear differential equation.
But this "state-variable" description is not unique.
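As a quick numerical check that the state equations reproduce the 2nd-order D.E., the state-space model can be integrated directly. This is a minimal sketch, not from the notes: the values M = 1, B = 0.5, K = 2 and the unit step force are assumed example choices. The steady state of M ẍ + B ẋ + K x = 1 is x = 1/K.

```python
import numpy as np

# Mass-spring-damper in state-space form: X' = A X + B u, y = C X.
# M, B_damp, K are assumed example values, not from the notes.
M, B_damp, K = 1.0, 0.5, 2.0

A = np.array([[0.0, 1.0],
              [-K / M, -B_damp / M]])
B = np.array([[0.0],
              [1.0 / M]])
C = np.array([[1.0, 0.0]])

# Forward-Euler integration of the state equations under a unit step force,
# with zero initial conditions.
dt, steps = 1e-3, 20000
X = np.zeros((2, 1))
for _ in range(steps):
    u = 1.0                      # u(t) = F(t) = 1 (step input)
    X = X + dt * (A @ X + B * u)

y = (C @ X).item()               # output y = x1 = position
# After the transient decays, y should settle near the static value 1/K.
print(abs(y - 1.0 / K) < 0.02)   # True
```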
Another state-variable description:

Let

    \bar{x}_1 = x + \dot{x}, \quad \bar{x}_2 = \dot{x}

Then

    \dot{\bar{x}}_1 = \dot{x} + \ddot{x} = \bar{x}_2 + \frac{1}{M}u - \frac{B}{M}\dot{x} - \frac{K}{M}x
                    = \bar{x}_2 + \frac{1}{M}u - \frac{B}{M}\bar{x}_2 - \frac{K}{M}(\bar{x}_1 - \bar{x}_2)

    \dot{\bar{x}}_2 = \ddot{x} = \frac{1}{M}u - \frac{B}{M}\bar{x}_2 - \frac{K}{M}(\bar{x}_1 - \bar{x}_2)

So

    \dot{\bar{X}} = \begin{bmatrix} -\frac{K}{M} & 1 + \frac{K}{M} - \frac{B}{M} \\ -\frac{K}{M} & \frac{K}{M} - \frac{B}{M} \end{bmatrix} \bar{X} + \begin{bmatrix} \frac{1}{M} \\ \frac{1}{M} \end{bmatrix} u

    y = \begin{bmatrix} 1 & -1 \end{bmatrix} \bar{X}
This state-variable representation, with the initial conditions, is also perfectly equivalent to the original 2nd-order D.E.!
What's the difference?*
Are there advantages to one representation over the other?
*Ans: One set of state variables is a "transformed" version of the other. One can consider the two sets of variables (x_1, x_2) and (\bar{x}_1, \bar{x}_2) as different "coordinate systems" representing the same physical process:

    \bar{x}_1 = x_1 + x_2 = x + \dot{x}
    \bar{x}_2 = x_2 = \dot{x}
[Figure: the state trajectory X(t) drawn in both coordinate systems, with axes (x_1, x_2) and (\bar{x}_1, \bar{x}_2)]
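The claim that the two sets of state variables are just different coordinate systems can be verified numerically. In this sketch (M = 1, B = 0.5, K = 2 are assumed example values), the change of coordinates X̄ = TX with T = [[1, 1], [0, 1]] transforms (A, B, C) into exactly the second representation derived above, via Ā = TAT⁻¹, B̄ = TB, C̄ = CT⁻¹.

```python
import numpy as np

# Assumed example values for the mass-spring-damper.
M, Bd, K = 1.0, 0.5, 2.0

# First representation: x1 = x, x2 = x'.
A = np.array([[0.0, 1.0], [-K/M, -Bd/M]])
B = np.array([[0.0], [1.0/M]])
C = np.array([[1.0, 0.0]])

# Coordinate change xbar = T x, with xbar1 = x1 + x2, xbar2 = x2.
T = np.array([[1.0, 1.0], [0.0, 1.0]])
Tinv = np.linalg.inv(T)

# Transformed representation: Abar = T A T^-1, Bbar = T B, Cbar = C T^-1.
Abar = T @ A @ Tinv
Bbar = T @ B
Cbar = C @ Tinv

# These should match the second representation derived in the notes.
expected_Abar = np.array([[-K/M, 1 + K/M - Bd/M],
                          [-K/M, K/M - Bd/M]])
print(np.allclose(Abar, expected_Abar))        # True
print(np.allclose(Bbar, [[1/M], [1/M]]))       # True
print(np.allclose(Cbar, [[1.0, -1.0]]))        # True
```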
Definitions of "State Variables" (Brogan):
Definition 1: The state variables of a system consist of a minimum set of parameters which completely summarize the system's status in the following sense. If at any time t_0 the values of the state variables x_i(t_0) are known, then the output y(t_1) and the values x_i(t_1) can be uniquely determined for any time t_1 > t_0, provided the input u[t_0, t_1] is also known.
Definition 2: The state at any time t_0 is a set of the minimum number of parameters x_i(t_0) which allows a unique output segment y[t_0, t_1] to be associated with each input segment u[t_0, t_1], for every t_0 and for all t_1 > t_0.
In other words, if the set of variables we choose allows us, along
with the initial conditions, to get the same information about
output y from our "state equations" as we get from the
system's overall differential equation, then our variables are
state variables.
If the D.E. is nth order, there is going to be a set of n state
variables, and hence, n state equations.
Obtaining state variables:
We have already seen that state variables are not uniquely
chosen. This suggests many ways to select them from a D.E.
One sure-fire way is as follows:
D.E.:

    \frac{d^n y}{dt^n} + a_{n-1}\frac{d^{n-1} y}{dt^{n-1}} + \cdots + a_2 \frac{d^2 y}{dt^2} + a_1 \frac{dy}{dt} + a_0 y = u(t)

Choose state variables:

    x_1 = y, \quad x_2 = \frac{dy}{dt}, \quad x_3 = \frac{d^2 y}{dt^2}, \quad \ldots, \quad x_n = \frac{d^{n-1} y}{dt^{n-1}}
(Aside Note: In discrete-time systems, use successive time-shifts rather than derivatives; i.e., x(k), x(k-1), ..., x(k-n+1).)
Then in state-variable form:

    \dot{X} = AX + Bu
    y = CX + Du

where

    A = \begin{bmatrix} 0 & 1 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & & \ddots & \vdots \\ -a_0 & -a_1 & -a_2 & -a_3 & \cdots & -a_{n-1} \end{bmatrix} \; (n \times n), \qquad B = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} \; (n \times 1)

    C = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \end{bmatrix} \; (1 \times n), \qquad \text{and} \qquad D = 0 \; (1 \times 1)
These particular state variables are called "phase variables".
In this course, all other choices of state variables will be assumed to
be linear combinations of these state variables.
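The phase-variable (companion) form above can be built mechanically from the coefficients a_0, ..., a_{n-1}. A minimal sketch; the helper name `phase_variable_form` and the example coefficients are assumptions for illustration:

```python
import numpy as np

def phase_variable_form(a):
    """Companion-form (A, B, C, D) for
    y^(n) + a[n-1] y^(n-1) + ... + a[1] y' + a[0] y = u."""
    n = len(a)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)   # superdiagonal of ones
    A[-1, :] = -np.asarray(a)    # last row: -a0, -a1, ..., -a_{n-1}
    B = np.zeros((n, 1)); B[-1, 0] = 1.0
    C = np.zeros((1, n)); C[0, 0] = 1.0
    D = np.zeros((1, 1))
    return A, B, C, D

# Example (assumed coefficients): y'' + 3 y' + 2 y = u
A, B, C, D = phase_variable_form([2.0, 3.0])
print(np.allclose(A, [[0.0, 1.0], [-2.0, -3.0]]))   # True
```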
vectors ... linear combinations ... matrices ... linear systems ... spaces ... coordinate transformations ... state variables
. . . All is leading to one thing: the need for Vector Spaces and Linear Algebra !!
Appendix A: Matrix Algebra
Matrix equality (matrices of equal size): equality element-by-element.
Matrix Addition:
If A = [a_{ij}] and B = [b_{ij}] (i the row index, j the column index), then A + B = C means that c_{ij} = a_{ij} + b_{ij}.
Matrix Multiplication:
If A is (n × m) and B is (m × p), then C = AB implies that C is (n × p), and that

    c_{ij} = \sum_{k=1}^{m} a_{ik} b_{kj}

Matrices, in general, do not commute!!!!
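Both facts, the entry-wise definition of the product and the failure of commutativity, are easy to check numerically (the 2×2 matrices below are arbitrary examples):

```python
import numpy as np

# c_ij = sum_k a_ik b_kj, and AB != BA in general.
A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])

C = A @ B
# Check one entry against the definition: c_01 = a_00*b_01 + a_01*b_11.
print(C[0, 1] == A[0, 0]*B[0, 1] + A[0, 1]*B[1, 1])   # True
print(np.array_equal(A @ B, B @ A))                   # False: no commuting here
```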
Null and Unit (Identity) Matrices:

    0_n = \begin{bmatrix} 0 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 0 \end{bmatrix} \; (n \times n) \qquad I_n = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & 1 \end{bmatrix} \; (n \times n)

    0A = 0 \quad \text{and} \quad IA = A
Associative, Commutative, and Distributive Laws (always assuming compatible dimensions!!):

    A + B = B + A
    A + (B + C) = (A + B) + C
    \alpha A = A\alpha
    A(BC) = (AB)C
    A(B + C) = AB + AC
Transposes and Symmetry:
If A = [a_{ij}], then A^T = [a_{ji}]. If A^T = A, then A is "symmetric."
If A^T = -A, then A is "skew-symmetric."
If A = A^* (complex-conjugate transpose), then A is "Hermitian."

    (ABC)^T = C^T B^T A^T
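The order reversal under transposition can be spot-checked on random matrices (shapes below are arbitrary but compatible):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))
C = rng.standard_normal((4, 2))

# (ABC)^T = C^T B^T A^T: transposing a product reverses the order.
print(np.allclose((A @ B @ C).T, C.T @ B.T @ A.T))   # True
```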
Determinants: (square matrices only)
If A and B are both (n × n), then |AB| = |A| |B|, and |A| = |A^T|.
If a whole row or column is zero, or if any row or column is a linear combination of other rows or columns, then |A| = 0.
"Rank," or r(A), is the size of the largest nonzero determinant that can be formed while crossing out rows and columns of A.
If A is (m × n), the rank of A must be ≤ min(m, n).
q(A) = n − r(A) = "nullity"
"degeneracy" = "rank deficiency" = min(m, n) − r(A)
If A is square and rank-deficient (rank < n), it is "singular," and |A| = 0; otherwise it is "nonsingular" or "full rank."
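These definitions can be illustrated on a small example: a 3×3 matrix whose third row is the sum of the first two is rank-deficient, has nullity 1, and is singular (the particular numbers are an assumed example):

```python
import numpy as np

# Third row = row1 + row2, so the rows are linearly dependent.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])

r = np.linalg.matrix_rank(A)
n = A.shape[1]
print(r)                               # 2
print(n - r)                           # 1  (nullity q(A))
print(abs(np.linalg.det(A)) < 1e-9)    # True (singular: |A| = 0)
```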
Matrix Inverses:
Only square, nonsingular matrices have inverses.
If A^{-1} = B, then AB = BA = I. (Sometimes we refer to "left" inverses and "right" inverses, usually for polynomial matrices.)

    (ABC)^{-1} = C^{-1} B^{-1} A^{-1}

If A^{-1} = A, A is "involutory."
If A^{-1} = A^T, A is "orthogonal."
If A^{-1} = A^* (complex-conjugate transpose), A is "unitary."
Trace: (square matrices only)
The "trace" of A, or tr(A), is the sum of all the elements on the diagonal.

    tr(A) = tr(A^T)
    tr(AB) = tr(BA)            (the matrix AB must be square)
    tr(A + B) = tr(B + A)      (A and B must be square)
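Note that tr(AB) = tr(BA) holds even when AB and BA have different sizes, as long as both products are square. A quick check with arbitrary random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 3))

# AB is 3x3 and BA is 4x4, yet their traces agree.
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))   # True
```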
Block matrices:
"Block" matrices can be multiplied just as if their individual entries were scalars (each "entry" in these matrices is itself a matrix):

    \begin{bmatrix} A_1 & A_2 \\ A_3 & A_4 \end{bmatrix} \begin{bmatrix} B_1 & B_2 \\ B_3 & B_4 \end{bmatrix} = \begin{bmatrix} A_1 B_1 + A_2 B_3 & A_1 B_2 + A_2 B_4 \\ A_3 B_1 + A_4 B_3 & A_3 B_2 + A_4 B_4 \end{bmatrix}
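The blockwise product rule can be verified directly: assemble two 4×4 matrices from random 2×2 blocks, multiply them whole, and compare against the block formula (block sizes here are an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)
A1, A2, A3, A4 = (rng.standard_normal((2, 2)) for _ in range(4))
B1, B2, B3, B4 = (rng.standard_normal((2, 2)) for _ in range(4))

A = np.block([[A1, A2], [A3, A4]])
B = np.block([[B1, B2], [B3, B4]])

# Rebuild the product blockwise, treating each block like a scalar entry.
blockwise = np.block([[A1 @ B1 + A2 @ B3, A1 @ B2 + A2 @ B4],
                      [A3 @ B1 + A4 @ B3, A3 @ B2 + A4 @ B4]])
print(np.allclose(A @ B, blockwise))   # True
```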
"Elementary" operations and matrices:
Elementary Operations:
1. Interchange any two rows or columns.
2. Multiply any row or column by a scalar.
3. Add a multiple of one row (column) to another row (column) without altering the first row (column).
Elementary Matrix:
Any matrix that can be obtained by applying any number of
elementary operations to the identity matrix.
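Left-multiplying by an elementary matrix performs the corresponding row operation. A small check (the matrix and the particular operation, "add 2×row 0 to row 1," are assumed examples):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Elementary operation 3 applied to I gives an elementary matrix E;
# then E @ A performs the same row operation on A.
E = np.eye(2)
E[1, 0] = 2.0                  # "add 2*row0 to row1" applied to I

direct = A.copy()
direct[1, :] += 2.0 * direct[0, :]   # the row operation done by hand

print(np.array_equal(E @ A, direct))   # True
```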
Matrix Calculus:
Matrices can have functions (of time, for example) as their individual elements. Differentiation and integration of matrices is done element-by-element; i.e.,

    \dot{A} = \frac{dA(t)}{dt} = [\dot{a}_{ij}(t)], \quad \text{and} \quad \int A(t)\,dt = \left[\int a_{ij}(t)\,dt\right]
Note that this implies that Laplace transforms are done element-by-element too:

    \mathcal{L}\left\{\begin{aligned} \dot{X} &= AX + Bu \\ y &= CX + Du \end{aligned}\right\} \;\Rightarrow\; \begin{aligned} sX(s) &= AX(s) + BU(s) + x_0 \\ Y(s) &= CX(s) + DU(s) \end{aligned}

Taking the first equation:

    (sI_n - A)X(s) = BU(s) + x_0
    X(s) = (sI_n - A)^{-1} B U(s) + (sI_n - A)^{-1} x_0

Substituting into the second equation (Y(s) = CX(s) + DU(s)):

    Y(s) = \underbrace{C(sI_n - A)^{-1} B U(s) + DU(s)}_{\text{"zero-state" solution}} + \underbrace{C(sI_n - A)^{-1} x_0}_{\text{"zero-input" solution}}
TRANSFER FUNCTION: Suppose the initial conditions are all zero:

    Y(s) = \underbrace{\left[ C(sI_n - A)^{-1} B + D \right]}_{\text{transfer function } H(s)} U(s)

Note: We cannot write Y(s)/U(s) = H(s), because U(s) might be a vector!
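For the mass-spring-damper example, H(s) = C(sI − A)⁻¹B + D should reduce to the familiar scalar transfer function 1/(Ms² + Bs + K). A numerical spot-check at an arbitrary complex s (M = 1, B = 0.5, K = 2 are assumed example values):

```python
import numpy as np

# Assumed example values for the mass-spring-damper.
M, Bd, K = 1.0, 0.5, 2.0
A = np.array([[0.0, 1.0], [-K/M, -Bd/M]])
B = np.array([[0.0], [1.0/M]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

s = 1.0 + 2.0j                      # arbitrary evaluation point
H = C @ np.linalg.inv(s*np.eye(2) - A) @ B + D

# Direct transfer function from M x'' + B x' + K x = u.
H_expected = 1.0 / (M*s**2 + Bd*s + K)
print(np.isclose(H[0, 0], H_expected))   # True
```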
Some other properties of matrix calculus (A: matrix; x, y: column vectors):

    \frac{\partial (Ax)}{\partial x} = A

    \frac{\partial (x^T A y)}{\partial y} = x^T A

    \frac{\partial (x^T A y)}{\partial x} = (Ay)^T = y^T A^T

    \frac{\partial (x^T A x)}{\partial x} = x^T A + x^T A^T = x^T (A + A^T) \quad (= 2 x^T A \text{ if } A \text{ is symmetric})

    \frac{\partial [y^T A x]}{\partial x} = \frac{\partial [x^T A^T y]}{\partial x}

    \frac{\partial A^{-1}(t)}{\partial t} = -A^{-1} \frac{\partial A}{\partial t} A^{-1}
If matrices A and B are functions of scalar t, but X is not:

    \frac{\partial (AB)}{\partial t} = \frac{\partial A}{\partial t} B + A \frac{\partial B}{\partial t} \quad (\text{note the order!})

    \frac{\partial (XA)}{\partial t} = X \frac{\partial A}{\partial t}
If f is a scalar function of a vector x, for example if

    f(x_1, x_2, x_3) = 2 x_1 + x_2 x_3 - \sin(x_3)

then

    \frac{\partial f}{\partial x} = \begin{bmatrix} \frac{\partial f}{\partial x_1} & \frac{\partial f}{\partial x_2} & \frac{\partial f}{\partial x_3} \end{bmatrix} \quad (1 \times n)
If f is a vector function of a vector x, for example if

    f(x) = \begin{bmatrix} f_1(x_1, x_2, \ldots, x_n) \\ f_2(x_1, x_2, \ldots, x_n) \\ \vdots \\ f_m(x_1, x_2, \ldots, x_n) \end{bmatrix}

then

    \frac{\partial f(x)}{\partial x} = \begin{bmatrix} \frac{\partial f_1(x)}{\partial x_1} & \frac{\partial f_1(x)}{\partial x_2} & \cdots & \frac{\partial f_1(x)}{\partial x_n} \\ \frac{\partial f_2(x)}{\partial x_1} & \frac{\partial f_2(x)}{\partial x_2} & & \vdots \\ \vdots & & \ddots & \\ \frac{\partial f_m(x)}{\partial x_1} & & \cdots & \frac{\partial f_m(x)}{\partial x_n} \end{bmatrix}

(x or f without a subscript indicates a vector quantity.)
Note that the derivative of an m-dimensional vector with respect to an n-dimensional vector is an (m × n) matrix.
From Now On: Matrices will be denoted in capital, but not necessarily boldface, letters. Their interpretation as matrix quantities should be apparent from the context.