Mathematical Foundations of Control Theory

Controllability, Lyapunov Stability, and Feedback Linearization

Comprehensive Proofs and Analysis

November 5, 2025

Abstract
This document provides rigorous mathematical foundations for three fundamental topics in
control theory: (1) Kalman controllability for linear time-invariant systems, with a complete proof
of the rank condition; (2) Lyapunov stability theory, including design methods and application
to computed-torque control of robot manipulators; and (3) feedback linearization for nonlinear
systems with relative degree analysis. All theorems are proven with complete mathematical
rigor, extensive analysis of underlying concepts, geometric interpretations, and detailed explanations
of each step. Prerequisites are developed from first principles, and all content represents
well-established theory from graduate-level control courses.

Contents

1 Prerequisites and Foundations
  1.1 State-Space Representations
  1.2 Solution of LTI Systems
2 Kalman Controllability Theory
  2.1 Motivation and Intuition
  2.2 The Controllability Matrix
  2.3 Main Controllability Theorem
  2.4 Geometric Interpretation
  2.5 Examples and Applications
3 Lyapunov Stability Theory
  3.1 Stability Definitions
  3.2 Lyapunov's Direct Method
  3.3 Lyapunov Stability Theorems
  3.4 LaSalle's Invariance Principle
4 Lyapunov-Based Control Design
  4.1 Energy-Based Lyapunov Functions
  4.2 PD Control with Gravity Compensation
5 Feedback Linearization
  5.1 Motivation and Basic Idea
  5.2 Lie Bracket and Lie Derivatives
  5.3 Relative Degree
  5.4 Input-Output Linearization
  5.5 Full-State Linearization
  5.6 Example: Simple Manipulator
6 Summary and Connections
  6.1 Key Results Summary
  6.2 Interconnections
1 Prerequisites and Foundations

1.1 State-Space Representations


Definition 1.1 (Linear Time-Invariant (LTI) System). A continuous-time LTI system is described
by the differential equations:
ẋ(t) = Ax(t) + Bu(t) (1)
y(t) = Cx(t) + Du(t) (2)
where:
• x(t) ∈ R^n is the state vector (dimension n)
• u(t) ∈ R^m is the control input vector (dimension m)
• y(t) ∈ R^p is the output vector (dimension p)
• A ∈ R^{n×n} is the system matrix (state transition)
• B ∈ R^{n×m} is the input matrix (control effectiveness)
• C ∈ R^{p×n} is the output matrix
• D ∈ R^{p×m} is the feedthrough matrix


Remark 1.2 (Physical Interpretation). The state x encodes all information about the system's
current configuration. The matrix A describes how the state evolves naturally (free dynamics),
while B describes how control inputs influence the state evolution. The pair (A, B) is often called
the dynamics pair.
Definition 1.3 (Nonlinear Control System). A general nonlinear control-affine system has the
form:
ẋ = f(x) + g(x)u (3)
where f : R^n → R^n represents the drift dynamics and g : R^n → R^{n×m} represents the input vector
fields.
1.2 Solution of LTI Systems
Theorem 1.4 (Solution Formula for LTI Systems). The solution to equation (1) with initial
condition x(0) = x_0 is:
x(t) = e^{At} x_0 + ∫_0^t e^{A(t−τ)} Bu(τ) dτ (4)
where e^{At} is the matrix exponential.

Proof. We verify this is a solution by direct differentiation. Let
x(t) = e^{At} x_0 + ∫_0^t e^{A(t−τ)} Bu(τ) dτ (5)
Taking the derivative:
ẋ(t) = (d/dt) e^{At} x_0 + (d/dt) ∫_0^t e^{A(t−τ)} Bu(τ) dτ (6)
= A e^{At} x_0 + e^{A(t−t)} Bu(t) + ∫_0^t A e^{A(t−τ)} Bu(τ) dτ (7)
= A e^{At} x_0 + Bu(t) + A ∫_0^t e^{A(t−τ)} Bu(τ) dτ (8)
= A ( e^{At} x_0 + ∫_0^t e^{A(t−τ)} Bu(τ) dτ ) + Bu(t) (9)
= Ax(t) + Bu(t) (10)
Here we used:
1. (d/dt) e^{At} = A e^{At} (matrix exponential derivative)
2. Leibniz integral rule: (d/dt) ∫_0^t f(t, τ) dτ = f(t, t) + ∫_0^t (∂f/∂t) dτ
3. e^{A·0} = I
Initial condition: x(0) = e^{A·0} x_0 + ∫_0^0 (·) = I x_0 = x_0 ✓

Remark 1.5 (Decomposition Interpretation). The solution (4) decomposes into:
• Natural response: e^{At} x_0 (system evolution from the initial condition with u = 0)
• Forced response: ∫_0^t e^{A(t−τ)} Bu(τ) dτ (contribution from control inputs)
The forced response can be understood as a convolution: each input u(τ) at time τ affects the state
at time t through the transition matrix e^{A(t−τ)}, weighted by the input matrix B.
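To make the decomposition concrete, the formula (4) can be checked numerically. The following sketch (the matrices and input are illustrative choices, assuming NumPy and SciPy are available) evaluates the natural and forced responses by quadrature for the double integrator q̈ = u with u ≡ 1, and compares the result against direct integration of ẋ = Ax + Bu:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp, trapezoid

# Double integrator: A = [[0,1],[0,0]], B = [[0],[1]], constant input u(t) = 1
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
x0 = np.array([1.0, 0.0])
T = 2.0

# Closed-form solution (4): natural response + forced response (convolution),
# with the integral evaluated by the trapezoid rule on a fine grid.
taus = np.linspace(0.0, T, 2001)
integrand = np.array([(expm(A * (T - tau)) @ B).ravel() for tau in taus])
x_formula = expm(A * T) @ x0 + trapezoid(integrand, taus, axis=0)

# Direct numerical integration of ẋ = Ax + Bu for comparison
sol = solve_ivp(lambda t, x: A @ x + B.ravel(), (0.0, T), x0,
                rtol=1e-10, atol=1e-12)
x_ode = sol.y[:, -1]

print(x_formula, x_ode)  # both ≈ [3, 2]: q(t) = 1 + t²/2, q̇(t) = t
```

Both evaluations agree with the hand computation q(2) = 1 + 2²/2 = 3, q̇(2) = 2.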

2 Kalman Controllability Theory

2.1 Motivation and Intuition


Fundamental Question: Given a linear system ẋ = Ax + Bu, can we drive the state from any
initial condition x_0 to any final state x_f in finite time using an appropriate control input?
Definition 2.1 (Reachable Set). The reachable set at time T from the origin is:
R(T) = { x(T) : x(T) = ∫_0^T e^{A(T−τ)} Bu(τ) dτ, for some u(·) } (11)
This is the set of all states that can be reached at time T starting from x(0) = 0.
Remark 2.2 (Geometric Understanding). The reachable set R(T) is the image (range) of the linear
operator that maps control functions u : [0, T] → R^m to states x(T) ∈ R^n. Since this is a linear
map, R(T) is a subspace of R^n. The question of controllability is whether this subspace is all of
R^n.

2.2 The Controllability Matrix


Definition 2.3 (Controllability Matrix). For the system ẋ = Ax + Bu, the controllability matrix
is:
C = [B  AB  A²B  ···  A^{n−1}B] ∈ R^{n×nm} (12)
where each block A^k B ∈ R^{n×m} for k = 0, 1, ..., n − 1.
Remark 2.4 (Why These Particular Matrices?). The appearance of B, AB, A²B, ... comes from
expanding the matrix exponential:
e^{A(T−τ)} B = ( I + A(T−τ) + A²(T−τ)²/2! + ··· ) B (13)
The reachable states are in the span of the columns of B, AB, A²B, etc. The key insight is that, by
the Cayley-Hamilton theorem, the powers A^n, A^{n+1}, ... are linear combinations of I, A, ..., A^{n−1}, so
we need only check up to A^{n−1}B.

2.3 Main Controllability Theorem


Theorem 2.5 (Kalman Controllability Rank Condition). The LTI system ẋ = Ax + Bu is
controllable (i.e., for any x_0, x_f ∈ R^n and any T > 0, there exists an input u : [0, T] → R^m such that
x(0) = x_0 and x(T) = x_f) if and only if
rank(C) = rank [B  AB  A²B  ···  A^{n−1}B] = n (14)
Before proving this theorem, we establish key lemmas.
Lemma 2.6 (Cayley-Hamilton Theorem). Every square matrix satisfies its own characteristic equation.
Specifically, if p(λ) = det(λI − A) = λ^n + c_{n−1} λ^{n−1} + ··· + c_1 λ + c_0 is the characteristic
polynomial of A, then:
p(A) = A^n + c_{n−1} A^{n−1} + ··· + c_1 A + c_0 I = 0 (15)
Proof of Cayley-Hamilton Theorem (Sketch). Consider the adjugate matrix adj(λI − A). By properties
of determinants:
(λI − A) · adj(λI − A) = det(λI − A) · I = p(λ) I (16)
The adjugate is a matrix polynomial in λ of degree at most n − 1:
adj(λI − A) = B_{n−1} λ^{n−1} + B_{n−2} λ^{n−2} + ··· + B_1 λ + B_0 (17)
Substituting λ = A (interpreting this carefully) yields p(A) = 0. A rigorous proof uses the fact
that this identity holds over the algebraic closure.
Corollary 2.7 (Span of Matrix Powers). The space of all polynomials in A is spanned by
{I, A, A², ..., A^{n−1}}:
span{I, A, A², A³, ...} = span{I, A, A², ..., A^{n−1}} (18)
Proof. By Cayley-Hamilton, A^n can be written as a linear combination of I, A, ..., A^{n−1}. Then:
A^{n+1} = A · A^n = A · (linear combination of I, A, ..., A^{n−1}) (19)
= linear combination of A, A², ..., A^n (20)
= linear combination of I, A, ..., A^{n−1} (21)
By induction, all higher powers are in span{I, A, ..., A^{n−1}}.

Lemma 2.8 (Controllable Subspace). The reachable set R(T ) for any T > 0 is equal to the range
of the controllability matrix:
R(T ) = range(C) = span{columns of B, AB, A2 B, . . . , An−1 B} (22)

Proof. Step 1: Show R(T) ⊆ range(C).
Any state in R(T) has the form:
x(T) = ∫_0^T e^{A(T−τ)} Bu(τ) dτ (23)
Expanding the matrix exponential:
x(T) = ∫_0^T ( Σ_{k=0}^∞ A^k (T−τ)^k / k! ) Bu(τ) dτ (24)
= ∫_0^T Σ_{k=0}^∞ ((T−τ)^k / k!) A^k Bu(τ) dτ (25)
= Σ_{k=0}^∞ A^k B ∫_0^T ((T−τ)^k / k!) u(τ) dτ (26)
= Σ_{k=0}^∞ A^k B v_k (27)
where v_k = ∫_0^T ((T−τ)^k / k!) u(τ) dτ ∈ R^m are coefficient vectors. By Corollary 2.7:
Σ_{k=0}^∞ A^k B v_k = Σ_{k=0}^{n−1} A^k B w_k (28)
for some w_k ∈ R^m (collecting terms using the linear dependence of A^n, A^{n+1}, ... on lower
powers). Therefore:
x(T) = [B  AB  ···  A^{n−1}B] [w_0; w_1; ...; w_{n−1}] ∈ range(C) (29)
Step 2: Show range(C) ⊆ R(T).
We need to show every vector in range(C) is reachable. Consider x_d = A^k B c for some
k ∈ {0, 1, ..., n − 1} and c ∈ R^m. We construct a control that achieves this. Consider the input:
u(τ) = δ^{(k)}(T − τ) c (30)
where δ^{(k)} is the k-th distributional derivative of the Dirac delta (in practice, approximated by
smooth functions). Then:
x(T) = ∫_0^T e^{A(T−τ)} B δ^{(k)}(T − τ) c dτ (31)
= (d^k/ds^k) e^{As} Bc |_{s=0} (32)
= (d^k/ds^k) ( Σ_{j=0}^∞ A^j s^j / j! ) Bc |_{s=0} (33)
= A^k B c (34)
For practical (non-distributional) inputs, the same result can be achieved with piecewise polynomial
controls. The key point is that the reachable set includes all linear combinations of the columns of
B, AB, ..., A^{n−1}B.
Therefore R(T) = range(C).


Now we prove the main theorem.
Proof of Theorem 2.5. Necessity (⇒): Assume the system is controllable.
By definition, for any x_f ∈ R^n there exists an input driving x(0) = 0 to x(T) = x_f. This
means x_f ∈ R(T) for any x_f, so R(T) = R^n. By Lemma 2.8, R(T) = range(C). Therefore
range(C) = R^n, which implies rank(C) = n.
Sufficiency (⇐): Assume rank(C) = n.
This means range(C) = R^n (since the range is a subspace of R^n with dimension n). By
Lemma 2.8, R(T) = range(C) = R^n for any T > 0, so any state x ∈ R^n can be reached from
the origin. Now consider arbitrary initial and final states x_0 and x_f.
From the solution formula (4):
x(T) = e^{AT} x_0 + ∫_0^T e^{A(T−τ)} Bu(τ) dτ (35)
We want x(T) = x_f, so we need:
∫_0^T e^{A(T−τ)} Bu(τ) dτ = x_f − e^{AT} x_0 (36)
The right-hand side is some vector in R^n. Since R(T) = R^n, there exists u(·) achieving any
target in R^n, including x_f − e^{AT} x_0. Therefore the system is controllable.

2.4 Geometric Interpretation

Figure 1: The reachable set R(T) = range(C) is a subspace of R^n, splitting the state space into reachable and unreachable directions. Controllability requires R(T) = R^n.

Remark 2.9 (Why Rank n is Necessary and Sufficient).
• Dimension argument: The controllability matrix C has n rows and nm columns. Its range is
a subspace of R^n. The maximum possible rank is n (bounded by the number of rows). To span
all of R^n, we need exactly n linearly independent columns.
• Linear algebra perspective: rank(C) = n means the columns of C span R^n, so any target
state can be written as a linear combination of reachable directions.
• Control perspective: Each column of B represents an "input direction" in state space.
Each column of AB shows where those inputs go after one time step, A²B after two steps, etc.
We need these to span all directions to reach any state.
2.5 Examples and Applications
Example 2.10 (Double Integrator). Consider the system:
q̈ = u (37)
In state-space form with x = [q, q̇]^T:
ẋ = [0 1; 0 0] x + [0; 1] u (38)
The controllability matrix is:
C = [B  AB] = [0 1; 1 0] (39)
Since det(C) = −1 ≠ 0, we have rank(C) = 2, so the system is controllable.
Physical interpretation: We can control position through velocity (the AB column shows
that the input affects velocity immediately, which then integrates to position).
Example 2.11 (Uncontrollable System). Consider:
ẋ = [1 0; 0 2] x + [1; 0] u (40)
The controllability matrix is:
C = [1 1; 0 0] (41)
Since the second row is all zeros, rank(C) = 1 < 2. The system is uncontrollable.
Physical interpretation: The input only affects the first state component. The second
component evolves independently as x_2(t) = e^{2t} x_2(0), which we cannot influence.
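Both examples can be verified in a few lines of NumPy (a sketch; the helper function below is our own, not a library routine):

```python
import numpy as np

def controllability_matrix(A, B):
    """Stack [B, AB, A^2 B, ..., A^{n-1} B] column-wise (Definition 2.3)."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Example 2.10: double integrator -- controllable
A1 = np.array([[0.0, 1.0], [0.0, 0.0]])
B1 = np.array([[0.0], [1.0]])
C1 = controllability_matrix(A1, B1)
rank1 = np.linalg.matrix_rank(C1)

# Example 2.11: decoupled second state -- uncontrollable
A2 = np.array([[1.0, 0.0], [0.0, 2.0]])
B2 = np.array([[1.0], [0.0]])
C2 = controllability_matrix(A2, B2)
rank2 = np.linalg.matrix_rank(C2)

print(rank1, rank2)  # 2 1
```

The rank deficiency in the second case shows exactly which direction (the second state) no input combination can reach.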

3 Lyapunov Stability Theory

3.1 Stability Definitions

Consider the autonomous nonlinear system:
ẋ = f(x), x(0) = x_0 (42)
where f : R^n → R^n is locally Lipschitz, and assume f(x_e) = 0 (i.e., x_e is an equilibrium point).

Definition 3.1 (Stability (Lyapunov Sense)). The equilibrium point x_e is stable if for every ϵ > 0,
there exists δ(ϵ) > 0 such that:
∥x_0 − x_e∥ < δ ⟹ ∥x(t) − x_e∥ < ϵ ∀t ≥ 0 (43)
Remark 3.2 (Intuition). Stability means trajectories starting near the equilibrium stay near it
forever. Small perturbations don't cause the system to drift away. However, trajectories might not
converge to the equilibrium.
Definition 3.3 (Asymptotic Stability). The equilibrium x_e is asymptotically stable if:
1. It is stable (in the Lyapunov sense), and
2. There exists δ > 0 such that:
∥x_0 − x_e∥ < δ ⟹ lim_{t→∞} x(t) = x_e (44)
Remark 3.4 (Convergence). Asymptotic stability requires not only that trajectories stay nearby,
but that they actually converge to the equilibrium. The region where this convergence occurs is
called the region of attraction.
Definition 3.5 (Exponential Stability). The equilibrium x_e is exponentially stable if there exist
constants c, λ > 0 and δ > 0 such that:
∥x_0 − x_e∥ < δ ⟹ ∥x(t) − x_e∥ ≤ c ∥x_0 − x_e∥ e^{−λt} ∀t ≥ 0 (45)
Remark 3.6 (Exponential Rate). Exponential stability provides a quantitative convergence rate
characterized by λ. This is the strongest form of stability and implies asymptotic stability.
3.2 Lyapunov's Direct Method
Definition 3.7 (Positive Definite Function). A continuous function V : R^n → R is:
• Positive definite (PD) on a region D containing x_e if:
V(x_e) = 0 and V(x) > 0 ∀x ∈ D \ {x_e} (46)
• Positive semi-definite if V(x) ≥ 0 for all x ∈ D.
• Radially unbounded if V(x) → ∞ as ∥x∥ → ∞.
Definition 3.8 (Lyapunov Function Candidate). A C¹ function V : R^n → R is a Lyapunov
function candidate if it is positive definite in a neighborhood of the equilibrium x_e.
Definition 3.9 (Lie Derivative). For a C¹ function V(x) and vector field f(x), the Lie derivative
is:
V̇(x) = L_f V = (∂V/∂x) · f(x) = ∇V(x)^T f(x) (47)
This represents the time derivative of V along trajectories of ẋ = f(x).
Justification of Lie Derivative Formula. Along a trajectory x(t) of the system ẋ = f(x):
dV(x(t))/dt = (∂V/∂x) · (dx/dt) (48)
= Σ_{i=1}^n (∂V/∂x_i) ẋ_i (49)
= Σ_{i=1}^n (∂V/∂x_i) f_i(x) (50)
= ∇V^T f(x) (51)
3.3 Lyapunov Stability Theorems


Theorem 3.10 (Lyapunov's Stability Theorem). If there exists a Lyapunov function candidate
V(x) such that:
1. V(x) is positive definite in a region D containing x_e
2. V̇(x) ≤ 0 for all x ∈ D (negative semi-definite)
then x_e is stable.

Proof. We prove stability by showing that sublevel sets of V are forward invariant.
Step 1: Since V is continuous and positive definite, for any ϵ > 0 the sphere {x : ∥x − x_e∥ = ϵ}
is compact. Let:
α = min_{∥x−x_e∥=ϵ} V(x) > 0 (52)
This minimum exists by compactness and is positive by positive definiteness.
Step 2: Consider the sublevel set:
Ω_c = {x : V(x) ≤ c} (53)
for c < α. Since V is continuous and V(x_e) = 0, there exists δ > 0 such that:
∥x − x_e∥ < δ ⟹ V(x) < c (54)
Step 3: If x_0 ∈ Ω_c, then V(x(0)) = V(x_0) ≤ c. Since V̇ ≤ 0 along trajectories:
V(x(t)) ≤ V(x(0)) ≤ c ∀t ≥ 0 (55)
Therefore x(t) remains in Ω_c for all t ≥ 0.
Step 4: A trajectory starting with ∥x_0 − x_e∥ < δ ≤ ϵ cannot reach the sphere ∥x − x_e∥ = ϵ:
on that sphere V ≥ α > c, while V(x(t)) ≤ c for all t. Therefore:
∥x_0 − x_e∥ < δ ⟹ ∥x(t) − x_e∥ < ϵ ∀t ≥ 0 (56)
This proves stability.
Theorem 3.11 (Lyapunov's Asymptotic Stability Theorem). If there exists a Lyapunov function
V(x) such that:
1. V(x) is positive definite in a region D containing x_e
2. V̇(x) is negative definite in D
then x_e is asymptotically stable.
Proof. Step 1: By Theorem 3.10, x_e is stable.
Step 2: We prove convergence. Let x(t) be a solution starting in D. Since V̇ < 0 except at x_e,
V(x(t)) is strictly decreasing along trajectories away from the equilibrium, and it is bounded
below by 0. Therefore V(x(t)) has a limit:
lim_{t→∞} V(x(t)) = V_∞ ≥ 0 (58)
Step 3: We show V_∞ = 0 by contradiction.
Suppose V_∞ > 0. Then the compact set K = {x : V_∞ ≤ V(x) ≤ V(x_0)} ⊂ D contains the
trajectory, and x_e ∉ K (since V(x_e) = 0 < V_∞). On K, V̇ is continuous and strictly negative,
so:
β = max_{x∈K} V̇(x) < 0 (59)
Then:
V(x(t)) = V(x_0) + ∫_0^t V̇(x(τ)) dτ ≤ V(x_0) + βt (60)
For large enough t, V(x(t)) < 0, contradicting positive definiteness.
Therefore V_∞ = 0, which implies x(t) → x_e.

Theorem 3.12 (Exponential Stability via Lyapunov). If there exist a C¹ function V(x) and
positive constants c_1, c_2, c_3 such that in a region D containing x_e:
c_1 ∥x − x_e∥² ≤ V(x) ≤ c_2 ∥x − x_e∥² (61)
V̇(x) ≤ −c_3 V(x) (62)
then x_e is exponentially stable.
Proof. From (62):
V̇(x(t)) ≤ −c_3 V(x(t)) (63)
This is a differential inequality. Integrating:
∫_{V(x_0)}^{V(x(t))} dV/V ≤ −c_3 ∫_0^t dτ (64)
ln( V(x(t)) / V(x_0) ) ≤ −c_3 t (65)
V(x(t)) ≤ V(x_0) e^{−c_3 t} (66)
Using the bounds (61):
c_1 ∥x(t) − x_e∥² ≤ V(x(t)) ≤ V(x_0) e^{−c_3 t} ≤ c_2 ∥x_0 − x_e∥² e^{−c_3 t} (67)
Therefore:
∥x(t) − x_e∥ ≤ √(c_2/c_1) ∥x_0 − x_e∥ e^{−c_3 t / 2} (68)
This is exponential stability with rate λ = c_3/2 and constant c = √(c_2/c_1).

3.4 LaSalle's Invariance Principle


Theorem 3.13 (LaSalle's Invariance Principle). Let Ω ⊂ R^n be a compact positively invariant set.
Let V : Ω → R be continuously differentiable with V̇(x) ≤ 0 on Ω. Define:
E = {x ∈ Ω : V̇ (x) = 0} (69)
M = largest invariant set in E (70)
Then every solution starting in Ω approaches M as t → ∞.
Remark 3.14 (When Negative Definiteness Fails). LaSalle's principle is powerful when V̇ is only
negative semi-definite. If we can show that the set where V̇ = 0 contains no complete trajectories
except the equilibrium, we still get asymptotic stability.
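A quick numerical illustration of the semi-definite case (a sketch with illustrative numbers, not from the original text): for the damped oscillator ẍ + ẋ + x = 0 with V = ½(x² + ẋ²), a direct computation gives V̇ = xẋ + ẋ(−ẋ − x) = −ẋ² ≤ 0, which vanishes on the entire line ẋ = 0. The only complete trajectory inside that line is the origin, so LaSalle's principle yields asymptotic stability, and simulation confirms it:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Damped oscillator in state form: ẋ1 = x2, ẋ2 = -x2 - x1 (equilibrium at 0)
f = lambda t, x: np.array([x[1], -x[1] - x[0]])
V = lambda x: 0.5 * (x[0]**2 + x[1]**2)   # Lyapunov candidate
# V̇ = -x2² is only negative SEMI-definite: it is zero whenever x2 = 0

sol = solve_ivp(f, (0.0, 40.0), [2.0, 0.0], dense_output=True, rtol=1e-9)
ts = np.linspace(0.0, 40.0, 400)
Vs = np.array([V(sol.sol(t)) for t in ts])

# V is non-increasing along the trajectory, and the state still converges to 0
print(np.all(np.diff(Vs) <= 1e-6), np.linalg.norm(sol.sol(40.0)))
```

Note that Theorem 3.11 alone does not apply here (V̇ is not negative definite); the convergence observed numerically is exactly what LaSalle's principle guarantees.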
4 Lyapunov-Based Control Design

4.1 Energy-Based Lyapunov Functions


For mechanical systems, natural Lyapunov function candidates come from energy considerations.
Definition 4.1 (Total Energy). For a mechanical system with generalized coordinates q and
velocities q̇:
T(q, q̇) = ½ q̇^T M(q) q̇ (kinetic energy) (71)
U(q) = potential energy (72)
E = T + U (total energy) (73)
where M(q) > 0 is the inertia matrix (positive definite).
4.2 PD Control with Gravity Compensation
Consider a fully-actuated robot manipulator:
M (q)q̈ + C(q, q̇)q̇ + G(q) = τ (74)
where:
• M(q) ∈ R^{n×n} is the symmetric positive definite inertia matrix
• C(q, q̇)q̇ ∈ R^n represents Coriolis and centrifugal forces
• G(q) ∈ R^n is the gravity vector
• τ ∈ R^n is the control torque
Assumption 4.2 (Properties of Robot Dynamics). The robot dynamics satisfy:
1. M(q) = M(q)^T > 0 (symmetric positive definite)
2. Ṁ(q) − 2C(q, q̇) is skew-symmetric: x^T [Ṁ(q) − 2C(q, q̇)] x = 0 for all x ∈ R^n
3. G(q) = ∂U(q)/∂q for some potential function U(q)
Remark 4.3 (Physical Justification). Property (2) is not arbitrary; it comes from the physics of
Lagrangian mechanics. The skew-symmetry reflects energy conservation: Coriolis forces do no
work.
Proposition 4.4 (PD + Gravity Compensation Control Law). Consider the control law:
τ = −K_p e − K_d ė + G(q) (75)
where:
• e = q − q_d is the tracking error
• K_p, K_d > 0 are diagonal gain matrices (proportional and derivative)
• q_d is the constant desired configuration
Theorem 4.5 (Stability of PD + Gravity Compensation). Under Assumption 4.2, the control law
(75) renders the equilibrium point (e, ė) = (0, 0) locally asymptotically stable if q_d is an isolated
minimum of U(q).
Proof. Step 1: Derive the closed-loop dynamics.
Substituting (75) into (74):
M(q)q̈ + C(q, q̇)q̇ + G(q) = −K_p e − K_d ė + G(q) (76)
M(q)ë + C(q, q̇)ė + K_d ė + K_p e = 0 (77)
(Note: q̈ = ë and q̇ = ė since q_d is constant.)
Step 2: Construct a Lyapunov function candidate.
The gravity term cancels exactly in (77), so the standard energy-based candidate needs no
gravitational potential term:
V(e, ė) = ½ ė^T M(q) ė + ½ e^T K_p e (78)
Analysis of components:
• ½ ė^T M(q) ė: kinetic energy (positive definite since M > 0)
• ½ e^T K_p e: "artificial potential" from proportional control (positive definite since K_p > 0)
Step 3: Verify positive definiteness.
At the equilibrium, V(0, 0) = 0. For (e, ė) ≠ (0, 0): if ė ≠ 0 then ½ ė^T M ė > 0 (since M > 0),
and if e ≠ 0 then ½ e^T K_p e > 0 (since K_p > 0). Therefore V > 0 for (e, ė) ≠ (0, 0).
Step 4: Compute the time derivative of V.
Since e = q − q_d with q_d constant, ė = q̇. Differentiating (78) along trajectories:
V̇ = ½ ė^T Ṁ ė + ė^T M ë + ė^T K_p e (79)
From the closed-loop dynamics (77), solve for M ë:
M(q)ë = −C(q, q̇)ė − K_d ė − K_p e (80)
Substituting:
V̇ = ½ ė^T Ṁ ė + ė^T [−C ė − K_d ė − K_p e] + ė^T K_p e (81)
= ½ ė^T Ṁ ė − ė^T C ė − ė^T K_d ė (82)
Using the skew-symmetry property (Assumption 4.2, item 2):
ė^T [Ṁ − 2C] ė = 0 (83)
Therefore:
½ ė^T Ṁ ė − ė^T C ė = 0 (84)
Thus:
V̇ = −ė^T K_d ė ≤ 0 (85)
Since K_d > 0, V̇ is negative semi-definite.
Step 5: Apply LaSalle's invariance principle.
The set where V̇ = 0 is:
E = {(e, ė) : ė = 0} (86)
On this set, the closed-loop dynamics (77) reduce to:
M(q)ë + K_p e = 0 when ė = 0 (87)
For a trajectory to remain in E, we need ë = 0 as well, which gives:
K_p e = 0 ⟹ e = 0 (since K_p > 0) (88)
Therefore the largest invariant set in E is {(e, ė) = (0, 0)}. By LaSalle's principle, all trajectories
converge to the equilibrium, and the system is locally asymptotically stable.
Remark 4.6 (Why This Proof is Nontrivial). Several subtle points:
1. The skew-symmetry of Ṁ − 2C is crucial; without it, we cannot cancel terms to get V̇ ≤ 0.
2. We use LaSalle's principle because V̇ is only negative semi-definite (not negative definite).
3. Gravity compensation G(q) cancels exactly in the closed loop, but we still need the potential
function U to be locally minimized at q_d for the equilibrium to be stable (not just a saddle
point).
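The proof can be exercised numerically on the simplest case, a single link with M = ml², C = 0, and G(q) = mgl sin q (the parameter values and gains below are illustrative choices, not from the text):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Single-link arm: m l² q̈ + m g l sin(q) = τ  (M = m l², C = 0, G = m g l sin q)
m, l, grav = 1.0, 1.0, 9.81
M = m * l**2
G = lambda q: m * grav * l * np.sin(q)

q_d = np.pi / 4          # constant desired configuration
Kp, Kd = 25.0, 10.0      # PD gains

def closed_loop(t, x):
    q, dq = x
    e, de = q - q_d, dq                 # ė = q̇ since q_d is constant
    tau = -Kp * e - Kd * de + G(q)      # control law (75)
    ddq = (tau - G(q)) / M              # plant: M q̈ + G(q) = τ
    return [dq, ddq]

sol = solve_ivp(closed_loop, (0.0, 10.0), [0.0, 0.0], rtol=1e-9)
q_final, dq_final = sol.y[:, -1]

# Lyapunov function V = ½ M ė² + ½ Kp e² shrinks from V(0) to ≈ 0
V0 = 0.5 * Kp * (0.0 - q_d)**2
Vf = 0.5 * M * dq_final**2 + 0.5 * Kp * (q_final - q_d)**2
print(q_final, Vf < V0)   # q → q_d and V decreases
```

With exact gravity compensation the closed loop reduces to M ë + K_d ė + K_p e = 0 (here ë + 10ė + 25e = 0, critically damped), so the error dies out with no steady-state offset.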

5 Feedback Linearization

5.1 Motivation and Basic Idea


Central Question: Can we transform a nonlinear system into a linear one through change of
coordinates and feedback?
For the nonlinear system:
ẋ = f (x) + g(x)u (98)
The idea is to find:
• A state transformation z = T(x)
• A feedback law u = α(x) + β(x)v

such that in the new coordinates, the system becomes linear:


ż = Az + Bv (99)

5.2 Lie Bracket and Lie Derivatives
Definition 5.1 (Lie Bracket). For vector fields f, g : R^n → R^n, the Lie bracket is:
[f, g] = (∂g/∂x) f − (∂f/∂x) g (100)
where ∂f/∂x ∈ R^{n×n} is the Jacobian.
Definition 5.2 (Lie Derivative of a Function). For a smooth function h : R^n → R and vector
field f : R^n → R^n:
L_f h = (∂h/∂x) · f(x) = ∇h^T f (101)
Higher-order Lie derivatives:
L_f² h = L_f(L_f h) = (∂(L_f h)/∂x) · f(x) (102)
L_f^k h = L_f(L_f^{k−1} h) (103)
Definition 5.3 (Mixed Lie Derivatives).
L_g L_f h = (∂(L_f h)/∂x) · g(x) (104)
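These definitions are mechanical enough to automate. A symbolic sketch with SymPy (the pendulum fields below anticipate Example 5.8; the helper function is our own illustration, not a SymPy routine):

```python
import sympy as sp

x1, x2, grav, l = sp.symbols('x1 x2 g l')
x = sp.Matrix([x1, x2])

f = sp.Matrix([x2, -(grav / l) * sp.sin(x1)])   # drift field
gvec = sp.Matrix([0, 1])                        # input field
h = x1                                          # output

def lie_derivative(phi, field):
    """L_field phi = (∂phi/∂x) · field (Definition 5.2)."""
    return (sp.Matrix([phi]).jacobian(x) * field)[0]

Lf_h = lie_derivative(h, f)            # x2
Lg_h = lie_derivative(h, gvec)         # 0
Lf2_h = lie_derivative(Lf_h, f)        # -(g/l) sin(x1)
LgLf_h = lie_derivative(Lf_h, gvec)    # 1  → input first appears at order 2

# Lie bracket [f, g] = (∂g/∂x) f − (∂f/∂x) g (Definition 5.1)
bracket = gvec.jacobian(x) * f - f.jacobian(x) * gvec
print(Lf_h, Lg_h, Lf2_h, LgLf_h, bracket.T)
```

The computed values L_g h = 0 and L_g L_f h = 1 ≠ 0 are exactly the relative-degree conditions of the next subsection.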
5.3 Relative Degree
Definition 5.4 (Relative Degree). The system
ẋ = f(x) + g(x)u, y = h(x) (105)
has relative degree r at x_0 if:
1. L_g L_f^k h(x) = 0 for all k < r − 1 and all x in a neighborhood of x_0
2. L_g L_f^{r−1} h(x_0) ≠ 0
Remark 5.5 (Intuitive Meaning). The relative degree counts how many times we must differentiate
the output y until the input u appears explicitly. It measures the "delay" between input and output.
5.4 Input-Output Linearization
Theorem 5.6 (Input-Output Linearization). Consider the SISO system:
ẋ = f (x) + g(x)u, y = h(x) (106)
with relative degree r at x0 . Then:
1. The input-output behavior can be linearized by the feedback:
u = (1 / (L_g L_f^{r−1} h)) ( v − L_f^r h(x) ) (107)
2. The resulting input-output dynamics are:
y^{(r)} = v (108)
(a chain of r integrators)

Proof. Differentiate the output repeatedly:
ẏ = (∂h/∂x) ẋ = L_f h + L_g h · u (109)
ÿ = L_f² h + L_g L_f h · u (110)
⋮ (111)
y^{(r)} = L_f^r h + L_g L_f^{r−1} h · u (112)
By definition of relative degree:
• L_g L_f^k h = 0 for k < r − 1, so u does not appear in the derivatives up to order r − 1
• L_g L_f^{r−1} h ≠ 0, so u appears in y^{(r)}
Setting y^{(r)} = v and solving for u:
v = L_f^r h + L_g L_f^{r−1} h · u (113)
u = (1 / (L_g L_f^{r−1} h)) ( v − L_f^r h ) (114)

5.5 Full-State Linearization


Theorem 5.7 (Full-State Feedback Linearization). The system ẋ = f(x) + g(x)u with x ∈ R^n
and scalar input can be transformed into a linear controllable system if and only if:
1. The vector fields g, [f, g], [f, [f, g]], ..., ad_f^{n−1} g (iterated brackets up to order n − 1, where
ad_f g = [f, g]) are linearly independent
2. The distribution spanned by g, [f, g], ..., ad_f^{n−2} g is involutive (integrable)
When these conditions hold, there exists a diffeomorphism z = T(x) and feedback u = α(x) + β(x)v
such that:
ż = Az + Bv (115)
where (A, B) is in Brunovsky canonical form.

5.6 Example: Simple Manipulator


Example 5.8 (Single-Link Manipulator with Feedback Linearization). Consider a single-link robot
arm:
q̈ + (g/l) sin(q) = u (116)
where q is the angle, l is the link length, g is gravity, and u is the normalized torque.
Step 1: State-space form.
Let x = [x_1, x_2]^T = [q, q̇]^T:
ẋ_1 = x_2 (117)
ẋ_2 = −(g/l) sin(x_1) + u (118)
In vector field form:
ẋ = [x_2; −(g/l) sin(x_1)] + [0; 1] u (119)
So f(x) = [x_2, −(g/l) sin(x_1)]^T and g(x) = [0, 1]^T.
Step 2: Check the relative degree.
Take the output y = h(x) = x_1 (we want to control position).
L_f h = (∂h/∂x) f = [1, 0] [x_2; −(g/l) sin(x_1)] = x_2 (120)
L_g h = (∂h/∂x) g = [1, 0] [0; 1] = 0 (121)
Since L_g h = 0, we continue:
L_f² h = (∂(L_f h)/∂x) f = [0, 1] [x_2; −(g/l) sin(x_1)] = −(g/l) sin(x_1) (122)
L_g L_f h = (∂(L_f h)/∂x) g = [0, 1] [0; 1] = 1 ≠ 0 (123)
Therefore the relative degree is r = 2.
Step 3: Design the linearizing feedback.
From the theorem:
u = (1/(L_g L_f h)) ( v − L_f² h ) = v − ( −(g/l) sin(x_1) ) = v + (g/l) sin(x_1) (124)
Step 4: Verify the linearization.
Under this feedback:
q̈ = −(g/l) sin(q) + u = −(g/l) sin(q) + v + (g/l) sin(q) = v (125)
The system is now a double integrator: q̈ = v (perfectly linear!).
Step 5: Design a linear controller.
For trajectory tracking q → q_d(t), use:
v = q̈_d − k_1(q − q_d) − k_2(q̇ − q̇_d) (126)
The tracking error dynamics become:
ë + k_2 ė + k_1 e = 0 (127)
Choosing k_1, k_2 > 0 appropriately gives exponentially stable tracking.
Complete control law:
u = q̈_d − k_1(q − q_d) − k_2(q̇ − q̇_d) + (g/l) sin(q) (128)
This is computed torque control: we compute the torque needed to cancel the nonlinearity (the
(g/l) sin(q) term) and add linear feedback.
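The complete control law can be checked in simulation (an illustrative sketch; the gains, the value of g/l, and the reference trajectory are arbitrary choices). Tracking q_d(t) = sin t, the exact cancellation makes the error obey the linear dynamics ë + k_2 ė + k_1 e = 0, so it decays to zero:

```python
import numpy as np
from scipy.integrate import solve_ivp

grav_l = 9.81          # g/l for the single link
k1, k2 = 25.0, 10.0    # linear gains: ë + k2 ė + k1 e = 0 has roots at -5, -5

qd      = np.sin                      # desired trajectory q_d(t) = sin t
qd_dot  = np.cos
qd_ddot = lambda t: -np.sin(t)

def arm(t, x):
    q, dq = x
    e, de = q - qd(t), dq - qd_dot(t)
    v = qd_ddot(t) - k1 * e - k2 * de
    u = v + grav_l * np.sin(q)              # computed torque control law
    return [dq, -grav_l * np.sin(q) + u]    # plant: q̈ = -(g/l) sin q + u

sol = solve_ivp(arm, (0.0, 10.0), [1.0, 0.0], rtol=1e-9, atol=1e-12)
e_final = sol.y[0, -1] - qd(10.0)
print(e_final)  # tracking error ≈ 0
```

Because the sin-term cancellation is exact, the nonlinear plant tracks the sinusoid with the same transient a linear double integrator would show.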
Remark 5.9 (Generalizations). For multi-link manipulators:
M (q)q̈ + C(q, q̇)q̇ + G(q) = τ (132)
The linearizing control is:
τ = M (q)v + C(q, q̇)q̇ + G(q) (133)
This gives q̈ = v (decoupled double integrators), allowing independent control of each joint.
6 Summary and Connections

6.1 Key Results Summary

Topic | Key Condition | Implication
Controllability | rank[B, AB, ..., A^{n−1}B] = n | Can reach any state
Lyapunov Stability | V > 0, V̇ ≤ 0 | Stable equilibrium
Asymptotic Stability | V > 0, V̇ < 0 | Convergence to equilibrium
Exponential Stability | V̇ ≤ −αV | Exponential convergence rate
Feedback Linearization | Relative degree = n | Can linearize fully

Table 1: Summary of main theoretical results
6.2 Interconnections
1. Controllability → Stabilizability: For LTI systems, controllability implies we can place
poles arbitrarily, enabling exponential stabilization.
2. Lyapunov → Control Design: Energy-based Lyapunov functions guide control law design
(e.g., make V̇ = −αV to ensure exponential stability).
3. Feedback Linearization → Nonlinear Controllability: Linearization transforms nonlin-
ear controllability questions into linear ones.
4. Computed Torque = Feedback Linearization + Lyapunov: Robot control combines
linearization (canceling nonlinearities) with stability analysis (proving convergence).
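Interconnection 1 can be made concrete (a sketch using SciPy's pole-placement routine; the pole locations are arbitrary choices): since the double integrator is controllable, state feedback u = −Kx can place the closed-loop eigenvalues anywhere.

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])

# Controllability: rank [B, AB] = 2, so arbitrary pole placement is possible
Cmat = np.hstack([B, A @ B])
rank = np.linalg.matrix_rank(Cmat)

# Place the eigenvalues of A - BK at -2 and -3
K = place_poles(A, B, [-2.0, -3.0]).gain_matrix
eigs = np.sort(np.linalg.eigvals(A - B @ K).real)

print(rank, eigs)  # 2, [-3. -2.]
```

The resulting closed loop is exponentially stable, illustrating how the rank condition of Section 2 feeds directly into the stabilization problem of Section 3.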
