Control Engineering Notes
FRANCIS MAINA
Contents

4 System Response
  4.1 Common time domain input functions
    4.1.1 The impulse function
    4.1.2 The step function
    4.1.3 The ramp function
    4.1.4 The parabolic function
  4.2 Response of second-order systems
    4.2.1 Standard form
    4.2.2 Roots of the characteristic equation and their relationship to damping in second-order systems

6 Root-Locus Method
  6.1 Root locus construction rules
  6.2 Root locus construction
Course Outline
The outline for this course is as shown in the contents section.
Chapter 2: Modeling in the frequency domain.
Chapter 3: Modeling in the time domain.
Chapter 4: System response.
Chapter 5: Stability of dynamic systems.
Chapter 6: Root-Locus Method.
Chapter 7: Nyquist stability criterion.
References are provided within the course notes and listed in the bibliography section.
Required Tools
During this course, you are expected to have the following tools:
Examination
To complete this course, a student must have sat for all CATs, attempted all Assignments, done all Practicals, attained the Minimum Class Attendance and sat for the Final Exam. If any of these five (5) components is missing, the course is considered Incomplete. The course components weight distribution is as shown below:
Expected Outcomes
At the end of this course, you should be able to:

Control systems find use in a wide array of fields. In the domestic domain, they regulate the temperature and humidity of homes and buildings for comfortable living; in transportation, many of the functions of modern automobiles and airplanes involve control systems.
Deliverable/ Outcomes
• Appreciate the role and importance of control systems in our daily lives.
Chapter 1

Introduction to Control Systems (Definitions, Configurations)
• The objectives

In this case, the objectives can be identified with the inputs, or actuating signals, u, and the results are also called the outputs, or controlled variables, y. In general, the objective of the control system is to control the outputs in some prescribed manner by means of the inputs through the elements of the control system.
Example 1
As a simple example of the control system, consider the steering control of an automo-
bile. The direction of the two front wheels can be regarded as the controlled variable,
or the output, y, and the direction of the steering wheel is the actuating signal, or
the input, u. The control system, or process in this case, is composed of the steering
mechanism and the dynamics of the entire automobile. If the objective is to control
the speed of the automobile, then the amount of pressure exerted on the accelerator
is the actuating signal, and the vehicle speed is the controlled variable.
As a whole, we can regard the simplified automobile control system as one with two
inputs (steering and accelerator) and two outputs (direction of wheel turn and speed).
In this case, the two controls and two outputs are independent of each other, but there are systems for which the controls are coupled. Systems with more than one input and more than one output are regarded as multivariable systems.
Example 2
Consider the idle-speed control of an automobile engine. The objective of such a con-
trol system is to maintain the engine idle speed at a relatively low value (for fuel econ-
omy) regardless of the applied engine loads (e.g., transmission, power steering, air
conditioning). If there is no idle-speed control, sudden engine loads could result in a sudden drop in engine speed and hence stalling. The objective of the idle-speed control is therefore to minimize the speed drop when engine loading is applied and to maintain the idle speed at the desired value. The idle-speed control system can be represented from an inputs-system-outputs standpoint: the inputs in this case are the load torque, TL, and the throttle angle, α; the engine speed, ω, is the output; and the entire engine is the controlled process of the system.
A closed-loop control of the system in Example 2 can be modelled as in the block diagram above. The reference input ωr sets the desired idling speed. The engine speed at idle should agree with the reference value ωr; any difference, such as that caused by the load torque TL, is sensed by the speed transducer and the error detector. The controller then operates on the difference and provides a signal to adjust the throttle angle α to correct the error.
The figure below compares the typical performances of open-loop and closed-loop
idle speed control systems.
Figure 1.6: A typical response of the idle-speed control system (a) open loop (b) Closed
loop.
In (a), the idle speed of the open-loop system will drop and settle at a lower value after a load torque is applied. In (b), the idle speed of the closed-loop system is shown to recover quickly to the preset value after the application of the load torque TL.
What is the essence of feedback?
Feedback exists whenever there is a closed sequence of cause-and-effect relationships.
Feedback is used to reduce the error between the reference input and the system out-
put. The reduction of system error is merely one of the many important effects that feedback may have upon a system; others include effects on stability, bandwidth, overall gain, impedance and sensitivity.
Considering a simple feedback system configuration in the figure below:
r is the input signal, y the output signal, e the error and b the feedback signal. The
parameters G and H represent constant gains. Then the input/output relationship of the system can be represented as:
M = y/r = G/(1 + GH)
This formulation can help uncover the significant effects of the feedback system:
One such effect is on stability. Suppose an additional (outer) feedback loop with gain F is introduced; the input/output relationship then becomes

y/r = G/(1 + GH + GF)
If the outer-loop feedback gain F is properly selected, then the overall system
can be stable.
Feedback also affects external disturbance or noise. In the absence of feedback, the system output due to the noise signal n acting alone is

y = G2 n
With the presence of feedback, the system output due to n acting alone is:
y = [G2/(1 + G1 G2 H)] n

This means that the noise component in the output is reduced by the factor 1 + G1 G2 H compared with the no-feedback case.
• According to the method of analysis and design: linear or nonlinear, and time-
varying or time-invariant.
The principle of superposition states that the response produced by the simultane-
ous application of two different forcing functions is the sum of the two individual
responses. Hence, for the linear system, the response to several inputs can be calcu-
lated by treating one input at a time and adding the results.
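A minimal numerical sketch (not part of the original notes) can make this concrete: for a simple linear first-order difference equation, the response to the sum of two inputs equals the sum of the individual responses. The parameter values and input signals below are arbitrary illustrations.

# Numerical check of superposition for a linear first-order difference equation
# x[k+1] = a*x[k] + b*u[k] with zero initial conditions (illustrative values).
import numpy as np

def simulate(u, a=0.9, b=0.5):
    """Simulate x[k+1] = a*x[k] + b*u[k] with x[0] = 0 and return the state history."""
    x = np.zeros(len(u) + 1)
    for k in range(len(u)):
        x[k + 1] = a * x[k] + b * u[k]
    return x

u1 = np.random.default_rng(0).standard_normal(50)      # first forcing function
u2 = np.sin(0.3 * np.arange(50))                       # second forcing function

response_to_sum = simulate(u1 + u2)                    # response to the combined input
sum_of_responses = simulate(u1) + simulate(u2)         # sum of the individual responses
print(np.allclose(response_to_sum, sum_of_responses))  # True: superposition holds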
When the magnitudes of signals are extended beyond the range of linear operation, then, depending on the severity of the nonlinearity, the system must be treated as nonlinear. For example, amplifiers used in control systems often exhibit saturation when their input signals become large, and the magnetic field of a motor usually has saturation properties. Other common nonlinear effects found in control systems are backlash or dead play between coupled gear members and nonlinear spring characteristics. Nonlinear systems do not obey the principle of superposition.
• A burning rocket is a time varying system because its mass decreases rapidly
with time
• Establish how the overall system subcomponents interact, utilizing block dia-
grams
• Use block diagrams, signal flow graphs, or state diagrams to find the model of
the overall system—transfer function or state space model
• Study the transfer function of the system in the Laplace domain, or the state
space representation of the system.
• Understand the time and frequency response characteristics of the system and
whether it is stable or not.
• Design a controller using time response, root locus technique, frequency re-
sponse, state space approach. . .
1.4 Summary
This chapter has introduced some of the basic concepts of control systems and their uses. The basic components of a control system, its configurations and the effects of feedback were reviewed, and various types of control systems were categorized.
Chapter 2

Modeling in the Frequency Domain (Classical)
The first step in developing a mathematical model is to apply the fundamental physi-
cal laws of science and engineering. For example, when modeling electrical networks,
Ohm’s law and Kirchhoff’s laws, which are basic laws of electric networks, must be
applied initially. From the equations we obtain the relationship between the system’s
output and input.
L[f(t)] = F(s) = ∫₀₋^∞ f(t) e^{−st} dt    (2.1)
where s = σ + jω is a complex variable. Thus, knowing f(t) and that the integral exists, we can find a function, F(s), that is called the Laplace transform of f(t).
Using Eq. (2.1), it is possible to derive a table relating f(t) to F(s) for specific cases. Table 2.1 below shows the results for a representative sample of functions. If we use the table, we do not have to apply Eq. (2.1), which requires complex integration, every time we need to find f(t) given F(s).
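As an aside (a sketch added for illustration), a table entry such as L{t u(t)} = 1/s² can be spot-checked by evaluating the defining integral of Eq. (2.1) numerically; the truncation limit and step size below are arbitrary choices.

# Spot-check the table entry L{t u(t)} = 1/s^2 by evaluating Eq. (2.1) numerically.
import numpy as np

def laplace_numeric(f, s, t_max=200.0, n=200_001):
    """Approximate the integral of f(t)*exp(-s*t) from 0 to t_max with the trapezoidal rule."""
    t = np.linspace(0.0, t_max, n)
    y = f(t) * np.exp(-s * t)
    dt = t[1] - t[0]
    return np.sum(0.5 * (y[:-1] + y[1:]) * dt)

for s in (0.5, 1.0, 2.0):
    approx = laplace_numeric(lambda t: t, s)     # f(t) = t for t >= 0
    exact = 1.0 / s**2                           # table value
    print(f"s = {s}: numeric = {approx:.6f}, 1/s^2 = {exact:.6f}")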
Example. Find the inverse Laplace transform of F1(s) = 1/(s + 3)².

Solution:
The Laplace transform of f(t) = tu(t) is F(s) = 1/s², Item 3 of Table 2.1. If the inverse transform of F(s) = 1/s² is tu(t), the inverse transform of F(s + a) = 1/(s + a)² is e^{−at} t u(t). Hence, f1(t) = e^{−3t} t u(t).
Problem 1. Find the inverse Laplace transform of

F(s) = 2/[(s + 1)(s + 2)]    (2.2)
Solution:
F(s) = 2/[(s + 1)(s + 2)] = K1/(s + 1) + K2/(s + 2)    (2.3)

Multiplying through by (s + 1) gives

2/(s + 2) = K1 + K2(s + 1)/(s + 2)    (2.4)
Letting s → −1 in Eq. (2.4) gives K1 = 2, and a similar step gives K2 = −2. Each component part of the expansion is an F(s) in Table 2.1. Hence, f(t) is the sum of the inverse Laplace transforms of each term:

f(t) = (2e^{−t} − 2e^{−2t})u(t)
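The same cover-up reasoning can be sketched numerically (illustrative code, not from the notes): each residue Ki is found by deleting its own factor from the denominator and evaluating what remains at the corresponding pole.

# Cover-up (Heaviside) method for F(s) = 2 / ((s + 1)(s + 2)): distinct poles only.
import numpy as np

num = np.array([2.0])            # numerator of F(s)
poles = np.array([-1.0, -2.0])   # roots of the denominator (s + 1)(s + 2)

for i, p in enumerate(poles):
    others = np.delete(poles, i)                  # "cover up" the factor containing this pole
    K = np.polyval(num, p) / np.prod(p - others)  # evaluate the remaining expression at s = p
    print(f"residue at s = {p:+.0f}: K = {K:+.0f}")
# Gives K1 = +2 at s = -1 and K2 = -2 at s = -2, i.e. f(t) = 2e^{-t} - 2e^{-2t}.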
Problem 2. Given the following differential equation, solve for y(t) if all initial conditions
are zero. Use the Laplace transform.
d²y/dt² + 12 dy/dt + 32y = 32u(t)    (2.7)
Solution:
Applying the Laplace transform to the differential equation gives the following algebraic equation:
s²Y(s) + 12sY(s) + 32Y(s) = 32/s    (2.8)
Solving for Y (s) yields the following:
Y(s) = 32/[s(s² + 12s + 32)] = 32/[s(s + 4)(s + 8)]    (2.9)
To solve for y(t), we form the partial-fraction expansion of the resultant equation:
Y(s) = 32/[s(s + 4)(s + 8)] = K1/s + K2/(s + 4) + K3/(s + 8)    (2.10)

K1 = 32/[(s + 4)(s + 8)] |_{s→0} = 1    (2.11)

K2 = 32/[s(s + 8)] |_{s→−4} = −2    (2.12)

K3 = 32/[s(s + 4)] |_{s→−8} = 1    (2.13)

Y(s) = 1/s − 2/(s + 4) + 1/(s + 8)    (2.14)
Since each of the three component parts of the above equation is represented as an F(s) in Table 2.1, y(t) is the sum of the inverse Laplace transforms of each term. Hence,

y(t) = (1 − 2e^{−4t} + e^{−8t})u(t)
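As a quick check (an added sketch, not part of the notes), the closed-form solution can be compared against a direct numerical integration of Eq. (2.7) from zero initial conditions; the step size and end time below are arbitrary.

# Compare y(t) = 1 - 2e^{-4t} + e^{-8t} with a direct RK4 integration of
# y'' + 12 y' + 32 y = 32 u(t), zero initial conditions.
import numpy as np

def deriv(state):
    y, ydot = state
    return np.array([ydot, 32.0 - 12.0 * ydot - 32.0 * y])   # from Eq. (2.7)

h, t_end = 1e-4, 2.0
state = np.array([0.0, 0.0])
t = 0.0
while t < t_end:
    k1 = deriv(state)
    k2 = deriv(state + 0.5 * h * k1)
    k3 = deriv(state + 0.5 * h * k2)
    k4 = deriv(state + h * k3)
    state = state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    t += h

y_numeric = state[0]
y_closed_form = 1.0 - 2.0 * np.exp(-4.0 * t) + np.exp(-8.0 * t)
print(abs(y_numeric - y_closed_form))   # agrees to within the integration error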
Problem 3. Find the inverse Laplace transform of
F(s) = 2/[(s + 1)(s + 2)²]    (2.16)
Solution:
The roots of (s + 2)² in the denominator are repeated, since the factor is raised to an integer power higher than 1. In this case, the denominator root at −2 is a multiple root of multiplicity 2.
F(s) = 2/[(s + 1)(s + 2)²] = K1/(s + 1) + K2/(s + 2)² + K3/(s + 2)    (2.17)

Multiplying through by (s + 2)² gives

2/(s + 1) = K1(s + 2)²/(s + 1) + K2 + (s + 2)K3    (2.18)

Differentiating with respect to s gives

−2/(s + 1)² = K1 s(s + 2)/(s + 1)² + K3    (2.19)
K3 is isolated in Eq. (2.19) and can be found by letting s approach −2; hence, K3 = −2. Letting s approach −1 in Eq. (2.18) gives K1 = 2, and letting s approach −2 gives K2 = −2. Each component part is an F(s) in Table 2.1; hence, f(t) is the sum of the inverse Laplace transforms of each term:

f(t) = 2e^{−t} − 2te^{−2t} − 2e^{−2t}
Problem 1. Find the transfer function represented by

dc(t)/dt + 2c(t) = r(t)    (2.21)
Solution:
Taking the Laplace transform of both sides, assuming zero initial conditions, we have
sC(s) + 2C(s) = R(s)    (2.22)

from which the transfer function is

G(s) = C(s)/R(s) = 1/(s + 2)    (2.23)
Problem 2. Find the transfer function relating the capacitor voltage, VC (s), to the input
voltage, V (s) in Figure 2.3.
Solution:
In any problem, the designer must first decide what the input and output should be.
In this network, several variables could have been chosen to be the output—for exam-
ple, the inductor voltage, the capacitor voltage, the resistor voltage, or the current. The
problem statement, however, is clear in this case: We are to treat the capacitor voltage
as the output and the applied voltage as the input. Summing the voltages around the
loop, assuming zero initial conditions, yields the integro-differential equation for this
network as;
L di(t)/dt + R i(t) + (1/C) ∫₀ᵗ i(τ) dτ = v(t)    (2.24)

Changing variables from current to charge using i(t) = dq(t)/dt yields

L d²q(t)/dt² + R dq(t)/dt + (1/C) q(t) = v(t)    (2.25)
Taking the Laplace transform, using q(t) = C vC(t), and solving for the ratio of output to input gives

VC(s)/V(s) = 1/(LCs² + RCs + 1) = (1/LC)/[s² + (R/L)s + 1/LC]    (2.29)
Problem 3. Find the transfer function, Vo (s)/Vi (s), for the circuit given just below:
Solution:
The transfer function of the inverting operational amplifier circuit is given by Vo(s)/Vi(s) = −Z2(s)/Z1(s). Since the admittances of parallel components add, Z1(s) is the reciprocal of the sum of the admittances:
Z1(s) = 1/(C1 s + 1/R1) = 1/(5.6×10⁻⁶ s + 1/(360×10³)) = (360×10³)/(2.016s + 1)    (2.30)

Z2(s) = R2 + 1/(C2 s) = 220×10³ + 10⁷/s    (2.31)
The resulting circuit is called a PID controller and can be used to improve the perfor-
mance of a control system.
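A small numerical aside (added for illustration): the algebraic simplification of Z1(s) in Eq. (2.30) can be confirmed by evaluating both forms at an arbitrary test frequency. The value of C2 used below is inferred from the 10⁷/s term in Eq. (2.31).

# Evaluate both forms of Z1(s) from Eq. (2.30) at an arbitrary test frequency.
s = 2j                                     # test point on the imaginary axis (rad/s)

C1, R1 = 5.6e-6, 360e3                     # component values appearing in Eq. (2.30)
Z1_direct = 1.0 / (C1 * s + 1.0 / R1)
Z1_simplified = 360e3 / (2.016 * s + 1.0)
print(abs(Z1_direct - Z1_simplified))      # ~0: the two forms agree

C2, R2 = 1e-7, 220e3                       # C2 inferred from the 1e7/s term in Eq. (2.31)
Z2 = R2 + 1.0 / (C2 * s)
print(Z2)                                  # equals 220e3 + 1e7/s at this test point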
Mechanical systems parallel electrical networks to such an extent that there are
analogies between electrical and mechanical components and variables. Mechani-
cal systems, like electrical networks, have three passive, linear components. Two of
them, the spring and the mass, are energy-storage elements; one of them, the viscous
damper, dissipates energy. The two energy-storage elements are analogous to the two
electrical energy-storage elements, the inductor and capacitor. The energy dissipator
is analogous to electrical resistance.
Problem 4. Find the transfer function, X(s)/F (s), for the system shown just below:
Solution:
Place on the mass all forces felt by the mass. We assume the mass is traveling
toward the right. Thus, only the applied force points to the right; all other forces
impede the motion and act to oppose it. Hence, the spring, viscous damper, and the
force due to acceleration point to the left. We now write the differential equation of
motion using Newton’s law to sum to zero all of the forces shown on the mass in
Figure just above.
M d²x(t)/dt² + fv dx(t)/dt + Kx(t) = f(t)    (2.33)
Taking the Laplace transform, assuming zero initial conditions:
G(s) = X(s)/F(s) = 1/(Ms² + fv s + K)    (2.36)
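As an illustrative sketch (the parameter values M, fv and K below are assumptions, not taken from the notes), Eq. (2.36) can be checked by simulating the equation of motion for a unit-step force: the response should settle at G(0) = 1/K.

# Unit-step response of M x'' + fv x' + K x = f(t); the steady state should be 1/K.
M, fv, K = 1.0, 2.0, 5.0                   # assumed illustrative parameters
h, t_end = 1e-3, 10.0
x, xdot = 0.0, 0.0
for _ in range(int(t_end / h)):
    xddot = (1.0 - fv * xdot - K * x) / M  # f(t) = 1 for t >= 0 (unit step)
    xdot += h * xddot                      # semi-implicit Euler update
    x += h * xdot
print(x, 1.0 / K)                          # x settles near G(0) = 1/K = 0.2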
Chapter 3

Modeling in the Time Domain (State-Space)
Two approaches are available for the analysis and design of feedback control systems.
The first, which we began to study, is known as the classical, or frequency-domain,
technique. This approach is based on converting a system’s differential equation
to a transfer function, thus generating a mathematical model of the system that al-
gebraically relates a representation of the output to a representation of the input.
Replacing a differential equation with an algebraic equation not only simplifies the
representation of individual subsystems but also simplifies modeling interconnected
subsystems. The primary disadvantage of the classical approach is its limited appli-
cability: It can be applied only to linear, time-invariant systems or systems that can
be approximated as such. A major advantage of frequency-domain techniques is that
they rapidly provide stability and transient response information. Thus, we can im-
mediately see the effects of varying system parameters until an acceptable design is
met.
With the arrival of space exploration, requirements for control systems increased
in scope. Modeling systems by using linear, time-invariant differential equations and
subsequent transfer functions became inadequate. The state-space approach (also re-
ferred to as the modern, or time-domain, approach) is a unified method for modeling,
analyzing, and designing a wide range of systems. For example, the state-space ap-
proach can be used to represent nonlinear systems that have backlash, saturation, and
dead zone. Also, it can handle, conveniently, systems with nonzero initial conditions.
Time-varying systems, (for example, missiles with varying fuel levels or lift in an air-
craft flying through a wide range of altitudes) can be represented in state space. Many
systems do not have just a single input and a single output (SISO). Multiple-input,
multiple-output systems (MIMO, such as a vehicle with input direction and input
velocity yielding an output direction and an output velocity) can be compactly rep-
resented in state space with a model similar in form and complexity to that used for
single-input, single-output systems.
The time-domain approach can be used to represent systems with a digital com-
puter in the loop or to model systems for digital simulation. With a simulated system,
system response can be obtained for changes in system parameters—an important
design tool. The state-space approach is also attractive because of the availability of numerous state-space software packages for the personal computer.
The time-domain approach can also be used for the same class of systems modeled
by the classical approach. This alternate model gives the control systems designer
another perspective from which to create a design. While the state-space approach can
be applied to a wide range of systems, it is not as intuitive as the classical approach.
The designer has to engage in several calculations before the physical interpretation
of the model is apparent, whereas in classical control a few quick calculations or a
graphic presentation of data rapidly yields the physical interpretation.
Here, the coverage of state-space techniques is to be regarded as an introduc-
tion to the subject, a springboard to advanced studies, and an alternate approach to
frequency-domain techniques. Still, we will limit the state-space approach to linear,
time-invariant systems or systems that can be linearized.
As mentioned, the classical control system design techniques are generally only applicable to:

(a) Systems with a single input and a single output (SISO), and

(b) Systems that are linear (or can be linearized) and are time invariant (have parameters that do not vary with time).
dx1/dt = a11 x1 + a12 x2 + ··· + a1n xn + b11 u1 + ··· + b1m um    (3.1)

dx2/dt = a21 x1 + a22 x2 + ··· + a2n xn + b21 u1 + ··· + b2m um    (3.2)

⋮

dxn/dt = an1 x1 + an2 x2 + ··· + ann xn + bn1 u1 + ··· + bnm um    (3.3)
This set of equations may be combined in matrix form, resulting in the state vector differential equation

ẋ = Ax + Bu    (3.4)

The equation just above is generally called the state space equation.
In general, the outputs (y1 , y2 , . . . , yn ) of a linear system can be related to the state
variables and the input variables;
y = Cx + Du (3.9)
The equation above is called the output equation. Together, the two state-space equations are:
ẋ = Ax + Bu (3.10)
y = Cx + Du (3.11)
x = state vector
ẋ = derivative of the state vector with respect to time
y = output vector
u = input or control vector
A = system matrix
B = input matrix
C = output matrix
D = feedforward matrix
Problem 1.
Write down the state equation and output equation for the spring-mass-damper system shown
in the figure just below:
Solution:
x1 = y    (3.12)

x2 = dy/dt = ẋ1    (3.13)
Input variable
u = P (t) (3.14)
Now, applying Newton's second law (F = ma),

ΣFy = m ÿ    (3.15)

Summing the forces acting on the mass and rearranging gives

d²y/dt² = −(K/m)y − (C/m)ẏ + (1/m)P(t)    (3.17)
From the equations above, the set of first-order differential equations are
ẋ1 = x2    (3.18)

ẋ2 = −(K/m)x1 − (C/m)x2 + (1/m)u    (3.19)
and the state equations become
\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -K/m & -C/m \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 0 \\ 1/m \end{bmatrix} u    (3.20)

y = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}    (3.21)
The state variables are not unique and may be selected to suit the problem being stud-
ied.
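For instance (an added sketch with assumed parameter values), the eigenvalues of the state matrix A in Eq. (3.20) coincide with the roots of the characteristic equation m s² + C s + K = 0, which gives an easy numerical cross-check between the state-space and transfer-function views.

# The eigenvalues of A in Eq. (3.20) are the roots of m s^2 + C s + K = 0.
import numpy as np

m, C, K = 2.0, 3.0, 10.0                   # assumed illustrative parameters
A = np.array([[0.0, 1.0],
              [-K / m, -C / m]])
B = np.array([[0.0], [1.0 / m]])
Cmat = np.array([[1.0, 0.0]])              # output matrix: y = x1

print(np.linalg.eigvals(A))                # poles from the state model
print(np.roots([m, C, K]))                 # same roots from the characteristic equation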
Problem 2.
For the RLC network shown just below, write down the state equations when (a) the state variables are x1 = v2(t) and x2 = dv2(t)/dt, and (b) the state variables are x1 = v2(t) and x2 = i(t).
Solution:
Part (a):
State variables
x1 = v2(t)    (3.22)

x2 = dv2(t)/dt = ẋ1    (3.23)

The network is described by

LC d²v2/dt² + RC dv2/dt + v2 = v1(t)    (3.24)
NB:
i(t) = dq(t)/dt    (3.25)
and
ẋ1 = x2 (3.27)
ẋ2 = −(1/LC)x1 − (R/L)x2 + (1/LC)u    (3.28)
Form the state equations
\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -1/LC & -R/L \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 0 \\ 1/LC \end{bmatrix} u    (3.29)
Part (b)
State variables
x1 = v2 (t) (3.30)
x2 = i(t) (3.31)
We know
L di/dt = −v2(t) − Ri(t) + v1(t)    (3.32)

C dv2/dt = i(t)    (3.33)
These two first-order differential equations can be written in the form
ẋ1 = (1/C)x2    (3.34)

ẋ2 = −(1/L)x1 − (R/L)x2 + (1/L)u    (3.35)
Giving the state equations
\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} 0 & 1/C \\ -1/L & -R/L \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 0 \\ 1/L \end{bmatrix} u    (3.36)
Problem 3. Given the electrical network of the figure just below, find a state-space
representation if the output is the current through the resistor.
Solution:
The following steps will yield a viable state-space representation of the network.
Step 1
Label all of the branch currents in the network. These include iL , iR , and iC as shown
in the figure.
Step 2
Select the state variables by writing the derivative equation for all energy-storage ele-
ments, that is, the inductor and the capacitor. Thus,
C dvC/dt = iC    (3.37)

L diL/dt = vL    (3.38)
From the above equations, choose the state variables as the quantities that are differ-
entiated, namely vc and iL .
Since iC and vL are not state variables, our next step is to express iC and vL as linear combinations of the state variables, vC and iL, and the input, v(t).
Step 3
Apply network theory, such as Kirchhoff’s voltage and current laws, to obtain iC and
vL in terms of the state variables, vC and iL . At Node 1,
iC = −iR + iL    (3.39)

   = −(1/R)vC + iL    (3.40)
which yields iC in terms of the state variables, vC and iL. Around the outer loop,

vL = −vC + v(t)    (3.41)

which yields vL in terms of the state variable vC and the source, v(t).
Step 4
Substitute the results of the above equations to obtain the following state equations:
C dvC/dt = −(1/R)vC + iL    (3.42)

L diL/dt = −vC + v(t)    (3.43)

or

dvC/dt = −(1/RC)vC + (1/C)iL    (3.44)

diL/dt = −(1/L)vC + (1/L)v(t)    (3.45)
Step 5
Find the output equation. Since the output is iR(t),

iR = (1/R)vC    (3.46)
The final result for the state-space representation is found by representing the above
equations in vector-matrix form:
\begin{bmatrix} \dot{v}_C \\ \dot{i}_L \end{bmatrix} = \begin{bmatrix} -1/RC & 1/C \\ -1/L & 0 \end{bmatrix} \begin{bmatrix} v_C \\ i_L \end{bmatrix} + \begin{bmatrix} 0 \\ 1/L \end{bmatrix} v(t)    (3.47)

i_R = \begin{bmatrix} 1/R & 0 \end{bmatrix} \begin{bmatrix} v_C \\ i_L \end{bmatrix}    (3.49)
d^n y/dt^n + a_{n−1} d^{n−1}y/dt^{n−1} + ··· + a_1 dy/dt + a_0 y = b_{n−1} d^{n−1}u/dt^{n−1} + ··· + b_1 du/dt + b_0 u    (3.50)
This equation can be represented by the transfer function

Y(s)/U(s) = (b_{n−1} s^{n−1} + ··· + b_1 s + b_0)/(s^n + a_{n−1} s^{n−1} + ··· + a_1 s + a_0)

Define a set of state variables such that:
ẋ1 = x2
ẋ2 = x3
⋮
ẋn = −a_0 x_1 − a_1 x_2 − ··· − a_{n−1} x_n + u    (3.51)
y = b0 x1 + b1 x2 + · · · + bn−1 xn (3.52)
\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \vdots \\ \dot{x}_{n-1} \\ \dot{x}_n \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_0 & -a_1 & -a_2 & \cdots & -a_{n-1} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_{n-1} \\ x_n \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} u    (3.53)
The state-space representation in the equation just above is called controllable canon-
ical form and the output equation is
y = \begin{bmatrix} b_0 & b_1 & b_2 & \cdots & b_{n-1} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{bmatrix}    (3.54)
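A short helper (an illustrative sketch, not part of the notes) that assembles the controllable canonical form of Eqs. (3.53)-(3.54) directly from the a_i and b_i coefficients:

# Assemble the controllable canonical form of Eqs. (3.53)-(3.54) from coefficients.
import numpy as np

def controllable_canonical(a, b):
    """a = [a0, a1, ..., a_{n-1}], b = [b0, b1, ..., b_{n-1}] as in Eq. (3.50)."""
    n = len(a)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)              # ones on the super-diagonal
    A[-1, :] = -np.asarray(a, dtype=float)  # last row: -a0, -a1, ..., -a_{n-1}
    B = np.zeros((n, 1))
    B[-1, 0] = 1.0
    C = np.asarray(b, dtype=float).reshape(1, n)
    return A, B, C

# Example: Y/U = 4/(s^3 + 3s^2 + 6s + 2)  ->  a = [2, 6, 3], b = [4, 0, 0]
A, B, C = controllable_canonical([2, 6, 3], [4, 0, 0])
print(A)    # matches the A matrix of Eq. (3.56)
print(C)    # matches the output row of Eq. (3.57)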
Problem 1.
Find the state and output equation for:
Y/U(s) = 4/(s³ + 3s² + 6s + 2)    (3.55)
Solution:
\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -2 & -6 & -3 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} u    (3.56)
Output equation
y = \begin{bmatrix} 4 & 0 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}    (3.57)
Problem 2.
Find the state equation for:
Y/U(s) = (5s² + 7s + 4)/(s³ + 3s² + 6s + 2)    (3.58)
Solution:
The A and B matrices are the same as in Eq. (3.56); the output equation becomes

y = \begin{bmatrix} 4 & 7 & 5 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}    (3.59)
ẋ = Ax + Bu (3.60)
y = Cx + Du (3.61)
Taking the Laplace transform of both equations with zero initial conditions, solving the state equation for X(s), and substituting into the output equation gives

Y(s) = C(sI − A)^{−1}BU(s) + DU(s) = [C(sI − A)^{−1}B + D]U(s)    (3.66)

The matrix [C(sI − A)^{−1}B + D] is called the transfer function matrix, since it relates the output vector, Y(s), to the input vector, U(s). We can thus find the transfer function:
T(s) = Y(s)/U(s) = C(sI − A)^{−1}B + D    (3.67)
Problem 1.
Given the system defined by the following, find the transfer function, T(s) = Y(s)/U(s), where U(s) is the input and Y(s) is the output.
\dot{x} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -1 & -2 & -3 \end{bmatrix} x + \begin{bmatrix} 10 \\ 0 \\ 0 \end{bmatrix} u    (3.68)
y = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} x    (3.69)
Solution:
The solution revolves around finding the term (sI − A)^{−1}, as illustrated in the above equations. All other terms are already defined. Hence, first find (sI − A):
(sI − A) = \begin{bmatrix} s & 0 & 0 \\ 0 & s & 0 \\ 0 & 0 & s \end{bmatrix} - \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -1 & -2 & -3 \end{bmatrix} = \begin{bmatrix} s & -1 & 0 \\ 0 & s & -1 \\ 1 & 2 & s+3 \end{bmatrix}    (3.70)
(sI − A)^{−1} = \frac{\operatorname{adj}(sI - A)}{\det(sI - A)} = \frac{\begin{bmatrix} s^2+3s+2 & s+3 & 1 \\ -1 & s(s+3) & s \\ -s & -(2s+1) & s^2 \end{bmatrix}}{s^3 + 3s^2 + 2s + 1}    (3.71)
B = \begin{bmatrix} 10 \\ 0 \\ 0 \end{bmatrix}    (3.72)

C = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}    (3.73)

D = 0    (3.74)
T(s) = 10(s² + 3s + 2)/(s³ + 3s² + 2s + 1)    (3.75)
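As a numerical cross-check (added sketch), T(s) = C(sI − A)⁻¹B + D can be evaluated at an arbitrary test point and compared with the polynomial form of Eq. (3.75); the test point below is arbitrary.

# Evaluate T(s) = C (sI - A)^{-1} B + D at a test point and compare with Eq. (3.75).
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-1.0, -2.0, -3.0]])
B = np.array([[10.0], [0.0], [0.0]])
C = np.array([[1.0, 0.0, 0.0]])
D = 0.0

s = 1.5 + 0.7j                                             # arbitrary test point
T_state_space = (C @ np.linalg.inv(s * np.eye(3) - A) @ B)[0, 0] + D
T_polynomial = 10 * (s**2 + 3 * s + 2) / (s**3 + 3 * s**2 + 2 * s + 1)
print(abs(T_state_space - T_polynomial))                   # ~0: the two forms agree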
Chapter 4
System Response
The total response of a system is always the sum of the transient and the steady-state components. In this case, the system is subjected to a ramp function. The difference between the input function xi(t) and the system response xo(t) is called the transient error during the transient period, and the steady-state error during the steady-state period. For a control system, all errors should be minimized.
The Laplace transform of an impulse function is equal to the area of the function. The
impulse function whose area is unity is called a unit impulse δ(t).
a d²xo/dt² + b dxo/dt + c xo = e xi(t)    (4.1)
Taking Laplace transforms with zero initial conditions and rearranging, the transfer function can be written as
G(s) = K/[(1/ωn²)s² + (2ζ/ωn)s + 1]    (4.6)
G(s) = Kωn²/(s² + 2ζωn s + ωn²)    (4.7)
This is the standard form of the transfer function for a second-order system, where K is the steady-state gain, ωn is the undamped natural frequency and ζ is the damping ratio. If Xo(s) ≠ 0, then
as² + bs + c = 0    (4.9)
This polynomial in s is called the Characteristic Equation and its roots will determine
the system transient response. Their values are
s1, s2 = [−b ± √(b² − 4ac)]/(2a)    (4.10)
The term (b2 − 4ac), called the discriminant, may be positive, zero or negative which
will make the roots real and unequal, real and equal or complex. This gives rise to
the three different types of transient response described in the table just above. The
transient response of a second-order system is given by the general solution

xo(t) = A e^{s1 t} + B e^{s2 t}    (4.11)

This gives a step-response function of the form shown in the figure just below, which shows the effect that the roots of the characteristic equation have on the damping of a second-order system.
When the damping coefficient C of a second-order system has its critical value Cc, the system, when disturbed, will reach its steady-state value in the minimum time without overshoot. This occurs when the characteristic equation has equal negative real roots.
Damping ratio ζ

The ratio of the damping coefficient C in a second-order system to the value of the damping coefficient Cc required for critical damping is called the damping ratio ζ (zeta). Hence
ζ = C/Cc    (4.12)
Thus
ζ = 0 No damping
ζ < 1 Underdamping
ζ = 1 Critical damping
ζ > 1 Overdamping
Problem 1. Find the value of the critical damping coefficient Cc in terms of K and m
for the spring-mass-damper system shown just below
Solution:
From Newton's second law,

ΣFx = m ẍo    (4.13)

Writing the equation of motion for the mass and taking Laplace transforms with zero initial conditions, the characteristic equation is

ms² + Cs + K = 0    (4.17)

That is,

s² + (C/m)s + (K/m) = 0    (4.18)
s1, s2 = ½{−C/m ± √[(C/m)² − 4K/m]}    (4.19)
For critical damping, the discriminant is zero, hence the roots become
s1 = s2 = −Cc/(2m)    (4.20)
Also, for critical damping
Cc²/m² = 4K/m    (4.21)

Cc² = 4Km²/m = 4Km    (4.22)
giving
Cc = 2√(Km)    (4.23)
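A brief numerical illustration (with assumed values of K and m, not taken from the notes) of how the roots of ms² + Cs + K = 0 change as C passes through Cc:

# Roots of m s^2 + C s + K = 0 for C below, at and above Cc = 2*sqrt(K*m).
import numpy as np

m, K = 1.0, 4.0                            # assumed illustrative parameters
Cc = 2.0 * np.sqrt(K * m)                  # critical damping coefficient, Eq. (4.23)
for C in (0.5 * Cc, Cc, 2.0 * Cc):
    print(f"C = {C:4.1f} (zeta = {C / Cc:.1f}):", np.roots([m, C, K]))
# zeta < 1: complex conjugate roots (oscillatory response); zeta = 1: equal real
# roots (critical damping); zeta > 1: distinct negative real roots (overdamped).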
(a) Rise time tr : The shortest time to achieve the final or steady-state value, for the
first time. This can be 100% rise time as shown, or the time taken for example
from 10% to 90% of the final value, thus allowing for non-overshoot response.
(b) Overshoot: For a control system an overshoot of between 0 and 10% (0.6 < ζ < 1) is generally acceptable.
(c) Settling time ts: This is the time for the system output to settle down to within a tolerance band about the final value, normally ±2% or ±5%.
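These specifications can be estimated directly from a simulated step response. The sketch below (illustrative only; ωn and ζ are assumed values) computes the 100% rise time, the percentage overshoot and the 2% settling time from a simulated second-order trace.

# Estimate 100% rise time, percentage overshoot and 2% settling time from a
# simulated step response of x'' + 2*zeta*wn*x' + wn^2*x = wn^2 (unit step input).
import numpy as np

wn, zeta = 2.0, 0.5                        # assumed natural frequency and damping ratio
h = 1e-4
t = np.arange(0.0, 10.0, h)
x = np.zeros_like(t)
v = np.zeros_like(t)
for k in range(len(t) - 1):
    a = wn**2 * (1.0 - x[k]) - 2.0 * zeta * wn * v[k]
    v[k + 1] = v[k] + h * a                # semi-implicit Euler integration
    x[k + 1] = x[k] + h * v[k + 1]

final = x[-1]
rise_time = t[np.argmax(x >= final)]                      # first time the final value is reached
overshoot = 100.0 * (x.max() - final) / final             # percentage overshoot
outside = np.where(np.abs(x - final) > 0.02 * final)[0]   # samples outside the +/-2% band
settling_time = t[outside[-1] + 1]
print(rise_time, overshoot, settling_time)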
Chapter 5

Stability of Dynamic Systems

The time response of a control system is generally made up of:

(a) steady-state terms, and

(b) transient terms, which are either exponential or oscillatory with an envelope of exponential form.
If the exponential terms decay as time increases, then the system is said to be stable. If the exponential terms increase with increasing time, the system is considered unstable.
as² + bs + c = 0    (5.1)
The roots of the characteristic equation given in the equation just above can be found
using the following formula:
s1, s2 = [−b ± √(b² − 4ac)]/(2a)    (5.2)
The roots determine the transient response of the system and for a second-order sys-
tem can be written as
(a) Overdamping
s1 = −σ1 (5.3)
s2 = −σ2 (5.4)
(b) Critical damping

s1 = s2 = −σ    (5.5)
(c) Underdamping
s1 , s2 = −σ ± jω (5.6)
If the coefficient b in the characteristic equation were to be negative, then the roots
would be:
s1 , s2 = +σ ± jω (5.7)
In general, a transfer function can be written as a ratio of polynomials,

H(s) = N(s)/D(s)    (5.8)

The roots of the numerator N(s) are the zeros of the system, and the roots of the denominator D(s) are its poles. As we shall see in detail, the poles and zeros of the system affect its response and stability.
In a pole-zero plot, an X marks a pole and an O marks a zero. To show the properties of the poles and zeros, consider a system with transfer function G(s) = (s + 2)/(s + 5) and let us find its unit step response.
C(s) = (s + 2)/[s(s + 5)] = A/s + B/(s + 5) = (2/5)/s + (3/5)/(s + 5)    (5.9)
where
A = (s + 2)/(s + 5) |_{s→0} = 2/5    (5.10)

B = (s + 2)/s |_{s→−5} = 3/5    (5.11)
Thus,
c(t) = 2/5 + (3/5)e^{−5t}    (5.12)
In the corresponding figure, (a) shows the input and output, (b) shows the pole-zero plot of the system, and (c), following the arrows, shows the evolution of the system response.
1. A pole of the input function generates the form of the forced response. That is, the pole at the origin generated a step function at the output.
2. A pole of the transfer function generates the form of the natural response. That is,
the pole at −5 generated e−5t .
3. A pole on the real axis generates an exponential response of the form e−αt , where
−α is the pole location on the real axis. Thus, the farther to the left a pole is on
the negative real axis, the faster the exponential transient response will decay to
zero.
4. The zeros and poles generate the amplitudes for both the forced and natural responses.
EXAMPLE:
Let us take a general case which has two finite poles and no zeros, for example G(s) = b/(s² + as + b). The term in the numerator is simply a scale or input-multiplying factor that can take on any value without affecting the form of the derived results.
By assigning appropriate values to parameters a and b, we can show all possible
second-order transient responses.
The unit step response then can be found using C(s) = R(s)G(s), where R(s) = 1/s,
followed by a partial-fraction expansion and the inverse Laplace transform.
Generally, additional poles in a first-order system delay its response. Left-half-plane zeros of a first-order system speed up the response, while right-half-plane zeros cause the response to initially move in the opposite direction. Additional poles in a second-order system decrease the number of oscillations, while additional zeros in a second-order system increase the number of oscillations.
Problem 1.
Find the range of gain, K, for the system of the figure just below, that will cause the system
to be stable, unstable, and marginally stable. Assume K > 0.
Solution:
T(s) = K/(s³ + 18s² + 77s + K)    (5.13)
Form the Routh table:

s³    1               77
s²    18              K
s¹    (1386 − K)/18   0
s⁰    K
Since K is assumed positive, we see that all elements in the first column are always
positive except the s1 row.
This entry can be positive, zero, or negative, depending upon the value of K.
If K < 1386, all terms in the first column will be positive, and since there are no sign
changes, the system will have three poles in the left half-plane and be stable.
If K > 1386, the s1 term in the first column is negative. There are two sign changes,
indicating that the system has two right-half-plane poles and one left half-plane pole,
which makes the system unstable.
If K = 1386, we have an entire row of zeros, which could signify jω poles.
Returning to the s² row and replacing K with 1386, we form the even polynomial

P(s) = 18s² + 1386    (5.14)

dP(s)/ds = 36s + 0    (5.15)
Replacing the row of zeros with the coefficients of Eq. (5.15), we obtain the Routh-Hurwitz table shown just below for the case of K = 1386:

s³    1      77
s²    18     1386
s¹    36     0
s⁰    1386
Since there are no sign changes from the even polynomial (s2 row) down to the bottom
of the table, the even polynomial has its two roots on the jω-axis of unit multiplicity.
Since there are no sign changes above the even polynomial, the remaining root is in
the left half-plane. Therefore, the system is marginally stable.
Problem 2.
Determine the stability of the closed-loop transfer function
T(s) = 10/(s⁵ + 2s⁴ + 3s³ + 6s² + 5s + 3)    (5.16)
Solution:
First write a polynomial that has the reciprocal roots of the denominator of Eq. (5.16). From our discussion, this polynomial is formed by writing the denominator of Eq. (5.16) in reverse order. Hence,

D(s) = 3s⁵ + 5s⁴ + 6s³ + 3s² + 2s + 1    (5.17)
We form the Routh table as shown just below. Since there are two sign changes, the system is unstable and has two right-half-plane poles. Notice that the table does not have a zero in the first column.

s⁵    3       6      2
s⁴    5       3      1
s³    4.2     1.4    0
s²    1.33    1
s¹    −1.75
s⁰    1
(a) For there to be no roots with positive real parts then there is a necessary, but not
sufficient, condition that all coefficients in the characteristic equation have the
same sign and that none are zero.
If (a) above is satisfied, then the necessary and sufficient condition for stability is
either
(b) All the Hurwitz determinants of the polynomial are positive, or alternatively
(c) All coefficients of the first column of Routh's array have the same sign. The number of sign changes indicates the number of unstable roots.
D_1 = a_1, \qquad D_2 = \begin{vmatrix} a_1 & a_3 \\ a_0 & a_2 \end{vmatrix}    (5.19)

D_3 = \begin{vmatrix} a_1 & a_3 & a_5 \\ a_0 & a_2 & a_4 \\ 0 & a_1 & a_3 \end{vmatrix}, \qquad D_4 = \begin{vmatrix} a_1 & a_3 & a_5 & a_7 \\ a_0 & a_2 & a_4 & a_6 \\ 0 & a_1 & a_3 & a_5 \\ 0 & a_0 & a_2 & a_4 \end{vmatrix}, \ldots    (5.20)
1. Factor out any roots at the origin and, if necessary, multiply by −1 to obtain a polynomial of the form

a_0 s^n + a_1 s^{n−1} + ··· + a_{n−1} s + a_n = 0,  with a_0 > 0    (5.21)

2. If the order of the resulting polynomial is at least two and any coefficient a_i is zero or negative, the polynomial has at least one root with non-negative real part. To obtain the precise number of roots with non-negative real part, proceed as follows. Arrange the coefficients of the polynomial, and the values subsequently calculated from them, as shown below:
s^n        a0   a2   a4   a6   ...
s^(n−1)    a1   a3   a5   a7   ...
s^(n−2)    b1   b2   b3   b4   ...
s^(n−3)    c1   c2   c3   c4   ...
s^(n−4)    d1   d2   d3   d4   ...
⋮
s²         e1   e2
s¹         f1
s⁰         g0                        (5.23)
where

b1 = (a1 a2 − a0 a3)/a1    (5.24)

b2 = (a1 a4 − a0 a5)/a1    (5.25)

b3 = (a1 a6 − a0 a7)/a1    (5.26)

⋮    (5.27)
The b coefficients are generated until all subsequent coefficients are zero. Similarly, cross-multiply the coefficients of the two previous rows to obtain the ci, di, etc.:
c1 = (b1 a3 − a1 b2)/b1    (5.28)

c2 = (b1 a5 − a1 b3)/b1    (5.29)
c3 = (b1 a7 − a1 b4)/b1    (5.30)

⋮    (5.31)

d1 = (c1 b2 − b1 c2)/c1    (5.32)

d2 = (c1 b3 − b1 c3)/c1    (5.33)

⋮    (5.34)
until the nth row of the array has been completed. Missing coefficients are re-
placed by zeros. The resulting array is called the Routh array. The powers of s
are not considered to be part of the array. We can think of them as labels. The
column beginning with a0 is considered to be the first column of the array. The
Routh array is seen to be triangular. It can be shown that multiplying a row by
a positive number to simplify the calculation of the next row does not affect the
outcome of the application of the Routh criterion.
3. Count the number of sign changes in the first column of the array. It can be shown that a necessary and sufficient condition for all roots of the polynomial (5.21) to be located in the left-half plane is that all the a_i be positive and all of the coefficients in the first column be positive.
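The procedure above translates directly into a short routine. The sketch below (illustrative; it does not handle the zero-first-column-element or row-of-zeros special cases) builds the Routh array and counts first-column sign changes; it is applied to the reciprocal-root polynomial of Problem 2, reproducing the two sign changes found there.

# Build a Routh array and count first-column sign changes (number of unstable roots).
# Simplified sketch: it does not handle a zero first-column element or a whole row
# of zeros, the special cases discussed in the notes.
import numpy as np

def routh(coeffs):
    """coeffs = [a0, a1, ..., an] for a0*s^n + a1*s^(n-1) + ... + an."""
    n = len(coeffs) - 1
    cols = n // 2 + 1
    table = np.zeros((n + 1, cols))
    table[0, :len(coeffs[0::2])] = coeffs[0::2]      # row s^n
    table[1, :len(coeffs[1::2])] = coeffs[1::2]      # row s^(n-1)
    for i in range(2, n + 1):
        for j in range(cols - 1):
            table[i, j] = (table[i - 1, 0] * table[i - 2, j + 1]
                           - table[i - 2, 0] * table[i - 1, j + 1]) / table[i - 1, 0]
    return table

first_col = routh([3, 5, 6, 3, 2, 1])[:, 0]          # reciprocal-root polynomial of Problem 2
sign_changes = int(np.sum(np.diff(np.sign(first_col)) != 0))
print(first_col)       # 3, 5, 4.2, 1.33..., -1.75, 1
print(sign_changes)    # 2 sign changes -> two right-half-plane poles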
As an example, consider the second-order polynomial

a0 s² + a1 s + a2 = 0    (5.35)

whose Routh array is

s²    a0    a2
s¹    a1    0
s⁰    a2              (5.36)
where the s⁰ entry is the result of multiplying a1 by a2, subtracting a0(0), and then dividing the result by a1. In the case of a second-order polynomial, we see that Routh's stability criterion reduces to the condition that all a_i be positive.
Similarly, for the third-order polynomial

a0 s³ + a1 s² + a2 s + a3 = 0    (5.37)
the Routh array is

s³    a0    a2
s²    a1    a3
s¹    (a1 a2 − a0 a3)/a1
s⁰    a3                        (5.38)

so that, with all a_i positive, the stability condition reduces to

a1 a2 > a0 a3    (5.39)
Here we illustrate the fact that multiplying a row by a positive constant does not change the result. Consider the polynomial

s⁴ + 2s³ + 3s² + 4s + 5 = 0    (5.40)

One possible Routh array is given at left, and an alternative, in which the s³ row has been divided by 2, is given at right:
s⁴    1    3    5        s⁴    1    3    5
s³    2    4    0        s³    1    2    0
s²    1    5             s²    1    5
s¹    −6                 s¹    −3
s⁰    5                  s⁰    5              (5.41)
Chapter 6

Root-Locus Method
This is a control system design technique developed by W.R. Evans (1948) that deter-
mines the roots of the characteristic equation (closed-loop poles) when the open-loop
gain-constant K is increased from zero to infinity.
The stability of the given closed-loop system depends upon the location of the roots of the characteristic equation, that is, the location of the closed-loop poles. If we change some parameter of the system, the locations of the closed-loop poles change in the s-plane. The study of this locus of moving poles in the s-plane, as a parameter of the system is varied, is very important when designing any closed-loop system.
The locus of the roots, or closed-loop poles, is plotted in the s-plane. This is a complex plane, since s = σ ± jω. It is important to remember that the real part σ is the index in the exponential term of the time response, and if positive it will make the system unstable. Hence, any locus in the right-hand side of the plane represents an unstable system. The imaginary part ω is the frequency of transient oscillation.
When a locus crosses the imaginary axis, σ = 0. This is the condition of marginal stability, i.e. the control system is on the verge of instability, where transient oscillations neither increase nor decay but remain at a constant amplitude.
The design method requires the closed-loop poles to be plotted in the s-plane as K is
varied from zero to infinity, and then a value of K selected to provide the necessary
transient response as required by the performance specification. The loci always com-
mence at open-loop poles (denoted by x) and terminate at open-loop zeros (denoted
by o) when they exist.
1. Starting points (K = 0): The root loci start at the open-loop poles.

2. Termination points (K = ∞): The root loci terminate at the open-loop zeros when they exist, otherwise at infinity.
3. Number of distinct root loci: This is equal to the order of the characteristic equation.
4. Symmetry of root loci: The root loci are symmetrical about the real axis.
5. Root locus asymptotes: For large values of K the root loci are asymptotic to straight lines, with angles given by

θ = (1 + 2k)180°/(n − m)    (6.1)

where k = 0, 1, ..., (n − m − 1), n = number of finite open-loop poles, and m = number of finite open-loop zeros.
6. Asymptote intersection: The asymptotes intersect the real axis at a point given by

σa = (Σ open-loop poles − Σ open-loop zeros)/(n − m)    (6.2)
7. Root locus locations on real axis: A point on the real axis is part of the loci if the sum
of the number of open-loop poles and zeros to the right of the point concerned
is odd.
8. Breakaway points: The points at which a locus breaks away from the real axis can be calculated using one of two methods:

(a) Solving

dK/ds |_{s=σb} = 0    (6.3)

where K has been made the subject of the characteristic equation, i.e. K = ...
(b) Solving the relationship

Σ_{i=1}^{n} 1/(σb + |Pi|) = Σ_{i=1}^{m} 1/(σb + |Zi|)    (6.4)

where |Pi| and |Zi| are the absolute values of the open-loop poles and zeros and σb is the breakaway point.
9. Imaginary axis crossover: The location on the imaginary axis of the loci (marginal stability) can be calculated using either the Routh-Hurwitz stability criterion or by substituting s = jω into the characteristic equation and equating real and imaginary parts to zero.
10. Angles of departure and arrival: Computed using the angle criterion, by positioning
a trial point at a complex open-loop pole (departure) or zero (arrival).
11. Determination of points on the root loci: Exact points on root loci are found using
the angle criterion.
12. Determination of K on root loci: The value of K on root loci is found using the
magnitude criterion.
Problem 1.
For a unity feedback system,
G(s) = K/[s(s + 4)(s + 2)]    (6.5)
Sketch the nature of root locus showing all details on it. Comment on the stability of the
system.
Solution:
Step 1:
Poles: 0, −4, −2; therefore P = 3. Zeros: there are no finite zeros, so Z = 0. All (P − Z = 3) branches therefore terminate at infinity.
Here RL denotes the root-locus existence region and NRL denotes the non-existence region. These sections of the real axis are identified as part of the root locus because the sum of the poles and zeros to the right of each section is odd.
As there is no root locus between −2 and −4, s = −3.15 cannot be a breakaway point. This can also be confirmed by calculating K for s = −3.15: substituting into the expression for K gives K = −3.079, and the negative value confirms that s = −3.15 is not a breakaway point. Since there has to be a breakaway point between 0 and −2, consider s = −0.845: for s = −0.845, K = +3.079. As K is positive, s = −0.845 is a valid breakaway point.
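The breakaway calculation can be reproduced numerically (an added sketch): with K made the subject of the characteristic equation, dK/ds = 0 gives the candidate points, and the sign of K selects the valid one.

# Breakaway points for G(s) = K/[s(s + 2)(s + 4)]: on the real axis K = -s(s + 2)(s + 4),
# and candidate breakaway points satisfy dK/ds = 0.
import numpy as np

denom = np.poly([0.0, -2.0, -4.0])         # s^3 + 6s^2 + 8s
dK_ds = -np.polyder(denom)                 # derivative of K = -denom(s)
candidates = np.roots(dK_ds)
for s in np.real(candidates):              # both candidates are real here
    K = -np.polyval(denom, s)
    print(f"s = {s:.3f}: K = {K:.3f}")     # K > 0 only at s = -0.845 (valid breakaway point)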
The characteristic equation is

s³ + 6s² + 8s + K = 0    (6.6)

Routh's array:

s³    1            8
s²    6            K
s¹    (48 − K)/6   0
s⁰    K

For 0 < K < 48 the system is stable. For K = 48 the s¹ row becomes zero, the loci cross the imaginary axis at s = ±j√8 ≈ ±j2.83, and the system is marginally stable. For K > 48 the system is unstable.
Problem 2.
Sketch the root locus for the system shown in figure just below.
Solution:
Let us begin by calculating the asymptotes. The real axis intercept is evaluated as
σa = [(−1 − 2 − 4) − (−3)]/(4 − 1) = −4/3    (6.7)
From Eq. (6.1), the asymptote angles are θ = 60°, 180° and 300° for k = 0, 1, 2; if the value of k continued to increase, the angles would begin to repeat. The number of lines obtained equals the difference between the number of finite poles and the number of finite zeros. The locus begins at the open-loop poles and ends at the open-loop zeros. For this example there are more open-loop poles than open-loop zeros.
Thus, there must be zeros at infinity. The asymptotes tell us how we get to these zeros
at infinity. Figure just below shows the complete root locus as well as the asymptotes
that were just calculated. The real-axis segments lie to the left of an odd number of
poles and/or zeros. The locus starts at the open-loop poles and ends at the open-loop
zeros. For the example there is only one open-loop finite zero and three infinite zeros.
The three zeros at infinity are at the ends of the asymptotes.
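The asymptote calculations for this example can be reproduced with a few lines (an added sketch; the pole and zero locations are those implied by Eq. (6.7)):

# Asymptote intercept and angles for Problem 2: open-loop poles at 0, -1, -2, -4
# and an open-loop zero at -3, as implied by Eq. (6.7).
import numpy as np

poles = np.array([0.0, -1.0, -2.0, -4.0])
zeros = np.array([-3.0])
n, m = len(poles), len(zeros)

sigma_a = (poles.sum() - zeros.sum()) / (n - m)                  # Eq. (6.2)
angles = [(1 + 2 * k) * 180.0 / (n - m) for k in range(n - m)]   # Eq. (6.1), in degrees
print(sigma_a)     # -4/3
print(angles)      # [60.0, 180.0, 300.0]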
Chapter 7

Nyquist Stability Criterion
Nyquist plots were invented by Harry Nyquist, who worked at Bell Laboratories, the premier technical organization in the U.S. at the time. Nyquist plots are a way of show-
ing frequency responses of linear systems. There are several ways of displaying fre-
quency response data, including Bode plots and Nyquist plots. Bode plots use fre-
quency as the horizontal axis and use two separate plots to display amplitude and
phase of the frequency response. Nyquist plots display both amplitude and phase an-
gle on a single plot, using frequency as a parameter in the plot. Nyquist plots have
properties that allow you to see whether a system is stable or unstable.
Note that where the plot crosses the real axis, Im(G(jω)) = 0.
Problem 1.
Consider a first order system
G(s) = 1/(1 + sT)    (7.1)
where T is the time constant.
Solution:
Representing G(s) in frequency-response form by substituting s = jω:

G(jω) = 1/(1 + jωT)    (7.2)

|G(jω)| = 1/√(1 + ω²T²)    (7.3)

φ = tan⁻¹(−ωT)    (7.4)

At the start of the plot, where ω = 0:

|G(jω)| = 1/√(1 + 0) = 1    (7.5)

φ = tan⁻¹(0/1) = 0°    (7.6)
At the end of the plot, where ω = ∞:

|G(jω)| = 1/√(1 + ∞) = 0    (7.7)

φ = tan⁻¹(−∞/1) = −90°    (7.8)
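These start and end points can be confirmed numerically (an added sketch; the time constant T below is an assumed value):

# Points on the Nyquist plot of G(s) = 1/(1 + sT) for a sample time constant.
import numpy as np

T = 0.5                                    # assumed time constant
w = np.array([0.0, 1.0 / T, 1e6])          # start, corner and a very large frequency
G = 1.0 / (1.0 + 1j * w * T)
for wi, Gi in zip(w, G):
    print(f"w = {wi:10.1f}: |G| = {abs(Gi):.4f}, phase = {np.degrees(np.angle(Gi)):8.2f} deg")
# |G| runs from 1 at w = 0 towards 0 as w grows, with the phase moving from 0 to -90
# degrees: the plot is a semicircle in the lower half of the G(jw)-plane.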
Problem 2.
Consider a second order system
G(s) = 1/[(1 + sT1)(1 + sT2)]    (7.9)
Solution:
where T1 and T2 are the time constants. Representing G(s) in frequency-response form by substituting s = jω:

G(jω) = 1/[(1 + jωT1)(1 + jωT2)]    (7.10)

|G(jω)| = 1/[√(1 + ω²T1²) √(1 + ω²T2²)]    (7.11)

φ = tan⁻¹(−ωT1) + tan⁻¹(−ωT2)    (7.12)

At the start of the plot, where ω = 0:

|G(jω)| = 1/(√(1 + 0) √(1 + 0)) = 1    (7.13)

φ = 0°    (7.14)

At the end of the plot, where ω = ∞:

|G(jω)| = 1/(√∞ √∞) = 0    (7.15)

φ = −180°    (7.16)
Problem 3.
For the unity feedback system, where G(s) = K/[s(s + 3)(s + 5)], find the range of gain, K,
for stability, instability, and the value of gain for marginal stability. For marginal stability also
find the frequency of oscillation. Use the Nyquist criterion.
Solution:
First set K = 1 and sketch the Nyquist diagram for the system, using the contour
shown in figure just below. For all points on the imaginary axis,
G(jω)H(jω) = K/[s(s + 3)(s + 5)] |_{K=1, s=jω}    (7.17)

= [−8ω² − j(15ω − ω³)]/[64ω⁴ + ω²(15 − ω²)²]    (7.18)
Next find the point where the Nyquist diagram intersects the negative real axis. Setting the imaginary part of Eq. (7.18) equal to zero, we find ω = √15. Substituting this value of ω back into Eq. (7.18) yields a real part of −0.0083. Finally, at ω = ∞,

G(jω)H(jω) = G(s)H(s)|_{s→j∞} = 1/(j∞)³ = 0 ∠ −270°    (7.19)
From the contour of figure (a) just above, P = 0; for stability, N must then be equal to zero. From figure (b) just above, the system is stable if the critical point −1 lies outside the contour (N = 0), so that Z = P − N = 0. Thus, K can be increased by 1/0.0083 = 120.5 before the Nyquist diagram encircles −1. Hence, for stability, K < 120.5. For marginal stability, K = 120.5. At this gain the Nyquist diagram intersects −1, and the frequency of oscillation is √15 rad/s.
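The real-axis crossing and the resulting gain limit can be confirmed numerically (an added sketch):

# Real-axis crossing of G(jw)H(jw) = 1/[jw (jw + 3)(jw + 5)] and the resulting limit on K.
import numpy as np

w = np.sqrt(15.0)                          # frequency at which Im(GH) = 0, from Eq. (7.18)
GH = 1.0 / (1j * w * (1j * w + 3.0) * (1j * w + 5.0))
print(GH.real, GH.imag)                    # approx. -0.00833 and 0
K_marginal = -1.0 / GH.real
print(K_marginal)                          # approx. 120 (the notes quote 1/0.0083 = 120.5)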
Bibliography
[1] N.S. Nise. Control Systems Engineering, 6th Edition. Wiley, 2011.
[2] R.S. Burns. Advanced Control Engineering. Chemical, Petrochemical & Process.
Butterworth-Heinemann, 2001.
[3] K. Ogata. Modern Control Engineering. Instrumentation and controls series. Prentice
Hall, 2010.