
Control Engineering 1: Notes

FRANCIS MAINA

Stitched Extracts from:

Control Systems Engineering, Norman S. Nise [1]


Advanced Control Engineering, Roland S. Burns [2]
Modern Control Engineering, Katsuhiko Ogata [3]

Updated: May 14, 2020


Contents

1 Introduction to Control Systems (Definitions, Configurations & System Classifications)
   1.1 General Introduction
      1.1.1 Constituents of Control
      1.1.2 Configurations of Control Systems
   1.2 CLASSIFICATION OF FEEDBACK CONTROL SYSTEMS
   1.3 LINEAR VERSUS NONLINEAR CONTROL SYSTEMS
      1.3.1 Linear Time-Invariant and Time-Varying Systems
      1.3.2 Stochastic and Deterministic Models
   1.4 Summary

2 Modeling in the Frequency Domain (Classical)
   2.1 Laplace Transform
   2.2 Partial-Fraction Expansion
      2.2.1 Case 1. Roots of the Denominator of F(s) Are Real and Distinct
      2.2.2 Case 2. Roots of the Denominator of F(s) Are Real and Repeated
   2.3 The Transfer Function
      2.3.1 Electrical Network Transfer Functions
      2.3.2 Translational Mechanical System Transfer Functions

3 Modeling in the Time Domain (State-Space)
   3.1 The state vector differential equation
      3.1.1 Mechanical System Example
      3.1.2 Electrical System Examples
   3.2 Converting a Transfer Function to State Space
   3.3 Converting from State Space to a Transfer Function

4 System Response
   4.1 Common time domain input functions
      4.1.1 The impulse function
      4.1.2 The step function
      4.1.3 The ramp function
      4.1.4 The parabolic function
   4.2 Response of second-order systems
      4.2.1 Standard form
      4.2.2 Roots of the characteristic equation and their relationship to damping in second-order systems
      4.2.3 Critical damping and damping ratio
      4.2.4 Step response performance specification

5 Stability of Dynamic Systems
   5.1 Stability and roots of the characteristic equation
   5.2 Poles, Zeros, and System Response
      5.2.1 Poles and Zeros of a First-Order System
      5.2.2 Poles and Zeros of a Second-Order System
   5.3 The Routh-Hurwitz stability criterion
      5.3.1 Algorithm for applying Routh's stability criterion

6 Root-Locus Method
   6.1 Root locus construction rules
   6.2 Root locus construction

7 Nyquist Stability Criterion
   7.1 Nyquist Plot
   7.2 Sketching the Polar plot of the Frequency Response
Course Organisation

Course Outline
The outline for this course is as shown in the contents section.
Chapter 2: Modeling in the frequency domain.
Chapter 3: Modeling in the time domain.
Chapter 4: System response.
Chapter 5: Stability of dynamic systems.
Chapter 6: Root-Locus Method.
Chapter 7: Nyquist stability criterion.
References are provided within the course notes and listed in the bibliography section.

Required Tools
For the analysis and design of control systems in this course, you are expected to
have the following tools:

(i) Scientific calculator

(ii) MATLAB® software

Examination
To complete this course, a student must have sat for all CATs, attempted all
Assignments, done all Practicals, attained Minimum Class Attendance, and sat for the
Final Exam. If any of these five (5) components is missing, the course is considered
Incomplete. The course components' weight distribution is as shown below:

Minimum Class Attendance: 75% of Lecture, Tutorial and Prac. Hours


Practicals: 15%
CATs: 10%
Assignments: 05%
Final Exam: 70%
Practical exercises and Assignments will be provided within the course notes. You
will sit for a CAT after every key chapter.


Final exam grading will be as follows (subject to change):

70% and above: A
60% to below 70%: B
50% to below 60%: C
40% to below 50%: D
Below 40%: E (Fail)
Missing component: Incomplete

Expected Outcomes
At the end of this course, you should be able to:

• Model a given system using both classical and state-space methods.

• Solve problems concerning system response.

• Analyse system stability using the Routh-Hurwitz criterion, root locus, Bode
diagrams, and the Nyquist criterion.

• Solve control problems using MATLAB® software.


Chapter 1

Introduction to Control Systems


(Definitions, Configurations &
System Classifications)

The objectives of this chapter are:

• To define a control system.

• To explain why control systems are important.

• To introduce the basic components of a control system.

• To give some examples of control-system applications.

• To explain why feedback is incorporated into most control systems.

• To introduce types of control systems.

Control systems find use in an array of fields. In the domestic domain, we regulate
the temperature and humidity of homes and buildings for comfortable living; in
transportation, many functionalities of modern automobiles and airplanes involve
control systems.

Deliverable/ Outcomes

• Appreciate the role and importance of control systems in our daily lives.

• Understand the basic components of a control system.

• Understand the difference between open-loop and closed-loop systems, and the
role of feedback in a closed-loop control system.

• Gain a practical sense of real life control problems.


1.1 General Introduction


Control systems are found in abundance in all sectors of industry, such as quality
control of manufactured products, automatic assembly lines, machine tool control,
space technology, computer control, transportation systems, power systems, robotics,
micro-electromechanical systems (MEMS), nanotechnology, and many others. Even
the control of inventory and social and economic systems may be approached from
the viewpoint of control system theory. More specifically, applications of control
systems benefit many areas, including:
Process control: Enables automation and mass production in industrial settings.
Machine tools: Improve precision and increase productivity.
Robotic systems: Enable motion and speed control.
Transportation systems: Various functionalities of modern automobiles and
airplanes involve control systems.
MEMS: Enable the manufacturing of very small electromechanical devices such as
microsensors and microactuators.
Lab-on-a-chip: Enables several laboratory tasks on a single chip of only millimeters
to a few square centimeters in size, for medical diagnostics or environmental
monitoring.
Biomechanical and biomedical: Artificial muscles, drug delivery systems, and
other assistive technologies.

1.1.1 Constituents of Control


Basic components of a control system can be described by:

• The objective

• The control-system components and;

• The results or output

Figure 1.1: Basic Components of a Control System

In this case, the objectives can be identified with the inputs, or actuating signals, u,
and the results are also called the outputs, or controlled variables, y. In general, the
objective of the control system is to control the outputs in some prescribed manner
by the inputs, through the elements of the control system.
Example 1
As a simple example of a control system, consider the steering control of an
automobile. The direction of the two front wheels can be regarded as the controlled
variable, or the output, y, and the direction of the steering wheel is the actuating
signal, or the input, u. The control system, or process in this case, is composed of
the steering mechanism and the dynamics of the entire automobile. If the objective
is to control the speed of the automobile, then the amount of pressure exerted on
the accelerator is the actuating signal, and the vehicle speed is the controlled
variable.
As a whole, we can regard the simplified automobile control system as one with two
inputs (steering and accelerator) and two outputs (direction of wheel turn and speed).
In this case, the two controls and two outputs are independent of each other, but there
are systems for which the controls are coupled. Systems with more than one input
and one output are regarded as multivariable systems.

Example 2
Consider the idle-speed control of an automobile engine. The objective of such a
control system is to maintain the engine idle speed at a relatively low value (for fuel
economy) regardless of the applied engine loads (e.g., transmission, power steering,
air conditioning). Without idle-speed control, sudden engine loads could cause a
sudden drop in engine speed and hence stalling. The objective of the idle-speed
control is therefore to minimize the speed drop when engine loading is applied and
to maintain the idle speed at the predefined threshold (desired value). Figure 1.3
represents the idle-speed control system from an Inputs-System-Outputs standpoint.
The inputs in this case are the load torque, TL, and the throttle angle, α, and the
engine speed, ω, is the output. The entire engine is the controlled process of the
system.

Figure 1.2: Mechanical Idle Speed Control Model

Figure 1.3: Idle Speed Control System



1.1.2 Configurations of Control Systems


There are two main configurations of control systems:
a) Open-Loop Control Systems (Nonfeedback systems)
This refers to a system that does not have any feedback to the controller. Example 2
above is an open-loop control system: if the throttle angle is set at a certain initial
value that corresponds to a certain engine speed, then when a load torque TL is
applied, there is no way to prevent a drop in the engine speed. The only way to make
the system work is to have a means of adjusting the throttle angle in response to a
change in the load torque, in order to maintain the speed at the desired level.
Elements of a Nonfeedback system can be divided into:

• the controller and;

• the control process

Figure 1.4: Elements of an Open-Loop Control System

b) Closed-Loop Control Systems (Feedback Control System)


To obtain more accurate control of the open loop control system above, the controlled
signal y should be fed back and compared with the reference input, and an actuating
signal proportional to the difference of the input and the output must be sent through
the system to correct the error. Such a system is called a closed-loop system.

Figure 1.5: Block diagram of a closed-loop idle-speed control system

A closed-loop control of the system in Example 2 can be modelled as in the block
diagram above. The reference input ωr sets the desired idling speed. The engine
speed at idle should agree with the reference value ωr; any difference, such as that
caused by the load torque TL, is sensed by the speed transducer and the error
detector. The controller operates on the difference and provides a signal to adjust
the throttle angle α to correct the error.

The figure below compares the typical performances of open-loop and closed-loop
idle speed control systems.

Figure 1.6: A typical response of the idle-speed control system (a) open loop (b) Closed
loop.

In (a), the idle speed of the open-loop system drops and settles at a lower value
after a load torque is applied. In (b), the idle speed of the closed-loop system is shown
to recover quickly to the preset value after the application of TL.
What is the essence of feedback?
Feedback exists whenever there is a closed sequence of cause-and-effect relationships.
Feedback is used to reduce the error between the reference input and the system
output. The reduction of system error is merely one of the many important effects
that feedback may have upon a system; other effects include those on stability,
bandwidth, overall gain, impedance, and sensitivity.
Consider the simple feedback system configuration in the figure below:

Figure 1.7: Feedback System

r is the input signal, y the output signal, e the error, and b the feedback signal. The
parameters G and H represent constant gains. The input/output relationship of the
system can then be written as:

M = y/r = G/(1 + GH)

This formulation can help uncover the significant effects of the feedback system:

• Feedback effect on Overall Gain


From the equation above, it can be deduced that feedback affects the gain G of a
nonfeedback system by a factor of 1 + GH. The system is said to have a negative
feedback since the minus sign is assigned to the feedback signal. The quantity
GH may itself include a minus sign, so the general effect of feedback is that it
may increase or decrease the gain G.

• Effect of Feedback on Stability


Stability describes whether a system can follow the input command. Referring to
the equation above, if GH = −1, the output would be infinite for a finite input;
such a system is unstable. This means that feedback can cause a system that was
originally stable to become unstable. Assume the system in the figure above is
unstable because GH = −1, and let us introduce another feedback loop with a
negative feedback gain F such that:

Figure 1.8: Feedback system with two feedback loops

y/r = G/(1 + GH + GF)

If the outer-loop feedback gain F is properly selected, then the overall system
can be stable.

• Effect of Feedback on External Disturbance or Noise


All physical systems are subject to some types of extraneous signals or noise during
operation, such as thermal-noise voltage in electronic circuits or wind effects on
antennas. A control system should be designed so that it is insensitive to noise and
disturbances but sensitive to input commands. The effect of feedback on noise
depends on where the noise occurs, but feedback should generally reduce the effect
of disturbances and noise in a system. Let us consider a system with a disturbance
n, as shown in the diagram below.
In the absence of feedback, that is, H = 0, the output y due to n acting alone is:

y = G2 n

Figure 1.9: Feedback system with a noise signal/ disturbance, n


r denotes the command signal

With the presence of feedback, the system output due to n acting alone is:

y = G2 n/(1 + G1 G2 H)

This means that the noise component is reduced by the factor 1 + G1 G2 H.
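These two feedback effects can be checked numerically. The sketch below is an illustrative plain-Python check (the gain values G, H, G1, G2 are chosen here for demonstration, not taken from the notes) of the closed-loop gain G/(1 + GH) and of the disturbance transmission G2/(1 + G1 G2 H):

```python
# Numerical check of the feedback relationships above.
# The gain values G, H, G1, G2 are illustrative choices.

def closed_loop_gain(G, H):
    """M = y/r = G / (1 + G*H) for the single-loop system."""
    return G / (1 + G * H)

def disturbance_gain(G1, G2, H):
    """y/n = G2 / (1 + G1*G2*H) for noise injected between G1 and G2."""
    return G2 / (1 + G1 * G2 * H)

G, H = 100.0, 0.1
M = closed_loop_gain(G, H)             # 100/11: gain reduced by 1 + GH
print(M)

# Without feedback (H = 0) the disturbance passes through with gain G2;
# with feedback it is attenuated by the factor 1 + G1*G2*H.
G1, G2 = 50.0, 2.0
print(disturbance_gain(G1, G2, 0.0))   # open loop: 2.0
print(disturbance_gain(G1, G2, 0.1))   # closed loop: 2/11
```

Note that with GH positive, the closed-loop gain is always smaller than G; if GH carried a minus sign, the same formula would show the gain increasing instead.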

1.2 CLASSIFICATION OF FEEDBACK CONTROL SYSTEMS

Feedback control systems can be classified into three main categories:

• According to the method of analysis and design: linear or nonlinear, and
time-varying or time-invariant.

• According to the types of signals found in the system: continuous-data or
discrete-data systems, and modulated or unmodulated systems.

• According to the main purpose of the system: a position-control system and a
velocity-control system control the output variables just as the names imply.

1.3 LINEAR VERSUS NONLINEAR CONTROL SYSTEMS

When the magnitudes of signals in a control system are limited to ranges in which
system components exhibit linear characteristics (i.e., the principle of superposition
applies), the system is essentially linear. In a linear system, the response is
proportional to the input (homogeneity principle): if for a given input, u(t), the
response is y(t), then for an input αu(t) the response is αy(t). The effects of various
inputs are additive (additivity principle): if for an input, u1(t), the response is y1(t),
and for an input, u2(t), the response is y2(t), then for an input u(t) = u1(t) + u2(t)
the response is y(t) = y1(t) + y2(t).

The principle of superposition states that the response produced by the simultane-
ous application of two different forcing functions is the sum of the two individual
responses. Hence, for the linear system, the response to several inputs can be calcu-
lated by treating one input at a time and adding the results.
When the magnitudes of signals are extended beyond the range of linear operation,
the system becomes nonlinear, depending on the severity of the nonlinearity. For
example, amplifiers used in control systems often exhibit a saturation effect when
their input signals become large, and the magnetic field of a motor usually has
saturation properties. Other common nonlinear effects found in control systems are
the backlash or dead play between coupled gear members and nonlinear spring
characteristics. Nonlinear systems do not obey the principle of superposition.

1.3.1 Linear Time-Invariant and Time-Varying Systems

Parameters and functions appearing in the process model may or may not change
with time while the control system is operating.
If the parameters are unaffected by time, then the system is called a Time-Invariant
Control System. In other words, if the initial state and input are the same regardless
of the time at which they are applied, the output response will always be the same.
Therefore, for time-invariant systems, we can always assume, without loss of
generality, that t0 = 0.
Most physical systems have parameters that change with time. If this variation is
measurable during system operation, then the system is called a Time-Varying
System. If there is no nonlinearity in a time-varying system, then it may be called a
Linear Time-Varying Control System.
Examples

• A circuit network with constant R, C, and L components is a time-invariant
system.

• A burning rocket is a time-varying system because its mass decreases rapidly
with time.

1.3.2 Stochastic and Deterministic Models


Stochastic models are models with erratic or random signals (noise) whose
waveforms or functions are unknown. It frequently happens that the model of the
process generating the external signals is very complex or unknown; in this case, it
is easier to characterize the signals by their stochastic properties. If we consider the
positioning of an antenna, the force of the wind presents such a characteristic.
In order to characterize stochastic signals, some basic statistical concepts and
measures of random variables should be remembered. These include distribution
and density functions, mean, variance, covariance, correlation, statistical
independence, and linear regression, among others.


Deterministic models are models characterized by slowly varying signals with
known waveforms or functions, such as the pulse, impulse, step, ramp, parabolic,
or sinusoidal functions.
Controller Design Procedure

• Outline the objectives of the control system.

• Specify the requirements, design criteria, and constraints.

• Develop a mathematical model of the system, including the mechanical and
electrical elements, sensors, actuators ...

• Establish how the overall system subcomponents interact, utilizing block dia-
grams

• Use block diagrams, signal flow graphs, or state diagrams to find the model of
the overall system—transfer function or state space model

• Study the transfer function of the system in the Laplace domain, or the state
space representation of the system.

• Understand the time and frequency response characteristics of the system and
whether it is stable or not.

• Design a controller using time response, root locus technique, frequency re-
sponse, state space approach. . .

• Optimize the controller if necessary

• Implement the design on the experimental/practical system.

1.4 Summary
This chapter has introduced some of the basic concepts of control systems and their
uses. The basic components of a control system were reviewed, along with system
configurations and the effects of feedback. Various types of control systems were
categorized.
Chapter 2

Modeling in the Frequency Domain


(Classical)

The first step in developing a mathematical model is to apply the fundamental physi-
cal laws of science and engineering. For example, when modeling electrical networks,
Ohm’s law and Kirchhoff’s laws, which are basic laws of electric networks, must be
applied initially. From the equations we obtain the relationship between the system’s
output and input.

2.1 Laplace Transform


A system represented by a differential equation is difficult to model as a block
diagram. Thus, the preferred method is the Laplace transform, with which the
input, output, and system can be represented as separate entities whose
interrelationships are simple algebraic expressions.

L[f(t)] = F(s) = ∫_{0−}^{∞} f(t) e^{−st} dt    (2.1)

where s = σ + jω is a complex variable. Thus, knowing f(t) and that the integral
exists, we can find a function, F(s), that is called the Laplace transform of f(t).

Using the defining equation, it is possible to derive a table relating f(t) to F(s) for
specific cases. Table 2.1 shows the results for a representative sample of functions.
If we use the table, we do not have to perform the complex integration of the
defining equation every time we need to find f(t) given F(s).
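Equation (2.1) can also be checked numerically for a specific f(t). The sketch below (illustrative, standard library only) approximates the integral with a midpoint sum for f(t) = e^{−3t} and compares the result with the table entry L[e^{−at}] = 1/(s + a):

```python
import math

def laplace_numeric(f, s, T=20.0, n=200_000):
    """Midpoint-rule approximation of F(s) = integral_0^T f(t) e^(-s*t) dt.
    T is chosen large enough that the truncated tail is negligible."""
    dt = T / n
    return sum(f((k + 0.5) * dt) * math.exp(-s * (k + 0.5) * dt) * dt
               for k in range(n))

f = lambda t: math.exp(-3.0 * t)   # table entry: L[e^(-a t)] = 1/(s + a)
s = 2.0
approx = laplace_numeric(f, s)
exact = 1.0 / (s + 3.0)            # = 0.2
print(approx, exact)               # the two agree to several decimal places
```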


Inverse Laplace Transform

Problem 1. Find the inverse Laplace transform of F1(s) = 1/(s + 3)².

Solution:
From Item 3 of Table 2.1, the Laplace transform of f(t) = tu(t) is 1/s². Since the
inverse transform of F(s) = 1/s² is tu(t), the inverse transform of F(s + a) =
1/(s + a)² is e^{−at} t u(t). Hence, f1(t) = e^{−3t} t u(t).

2.2 Partial-Fraction Expansion


To find the inverse Laplace transform of a complicated function, we can convert the
function to a sum of simpler terms for which we know the Laplace transform of each
term. The result is called a partial-fraction expansion.

2.2.1 Case 1. Roots of the Denominator of F(s) Are Real and Distinct

An example of an F(s) with real and distinct roots in the denominator is:

Problem 1. Find the partial-fraction expansion of:

F(s) = 2/[(s + 1)(s + 2)]    (2.2)

Solution:

F(s) = 2/[(s + 1)(s + 2)] = K1/(s + 1) + K2/(s + 2)    (2.3)

To find K1, we first multiply everywhere by (s + 1), which isolates K1. Thus,

2/(s + 2) = K1 + K2(s + 1)/(s + 2)    (2.4)

Letting s approach −1 eliminates the last term and yields K1 = 2. Similarly, K2
can be found by multiplying everywhere by (s + 2) and then letting s approach −2;
hence, K2 = −2.

F(s) = 2/[(s + 1)(s + 2)] = 2/(s + 1) − 2/(s + 2)    (2.5)

Each component part of the equation is an F(s) in Table 2.1. Hence, f(t) is the sum
of the inverse Laplace transforms of each term:

f(t) = (2e^{−t} − 2e^{−2t}) u(t)    (2.6)
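The cover-up steps above translate directly into code. This is an illustrative sketch (not part of the notes): each residue is the value of F(s) with its own factor removed, and the resulting expansion is spot-checked at an arbitrary value of s.

```python
# Residues of F(s) = 2 / ((s + 1)(s + 2)) by the cover-up method.

def K1(s):            # (s + 1) * F(s) = 2/(s + 2), evaluated at s = -1
    return 2.0 / (s + 2.0)

def K2(s):            # (s + 2) * F(s) = 2/(s + 1), evaluated at s = -2
    return 2.0 / (s + 1.0)

k1, k2 = K1(-1.0), K2(-2.0)
print(k1, k2)                      # 2.0 -2.0

# Spot-check the expansion F(s) = K1/(s+1) + K2/(s+2) at an arbitrary point.
s = 0.5
lhs = 2.0 / ((s + 1.0) * (s + 2.0))
rhs = k1 / (s + 1.0) + k2 / (s + 2.0)
print(abs(lhs - rhs) < 1e-12)      # True
```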

Problem 2. Given the following differential equation, solve for y(t) if all initial conditions
are zero. Use the Laplace transform.

d²y/dt² + 12 dy/dt + 32y = 32u(t)    (2.7)
Solution:

Applying the Laplace transform to the differential equation gives:

s²Y(s) + 12sY(s) + 32Y(s) = 32/s    (2.8)
Solving for Y (s) yields the following:

Y(s) = 32/[s(s² + 12s + 32)] = 32/[s(s + 4)(s + 8)]    (2.9)

To solve for y(t), we form the partial-fraction expansion of the resultant equation:

Y(s) = 32/[s(s + 4)(s + 8)] = K1/s + K2/(s + 4) + K3/(s + 8)    (2.10)

K1 = 32/[(s + 4)(s + 8)] |_{s→0} = 1    (2.11)

K2 = 32/[s(s + 8)] |_{s→−4} = −2    (2.12)

K3 = 32/[s(s + 4)] |_{s→−8} = 1    (2.13)

Y(s) = 1/s − 2/(s + 4) + 1/(s + 8)    (2.14)

Since each of the three component parts of the above equation is represented as an
F (s) in Table 2.1, y(t) is the sum of the inverse Laplace transforms of each term. Hence,

y(t) = (1 − 2e^{−4t} + e^{−8t}) u(t)    (2.15)
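The result can be verified by substituting y(t) back into the differential equation. The sketch below (illustrative) codes y, y′, and y″ analytically for t > 0, checks that y″ + 12y′ + 32y − 32 vanishes, and confirms the zero initial conditions:

```python
import math

# y(t) = 1 - 2 e^(-4t) + e^(-8t) for t > 0, from the partial-fraction result.
def y(t):   return 1.0 - 2.0 * math.exp(-4.0 * t) + math.exp(-8.0 * t)
def dy(t):  return 8.0 * math.exp(-4.0 * t) - 8.0 * math.exp(-8.0 * t)
def d2y(t): return -32.0 * math.exp(-4.0 * t) + 64.0 * math.exp(-8.0 * t)

# The residual of y'' + 12 y' + 32 y = 32 should be ~0 at every t > 0.
residual = max(abs(d2y(t) + 12.0 * dy(t) + 32.0 * y(t) - 32.0)
               for t in [0.01, 0.1, 0.5, 1.0, 2.0])
print(residual)          # effectively zero

# Zero initial conditions: y(0) = 0 and y'(0) = 0.
print(y(0.0), dy(0.0))   # 0.0 0.0
```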

2.2.2 Case 2. Roots of the Denominator of F(s) Are Real and Repeated

An example of an F(s) with real and repeated roots in the denominator is:

Problem 3.

F(s) = 2/[(s + 1)(s + 2)²]    (2.16)

Solution:

The roots of (s + 2)² in the denominator are repeated, since the factor is raised to
an integer power higher than 1. In this case, the denominator root at −2 is a
multiple root of multiplicity 2. We can write the partial-fraction expansion as a sum
of terms, where each factor of the denominator forms the denominator of each
term. In addition, each multiple root generates additional terms consisting of
denominator factors of reduced multiplicity.

F(s) = 2/[(s + 1)(s + 2)²] = K1/(s + 1) + K2/(s + 2)² + K3/(s + 2)    (2.17)

Then K1 = 2, which can be found as previously described. K2 can be isolated by
multiplying the equation by (s + 2)², yielding:

2/(s + 1) = K1(s + 2)²/(s + 1) + K2 + K3(s + 2)    (2.18)

Letting s approach −2 gives K2 = −2. To find K3, we differentiate the equation
above with respect to s:

−2/(s + 1)² = K1 s(s + 2)/(s + 1)² + K3    (2.19)

K3 is isolated and can be found by letting s approach −2. Hence, K3 = −2. Each
component part is an F(s) in Table 2.1; hence, f(t) is the sum of the inverse Laplace
transforms of each term:

f(t) = 2e^{−t} − 2te^{−2t} − 2e^{−2t}    (2.20)

If the denominator root is of higher multiplicity than 2, successive differentiation
would isolate each residue in the expansion of the multiple root.
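The repeated-root residues can likewise be computed and checked in code. The derivative needed for K3 is approximated below with a central difference (an illustrative sketch, not part of the notes):

```python
# Residues of F(s) = 2 / ((s + 1)(s + 2)^2).

def G(s):
    """(s + 2)^2 * F(s) = 2 / (s + 1), which is regular at s = -2."""
    return 2.0 / (s + 1.0)

K1 = 2.0 / (-1.0 + 2.0) ** 2        # cover-up at s = -1: 2/(s + 2)^2 -> 2
K2 = G(-2.0)                        # G at the repeated root: -2
h = 1e-6
K3 = (G(-2.0 + h) - G(-2.0 - h)) / (2.0 * h)   # dG/ds at s = -2: -2

print(K1, K2, round(K3, 6))

# Spot-check F(s) = K1/(s+1) + K2/(s+2)^2 + K3/(s+2) at an arbitrary s.
s = 1.0
lhs = 2.0 / ((s + 1.0) * (s + 2.0) ** 2)
rhs = K1 / (s + 1.0) + K2 / (s + 2.0) ** 2 + K3 / (s + 2.0)
print(abs(lhs - rhs) < 1e-5)        # True (limited by the finite difference)
```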

2.3 The Transfer Function


With the above background, we are now ready to formulate the system
representation. This will also allow us to algebraically combine the mathematical
representations of subsystems to yield a total system representation.

Problem 1. Find the transfer function represented by

dc(t)/dt + 2c(t) = r(t)    (2.21)
Solution:

Taking the Laplace transform of both sides, assuming zero initial conditions, we have

sC(s) + 2C(s) = R(s) (2.22)



The transfer function, G(s), is

G(s) = C(s)/R(s) = 1/(s + 2)    (2.23)

In general, a physical system that can be represented by a linear, time-invariant
differential equation can be modeled as a transfer function (output/input).
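As a quick check on G(s) = 1/(s + 2), the time response can be recovered by integrating the original differential equation numerically. The sketch below (illustrative; simple Euler integration) applies a unit step r(t) = 1 and compares c(t) at t = 5 with the analytic response (1 − e^{−2t})/2, whose final value equals the DC gain G(0) = 1/2:

```python
import math

# Euler integration of dc/dt + 2c = r(t) with a unit step r(t) = 1, c(0) = 0.
dt, T = 1e-4, 5.0
c, t = 0.0, 0.0
while t < T:
    c += dt * (1.0 - 2.0 * c)       # dc/dt = r - 2c
    t += dt

# Analytic step response of G(s) = 1/(s + 2): c(t) = (1 - e^(-2t)) / 2.
analytic = 0.5 * (1.0 - math.exp(-2.0 * T))
print(c, analytic)                   # both close to the DC gain G(0) = 1/2
```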

2.3.1 Electrical Network Transfer Functions


We can now formally apply the transfer function to the mathematical modeling of elec-
tric circuits including passive networks and operational amplifier circuits, mechanical,
electromechanical and also mechatronics systems. Equivalent circuits for the electric
networks that we work with first consist of three passive linear components: resistors,
capacitors, and inductors. Table 2.3 summarizes the components and the relationships
between voltage and current and between voltage and charge under zero initial con-
ditions. We now combine electrical components into circuits, decide on the input and
output, and find the transfer function. Our guiding principles are Kirchhoff’s laws.
We sum voltages around loops or sum currents at nodes, depending on which tech-
nique involves the least effort in algebraic manipulation, and then equate the result to
zero. From these relationships we can write the differential equations for the circuit.
Then we can take the Laplace transforms of the differential equations and finally solve
for the transfer function.

Problem 2. Find the transfer function relating the capacitor voltage, VC (s), to the input
voltage, V (s) in Figure 2.3.

Solution:

In any problem, the designer must first decide what the input and output should be.
In this network, several variables could have been chosen to be the output—for exam-
ple, the inductor voltage, the capacitor voltage, the resistor voltage, or the current. The
problem statement, however, is clear in this case: We are to treat the capacitor voltage
as the output and the applied voltage as the input. Summing the voltages around the
loop, assuming zero initial conditions, yields the integro-differential equation for this
network as;

L di(t)/dt + R i(t) + (1/C) ∫₀ᵗ i(τ) dτ = v(t)    (2.24)

(simply using Table 2.3)


Change variables from current to charge to make it easier: i(t) = dq(t)/dt

L d²q(t)/dt² + R dq(t)/dt + (1/C) q(t) = v(t)    (2.25)

q(t) = Cvc (t) (2.26)

This is similar to the familiar relation Q = CV. Substituting to eliminate q(t):

LC d²vc(t)/dt² + RC dvc(t)/dt + vc(t) = v(t)    (2.27)

Taking the Laplace transform and simplifying:

(LCs2 + RCs + 1)VC (s) = V (s) (2.28)

Solving for the transfer function, VC(s)/V(s):

VC(s)/V(s) = 1/(LCs² + RCs + 1) = (1/LC)/(s² + (R/L)s + 1/LC)    (2.29)
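The transfer function above implies a DC gain of (1/LC)/(1/LC) = 1: for a step input, the capacitor voltage settles at the source voltage. The sketch below uses illustrative component values (L = 1 H, R = 3 Ω, C = 0.5 F, chosen here to give an overdamped response) and Euler integration of equation (2.27) to confirm this:

```python
# Euler integration of LC*vc'' + RC*vc' + vc = v for a 1 V step input.
# L, R, C below are illustrative values, not from the notes.
L, R, C = 1.0, 3.0, 0.5
v_in = 1.0

dt, T = 1e-4, 15.0
vc, dvc = 0.0, 0.0                 # zero initial conditions
t = 0.0
while t < T:
    d2vc = (v_in - R * C * dvc - vc) / (L * C)   # from LC vc'' + RC vc' + vc = v
    dvc += dt * d2vc
    vc += dt * dvc
    t += dt

print(vc)    # approaches 1.0, the DC gain of VC(s)/V(s)
```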

Problem 3. Find the transfer function, Vo (s)/Vi (s), for the circuit given just below:

Solution:

The transfer function of the operational amplifier circuit is derived as follows.
Generally, if two impedances are connected to the inverting operational amplifier
as shown above, we can derive an interesting result. If the input impedance of the
amplifier is high, then by Kirchhoff's current law, Ia(s) = 0 and I1(s) = −I2(s).
Since the gain is large, v1(t) ≈ 0. Thus, I1(s) = Vi(s)/Z1(s) and −I2(s) =
−Vo(s)/Z2(s). Equating the two currents, Vo(s)/Z2(s) = −Vi(s)/Z1(s), so that:

Vo(s)/Vi(s) = −Z2(s)/Z1(s)

Since the admittances of parallel components add, Z1(s) is the reciprocal of the sum
of the admittances:

Z1(s) = 1/(C1 s + 1/R1) = 1/(5.6×10⁻⁶ s + 1/(360×10³)) = (360×10³)/(2.016s + 1)    (2.30)

For Z2(s) the impedances add;

Z2(s) = R2 + 1/(C2 s) = 220×10³ + 10⁷/s    (2.31)

Substituting equations and simplifying;

Vo(s)/Vi(s) = −1.232 (s² + 45.95s + 22.55)/s    (2.32)

The resulting circuit is called a PID controller and can be used to improve the perfor-
mance of a control system.
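As a numerical sanity check (a sketch, not part of the original text): the component values follow the worked example (R1 = 360 kΩ, C1 = 5.6 µF, R2 = 220 kΩ), while C2 = 0.1 µF is inferred from the 10⁷/s term in Eq. (2.31). Evaluating −Z2(s)/Z1(s) at a test frequency should closely match the quoted result (2.32):

```python
# Numerical check of Eq. (2.32). Component values follow the worked
# example: R1 = 360 kOhm, C1 = 5.6 uF, R2 = 220 kOhm; C2 = 0.1 uF is
# inferred from the 10^7/s term in Eq. (2.31).
R1, C1 = 360e3, 5.6e-6
R2, C2 = 220e3, 0.1e-6

def gain(s):
    """Inverting-amplifier gain Vo(s)/Vi(s) = -Z2(s)/Z1(s)."""
    Z1 = 1.0 / (C1 * s + 1.0 / R1)   # R1 in parallel with C1 (admittances add)
    Z2 = R2 + 1.0 / (C2 * s)         # R2 in series with C2
    return -Z2 / Z1

def quoted(s):
    """Right-hand side of Eq. (2.32)."""
    return -1.232 * (s**2 + 45.95 * s + 22.55) / s

s = 2j * 3.141592653589793 * 10.0    # test point s = jw at f = 10 Hz
rel_err = abs(gain(s) - quoted(s)) / abs(quoted(s))
print(rel_err)   # small: (2.32) quotes rounded coefficients
```

The tiny residual error comes only from the rounding of the coefficients quoted in (2.32).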

2.3.2 Translational Mechanical System Transfer Functions


We have seen that electrical networks can be modeled by a transfer function, G(s), that
algebraically relates the Laplace transform of the output to the Laplace transform of
the input. We can do the same for mechanical systems. Notice that the end product,
will be mathematically indistinguishable from an electrical network. Hence, an elec-
trical network can be interfaced to a mechanical system by cascading their transfer
functions, provided that one system is not loaded by the other.

Mechanical systems parallel electrical networks to such an extent that there are analogies between electrical and mechanical components and variables. Mechanical systems, like electrical networks, have three passive, linear components. Two of
them, the spring and the mass, are energy-storage elements; one of them, the viscous
damper, dissipates energy. The two energy-storage elements are analogous to the two
electrical energy-storage elements, the inductor and capacitor. The energy dissipator
is analogous to electrical resistance.

Problem 4. Find the transfer function, X(s)/F (s), for the system shown just below:

Solution:

Begin the solution by drawing the free-body diagram:

Place on the mass all forces felt by the mass. We assume the mass is traveling
toward the right. Thus, only the applied force points to the right; all other forces
impede the motion and act to oppose it. Hence, the spring, viscous damper, and the
force due to acceleration point to the left. We now write the differential equation of
motion using Newton’s law to sum to zero all of the forces shown on the mass in
Figure just above.

M d²x(t)/dt² + fv dx(t)/dt + K x(t) = f(t)    (2.33)
Taking the Laplace transform, assuming zero initial conditions;

M s2 X(s) + fv sX(s) + KX(s) = F (s) (2.34)

(M s2 + fv s + K)X(s) = F (s) (2.35)

G(s) = X(s)/F(s) = 1/(M s² + fv s + K)    (2.36)
Chapter 3

Modeling in the Time Domain


(State-Space)

Two approaches are available for the analysis and design of feedback control systems.
The first, which we began to study, is known as the classical, or frequency-domain,
technique. This approach is based on converting a system’s differential equation
to a transfer function, thus generating a mathematical model of the system that al-
gebraically relates a representation of the output to a representation of the input.
Replacing a differential equation with an algebraic equation not only simplifies the
representation of individual subsystems but also simplifies modeling interconnected
subsystems. The primary disadvantage of the classical approach is its limited appli-
cability: It can be applied only to linear, time-invariant systems or systems that can
be approximated as such. A major advantage of frequency-domain techniques is that
they rapidly provide stability and transient response information. Thus, we can im-
mediately see the effects of varying system parameters until an acceptable design is
met.
With the arrival of space exploration, requirements for control systems increased
in scope. Modeling systems by using linear, time-invariant differential equations and
subsequent transfer functions became inadequate. The state-space approach (also re-
ferred to as the modern, or time-domain, approach) is a unified method for modeling,
analyzing, and designing a wide range of systems. For example, the state-space ap-
proach can be used to represent nonlinear systems that have backlash, saturation, and
dead zone. Also, it can handle, conveniently, systems with nonzero initial conditions.
Time-varying systems, (for example, missiles with varying fuel levels or lift in an air-
craft flying through a wide range of altitudes) can be represented in state space. Many
systems do not have just a single input and a single output (SISO). Multiple-input,
multiple-output systems (MIMO, such as a vehicle with input direction and input
velocity yielding an output direction and an output velocity) can be compactly rep-
resented in state space with a model similar in form and complexity to that used for
single-input, single-output systems.
The time-domain approach can be used to represent systems with a digital com-
puter in the loop or to model systems for digital simulation. With a simulated system,
system response can be obtained for changes in system parameters—an important
design tool. The state-space approach is also attractive because of the availability of
numerous state-space software packages for the personal computer.


The time-domain approach can also be used for the same class of systems modeled
by the classical approach. This alternate model gives the control systems designer
another perspective from which to create a design. While the state-space approach can
be applied to a wide range of systems, it is not as intuitive as the classical approach.
The designer has to engage in several calculations before the physical interpretation
of the model is apparent, whereas in classical control a few quick calculations or a
graphic presentation of data rapidly yields the physical interpretation.
Here, the coverage of state-space techniques is to be regarded as an introduc-
tion to the subject, a springboard to advanced studies, and an alternate approach to
frequency-domain techniques. Still, we will limit the state-space approach to linear,
time-invariant systems or systems that can be linearized.

As mentioned, the classical control system design techniques are generally only appli-
cable to:

(a) Single input, Single Output (SISO) systems

(b) Systems that are linear (or can be linearlized) and are time invariant (have pa-
rameters that do not vary with time)

The state-space approach is a generalized time-domain method for modelling,


analysing and designing a wide range of control systems and is particularly well
suited to digital computational techniques. The approach can deal with

(a) Multiple Input, Multiple Output (MIMO) systems, or multivariable systems

(b) Non-linear and time-variant systems

(c) Alternative controller design approaches

3.1 The state vector differential equation


The state of a system is described by a set of first-order differential equations in terms of the state variables (x1, x2, ..., xn) and input variables (u1, u2, ..., um), in general form

dx1/dt = a11 x1 + a12 x2 + · · · + a1n xn + b11 u1 + · · · + b1m um    (3.1)

dx2/dt = a21 x1 + a22 x2 + · · · + a2n xn + b21 u1 + · · · + b2m um    (3.2)

  ⋮

dxn/dt = an1 x1 + an2 x2 + · · · + ann xn + bn1 u1 + · · · + bnm um    (3.3)

This set of equations may be combined in matrix format. This results in the state vector differential equation

ẋ = Ax + Bu (3.4)

The equation just above is generally called the state space equation.

x is the n-dimensional state vector

x = [x1, x2, ..., xn]ᵀ    (3.5)

u is the m-dimensional input vector

u = [u1, u2, ..., um]ᵀ    (3.6)

A is the n × n system matrix

    [ a11  a12  ...  a1n ]
A = [ a21  a22  ...  a2n ]    (3.7)
    [  ⋮              ⋮  ]
    [ an1  an2  ...  ann ]

B is the n × m control matrix

    [ b11  ...  b1m ]
B = [ b21  ...  b2m ]    (3.8)
    [  ⋮         ⋮  ]
    [ bn1  ...  bnm ]

In general, the outputs (y1 , y2 , . . . , yn ) of a linear system can be related to the state
variables and the input variables;

y = Cx + Du (3.9)

Equation (3.9) is the output equation. The two state-space equations are very important:

ẋ = Ax + Bu (3.10)

y = Cx + Du (3.11)

x = state vector
ẋ = derivative of the state vector with respect to time
y = output vector
u = input or control vector
A = system matrix
B = input matrix
C = output matrix
D = feedforward matrix

3.1.1 Mechanical System Example

Problem 1.
Write down the state equation and output equation for the spring-mass-damper system shown
in the figure just below:

Solution:

Make the free-body diagram.


Get the state variables:

x1 = y (3.12)

x2 = dy/dt = ẋ1    (3.13)

Input variable

u = P (t) (3.14)

Now, applying Newton's second law (F = ma), sum the forces in the y-direction:

ΣFy = mÿ    (3.15)

Sum of forces

P (t) − Ky − C ẏ = mÿ (3.16)

Divide every term by m

d²y/dt² = −(K/m)y − (C/m)ẏ + (1/m)P(t)    (3.17)
From the equations above, the set of first-order differential equations are

ẋ1 = x2 (3.18)

ẋ2 = −(K/m)x1 − (C/m)x2 + (1/m)u    (3.19)
and the state equations become

      
[ẋ1]   [   0      1   ][x1]   [  0  ]
[ẋ2] = [ −K/m   −C/m ][x2] + [ 1/m ] u    (3.20)

The output equation is therefore as follows

 
y = [1  0] [x1, x2]ᵀ    (3.21)

The state variables are not unique and may be selected to suit the problem being stud-
ied.
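To see the state equations in action, here is a minimal Euler-integration sketch of Eqs. (3.20)-(3.21); the values of m, K, C and the step input P are illustrative, not from the text:

```python
# Euler-integration sketch of the state equations (3.20)-(3.21) for the
# spring-mass-damper system. m, K, C and the step input P are
# illustrative values, not taken from the text.
m, K, C = 1.0, 4.0, 1.0

def simulate(P=1.0, dt=1e-4, t_end=20.0):
    """Integrate x1' = x2, x2' = -(K/m)x1 - (C/m)x2 + (1/m)u for a step u = P."""
    x1, x2 = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        dx1 = x2
        dx2 = -(K / m) * x1 - (C / m) * x2 + (1.0 / m) * P
        x1 += dx1 * dt
        x2 += dx2 * dt
    return x1                         # output y = x1, Eq. (3.21)

print(simulate())   # settles near P/K = 0.25 (the spring balances the force)
```

The simulated output converges to the steady-state value P/K, as expected when the spring force balances the applied force.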

3.1.2 Electrical System Examples

Problem 2.
For the RLC network shown just below, write down the state equations when:

(a) the state variables are v2 (t) and v˙2

(b) the state variables are v2 (t) and i(t)

Solution:

Part (a):
State variables

x1 = v2 (t) (3.22)

x2 = v̇2 = ẋ1 (3.23)

LC d²v2/dt² + RC dv2/dt + v2 = v1(t)    (3.24)
NB:

i(t) = dq(t)/dt    (3.25)
and

q(t) = Cv(t) (3.26)

The set of first-order differential equations are

ẋ1 = x2 (3.27)

ẋ2 = −(1/LC)x1 − (R/L)x2 + (1/LC)u    (3.28)
Form the state equations

      
[ẋ1]   [    0       1  ][x1]   [   0  ]
[ẋ2] = [ −1/LC   −R/L ][x2] + [ 1/LC ] u    (3.29)

Part (b)
State variables

x1 = v2 (t) (3.30)

x2 = i(t) (3.31)

We know

L di/dt = −v2(t) − Ri(t) + v1(t)    (3.32)

C dv2/dt = i(t)    (3.33)
These two first-order differential equations can be written in the form

ẋ1 = (1/C)x2    (3.34)

ẋ2 = −(1/L)x1 − (R/L)x2 + (1/L)u    (3.35)
Giving the state equations

      
[ẋ1]   [   0     1/C ][x1]   [  0  ]
[ẋ2] = [ −1/L   −R/L ][x2] + [ 1/L ] u    (3.36)

Problem 3. Given the electrical network of the figure just below, find a state-space
representation if the output is the current through the resistor.

Solution:
The following steps will yield a viable state-space representation of the network.
Step 1
Label all of the branch currents in the network. These include iL , iR , and iC as shown
in the figure.
Step 2
Select the state variables by writing the derivative equation for all energy-storage ele-
ments, that is, the inductor and the capacitor. Thus,

C dvc/dt = iC    (3.37)

L diL/dt = vL    (3.38)
From the above equations, choose the state variables as the quantities that are differ-
entiated, namely vc and iL .
Since iC and vL are not state variables, our next step is to express iC and vL as linear
combinations of the state variables, vC and iL , and the input, v(t).
Step 3
Apply network theory, such as Kirchhoff’s voltage and current laws, to obtain iC and
vL in terms of the state variables, vC and iL . At Node 1,

iC = −iR + iL (3.39)

= −(1/R)vc + iL    (3.40)
which yields iC in terms of the state variables, vc and iL . Around the outer loop,

vL = −vc + v(t) (3.41)



which yields vL in terms of the state variables, vc , and the source, v(t).
Step 4
Substitute the results of the above equations to obtain the following state equations:

C dvc/dt = −(1/R)vc + iL    (3.42)

L diL/dt = −vc + v(t)    (3.43)

dvc/dt = −(1/RC)vc + (1/C)iL    (3.44)

diL/dt = −(1/L)vc + (1/L)v(t)    (3.45)
Step 5
Find the output equation. Since the output is iR(t),

iR = (1/R)vc    (3.46)
The final result for the state-space representation is found by representing the above
equations in vector-matrix form:

[v̇c]   [ −1/RC   1/C ][vc]   [  0  ]
[i̇L] = [ −1/L     0  ][iL] + [ 1/L ] v(t)    (3.47)

iR = [1/R  0] [vc, iL]ᵀ    (3.49)

3.2 Converting a Transfer Function to State Space


We have learned to apply state-space representation to mechanical and electrical systems. However, if you wanted to simulate a system represented by a transfer function, you would first have to convert that transfer function to state space.

Consider the general equation

d^n y/dt^n + a_{n−1} d^{n−1}y/dt^{n−1} + · · · + a1 dy/dt + a0 y = b_{n−1} d^{n−1}u/dt^{n−1} + · · · + b1 du/dt + b0 u    (3.50)
This equation can be represented by a transfer function;
Define a set of state variables such that:

ẋ1 = x2
ẋ2 = x3
 ⋮              (3.51)
ẋn = −a0 x1 − a1 x2 − · · · − an−1 xn + u

and an output equation

y = b0 x1 + b1 x2 + · · · + bn−1 xn (3.52)

Then the state equation

     

ẋ1 0 1 0 ... 0 x1 0

 ẋ2  0 0 1 ... 0   0
x2
..
 
..

..
  
  .. 
=  + . u (3.53)
   

 .  . 
 .
  
ẋn−1   0 0 0 ... 1  xn−1  0
ẋn −a0 −a1 −a2 . . . −an−1 xn 1

The state-space representation in the equation just above is called controllable canon-
ical form and the output equation is

 
y = [b0  b1  b2  ...  bn−1] [x1, x2, x3, ..., xn]ᵀ    (3.54)

Problem 1.
Find the state and output equation for:

Y/U(s) = 4/(s³ + 3s² + 6s + 2)    (3.55)
Solution:

The state equation

      
[ẋ1]   [ 0    1    0 ][x1]   [0]
[ẋ2] = [ 0    0    1 ][x2] + [0] u    (3.56)
[ẋ3]   [−2   −6   −3 ][x3]   [1]

Output equation

 
y = [4  0  0] [x1, x2, x3]ᵀ    (3.57)

Problem 2.
Find the state equation for:

Y/U(s) = (5s² + 7s + 4)/(s³ + 3s² + 6s + 2)    (3.58)
Solution:

The state equation is the same as in the question just above; only the output equation changes:

 
y = [4  7  5] [x1, x2, x3]ᵀ    (3.59)
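The recipe above (the denominator coefficients, negated, fill the bottom row of A; the numerator coefficients fill C) is mechanical enough to automate. A minimal sketch, assuming a strictly proper transfer function with a monic denominator and coefficients listed in ascending powers:

```python
# Build the controllable canonical form (Eqs. 3.53-3.54) from a strictly
# proper transfer function with a monic denominator. Coefficients are
# listed in ascending powers of s: num = [b0, b1, ...],
# den = [a0, a1, ..., a_{n-1}, 1].
def tf_to_ccf(num, den):
    n = len(den) - 1                        # system order
    A = [[0.0] * n for _ in range(n)]
    for i in range(n - 1):
        A[i][i + 1] = 1.0                   # superdiagonal of ones
    A[n - 1] = [-a for a in den[:n]]        # last row: -a0, -a1, ..., -a_{n-1}
    B = [0.0] * (n - 1) + [1.0]
    C = list(num) + [0.0] * (n - len(num))  # pad numerator with zeros
    return A, B, C

# Problem 2 above: Y/U = (5s^2 + 7s + 4)/(s^3 + 3s^2 + 6s + 2)
A, B, C = tf_to_ccf([4, 7, 5], [2, 6, 3, 1])
print(A[2], C)   # last row of A: [-2, -6, -3]; C: [4, 7, 5]
```

The result matches Eqs. (3.56) and (3.59): same A and B as Problem 1, with only the output row C changing.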

3.3 Converting from State Space to a Transfer Function
Given the state and output equations

ẋ = Ax + Bu (3.60)

y = Cx + Du (3.61)

Take the Laplace transform of both equations, assuming zero initial conditions.

sX(s) = AX(s) + BU (s) (3.62)

Y (s) = CX(s) + DU (s) (3.63)

Solving for X(s)

(sI − A)X(s) = BU (s) (3.64)

X(s) = (sI − A)−1 BU (s) (3.65)

where I is the identity matrix.


Substituting

Y (s) = C(sI − A)−1 BU (s) + DU (s) = [C(sI − A)−1 B + D]U (s) (3.66)

Matrix [C(sI − A)−1 B + D] is called the transfer function matrix, since it relates the
output vector, Y (s), to the input vector, U (s). We can find the transfer function;

T(s) = Y(s)/U(s) = C(sI − A)⁻¹B + D    (3.67)

Problem 1.
Given the system defined by the following, find the transfer function, T(s) = Y(s)/U(s), where U(s) is the input and Y(s) is the output.

   
     [ 0    1    0 ]     [10]
ẋ =  [ 0    0    1 ] x + [ 0] u    (3.68)
     [−1   −2   −3 ]     [ 0]
 
y = [1  0  0] x    (3.69)

Solution:

The solution revolves around finding the term (sI − A)−1 as illustrated in the above
equations. All other terms are already defined. Hence, first find (sI − A):

           [ s   0   0 ]   [  0    1    0 ]   [ s   −1    0  ]
(sI − A) = [ 0   s   0 ] − [  0    0    1 ] = [ 0    s   −1  ]    (3.70)
           [ 0   0   s ]   [ −1   −2   −3 ]   [ 1    2   s+3 ]

Now form (sI − A)⁻¹:

(sI − A)⁻¹ = adj(sI − A)/det(sI − A)

             [ s² + 3s + 2      s + 3       1  ]
             [     −1         s(s + 3)      s  ]
             [     −s        −(2s + 1)     s²  ]
           = ───────────────────────────────────    (3.71)
                     s³ + 3s² + 2s + 1

Substituting (sI − A)−1 , B, C and D into the equation

 
B = [10, 0, 0]ᵀ    (3.72)

C = [1  0  0]    (3.73)

D = 0    (3.74)

Obtain the final result for the transfer function:

T(s) = 10(s² + 3s + 2)/(s³ + 3s² + 2s + 1)    (3.75)
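A quick numerical cross-check (a sketch using NumPy, not part of the original text): evaluating C(sI − A)⁻¹B at an arbitrary test point should match Eq. (3.75):

```python
import numpy as np

# Numerical cross-check of Eq. (3.75): C (sI - A)^-1 B, evaluated at an
# arbitrary test point, should equal 10(s^2 + 3s + 2)/(s^3 + 3s^2 + 2s + 1).
A = np.array([[0, 1, 0], [0, 0, 1], [-1, -2, -3]], dtype=float)
B = np.array([[10.0], [0.0], [0.0]])
C = np.array([[1.0, 0.0, 0.0]])

def T_state_space(s):
    """Evaluate C (sI - A)^-1 B by solving the linear system (sI - A)x = B."""
    return (C @ np.linalg.solve(s * np.eye(3) - A, B)).item()

def T_formula(s):
    return 10 * (s**2 + 3 * s + 2) / (s**3 + 3 * s**2 + 2 * s + 1)

s = 1.0 + 2.0j
print(abs(T_state_space(s) - T_formula(s)))   # ~0
```

Solving the linear system avoids forming the symbolic adjugate; the two evaluations agree to machine precision.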
Chapter 4

System Response

The manner in which a dynamic system responds to an input, expressed as a function


of time, is called the time response. The theoretical evaluation of this response is said
to be undertaken in the time domain and is referred to as time domain analysis. It is
possible to compute the response of a system if the following is known:
- the nature of the input(s), expressed as a function of time

- the mathematical model of the system


The time response of any system has two components:
(a) Transient response
Transient response occurs mainly, either after switching on the system, which
is at the time of application of an input signal to the system, or just after any
abnormal condition that happens to the system.
Transient response will, for a stable system, decay exponentially, to zero as time
increases. It is a function only of the system dynamics, and is independent of the
input quantity.

(b) Steady-state response


This occurs when the system becomes settled and starts working normally.
Steady-state response of the system after the transient component has decayed
and is a function of both the system dynamics and the input quantity.
The figure just below shows the transient and steady-state periods of time response.


The total response of a system is always the sum of the transient and the steady-state
components. In this case, the system is subjected to a ramp function. The difference
between the input function xi (t) and the system response xo (t) are called transient
errors during the transient period, and the steady-state errors during the steady-state
period. For a control system, all errors should be minimized.

4.1 Common time domain input functions


4.1.1 The impulse function
An impulse is a pulse with a width ∆t → 0. The strength of an impulse is its area A,
where
A = height h × ∆t

The Laplace transform of an impulse function is equal to the area of the function. The
impulse function whose area is unity is called a unit impulse δ(t).

4.1.2 The step function


A step function is described as xi (t) = B; Xi (s) = B/s for t > 0. For a unit step
function xi (t) = 1; Xi (s) = 1/s. This can be referred to as a Constant position input.

4.1.3 The ramp function


A ramp function is described as xi(t) = Qt; Xi(s) = Q/s² for t > 0. For a unit ramp function xi(t) = t; Xi(s) = 1/s². This may be referred to as a Constant velocity input.

4.1.4 The parabolic function


A parabolic function is described as xi (t) = Kt2 ; Xi (s) = 2K/s3 for t > 0. For a
unit parabolic function xi (t) = t2 ; Xi (s) = 2/s3 . This may be referred to as Constant
acceleration input.

4.2 Response of second-order systems


4.2.1 Standard form
Consider a second-order differential equation

a d²x0/dt² + b dx0/dt + c x0 = e xi(t)    (4.1)
Take Laplace transforms, zero initial conditions

as2 X0 (s) + bsX0 (s) + cX0 (s) = eXi (s) (4.2)

(as2 + bs + c)X0 (s) = eXi (s) (4.3)

The transfer function is


G(s) = X0(s)/Xi(s) = e/(as² + bs + c)    (4.4)

To obtain the standard form, divide by c


G(s) = (e/c) / ((a/c)s² + (b/c)s + 1)    (4.5)

which is written as

G(s) = K / ((1/ωn²)s² + (2ζ/ωn)s + 1)    (4.6)

This can also be normalized to make the s2 coefficient unity, i.e.

G(s) = Kωn² / (s² + 2ζωn s + ωn²)    (4.7)

This is the standard form of transfer functions for a second-order system, where

K = steady-state gain constant,


ωn = undamped natural frequency (rad/s)
ζ = damping ratio
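Comparing (4.4) with (4.6) gives K = e/c, ωn = √(c/a) and ζ = b/(2√(ac)). A small sketch (the values of a, b, c, e are illustrative):

```python
import math

# Standard-form parameters of G(s) = e/(a s^2 + b s + c), Eqs. (4.4)-(4.7).
# Dividing through by c gives K = e/c, 1/wn^2 = a/c and 2*zeta/wn = b/c,
# so wn = sqrt(c/a) and zeta = b/(2*sqrt(a*c)).
def standard_form(a, b, c, e):
    K = e / c                              # steady-state gain constant
    wn = math.sqrt(c / a)                  # undamped natural frequency (rad/s)
    zeta = b / (2.0 * math.sqrt(a * c))    # damping ratio
    return K, wn, zeta

K, wn, zeta = standard_form(a=1.0, b=2.0, c=4.0, e=4.0)  # illustrative values
print(K, wn, zeta)   # 1.0 2.0 0.5
```

For these values the system is underdamped (ζ = 0.5 < 1), as the next subsection explains.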

4.2.2 Roots of the characteristic equation and their relationship


to damping in second-order systems
The transient response of a system is independent of the input. Thus for transient
response analysis, the system input can be considered to be zero. Thus;

(as2 + bs + c)X0 (s) = 0 (4.8)

If X0 (s) 6= 0 then

as2 + bs + c = 0 (4.9)

The table below shows the transient behavior of a second-order system.

Discriminant    Roots                                         Transient response type
b² > 4ac        s1 and s2 real and unequal (−ve)              Overdamped
b² = 4ac        s1 and s2 real and equal (−ve)                Critically damped
b² < 4ac        s1, s2 complex conjugates, s1, s2 = −σ ± jω   Underdamped

This polynomial in s is called the Characteristic Equation and its roots will determine
the system transient response. Their values are

s1, s2 = (−b ± √(b² − 4ac)) / 2a    (4.10)

The term (b2 − 4ac), called the discriminant, may be positive, zero or negative which
will make the roots real and unequal, real and equal or complex. This gives rise to
the three different types of transient response described in the table just above. The
transient response of a second-order system is given by the general solution

x0 (t) = Aes1 t + Bes2 t (4.11)

This gives a step response of the form shown in the figure just below, which shows the effect that the roots of the characteristic equation have on the damping of a second-order system.

4.2.3 Critical damping and damping ratio


Critical damping Cc

When the damping coefficient C of a second-order system has its critical value Cc ,
the system, when disturbed, will reach its steady-state value in the minimum time
without overshoot. This is when the roots of the Characteristic Equation have equal
negative real roots.

Damping ratio ζ

The ratio of the damping coefficient C in a second-order system compared with the
value of the damping coefficient Cc required for critical damping is called the Damp-
ing Ratio ζ (Zeta). Hence

ζ = C/Cc    (4.12)

Thus

ζ = 0 No damping
ζ < 1 Underdamping
ζ = 1 Critical damping
ζ > 1 Overdamping
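The discriminant test in the earlier table translates directly into code (a sketch; a, b, c are the coefficients of the characteristic equation as² + bs + c = 0):

```python
# Classify the transient response of a s^2 + b s + c = 0 from the sign
# of its discriminant, following the table of transient response types.
def classify(a, b, c):
    disc = b * b - 4 * a * c
    if disc > 0:
        return "overdamped"
    if disc == 0:
        return "critically damped"
    return "underdamped"

print(classify(1, 5, 4))   # overdamped: roots s = -1, -4
print(classify(1, 4, 4))   # critically damped: s1 = s2 = -2
print(classify(1, 2, 5))   # underdamped: s = -1 +/- 2j
```
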

Problem 1. Find the value of the critical damping coefficient Cc in terms of K and m
for the spring-mass-damper system shown just below

Solution:
From Newton’s second law
ΣFx = mẍ0    (4.13)

From the free-body diagram

F (t) − Kx0 (t) − C ẋ0 (t) = mẍ0 (t) (4.14)

Taking Laplace transforms, zero initial conditions

F (s) − KX0 (s) − CsX0 (s) = ms2 X0 (s) (4.15)

or

(ms² + Cs + K)X0(s) = F(s)    (4.16)

The Characteristic Equation is

ms2 + Cs + K = 0 (4.17)

That is
s² + (C/m)s + K/m = 0    (4.18)

and the roots are

s1, s2 = ½ { −C/m ± √((C/m)² − 4K/m) }    (4.19)

For critical damping, the discriminant is zero, hence the roots become

s1 = s2 = −Cc/2m    (4.20)
Also, for critical damping

Cc²/m² = 4K/m    (4.21)

Cc² = 4Km²/m = 4Km    (4.22)
giving

Cc = 2√(Km)    (4.23)
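As a check of this result (with illustrative K and m): substituting C = Cc into the discriminant of Eq. (4.19) should give exactly zero, i.e. equal roots:

```python
import math

# Check Eq. (4.23): with C = Cc = 2*sqrt(K*m) the discriminant
# (C/m)^2 - 4K/m in Eq. (4.19) vanishes, so the roots are equal.
# K and m are illustrative values, not from the text.
K, m = 9.0, 4.0
Cc = 2.0 * math.sqrt(K * m)          # critical damping coefficient
disc = (Cc / m) ** 2 - 4.0 * K / m   # discriminant at C = Cc
print(Cc, disc)                      # 12.0 0.0
```
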

4.2.4 Step response performance specification


The three parameters shown in the figure below are used to specify performance in
the time domain.

(a) Rise time tr : The shortest time to achieve the final or steady-state value, for the
first time. This can be 100% rise time as shown, or the time taken for example
from 10% to 90% of the final value, thus allowing for non-overshoot response.

(b) Overshoot: For a control system an overshoot of between 0 and 10% (0.6 < ζ < 1) is generally acceptable.

(c) Settling time ts : This is the time for the system output to settle down to within a
tolerance band of the final value, normally between ±2 or 5%.
Chapter 5

Stability of Dynamic Systems

As mentioned earlier, the response of a linear system to a stimulus (excitation signal)


has two components:

(a) steady-state terms which are directly related to the input

(b) transient terms which are either exponential, or oscillatory with an envelope of
exponential form.

If the exponential terms decay as time increases, then the system is said to be stable. If the exponential terms increase with increasing time, the system is considered unstable.

5.1 Stability and roots of the characteristic equation


A general characteristic equation of a second-order system can be written as:

as2 + bs + c = 0 (5.1)

The roots of the characteristic equation given in the equation just above can be found
using the following formula:


s1, s2 = (−b ± √(b² − 4ac)) / 2a    (5.2)
The roots determine the transient response of the system and for a second-order sys-
tem can be written as

(a) Overdamping

s1 = −σ1 (5.3)

s2 = −σ2 (5.4)


(b) Critical damping

s1 = s2 = −σ (5.5)

(c) Underdamping

s1 , s2 = −σ ± jω (5.6)

If the coefficient b in the characteristic equation were to be negative, then the roots
would be:

s1 , s2 = +σ ± jω (5.7)

5.2 Poles, Zeros, and System Response


We have seen that the output response of a system is the sum of two responses: the forced response (also called the steady-state response, or the particular solution) and the natural response. Although techniques such as solving a differential equation or taking the inverse Laplace transform enable us to evaluate this output response, they are time-consuming. The use of poles and zeros can achieve similar results through inspection alone.

Poles are simply the roots of the denominator of a transfer function.

H(s) = N(s)/D(s)    (5.8)

where N(s) and D(s) are simple polynomials.

Here, the poles are the roots of D(s), found by setting D(s) = 0 and solving for s. The number of poles is always greater than or equal to the number of zeros.

Zeros are the roots of the numerator of a transfer function.


Using the same general transfer function as above, zeros can be determined by setting N(s) = 0 and solving for s. The number of zeros is always less than or equal to the number of poles.

As we shall see in detail, the poles and zeros of a system affect its response and stability.

5.2.1 Poles and Zeros of a First-Order System


EXAMPLE:
Given the transfer function G(s) in the figure just below, a pole exists at s = −5, and a
zero exists at −2. These values are plotted on the complex s-plane, using an x for the

pole and a O for the zero. To show the properties of the poles and zeros, let us find the
unit step response of the system.

C(s) = (s + 2)/(s(s + 5)) = A/s + B/(s + 5) = (2/5)/s + (3/5)/(s + 5)    (5.9)

where

A = [(s + 2)/(s + 5)]_{s→0} = 2/5    (5.10)

B = [(s + 2)/s]_{s→−5} = 3/5    (5.11)

Thus,
c(t) = 2/5 + (3/5)e^(−5t)    (5.12)

(a) shows input and output, (b) shows the pole-zero plot of the system, (c) following
the arrows, shows the evolution of the system response.

From (c), we draw the conclusion that:

1. A pole of the input function generates the form of the forced response. That is, the pole at the origin generated a step function at the output.

2. A pole of the transfer function generates the form of the natural response. That is,
the pole at −5 generated e−5t .

3. A pole on the real axis generates an exponential response of the form e−αt , where
−α is the pole location on the real axis. Thus, the farther to the left a pole is on
the negative real axis, the faster the exponential transient response will decay to
zero.

4. The zero and poles generate the amplitudes for both the forced and natural re-
sponses.
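The cover-up evaluations in Eqs. (5.10)-(5.11) are easy to reproduce numerically (a sketch, not from the text):

```python
# Residues of C(s) = (s + 2)/(s(s + 5)) at its two poles, computed by
# the cover-up evaluations of Eqs. (5.10)-(5.11).
def A_residue():
    s = 0.0
    return (s + 2) / (s + 5)   # (s + 2)/(s + 5) evaluated as s -> 0

def B_residue():
    s = -5.0
    return (s + 2) / s         # (s + 2)/s evaluated as s -> -5

print(A_residue(), B_residue())   # 0.4 0.6, i.e. A = 2/5 and B = 3/5
# hence the step response c(t) = 2/5 + (3/5) e^{-5t} of Eq. (5.12)
```
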

5.2.2 Poles and Zeros of a Second-Order System


Compared to the simplicity of a first-order system, a second-order system exhibits a
wide range of responses that must be analyzed and described. Whereas varying a
first-order system’s parameter simply changes the speed of the response, changes in
the parameters of a second-order system can change the form of the response.

EXAMPLE:
Let us take a general case which has two finite poles and no zeros. The term in the numer-
ator is simply a scale or input multiplying factor that can take on any value without
affecting the form of the derived results.
By assigning appropriate values to parameters a and b, we can show all possible
second-order transient responses.
The unit step response then can be found using C(s) = R(s)G(s), where R(s) = 1/s,
followed by a partial-fraction expansion and the inverse Laplace transform.

Generally, additional poles in a first-order system delay the response of the system. Left-half-plane zeros of a first-order system speed up the response, while right-half-plane zeros cause the response initially to move in the opposite direction. Additional poles in a second-order system decrease the number of oscillations; additional zeros in a second-order system increase the number of oscillations.

Problem 1.
Find the range of gain, K, for the system of the figure just below, that will cause the system
to be stable, unstable, and marginally stable. Assume K > 0.

Solution:

Find the closed-loop transfer function as

T(s) = K/(s³ + 18s² + 77s + K)    (5.13)
Form the Routh table:

Since K is assumed positive, we see that all elements in the first column are always
positive except the s1 row.
This entry can be positive, zero, or negative, depending upon the value of K.
If K < 1386, all terms in the first column will be positive, and since there are no sign
changes, the system will have three poles in the left half-plane and be stable.
If K > 1386, the s1 term in the first column is negative. There are two sign changes,
indicating that the system has two right-half-plane poles and one left half-plane pole,
which makes the system unstable.
If K = 1386, we have an entire row of zeros, which could signify jω poles.
Returning to the s2 row and replacing K with 1386, we form the even polynomial

P(s) = 18s² + 1386    (5.14)

Differentiating with respect to s, we have

dP(s)/ds = 36s + 0    (5.15)
Replacing the row of zeros with these coefficients, we obtain the Routh-Hurwitz table as shown just below for the case of K = 1386.

Since there are no sign changes from the even polynomial (s2 row) down to the bottom
of the table, the even polynomial has its two roots on the jω-axis of unit multiplicity.
Since there are no sign changes above the even polynomial, the remaining root is in
the left half-plane. Therefore, the system is marginally stable.
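This boundary can be checked directly (a sketch): at K = 1386 the denominator of Eq. (5.13) factors as (s² + 77)(s + 18), so the closed-loop poles are s = ±j√77 and s = −18:

```python
import math

# At K = 1386, the marginal gain found above, the closed-loop denominator
# of Eq. (5.13) factors as (s^2 + 77)(s + 18), giving jw-axis poles at
# s = +/- j*sqrt(77) and a real pole at s = -18.
def p(s, K=1386.0):
    return s**3 + 18 * s**2 + 77 * s + K

s_marginal = 1j * math.sqrt(77.0)
print(abs(p(s_marginal)))   # ~0: a pole on the jw-axis
print(abs(p(-18.0)))        # 0: the remaining left-half-plane pole
```
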

Problem 2.
Determine the stability of the closed-loop transfer function

T(s) = 10/(s⁵ + 2s⁴ + 3s³ + 6s² + 5s + 3)    (5.16)
Solution:

First write a polynomial that has the reciprocal roots of the denominator of Eq. (5.16). From our discussion, this polynomial is formed by writing that denominator in reverse order. Hence,

D(s) = 3s5 + 5s4 + 6s3 + 3s2 + 2s + 1 (5.17)

We form the Routh table as shown in table just below. Since there are two sign changes,
the system is unstable and has two right-half-plane poles. Notice that the table does
not have a zero in the first column.

5.3 The Routh-Hurwitz stability criterion


Routh (1905) and Hurwitz (1875) give a method of indicating the presence and number of unstable roots, but not their values. Consider a characteristic equation:

an sn + an−1 sn−1 + · · · + a1 s + a0 = 0 (5.18)

The Routh-Hurwitz stability criterion states:

(a) For there to be no roots with positive real parts then there is a necessary, but not
sufficient, condition that all coefficients in the characteristic equation have the
same sign and that none are zero.

If (a) above is satisfied, then the necessary and sufficient condition for stability is
either

(b) All the Hurwitz determinants of the polynomial are positive, or alternatively

(c) All coefficients of the first column of Routh’s array have the same sign. The
number of sign changes indicate the number of unstable roots.

The Hurwitz determinants are

D1 = |a1|        D2 = | a1  a3 |
                      | a0  a2 |    (5.19)

     | a1  a3  a5 |        | a1  a3  a5  a7 |
D3 = | a0  a2  a4 |   D4 = | a0  a2  a4  a6 |   ...    (5.20)
     | 0   a1  a3 |        | 0   a1  a3  a5 |
                           | 0   a0  a2  a4 |

Consider a closed-loop transfer function

H(s) = (b0 sᵐ + b1 sᵐ⁻¹ + · · · + bm−1 s + bm) / (a0 sⁿ + a1 sⁿ⁻¹ + · · · + an−1 s + an) = B(s)/A(s)    (5.21)

where the ai's and bi's are real constants and m ≤ n. As an alternative to factorizing the denominator polynomial, Routh's stability criterion determines the number of closed-loop poles in the right-half s-plane.

5.3.1 Algorithm for applying Routh’s stability criterion


The algorithm described below, like the stability criterion, requires the order of A(s) to
be finite.

1. Factor out any roots at the origin to obtain the polynomial, and multiply by −1
if necessary, to obtain

a0 sn + a1 sn−1 + · · · + an−1 s + an = 0 (5.22)

where a0 6= 0 and an > 0.

2. If the order of the resulting polynomial is at least two and any coefficient ai is zero or negative, the polynomial has at least one root with non-negative real part. To obtain the precise number of roots with non-negative real part, proceed as follows. Arrange the coefficients of the polynomial, and the values subsequently calculated from them, as shown below:

s^n     | a0  a2  a4  a6  ...
s^(n−1) | a1  a3  a5  a7  ...
s^(n−2) | b1  b2  b3  b4  ...
s^(n−3) | c1  c2  c3  c4  ...
s^(n−4) | d1  d2  d3  d4  ...   (5.23)
  ...
s^2     | e1  e2
s^1     | f1
s^0     | g0

where the coefficients bi are


a1 a2 − a0 a3
b1 = (5.24)
a1

a1 a4 − a0 a5
b2 = (5.25)
a1

a1 a6 − a0 a7
b3 = (5.26)
a1

..
. (5.27)

generated until all subsequent coefficients are zero. Similarly, cross-multiply the coefficients of the two previous rows to obtain the ci, di, etc.:

c1 = (b1 a3 − a1 b2) / b1   (5.28)

c2 = (b1 a5 − a1 b3) / b1   (5.29)

c3 = (b1 a7 − a1 b4) / b1   (5.30)

...   (5.31)

d1 = (c1 b2 − b1 c2) / c1   (5.32)

d2 = (c1 b3 − b1 c3) / c1   (5.33)

...   (5.34)

until the nth row of the array has been completed. Missing coefficients are re-
placed by zeros. The resulting array is called the Routh array. The powers of s
are not considered to be part of the array. We can think of them as labels. The
column beginning with a0 is considered to be the first column of the array. The
Routh array is seen to be triangular. It can be shown that multiplying a row by
a positive number to simplify the calculation of the next row does not affect the
outcome of the application of the Routh criterion.

3. Count the number of sign changes in the first column of the array. It can be shown that a necessary and sufficient condition for all roots of Eq. (5.22) to be located in the left-half plane is that all the ai are positive and all of the coefficients in the first column are positive.
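The tabulation in steps 2 and 3 is easy to automate. A minimal sketch in plain Python, assuming none of the special cases arise (no zero ever appears in the first column, no all-zero row):

```python
def routh_array(coeffs):
    """Rows of the Routh array for a0*s^n + a1*s^(n-1) + ... + an.
    Assumes no special cases (no zero ever appears in the first column)."""
    a = [float(c) for c in coeffs]
    n = len(a) - 1
    rows = [a[0::2], a[1::2]]
    for _ in range(n - 1):
        up, lo = rows[-2], rows[-1]

        def get(r, j):                 # missing entries count as zero
            return r[j] if j < len(r) else 0.0

        row = [(lo[0] * get(up, j + 1) - up[0] * get(lo, j + 1)) / lo[0]
               for j in range(len(up))]
        while len(row) > 1 and row[-1] == 0.0:
            row.pop()                  # rows shorten, as in the hand-built array
        rows.append(row)
    return rows


def unstable_roots(coeffs):
    """Sign changes in the first column = number of right-half-plane roots."""
    col = [r[0] for r in routh_array(coeffs)]
    return sum(1 for x, y in zip(col, col[1:]) if x * y < 0)


print(routh_array([1, 2, 3, 4, 5]))     # first column: 1, 2, 1, -6, 5
print(unstable_roots([1, 2, 3, 4, 5]))  # 2
```

For s^4 + 2s^3 + 3s^2 + 4s + 5 the first column comes out as 1, 2, 1, −6, 5, i.e. two sign changes and hence two unstable roots.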

Example: Generic Quadratic Polynomial.


Consider the quadratic polynomial:

a0 s^2 + a1 s + a2 = 0   (5.35)

where all the ai are positive. The array of coefficients becomes

s^2 | a0  a2
s^1 | a1  0    (5.36)
s^0 | a2

where the entry a2 in the s^0 row is the result of multiplying a1 by a2, subtracting a0 × 0, then dividing the result by a1. In the case of a second-order polynomial, we see that Routh's stability criterion reduces to the condition that all ai be positive.

Example: Generic Cubic Polynomial.


Consider the generic cubic polynomial:

a0 s^3 + a1 s^2 + a2 s + a3 = 0   (5.37)

where all the ai are positive. The Routh array is

s^3 | a0  a2
s^2 | a1  a3
s^1 | (a1 a2 − a0 a3)/a1   (5.38)
s^0 | a3

so the condition that all roots have negative real parts is

a1 a2 > a0 a3 (5.39)

Example: A Quartic Polynomial.

Next we consider the fourth-order polynomial:

s^4 + 2s^3 + 3s^2 + 4s + 5 = 0   (5.40)

Here we illustrate the fact that multiplying a row by a positive constant does not
change the result. One possible Routh array is given at left, and an alternative is given
at right,

s^4 | 1   3   5        s^4 | 1   3   5
s^3 | 2   4   0        s^3 | 1   2   0
s^2 | 1   5            s^2 | 1   5        (5.41)
s^1 | −6               s^1 | −3
s^0 | 5                s^0 | 5

In the alternative Routh array, the s^3 row has been divided by two to give 1, 2, 0.


In this example the first column changes sign twice (from + to − and from − back to +), so the polynomial equation A(s) = 0 has two roots with positive real parts.
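The two sign changes can be cross-checked by solving for the roots directly (numpy assumed available):

```python
import numpy as np

# roots of s^4 + 2s^3 + 3s^2 + 4s + 5 = 0
roots = np.roots([1, 2, 3, 4, 5])
n_rhp = sum(1 for r in roots if r.real > 0)
print(n_rhp)   # 2, agreeing with the two sign changes in the first column
```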
Chapter 6

Root-Locus Method

This is a control system design technique developed by W.R. Evans (1948) that deter-
mines the roots of the characteristic equation (closed-loop poles) when the open-loop
gain-constant K is increased from zero to infinity.

The stability of the given closed-loop system depends upon the location of the roots of the characteristic equation, that is, the closed-loop poles. If some parameter of the system is changed, the closed-loop poles move in the s-plane. The study of this locus of moving poles as a system parameter varies is very important when designing any closed-loop system.

This movement of poles in the s-plane is called the root locus.

The locus of the roots, or closed-loop poles, is plotted in the s-plane. This is a complex plane, since s = σ ± jω. It is important to remember that the real part σ is the index in the exponential term of the time response, and if positive will make the system unstable. Hence, any locus in the right-hand side of the plane represents an unstable system. The imaginary part ω is the frequency of transient oscillation.

When a locus crosses the imaginary axis, σ = 0. This is the condition of marginal stability, i.e. the control system is on the verge of instability, where transient oscillations neither increase nor decay, but remain at a constant value.

The design method requires the closed-loop poles to be plotted in the s-plane as K is
varied from zero to infinity, and then a value of K selected to provide the necessary
transient response as required by the performance specification. The loci always com-
mence at open-loop poles (denoted by x) and terminate at open-loop zeros (denoted
by o) when they exist.

6.1 Root locus construction rules


1. Starting points (K = 0): The root loci start at the open-loop poles.

2. Termination points (K = ∞): The root loci terminate at the open-loop zeros when
they exist, otherwise at infinity.


3. Number of distinct root loci: This is equal to the order of the characteristic equation.

4. Symmetry of root loci: The root loci are symmetrical about the real axis.

5. Root locus asymptotes: For large values of K the root loci are asymptotic to straight lines, with angles given by

θ = (1 + 2k)π / (n − m)   (6.1)

where

k = 0, 1, . . . (n − m − 1)
n = no. of finite open-loop poles
m = no. of finite open-loop zeros

6. Asymptote intersection: The asymptotes intersect the real axis at a point given by
σa = (Σ open-loop poles − Σ open-loop zeros) / (n − m)   (6.2)

7. Root locus locations on real axis: A point on the real axis is part of the loci if the sum
of the number of open-loop poles and zeros to the right of the point concerned
is odd.

8. Breakaway points: The points at which a locus breaks away from the real axis can be calculated using one of two methods:

(a) Find the roots of the equation

(dK/ds)|_(s=σb) = 0   (6.3)

where K has been made the subject of the characteristic equation i.e. K =
...
(b) Solving the relationship
Σ_(i=1..n) 1/(σb + |Pi|) = Σ_(i=1..m) 1/(σb + |Zi|)   (6.4)

where |Pi| and |Zi| are the absolute values of the open-loop poles and zeros and σb is the breakaway point.

9. Imaginary axis crossover: The location on the imaginary axis of the loci (marginal
stability) can be calculated using either:

(a) The Routh-Hurwitz stability criterion


(b) Replacing s by jω in the characteristic equation (since σ = 0 on the imagi-
nary axis).

10. Angles of departure and arrival: Computed using the angle criterion, by positioning
a trial point at a complex open-loop pole (departure) or zero (arrival).

11. Determination of points on the root loci: Exact points on root loci are found using
the angle criterion.

12. Determination of K on root loci: The value of K on root loci is found using the
magnitude criterion.

6.2 Root locus construction

Problem 1.
For a unity feedback system,

G(s) = K / (s(s + 4)(s + 2))   (6.5)

Sketch the nature of root locus showing all details on it. Comment on the stability of the
system.

Solution:

The given system is a unity feedback system, therefore H(s) = 1, and

G(s)H(s) = K / (s(s + 4)(s + 2))

Step 1:
Poles: 0, −2, −4, so P = 3.
Zeros: none, so Z = 0.
All P − Z = 3 branches terminate at infinity.

Step 2: Pole-zero plot and sections of the real axis.


The pole-zero plot of the system is shown in the figure below.

Here RL denotes the root-locus existence region and NRL the non-existence region. These sections of the real axis form part of the root locus because the sum of poles and zeros to their right is odd.

Step 3: Angle of asymptotes

'A line to which the root locus touches at infinity is called an asymptote.'
Number of asymptotes = P − Z = 3, with angles θ = (1 + 2k)·180°/(P − Z), i.e. 60°, 180° and 300° for k = 0, 1, 2.
Therefore 3 asymptotes approach infinity.

Step 4: Centroid or centre of asymptotes

The asymptotes touch the real axis at a point called the centroid:

σa = (Σ poles − Σ zeros)/(P − Z) = ((0 − 2 − 4) − 0)/3 = −2

The branches approach infinity along these asymptotes.

Step 5: To find the breakaway points, note that the characteristic equation 1 + G(s)H(s) = 0 gives s^3 + 6s^2 + 8s + K = 0, so that K = −(s^3 + 6s^2 + 8s). Setting dK/ds = 0 gives 3s^2 + 12s + 8 = 0, i.e. s = −0.845 or s = −3.15.

As there is no root locus between −2 and −4, s = −3.15 cannot be a breakaway point. This is confirmed by calculating K for s = −3.15: substituting in the expression for K gives K = −3.079, which is negative.

Since there has to be a breakaway point between 0 and −2, s = −0.845 is checked: for s = −0.845, K = +3.079. As K is positive, s = −0.845 is a valid breakaway point.
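Method (a) of the breakaway rule can be confirmed numerically: with K = −(s^3 + 6s^2 + 8s), dK/ds = 0 reduces to 3s^2 + 12s + 8 = 0 (numpy assumed available):

```python
import numpy as np

# K = -(s^3 + 6s^2 + 8s), so dK/ds = 0 reduces to 3s^2 + 12s + 8 = 0
candidates = sorted(np.roots([3, 12, 8]).real)   # both roots are real
for s in candidates:
    K = -(s**3 + 6 * s**2 + 8 * s)
    print(round(s, 3), round(K, 3))
# s = -3.155 gives K = -3.079 (invalid); s = -0.845 gives K = +3.079 (valid)
```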

Step 6: Intersection point with the imaginary axis. The characteristic equation is

s^3 + 6s^2 + 8s + K = 0   (6.6)

Routh's array:

s^3 | 1          8
s^2 | 6          K
s^1 | (48 − K)/6
s^0 | K

The s^1 row becomes zero for K = 48; the auxiliary equation from the s^2 row, 6s^2 + 48 = 0, then gives s = ±j2.828.

Intersection of the root locus with the imaginary axis is therefore at ±j2.828, and the corresponding value of K(marginal) = 48.
Step 7: As there are no complex conjugate poles or zeros, no angles of departures or
arrivals are required to be calculated.
Step 8: The complete root locus is as shown in the figure below.

Step 9: Prediction about stability:


For 0 < K < 48, all the roots are in the left half of the s-plane, hence the system is absolutely stable.
For K(marginal) = 48, a pair of dominant roots lies on the imaginary axis with the remaining root in the left half, so the system is marginally stable, oscillating at 2.828 rad/s.
For 48 < K < ∞, the dominant roots are located in the right half of the s-plane, hence the system is unstable.
Stability is predicted by the locations of the dominant roots, i.e. those located closest to the imaginary axis.
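The stability prediction can be verified by sweeping K and inspecting the closed-loop poles, i.e. the roots of s^3 + 6s^2 + 8s + K = 0 (numpy assumed available):

```python
import numpy as np

def max_real_part(K):
    """Largest real part among the closed-loop poles for a given gain K."""
    return max(r.real for r in np.roots([1, 6, 8, K]))

print(max_real_part(20) < 0)    # True: stable
print(abs(max_real_part(48)))   # ~0: marginal, poles at +/- j2.828
print(max_real_part(60) > 0)    # True: unstable
```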

Problem 2.
Sketch the root locus for the system shown in figure just below.

Solution:

Let us begin by calculating the asymptotes. The real axis intercept is evaluated as

σa = ((−1 − 2 − 4) − (−3)) / (4 − 1) = −4/3   (6.7)

The angles of the lines that intersect at −4/3 are θ = (1 + 2k)·180°/(4 − 1), i.e. 60°, 180° and 300° for k = 0, 1, 2.

If the value of k continued to increase, the angles would begin to repeat. The number
of lines obtained equals the difference between the number of finite poles and the
number of finite zeros. The locus begins at the open-loop poles and ends at the open-
loop zeros. For the example there are more open-loop poles than open-loop zeros.
Thus, there must be zeros at infinity. The asymptotes tell us how we get to these zeros
at infinity. Figure just below shows the complete root locus as well as the asymptotes
that were just calculated. The real-axis segments lie to the left of an odd number of
poles and/or zeros. The locus starts at the open-loop poles and ends at the open-loop
zeros. For the example there is only one open-loop finite zero and three infinite zeros.
The three zeros at infinity are at the ends of the asymptotes.
Chapter 7

Nyquist Stability Criterion

Nyquist plots were invented by Harry Nyquist, who worked at Bell Laboratories, the premier technical organization in the U.S. at the time. Nyquist plots are a way of showing frequency responses of linear systems. There are several ways of displaying frequency response data, including Bode plots and Nyquist plots. Bode plots use frequency as the horizontal axis and use two separate plots to display amplitude and phase of the frequency response. Nyquist plots display both amplitude and phase angle on a single plot, using frequency as a parameter in the plot. Nyquist plots have properties that allow you to see whether a system is stable or unstable.

7.1 Nyquist Plot


A Nyquist plot is a polar plot of the frequency response function of a linear system.
That means a Nyquist plot is a plot of the transfer function, G(s) with s = jω. That
means you want to plot G(jω). G(jω) is a complex number for any angular frequency,
ω, so the plot is a plot of complex numbers. The complex number, G(jω), depends
upon frequency, so frequency will be a parameter if you plot the imaginary part of
G(jω) against the real part of G(jω).

7.2 Sketching the Polar Plot of the Frequency Response
To sketch the polar plot of G(jω) for the entire range of frequency ω, i.e., from 0 to ∞,
there are four key points that usually need to be known:

1.) The start of plot where ω = 0,

2.) The end of plot where ω = ∞,

3.) Where the plot crosses the real axis, Im(G(jω)) = 0, and

4.) Where the plot crosses the imaginary axis, Re(G(jω)) = 0.


Problem 1.
Consider a first order system

G(s) = 1 / (1 + sT)   (7.1)

where T is the time constant.

Solution:

Representing G(s) in the frequency response form G(jω) by replacing s = jω:

G(jω) = 1 / (1 + jωT)   (7.2)

The magnitude of G(jω), i.e. |G(jω)|, is obtained as:

|G(jω)| = 1 / √(1 + ω^2 T^2)   (7.3)

The phase of G(jω), denoted by φ, is obtained as:

φ = ∠G(jω) = −arctan(ωT)   (7.4)

The start of plot, where ω = 0:

|G(jω)| = 1 / √(1 + 0) = 1   (7.5)

φ = −arctan(0) = 0°   (7.6)

The end of plot, where ω → ∞:

|G(jω)| = 1 / √(1 + ∞) = 0   (7.7)

φ = −arctan(∞) = −90°   (7.8)

Problem 2.
Consider a second order system

G(s) = 1 / ((1 + sT1)(1 + sT2))   (7.9)

where T1 and T2 are the time constants.

Solution:
Representing G(s) in the frequency response form G(jω) by replacing s = jω:

G(jω) = 1 / ((1 + jωT1)(1 + jωT2))   (7.10)

The magnitude of G(jω), i.e., |G(jω)|, is obtained as;

|G(jω)| = 1 / (√(1 + ω^2 T1^2) √(1 + ω^2 T2^2))   (7.11)

The phase of G(jω), denoted by φ, is obtained as:

φ = ∠G(jω) = −arctan(ωT1) − arctan(ωT2)   (7.12)



The start of plot, where ω = 0:

|G(jω)| = 1 / (√(1 + 0) √(1 + 0)) = 1   (7.13)

φ = ∠G(jω) = −arctan(0) − arctan(0) = 0°   (7.14)

The end of plot, where ω → ∞:

|G(jω)| = 1 / (√∞ √∞) = 0   (7.15)

φ = ∠G(jω) = −arctan(∞) − arctan(∞) = −90° − 90° = −180°   (7.16)
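A numerical check of the limiting phase, with the illustrative assumptions T1 = 1 s and T2 = 0.5 s:

```python
import cmath
import math

T1, T2 = 1.0, 0.5                        # assumed time constants
def G(w):
    return 1.0 / ((1 + 1j * w * T1) * (1 + 1j * w * T2))

print(math.degrees(cmath.phase(G(0.0))))   # 0.0 at the start
print(math.degrees(cmath.phase(G(1e6))))   # approaches -180 at high frequency
```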

Problem 3.
For the unity feedback system, where G(s) = K/[s(s + 3)(s + 5)], find the range of gain, K,
for stability, instability, and the value of gain for marginal stability. For marginal stability also
find the frequency of oscillation. Use the Nyquist criterion.

Solution:

First set K = 1 and sketch the Nyquist diagram for the system, using the contour

shown in the figure just below. For all points on the imaginary axis, set K = 1 and s = jω:

G(jω)H(jω) = K / (s(s + 3)(s + 5)) |_(K=1, s=jω)   (7.17)

= (−8ω^2 − j(15ω − ω^3)) / (64ω^4 + ω^2 (15 − ω^2)^2)   (7.18)

At ω = 0, G(jω)H(jω) = −0.0356 − j∞.

Next find the point where the Nyquist diagram intersects the negative real axis. Setting the imaginary part equal to zero, we find ω = √15. Substituting this value of ω back into Eq. (7.18) yields the real part, −1/120 ≈ −0.00833. Finally, as ω → ∞,

G(jω)H(jω) = G(s)H(s)|_(s→j∞) ≈ 1/(jω)^3 = 0∠−270°   (7.19)

From the contour of figure (a) just above, P = 0; for stability N must then be equal to zero. From figure (b) just above, the system is stable if the critical point lies outside the contour (N = 0), so that Z = P − N = 0. Thus, K can be increased by a factor of 120 (the reciprocal of 1/120) before the Nyquist diagram encircles −1. Hence, for stability, K < 120. For marginal stability K = 120: at this gain the Nyquist diagram intersects −1, and the frequency of oscillation is √15 ≈ 3.87 rad/s.
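The crossing frequency, the real-axis intercept and the marginal gain can all be confirmed numerically; the exact intercept is −1/120, so the marginal gain is exactly K = 120 (numpy assumed available):

```python
import math
import numpy as np

def GH(w, K=1.0):
    """Open-loop frequency response K / (s(s+3)(s+5)) at s = jw."""
    s = 1j * w
    return K / (s * (s + 3) * (s + 5))

w_c = math.sqrt(15.0)                # imaginary part of GH vanishes here
print(GH(w_c).real)                  # -1/120 ~ -0.008333
K_marginal = -1.0 / GH(w_c).real     # gain that moves the crossing to -1
print(K_marginal)                    # ~120 (Routh on s^3+8s^2+15s+K agrees)
print(np.roots([1, 8, 15, K_marginal]))  # poles near -8 and +/- j*3.873
```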
Bibliography

[1] N.S. Nise. Control Systems Engineering, 6th Edition. Wiley, 2011.

[2] R.S. Burns. Advanced Control Engineering. Chemical, Petrochemical & Process.
Butterworth-Heinemann, 2001.

[3] K. Ogata. Modern Control Engineering. Instrumentation and controls series. Prentice
Hall, 2010.

