
Lecture Notes of

Signals, Systems and Control


Duarte Valério

October 31, 2022


Overview of contents
I Modelling 3

II Systems theory 105

III Sensors and actuators 209

IV Control systems 265

V Practical implementation of control systems 355

VI System identification 405

VII Fractional order systems 447

VIII Stochastic systems 493

Epilogue 571
Acknowledgements
Some of the exercises in these lecture notes have been taken or adapted from
collections of exercises or exams conceived by colleagues to whom I am indebted
for authorising their reproduction here. In particular, I thank professors
Alexandra Moutinho, Carlos Cardeira, Jorge Martins, José Raul Azinheira,
Mário Ramalho, Miguel Ayala Botto, Paulo Oliveira, Pedro Lourtie and Rodrigo
Bernardo for this.

Contents

1 The name of the game 1

I Modelling 3
2 The Laplace transform 7
2.1. Definition, 7. 2.2. Finding Laplace transforms, 8. 2.3. Finding
inverse Laplace transforms, 10. 2.4. Important properties: derivatives
and integrals, 14. 2.5. What do we need this for?, 15. 2.6. More
important properties: initial and final values, convolution, 17. 2.7. The
Fourier transform, 19. Glossary, 21. Exercises, 22.
3 Examples of mechatronic systems and signals 23
3.1. Systems, 23.  3.2. Signals, 28.  3.3. Models, 35.  Glossary,
36.  Exercises, 38.
4 Modelling mechanical systems 41
4.1. Modelling the translation movement, 41. 4.2. Simulating transfer
functions in Matlab, 47. 4.3. Modelling the rotational movement, 49.
4.4. Energy, effort and flow, 50. 4.5. Other components, 51. Glossary,
56. Exercises, 56.
5 Modelling electrical systems 61
5.1. Passive components, 61. 5.2. Energy, effort and flow, 64. 5.3. The
operational amplifier (OpAmp), an active component, 68. 5.4. Other
components, 73. Glossary, 74. Exercises, 75.
6 Modelling fluidic systems 79
6.1. Energy, effort and flow, 79. 6.2. Basic components of a fluidic system,
80. 6.3. Other components, 83. Glossary, 83. Exercises, 85.
7 Modelling thermal systems 87
7.1. Energy, effort and flow, 87. 7.2. Basic components of a thermal
system, 87. Glossary, 91. Exercises, 91.
8 Modelling interconnected and non-linear systems 93
8.1. Energy, effort and flow, 93. 8.2. System interconnection, 95. 8.3.
Dealing with non-linearities, 97. Glossary, 99. Exercises, 100.

II Systems theory 105


9 Transfer functions and block diagrams 109
9.1. More on transfer functions, 109.  9.2. Block diagrams, 113.  9.3.
Control in open-loop and in closed-loop, 122.  Glossary, 124.  Exercises,
125.
10 Time and frequency responses 129
10.1. Time responses: steps and impulses as inputs, 129.  10.2. Steady-
state response and transient response, 134.  10.3. Stability, 138.  10.4.
Time responses: periodic inputs, 143.  10.5. Frequency responses and the
Bode diagram, 146.  Glossary, 155.  Exercises, 156.

11 Finding time and frequency responses 161


11.1. Time and frequency responses of a pole at the origin, 161.  11.2.
Time and frequency responses of a first-order system without zeros, 163.
11.3. Time and frequency responses of a second-order system without zeros,
168.  11.4. Systems with more zeros and poles: frequency responses,
181.  11.5. Systems with more zeros and poles: stability, 190.  11.6.
Systems with more zeros and poles: time responses, 194.  Glossary, 200. 
Exercises, 201.
III Sensors and actuators 209
12 Measuring chains and actuation chains 213
12.1. What are measuring chains and actuation chains, 213.  12.2. Filters,
215.  12.3. Bandwidth, 220.  12.4. Signal characteristics, 222.  12.5.
Op-amp implementation of signal conditioning, 231.  Glossary, 233. 
Exercises, 234.
13 Sensors 239
13.1. Position, proximity, velocity and acceleration sensors, 240.  13.2.
Working principles of position sensors, 242.  13.3. Working principles of
velocity and acceleration sensors, 252.  13.4. Sensors for force, binary,
pressure and level, 254. 13.5. Sensors for flow and pressure in flows,
254. 13.6. Sensors for temperature and luminosity, 254. 13.7. Sensors
for pH and concentration, 254. Glossary, 254. Exercises, 254.
14 Actuators 259
14.1. Generalities about electric motors, 259.  14.2. DC motors, 259. 
14.3. AC motors, 259.  14.4. Generalities about pneumatic and hydraulic
actuators, 259.  14.5. Pneumatic and hydraulic compressors and motors,
259.  14.6. Cylinders and valves, 259.  14.7. Dimensioning of pneumatic
and hydraulic circuits, 260.  Glossary, 260.  Exercises, 260.
IV Control systems 265
15 Control strategies and controller structures 269
15.1. Open loop control, 269.  15.2. Closed loop control, 272.  15.3.
Design of open loop controllers, 276.  15.4. Closed loop controllers, 276. 
Glossary, 281.  Exercises, 282.
16 Root locus 283
16.1. Simple examples, 283.  16.2. Rules for the root locus, 288.  16.3.
Proofs of rules for the root locus, 294.  16.4. Finding desired poles from
specifications, 299. Glossary, 302. Exercises, 302.
17 The Nyquist stability criterion 305
17.1. The polar diagram, 305.  17.2. The Nyquist diagram when there
are no poles on the imaginary axis, 307.  17.3. The Nyquist criterion,
309.  17.4. The Nyquist diagram when there are poles on the imaginary
axis, 314.  Glossary, 317.  Exercises, 318.
18 Stability margins 321
18.1. Stability margins for stable, minimum phase systems, 321.  18.2.
Stability margins in the general case, 325.  Glossary, 325.  Exercises, 326.
19 The Nichols diagram 327
Glossary, 327.  Exercises, 328.
20 Steady-state errors 329
Glossary, 329.  Exercises, 330.
21 Design of PID controllers 333
Glossary, 335.  Exercises, 335.
22 Design of lead-lag controllers 339
Glossary, 339 .  Exercises, 339.

23 Internal Model Control 343


23.1. IMC as a variation of closed-loop control, 343.  23.2. IMC as a design
method for closed-loop controllers, 346.  Glossary, 346.  Exercises, 347.
24 Delay systems 349
24.1. Pure delays, 349.  24.2. Padé approximations, 350.  24.3. Smith
predictor, 350. 24.4. Control of systems similar to delay systems, 350.
Glossary, 351.  Exercises, 351.
V Practical implementation of control systems 355
25 Digital signals and systems 359
25.1. The Z transform, 361.  25.2. Discrete transfer functions, 364. 
25.3. Zero order hold, 365.  25.4. Choosing the sampling time, 366. 
25.5. Stability and causality of discrete transfer functions, 369.  25.6.
Primary and complementary strips in s, 371.  Glossary, 372.  Exercises,
372.
26 Digital approximations of continuous systems 375
26.1. Using the Z transform, 375.  26.2. Mapping poles and zeros, 375. 
26.3. Backward first order approximation, 375. 26.4. Forward first order
approximation, 375. 26.5. The Tustin approximation, 375. 26.6. Ap-
proximating controllers, 375.  Glossary, 376.  Exercises, 376.
27 Study and control of digital systems 377
27.1. Block diagrams, 377.  27.2. Pole placement, 377.  27.3. Steady
state errors, 377.  27.4. Root locus, 378.  27.5. Jury and Routh-Hurwitz
criteria, 378.  27.6. Frequency responses, 378.  27.7. Studying stability
from frequency responses, 378.  Glossary, 378.  Exercises, 379.
28 Non-linearities in control systems 383
28.1. Non-linearities and responses in time, 383.  28.2. Describing func-
tion, 388.  28.3. Predicting limit cycles, 394.  Glossary, 399.  Exercises,
399.
29 Other aspects of controller design and implementation 403
Glossary, 403.  Exercises, 404.
VI System identification 405
30 Overview and general issues 409
30.1. Types of data, types of identification methods, and types of models,
409. 30.2. Comparing model performance, 411. 30.3. Noise, 414.
30.4. Pseudo-inverse of a matrix, 416. Glossary, 419. Exercises, 419.
31 Identification from time responses 421
31.1. Identification of the order of the model, 421. 31.2. Identification
from analytical characteristics of the response, 422. 31.3. Identification
by minimisation of error, 423. 31.4. Deconvolution, 426. Glossary,
429. Exercises, 429.
32 Identification from frequency responses 431
32.1. Finding frequency response data, 431. 32.2. Identification from
the Bode diagram, 433.  32.3. Levy's method, 433.  32.4. Matsuda's
method, 437.  32.5. Oustaloup's method, 439.  Glossary, 440.  Exercises,
440.
33 Identification of non-linearities 443
33.1. Identification from a time response, 443. 33.2. Identification from
a limit cycle, 443. 33.3. Identification of a pure delay, 444. Exercises,
444.
VII Fractional order systems 447
34 Fractional order systems and their frequency responses 451

34.1. Frequency responses that require fractional order systems, 451.  34.2.
Frequency responses of fractional order systems, 453. 34.3. Identification
from the Bode diagram, 458.  34.4. Levy's method extended, 459. 
Glossary, 462.  Exercises, 462.
35 Fractional order derivatives 467
35.1. Gamma function, 467.  35.2. Two apparently simple examples,
471. 35.3. The Grünwald-Letnikov definition of fractional derivatives,
473. 35.4. The Riemann-Liouville definition of fractional derivatives,
475.  35.5. Properties of fractional derivatives, 478.  35.6. Applications
of fractional derivatives, 479.  Glossary, 480.  Exercises, 480 .
36 Time responses of fractional order systems 483
36.1. The Mittag-Leffler function, 483. 36.2. Time responses of simple
fractional order transfer functions, 485. 36.3. Stability of fractional sys-
tems, 486. 36.4. Identification from time responses, 488. 36.5. Final
comments about models of fractional order, 491.  Exercises, 491.
VIII Stochastic systems 493
37 Stochastic processes and systems 497
37.1. Stochastic processes, 497.  37.2. Characterisation of stochastic pro-
cesses, 500.  37.3. Relations between stochastic processes, 510.  37.4.
Operations with stochastic processes, 515.  Glossary, 517.  Exercises, 518.
38 Spectral density 519
38.1. The bilateral Fourier transform, 519. 38.2. Definition and properties
of the spectral density, 522.  38.3. Numerical computation of the PSD and
CSD, 527.  38.4. White noise, 531.  Glossary, 532.  Exercises, 532.
39 Identication of continuous stochastic models 533
39.1. Identification in time, 533. 39.2. Identification in frequency, 537.
Glossary, 541.  Exercises, 542.
40 Filter design 543
40.1. Wiener filters, 543. 40.2. Whitening filters, 546. Glossary, 547.
Exercises, 547.
41 Digital stochastic models 549
41.1. Types of digital stochastic models, 549.  41.2. Autocorrelation
of a MA, 552.  41.3. Partial autocorrelation of an AR, 554.  41.4.
Identication, 555.  Glossary, 556.  Exercises, 556.
42 Control of stochastic systems 557
42.1. Minimum variance control, 557.  42.2. Pole-assignment control,
564.  42.3. Control design for time-varying references, 566.  42.4. Adap-
tive control, 567.  Glossary, 568.  Exercises, 568.
Epilogue 571
43 What next? 571
43.1. Discrete events and automation, 571.  43.2. State-space representa-
tions of systems continuous in time, 571.  43.3. State-space representations
of systems discrete in time, 571.  43.4. MIMO systems and MIMO con-
trol, 571.  43.5. Optimal control, 572.  43.6. Fractional order control,
572.  43.7. Other areas, 572.  Glossary, 572.
Chapter 1

The name of the game


Signum est quod et se ipsum sensui et praeter se aliquid animo ostendit.

Saint Augustine of Hippo (354 – †430), De dialectica, V

System is the part of the Universe we want to study.
A signal is a function of time or space that conveys information about a
system.
A control system is, in simple terms, a system designed to behave as the
designer desires.
There are systems of many types: biological systems, economic systems,
chemical systems, social systems. . . These lecture notes are concerned in
particular with mechatronic systems, i.e. those combining both mechanical and
electronic components.

Example 1.1. A Wave Energy Converter (WEC) is a mechatronic system that


extracts the energy of sea waves, usually to produce electricity. The power
injected to the electric grid is a signal that depends on time. The elevation
of the sea waves is a signal that depends on both time and space. Figure 1.1
illustrates these signals.

We will study the following subjects:

Part I addresses the development of mathematical models to describe the
behaviour of mechatronic systems.

Part II uses the models from Part I to study how systems behave.

Part III presents the technology used to measure signals and to control mechatronic
systems.

Part IV shows how controllers can be designed to make control systems behave as
desired.

(Figure 1.1 appears here: left, a plot of Energy [J] against Time [s] for the
uncontrolled and controlled cases; right, a plot of wave elevation [m] against
time [s].)

Figure 1.1: Left: electrical energy produced by a WEC as a function of time.
Right: wave elevation as a function of time at a given point; at a different
location, the wave elevation will be different.


Part V is about practical aspects of the implementation of control systems.

Part VI explores system identification, i.e. ways of finding models from
experimental data.

Part VII studies systems described by models with derivatives and integrals with
orders that are not integer numbers.

Part VIII covers systems with outputs that are not deterministic.

Chapter 43 concludes these lecture notes with an overview of related subjects.

At IST, ULisboa, these subjects are covered in the following courses:

• Signals and Mechatronic Systems covers Parts I, II and III.
• Control Systems covers Parts IV and V.
• System Identification covers Parts VI, VII and VIII.
We will often use a software called Matlab, as well as Simulink, Matlab's
graphical environment for working with block diagrams. Matlab is not free
software. On the other hand, some of the functionalities of Matlab can be
supplied by free software such as Octave (which you can install from https://
www.gnu.org/software/octave or run online from https://octave-online.
net/) or Scilab (which you can install from https://www.scilab.org/). Notice
that several functionalities of Matlab we will need are missing from Octave,
which also lacks anything parallel to Simulink. Scilab is more complete, and
has Xcos, which is similar to Simulink; but it is slightly less compatible with
Matlab.
In what follows it is presumed that you are acquainted with the most basic
features of Matlab (or Octave, or Scilab), which you can learn with the
"Getting started with MATLAB" tutorials in the program's Documentation
Center.
Part I

Modelling


La ciencia jamás podrá descubrir todos los secretos de la naturaleza,


ya que la ciencia la hacen los hombres y éstos son parte de ella.

J. Juan Rosales García (1967  . . . ), pers. comm., Saint Petersburg, 2001,


qtd. in Ecuaciones diferenciales ordinarias (auth. Juan Rosales, Manuel Guía
Calderón)

In this part of the lecture notes:

Chapter 2 presents very handy mathematical tools for the resolution of differential
equations, which we will need repeatedly in subsequent chapters.

Chapter 3 gives examples of mechatronic systems and signals, and the basic notions
related thereto.

Chapter 4 addresses the modelling of mechanical systems.

Chapter 5 addresses the modelling of electrical systems.

Chapter 6 addresses the modelling of fluidic systems, a particular type of mechanical
systems.

Chapter 7 addresses the modelling of thermal systems.

Chapter 8 addresses the modelling of systems combining several of the components
studied in chapters 4 to 7, as well as of systems with nonlinear models.

Here is what you need to know beforehand to follow these chapters:

• Differential and integral calculus, up to the usual level of freshman courses
on Calculus;

• Kinematics, dynamics, electrical circuits, fluid mechanics, and heat transfer,
up to the usual level of freshman courses on Physics, or at least the
level of secondary education.
Chapter 2

The Laplace transform


"And, if we transmit through a wormhole, the person is always reconstituted
at the other end. We can count on that happening, too."
There was a pause.
Stern frowned.
"Wait a minute," he said. "Are you saying that when you transmit,
the person is being reconstituted by another universe?"
"In effect, yes. I mean, it has to be. We can't very well reconstitute
them, because we're not there. We're in this universe."

Michael Crichton (1942 – †2008), Timeline, Black rock

The Laplace transform is a very important tool for the resolution of differential
equations. In this chapter we will study its definition, its properties,
its application to differential equations (which is the reason we are studying this
subject), and the related Fourier transform, that we will also need.

2.1 Definition
Definition 2.1. Let t ∈ R be a real variable, and f(t) ∈ R a real-valued
function. The Laplace transform of function f, denoted by L[f(t)] or by F(s),
is a complex-valued function F(s) ∈ C of complex variable s ∈ C, given by

L[f(t)] = ∫_0^{+∞} f(t) e^{-st} dt    (2.1)

Remark 2.1. Strictly speaking, operation L is the Laplace transformation,
and the result of applying L to a function gives us its Laplace transform. But
it is common to call the operation itself Laplace transform as well.
Remark 2.2. In (2.1), function f(t) only has to be defined for t ≥ 0. This
would not be so if we were using the bilateral Laplace transform, which is
an alternative definition given by

L[f(t)] = ∫_{-∞}^{+∞} f(t) e^{-st} dt    (2.2)

This bilateral Laplace transform is seldom used; we will use (2.1) instead, as is
common, and will need (2.2) only in Chapter 37. The price to pay for being
able to work with functions defined in R⁺ only will be addressed below in
section 2.4.
Remark 2.3. The Laplace transform is part of a group of transforms known
as integral transforms, given by

T[f(t)] = ∫_0^{+∞} f(t) K(s,t) dt    (2.3)

where T is a generic transform and K(s,t) is a function called kernel. In the
case of the Laplace transform, the kernel is K(s,t) = e^{-st}.


The Laplace transform of function f(t) will only exist if the improper integral
in (2.1) converges. This will happen in one of two cases:

• If f(t) is bounded in its domain R⁺, the integrand f(t) e^{-st} will obviously
tend to 0 as t → +∞.
• If f(t) tends to infinity as t → +∞, but does so slower than e^{-st} tends
to 0, the integrand will still tend to 0. More rigorously, f(t) must be of
exponential order, i.e. there must be positive real constants M, c ∈ R such
that
|f(t)| ≤ M e^{ct}, 0 ≤ t ≤ ∞.    (2.4)

Otherwise, the integrand of (2.1) does not tend to 0 and it is obvious that the
improper integral will be infinite. For complete rigour we also have to require
f(t) to be piecewise continuous for F(s) to exist, otherwise the Riemann integral
would not exist.
Remark 2.4. In fact (2.1) may converge only for some values of s, and thus have
a region of convergence which is smaller than C; but then it can be analytically
extended to the rest of the complex plane. This is a question we will not worry
about.

2.2 Finding Laplace transforms


Example 2.1. Let f(t) be the function

H(t) = 1 if t ≥ 0, 0 if t < 0,    (2.5)

known as the Heaviside function. Then

L[H(t)] = ∫_0^{+∞} 1·e^{-st} dt = [e^{-st}/(-s)]_0^{+∞} = e^{-∞}/(-s) - e^0/(-s) = 1/s.    (2.6)

Example 2.2. Let f(t) be a negative exponential, f(t) = e^{-at}. Then

L[e^{-at}] = ∫_0^{+∞} e^{-at} e^{-st} dt = [e^{-(a+s)t}/(-a-s)]_0^{+∞} = -(e^{-∞}/(s+a) - e^0/(s+a)) = 1/(s+a).    (2.7)
While Laplace transforms can be found from the definition as in the two examples
above, in practice they are found from tables, such as the one in Table 2.1.
To use Laplace transform tables profitably, it is necessary to prove first the
following result.
Theorem 2.1. The Laplace transform is a linear operator:
L [k f (t)] = k F (s), k∈R (2.8)
L [f (t) + g(t)] = F (s) + G(s) (2.9)

Proof. Both (2.8) and (2.9) are proved from the linearity of the integration
operator:

L[k f(t)] = ∫_0^{+∞} k f(t) e^{-st} dt = k ∫_0^{+∞} f(t) e^{-st} dt = k F(s)    (2.10)

L[f(t) + g(t)] = ∫_0^{+∞} (f(t) + g(t)) e^{-st} dt
               = ∫_0^{+∞} f(t) e^{-st} dt + ∫_0^{+∞} g(t) e^{-st} dt = F(s) + G(s)    (2.11)

Example 2.3. The Laplace transform of f(t) = 5t is obtained from line 3 of
Table 2.1 together with (2.8):

L[5t] = 5 L[t] = 5/s²    (2.12)

Table 2.1: Table of Laplace transforms

      x(t)                                          X(s)
  1   δ(t)                                          1
  2   H(t)                                          1/s
  3   t                                             1/s²
  4   t²                                            2/s³
  5   e^{-at}                                       1/(s+a)
  6   1 - e^{-at}                                   a/(s(s+a))
  7   t e^{-at}                                     1/(s+a)²
  8   tⁿ e^{-at}, n ∈ N                             n!/(s+a)^{n+1}
  9   sin(ωt)                                       ω/(s²+ω²)
 10   cos(ωt)                                       s/(s²+ω²)
 11   e^{-at} sin(ωt)                               ω/((s+a)²+ω²)
 12   e^{-at} cos(ωt)                               (s+a)/((s+a)²+ω²)
 13   (1/(b-a)) (e^{-at} - e^{-bt})                 1/((s+a)(s+b))
 14   (1/(ab)) [1 + (1/(a-b)) (b e^{-at} - a e^{-bt})]   1/(s(s+a)(s+b))
 15   (ωn/Ξ) e^{-ξωn t} sin(ωn Ξ t)                 ωn²/(s²+2ξωn s+ωn²)
 16   -(1/Ξ) e^{-ξωn t} sin(ωn Ξ t - φ)             s/(s²+2ξωn s+ωn²)
 17   1 - (1/Ξ) e^{-ξωn t} sin(ωn Ξ t + φ)          ωn²/(s(s²+2ξωn s+ωn²))

In this table: Ξ = √(1 - ξ²); φ = arctan(Ξ/ξ)

Example 2.4. The Laplace transform of f(t) = 1 - (1+t) e^{-3t} is obtained from
lines 6 and 7 of Table 2.1 together with (2.9):

L[f(t)] = L[1 - e^{-3t} - t e^{-3t}] = L[1 - e^{-3t}] - L[t e^{-3t}]
        = 3/(s(s+3)) - 1/(s+3)² = (3(s+3) - s)/(s(s+3)²) = (2s+9)/(s³+6s²+9s)    (2.13)
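Results such as (2.12) and (2.13) can also be checked in Matlab, provided the Symbolic
Math Toolbox (or Octave's symbolic package) is installed; this is only a sketch of how
that might be done, since symbolic computation is not otherwise used in these notes:

>> syms t s
>> laplace(5*t)                              % should return 5/s^2, as in (2.12)
>> simplify(laplace(1 - (1+t)*exp(-3*t)))    % should agree with (2.13)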

2.3 Finding inverse Laplace transforms


Laplace transform tables can also be used to find inverse Laplace transforms,
i.e. finding the f(t) corresponding to a given F(s) = L[f(t)]. This operation
is denoted by f(t) = L⁻¹[F(s)].

Example 2.5. The inverse Laplace transform of F(s) = 10/(s+10) is obtained from
line 5 of Table 2.1 together with (2.8):

L⁻¹[10/(s+10)] = 10 L⁻¹[1/(s+10)] = 10 e^{-10t}    (2.14)

Example 2.6. The inverse Laplace transform of F(s) = (s+2)/(s²+13s+30) is obtained
from line 5 of Table 2.1 together with (2.8). But for that it is necessary to develop
F(s) in a partial fraction expansion. First we find the roots of the polynomial
in the denominator, which are -3 and -10. So s²+13s+30 = (s+3)(s+10),
and we can write

(s+2)/(s²+13s+30) = A/(s+3) + B/(s+10)    (2.15)

where A and B still have to be determined:

A/(s+3) + B/(s+10) = (As+10A+Bs+3B)/((s+3)(s+10)) = (s(A+B) + (10A+3B))/(s²+13s+30)    (2.16)

Obviously we want that

A+B = 1 and 10A+3B = 2  ⇔  B = 1-A and 10A+3-3A = 2  ⇔  A = -1/7 and B = 8/7    (2.17)

So (s+2)/(s²+13s+30) = (-1/7)/(s+3) + (8/7)/(s+10), and finally

L⁻¹[(s+2)/(s²+13s+30)] = L⁻¹[(-1/7)/(s+3)] + L⁻¹[(8/7)/(s+10)]
                       = -(1/7) e^{-3t} + (8/7) e^{-10t}    (2.18)

Remark 2.5. Notice that the result in line 13 of Table 2.1 can be obtained
from line 5 also using a partial fraction expansion:

1/((s+a)(s+b)) = A/(s+a) + B/(s+b) = (s(A+B) + (Ab+aB))/((s+a)(s+b))    (2.19)

We want

A+B = 0 and Ab+aB = 1  ⇔  B = -A and A(b-a) = 1  ⇔  A = 1/(b-a) and B = -1/(b-a)    (2.20)

and thus

L⁻¹[1/((s+a)(s+b))] = L⁻¹[(1/(b-a))/(s+a)] + L⁻¹[(-1/(b-a))/(s+b)]
                    = (1/(b-a)) e^{-at} + (-1/(b-a)) e^{-bt} = (1/(b-a)) (e^{-at} - e^{-bt})    (2.21)

Example 2.7. The inverse Laplace transform of F(s) = (4s²+13s-2)/((s²+2s+2)(s+4)) is
obtained from lines 5, 15 and 16 of Table 2.1 together with (2.8) and (2.9). The
transforms in lines 15 and 16 are used because the roots of s²+2s+2 are
complex and not real (-1 ± j, to be precise). So we will leave that second order
term intact and we make
4s2 + 13s − 2 As + B C As2 + 4As + Bs + 4B + Cs2 + 2Cs + 2C
2
= 2 + =
(s + 2s + 2)(s + 4) s + 2s + 2 s + 4 (s2 + 2s + 2)(s + 4)
s2 (A + C) + s(4A + B + 2C) + (4B + 2C)
= (2.22)
(s2 + 2s + 2)(s + 4)
Hence
   
A + C = 4
 C = 4 − A
 C = 4 − A
 C = 1

4A + B + 2C = 13 ⇔ 4A + B + 8 − 2A = 13 ⇔ 2A + B = 5 ⇔ A=3
   
4B + 2C = −2 4A − 3B = 15 4A − 3B = 15 B = −1
   
(2.23)
Finally,
4s2 + 13s − 2
   
3s − 1 1
L −1
=L −1
+ (2.24)
(s2 + 2s + 2)(s + 4) s2 + 2s + 2 s + 4
     
s 1 2 1
= 3L −1 2 − L −1 2 + L −1
s + 2s + 2 2 s + 2s + 2 s+4
and since for the rst two terms we have

ω= 2 (2.25)
ξω = 1 (2.26)
1
ξ=√ (2.27)
2
r
1 1
Ξ= 1− =√ (2.28)
2 2
ωΞ = 1 (2.29)
√1
2 π
ϕ = arctan = (2.30)
√1 4
2
we arrive at
4s2 + 13s − 2 √ −t
   π  1 −t
L −1 = −3 2e sin t − − 2e sin(t) + e−4t
(s2 + 2s + 2)(s + 4) 4 2
h √  π π i
−4t −t
=e +e −3 2 sin t cos − cos t sin − sin t
4 4

  
−4t −t 1 1
=e +e −3 2 sin t √ − cos t √ − sin t
2 2
= e−4t + e−t (−4 sin t + 3 cos t) (2.31)
Remark 2.6. If in the example above we had decided to expand the second
order term and use only line 5 of Table 2.1, we would have arrived at the very
same result, albeit with more lengthy and tedious calculations involving complex
numbers. We would have to separate (3s-1)/(s²+2s+2) in two as follows:

3s − 1 A + Bj C + Dj
= + (2.32)
s2 + 2s + 2 s+1+j s+1−j
As + A − Aj + Bjs + Bj + B + Cs + C + Cj + Djs + Dj − D
=
s2 + s − js + s + 1 − j + js + j + 1
s(A + C) + js(B + D) + (A + B + C − D) + j(−A + B + C + D)
=
s2 + 2s + 2
Then
C = 23
  

 A+C =3 
 C =3−A 


B + D = 0 
D = −B 
D = 2
⇔ ⇔
A + B + C − D = −1 A + B + 3 − A + B = −1 B = −2

 
 

−A + B + C + D = 0 −A + B + 3 − A − B = 0 A = 32
  

(2.33)

Consequently
 3 3
4s2 + 13s − 2 2 − 2j
  
2 + 2j 1
L −1 =L −1
+ +
(s2 + 2s + 2)(s + 4) s+1+j s+1−j s+4
         
3 1 3 1 1
= − 2j L −1
+ + 2j L −1
+L −1
2 s+1+j 2 s+1−j s+4
   
3 3
= − 2j e−(1+j)t + + 2j e−(1−j)t + e−4t
2 2
   
−4t 3 −t 3
=e + − 2j e (cos(−t) + j sin(−t)) + + 2j e−t (cos t + j sin t)
2 2

−4t −t 3 3
=e +e cos t − j sin t − 2j cos t − 2 sin t+
2 2

3 3
+ cos t + j sin t + 2j cos t − 2 sin t
2 2
= e−4t + e−t (3 cos t − 4 sin t) (2.34)

Notice how all the complex terms appear in complex conjugates, so that the
imaginary parts cancel out. This has to be the case, since f (t) is a real-valued
function.
Example 2.8. The inverse Laplace transform of F(s) = (s²+22s+119)/(s+10)³ is obtained
from lines 5, 7 and 8 of Table 2.1 together with (2.8) and (2.9):

s2 + 22s + 119 A B C
3
= + 2
+
(s + 10) s + 10 (s + 10) (s + 10)3
2
As + 20As + 100A + Bs + 10B + C
= (2.35)
(s + 10)3

Hence
 
A = 1
 A = 1

20A + B = 22 ⇔ B=2 (2.36)
 
100A + 10B + C = 119 C = −1
 

Finally,
 2   
−1 s + 22s + 119 1 2 −1
L =L −1
+ + (2.37)
(s + 10)3 s + 10 (s + 10)2 (s + 10)3
     
1 2 1 −1 2
=L −1
+ 2L −1
− L
s + 10 (s + 10)2 2 (s + 10)3
 
−10t −10t 1 2 −10t −10t 1 2
=e + 2t e − t e =e 1 + 2t − t
2 2

Example 2.9. The inverse Laplace transform of F(s) = (2s+145)/(s+70) is obtained
from lines 1 and 5 of Table 2.1, but for that it is necessary to begin by dividing
the numerator of F(s) by the denominator. Because the denominator is of first
order, in this case the polynomial division can be carried out with Ruffini's rule
(otherwise a long division would be necessary):

        2    145
 -70        -140    (2.38)
        2      5

So (2s+145)/(s+70) = 2 + 5/(s+70), and finally

L⁻¹[(2s+145)/(s+70)] = 2 L⁻¹[1] + 5 L⁻¹[1/(s+70)]
                     = 2 δ(t) + 5 e^{-70t}    (2.39)

All polynomial operations mentioned in this section can be performed with
Matlab using the following commands:

• roots finds the roots of a polynomial, represented by a vector with its
coefficients (in decreasing order of the exponent);

• conv multiplies two polynomials, represented by two vectors as above;

• residue performs polynomial division and partial fraction expansion, as
needed, for a rational function, given the numerator and denominator
polynomials represented by two vectors as above.

Example 2.10. The roots of s² + 3s + 2 are -2 and -1:
>> roots([1 3 2])
ans =
-2
-1

Example 2.11. The roots of 4s3 + 3s2 + 2s + 1 are −0.6058, −0.0721 + 0.6383j
and −0.0721 − 0.6383j :

>> roots([4 3 2 1])


ans =
-0.6058 + 0.0000i
-0.0721 + 0.6383i
-0.0721 - 0.6383i

Example 2.12. The product of s² + 2s + 3 and 4s³ + 5s² + 6s + 7 is
4s⁵ + 13s⁴ + 28s³ + 34s² + 32s + 21:
>> conv([1 2 3],[4 5 6 7])
ans =
4 13 28 34 32 21

Example 2.13. The partial fraction expansion (2.18) from Example 2.6 is
obtained as
>> [r,p,k] = residue([1 2],[1 13 30])
r =
1.1429
-0.1429
p =
-10
-3
k =
[]
Vector r contains the residues or numerators of the fractions in the partial
fraction expansion. Vector p contains the poles or roots of the denominator
of the original expression. Vector k contains (the coefficients of the polynomial
which is) the integer part of the polynomial division, which in this case is 0
because the order of the denominator is higher than the order of the numerator.
The polynomials of the original rational function can be recovered feeding
this function back vectors r, p and k:

>> [num,den] = residue(r,p,k)


num =
1 2
den =
1 13 30

Example 2.14. The partial fraction expansion (2.34) from Example 2.7 and
Remark 2.6 is obtained as

>> [r,p,k] = residue([4 13 -2],conv([1 2 2],[1 4]))


r =
1.0000 + 0.0000i
1.5000 + 2.0000i
1.5000 - 2.0000i
p =
-4.0000 + 0.0000i
-1.0000 + 1.0000i
-1.0000 - 1.0000i
k =
[]

Example 2.15. The partial fraction expansion from Example 2.9 is obtained
as
>> [r,p,k] = residue([2 145],[1 70])
r =
5
p =
-70
k =
2
Notice how this time there is an integer part of the polynomial division, since
the order of the numerator is not lower than the order of the denominator.
Example 2.16. From
>> [r,p,k] = residue([1 2 3 4 5 6],[7 8 9 10])
r =
0.1451 + 0.0000i
-0.0276 - 0.2064i
-0.0276 + 0.2064i
p =
-1.1269 + 0.0000i
-0.0080 + 1.1259i
-0.0080 - 1.1259i
k =
0.1429 0.1224 0.1050
we learn that

(s⁵+2s⁴+3s³+4s²+5s+6)/(7s³+8s²+9s+10) = 0.1429s² + 0.1224s + 0.1050
    + 0.1451/(s+1.1269) + (-0.0276-0.2064j)/(s+0.0080-1.1259j) + (-0.0276+0.2064j)/(s+0.0080+1.1259j)    (2.40)

2.4 Important properties: derivatives and integrals

Now that we know how to find Laplace transforms, it is time to wonder why
we are studying them. To answer this, first we have to establish some very
important results.
Theorem 2.2. If L[f(t)] = F(s), then

L[f'(t)] = s F(s) - f(0)    (2.41)
Proof. Apply integration by parts, ∫ u v' = [u v] - ∫ u' v, to definition (2.1):

L[f(t)] = ∫_0^{+∞} f(t) e^{-st} dt = [f(t) e^{-st}/(-s)]_0^{+∞} - ∫_0^{+∞} f'(t) e^{-st}/(-s) dt
        = lim_{t→+∞} f(t) e^{-st}/(-s) - f(0) e^0/(-s) + (1/s) ∫_0^{+∞} f'(t) e^{-st} dt    (2.42)

The limit has to be 0, otherwise F(s) would not exist. The integral is, by
definition, L[f'(t)]. From here (2.41) is obtained rearranging terms.

Corollary 2.1. If L[f(t)] = F(s), then

L[f''(t)] = s² F(s) - s f(0) - f'(0)    (2.43)

Proof. Apply (2.41) to itself:

L[f''(t)] = s L[f'(t)] - f'(0) = s (s F(s) - f(0)) - f'(0)    (2.44)

Then rearrange terms.

Corollary 2.2. If L[f(t)] = F(s), then

L[dⁿf(t)/dtⁿ] = sⁿ F(s) - s^{n-1} f(0) - s^{n-2} f'(0) - ... - [d^{n-1}f(t)/dt^{n-1}]_{t=0}
              = sⁿ F(s) - Σ_{k=1}^{n} s^{n-k} [d^{k-1}f(t)/dt^{k-1}]_{t=0}    (2.45)

Proof. This is proved by mathematical induction. The first case is (2.41). The
inductive step is proved applying (2.41) to (2.45) as follows:

L[d^{n+1}f(t)/dt^{n+1}] = s L[dⁿf(t)/dtⁿ] - [dⁿf(t)/dtⁿ]_{t=0}    (2.46)
  = s (sⁿ F(s) - Σ_{k=1}^{n} s^{n-k} [d^{k-1}f(t)/dt^{k-1}]_{t=0}) - [dⁿf(t)/dtⁿ]_{t=0}
  = s^{n+1} F(s) - Σ_{k=1}^{n} s^{n+1-k} [d^{k-1}f(t)/dt^{k-1}]_{t=0} - [dⁿf(t)/dtⁿ]_{t=0}
  = s^{n+1} F(s) - Σ_{k=1}^{n+1} s^{n+1-k} [d^{k-1}f(t)/dt^{k-1}]_{t=0}

Theorem 2.3. If L[f(t)] = F(s), then

L[∫_0^t f(t) dt] = (1/s) F(s)    (2.47)

Proof. In (2.42), make

f(t) = ∫_0^t g(t) dt,    (2.48)

whence f'(t) = g(t). Then

L[∫_0^t g(t) dt] = -(1/(-s)) ∫_0^0 g(t) dt + (1/s) ∫_0^{+∞} g(t) e^{-st} dt    (2.49)

The first integral is 0, the second is L[g(t)].

Remark 2.7. Notice that the Laplace transform of a derivative (2.41) involves
f(0), the value of the function itself at t = 0. This is because we are using
the Laplace transform as defined by (2.1), rather than the bilateral Laplace
transform (2.2).
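If the Symbolic Math Toolbox is available, property (2.41) can be checked for a
concrete function; the commands below are only a sketch, and the choice
f(t) = e^{-2t} is an arbitrary one:

>> syms t s
>> f = exp(-2*t);
>> laplace(diff(f,t))            % Laplace transform of f'(t)
>> s*laplace(f) - subs(f,t,0)    % s F(s) - f(0); both results should coincide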

2.5 What do we need this for?

We are now in a position to answer the question above: we need Laplace transforms
as a very useful tool to solve differential equations.

Example 2.17. Solve the following differential equation, assuming that y(0) = 0:

y'(t) + y(t) = e^{-t}    (2.50)

Apply the Laplace transform to obtain

L[y'(t) + y(t)] = L[e^{-t}]  ⇔  s Y(s) + Y(s) = 1/(s+1)  ⇔  Y(s) = 1/(s+1)²  ⇔
⇔  L⁻¹[Y(s)] = L⁻¹[1/(s+1)²]  ⇔  y(t) = t e^{-t}    (2.51)

It is easy to verify that this is indeed the solution: y'(t) = e^{-t} - t e^{-t}, and thus

y'(t) + y(t) = e^{-t}  ⇔  e^{-t} - t e^{-t} + t e^{-t} = e^{-t},    (2.52)

as desired.
Notice how the Laplace transform turned the differential equation in t into
an algebraic equation in s, which is trivial to solve. All that is left is to apply
the inverse Laplace transform to turn the solution in s into a solution in t.
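Solutions such as (2.51) can also be checked with Matlab's symbolic solver dsolve,
again assuming that the Symbolic Math Toolbox is available; this is only a sketch:

>> syms y(t)
>> dsolve(diff(y,t) + y == exp(-t), y(0) == 0)    % should return t*exp(-t), as in (2.51)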
Initial conditions must be taken into account if they are not zero.

Example 2.18. Solve the following differential equation, assuming that y(0) = 1/3
and y'(0) = 0:

y''(t) + 4y'(t) + 3y(t) = 4e^t    (2.53)
Using the Laplace transform, we get
 
2 1 1 4
s Y (s) − s − 0 + 4 sY (s) − + 3Y (s) = ⇔
3 3 s−1
 s 4 4
⇔ Y (s) s2 + 4s + 3 − − = (2.54)
3 3 s−1
Because s2 + 4s + 3 = (s + 1)(s + 3), we get
4 1 s+4
Y (s) = + (2.55)
(s − 1)(s + 1)(s + 3) 3 (s + 1)(s + 3)
We now need two partial fraction expansions:
 
4 1 s+4 A B C 1 D E
+ = + + + +
(s − 1)(s + 1)(s + 3) 3 (s + 1)(s + 3) s−1 s+1 s+3 3 s+1 s+3
A(s + 4s + 3) + B(s + 2x − 3) + C(s2 − 1) 1 Ds + 3D + Es + E
2 2
 
= +
(s − 1)(s + 1)(s + 3) 3 (s + 1)(s + 3)
2
 
s (A + B + C) + s(4A + 2B) + (3A − 3B − C) 1 s(D + E) + (3D + E)
= +
(s − 1)(s + 1)(s + 3) 3 (s + 1)(s + 3)
(2.56)
whence
   
1
A + B + C = 0
 4A − 2B = 4
 8A = 4
 A = 2

4A + B = 0 ⇔ 4A + B = 0 ⇔ B = −2A ⇔ B = −1
C = 12
   
3A − 3B − C = 4 C = 3A − 3B − 4 C = 3A − 3B − 4
   
(2.57)
and
( ( (
D+E =1 E =1−D E = − 12
⇔ ⇔ (2.58)
3D + E = 4 2D = 3 D = 32

Thus
 1 1  3 1 
1 1
y(t) = L −1 2
+ 2 +
− 2
− 2 (2.59)
s−1 s+1 s+3 3 s+1 s+3
 
1 1 1 3 −t 1 −3t 1 1 1
= et − e−t + e−3t + e − e = et − e−t + e−3t
2 2 3 2 2 2 2 3
It is easy to verify that this is indeed the solution: on the one hand,
1 t 1 −t
y 0 (t) = e + e − e−3t (2.60)
2 2
1 1 −t
y 00 (t) = et − e + 3e−3t (2.61)
2 2

and thus
1 t 1 −t 3 3
y 00 (t) + 4y 0 (t) + 3y(t) = e − e + 3e−3t + 2et + 2e−t − 4e−3t + et − e−t + e−3t
2 2 2 2
= 4e t
(2.62)
as desired; on the other hand,
1 1 1 1
y(0) = − + = (2.63)
2 2 3 3
1 1
y 0 (t) = + − 1 = 0 (2.64)
2 2
as required.
Remark 2.8. Notice what would have happened if we had forgotten to include
initial conditions. It would have been as if the initial conditions were null, and we
would have got
4 4
s2 Y (s) + 4sY (s) + 3Y (s) = ⇔ Y (s) s2 + 4s + 3 = (2.65)

s−1 s−1
and then
 1 1 
1 1 1
y(t) = L −1 2
− + 2 = et − e−t + e−3t (2.66)
s−1 s+1 s+3 2 2
In this case,
1 t 3
y 0 (t) = e + e−t − e−3t (2.67)
2 2
1 9
y 00 (t) = et − e−t + e−3t (2.68)
2 2
and so it remains true that
1 9 12 −3t 3 t 3
y 00 (t) + 4y 0 (t) + 3y(t) = et − e−t + e−3t + 2et + 4e−t − e + e − 3e−t + e−3t
2 2 2 2 2
= 4et (2.69)
but the initial conditions are indeed
1 1
−1+ =0
y(0) = (2.70)
2 2
1 3
y 0 (t) = + 1 − = 0 (2.71)
2 2
To conclude: if in fact initial conditions were as in Example 2.18, and if we had
written (2.65) instead of (2.54), we would get a wrong result.

2.6 More important properties: initial and final values, convolution

Before we are done with Laplace transforms, we must establish some additional
important properties that will often be needed.
Theorem 2.4 (final value theorem). If f(t) and f'(t) have Laplace transforms,

lim_{t→+∞} f(t) = lim_{s→0} s F(s)    (2.72)

provided that lim_{t→+∞} f(t) ∈ R.


Proof. Apply a limit to (2.41) to get

lim_{s→0} L[f'(t)] = lim_{s→0} (s F(s) - f(0))
⇔ f(0) + lim_{s→0} ∫_0^{+∞} f'(t) e^{-st} dt = lim_{s→0} s F(s)
⇔ f(0) + ∫_0^{+∞} f'(t) dt = lim_{s→0} s F(s)
⇔ f(0) + lim_{t→+∞} f(t) - f(0) = lim_{s→0} s F(s)    (2.73)

Example 2.19. Let f(t) = e^{-at}, a > 0. We know that lim_{t→+∞} f(t) = 0. We
have F(s) = 1/(s+a). And lim_{s→0} s F(s) = lim_{s→0} s/(s+a) = 0.
Notice that, when a < 0, it is still true that F(s) = 1/(s+a) and that
lim_{s→0} s F(s) = lim_{s→0} s/(s+a) = 0. But now lim_{t→+∞} f(t) = +∞, which is not real.

Example 2.20. Let F(s) = 1/(s(s+a)), a > 0. We have lim_{s→0} s F(s) = lim_{s→0} 1/(s+a) = 1/a.
At the same time, f(t) = (1/a)(1 - e^{-at}), and lim_{t→+∞} f(t) = 1/a.
When a < 0, we are in a situation similar to that of the former example: we
still have lim_{s→0} s F(s) = 1/a, but lim_{t→+∞} f(t) = +∞.

Theorem 2.5 (initial value theorem). If f(t) and f'(t) have Laplace transforms,

lim_{t→0+} f(t) = lim_{s→+∞} s F(s)    (2.74)

provided that lim_{s→+∞} s F(s) ∈ R.

Proof. Apply a limit to (2.41) to get

lim_{s→+∞} L[f'(t)] = lim_{s→+∞} (s F(s) - f(0))
⇔ f(0) + lim_{s→+∞} ∫_0^{+∞} f'(t) e^{-st} dt = lim_{s→+∞} s F(s)    (2.75)

In the integrand, e^{-st} goes to zero as s → +∞. If f'(t) has a Laplace transform,
it must be of exponential order, and thus e^{-st} goes to zero fast enough
to ensure that lim_{s→+∞} ∫_0^{+∞} f'(t) e^{-st} dt = 0. Because we are assuming the
unilateral Laplace transform definition, f(0) is in reality lim_{t→0+} f(t), as whatever
may happen for t < 0 is not taken into account.

Example 2.21. Let f(t) = e^{-at}. We know that lim_{t→0+} f(t) = 1. We have
F(s) = 1/(s+a). And lim_{s→+∞} s F(s) = lim_{s→+∞} s/(s+a) = 1.
Notice that, unlike what happened when we applied the final value theorem
in Example 2.19, there is now no need to restrict a > 0.

Example 2.22. Let F(s) = 1/(s(s+a)). We have lim_{s→+∞} s F(s) = lim_{s→+∞} 1/(s+a) = 0.
At the same time, f(t) = (1/a)(1 - e^{-at}), and lim_{t→0+} f(t) = 0. There is again
no need now to make a > 0.
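The limits in examples 2.19 to 2.22 can also be computed symbolically with the limit
command, assuming once more that the Symbolic Math Toolbox is available; the value
a = 3 below is an arbitrary choice:

>> syms s t
>> a = 3;
>> F = 1/(s*(s+a));
>> limit(s*F, s, 0)                     % final value, should give 1/3 (Example 2.20)
>> limit((1/a)*(1-exp(-a*t)), t, inf)   % should also give 1/3
>> limit(s*F, s, inf)                   % initial value, should give 0 (Example 2.22)
>> limit((1/a)*(1-exp(-a*t)), t, 0, 'right')   % should also give 0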

Definition 2.2. Given two functions f(t) and g(t) defined for t ∈ R⁺, their
convolution, denoted by ∗, is a function of t given by

f(t) ∗ g(t) = ∫_0^t f(t-τ) g(τ) dτ    (2.76)
Theorem 2.6. Convolution is commutative.


Proof.Use the change of variables t = t − τ , for which dτ = −dt. With this
change of variables, when τ = 0 we have t = t, and when τ = t we have t = 0.
Apply this to (2.76) to get
Z t
f (t) ∗ g(t) = f (t − τ )g(τ ) dτ
0
Z 0
=− f (t)g(t − t) dt
t
Z t
= f (τ )g(t − τ ) dτ = g(t) ∗ f (t) (2.77)
0

Theorem 2.7. If these Laplace transforms exist,


L [f (t) ∗ g(t)] = F (s) G(s) (2.78)

Proof.

Z +∞ Z t 
L [f (t) ∗ g(t)] = f (t − τ )g(τ ) dτ e−st dt (2.79)
0 0

We can change the limits of integration of the inner integral by including a


Heaviside function H(t − τ ):
Z +∞ Z +∞ 
L [f (t) ∗ g(t)] = f (t − τ )H(t − τ )g(τ ) dτ e−st dt (2.80)
0 0

H(t − τ ) = 1 if t − τ ≥ 0 ⇔ τ ≤ t, which is the range of values in (2.79). But


H(t − τ ) = 0 if t − τ < 0 ⇔ τ > t, the additional range of values added in
(2.79), which thus does not change the result. We can now change the order of
integration:
Z +∞ Z +∞ 
L [f (t) ∗ g(t)] = f (t − τ )H(t − τ )g(τ ) dτ e−st dt
0 0
Z +∞ Z +∞
= f (t − τ )H(t − τ )g(τ )e−st dt dτ
0 0
Z +∞ Z +∞
= g(τ ) f (t − τ )H(t − τ )e−st dt dτ (2.81)
0 0

We now apply to the inner integral the change of variables t = t − τ , for which
dt = dt. With this change of variables, when t = 0 we have t = −τ , and when
t → +∞ we have t → +∞ too.
Z +∞ Z +∞
L [f (t) ∗ g(t)] = g(τ ) f (t)H(t)e−s(τ +t) dt dτ (2.82)
0 −τ

We have H(t) = 1 if t ≥ 0 and H(t) = 0 if t < 0, so the integration limits can


be changed accordingly:
Z +∞ Z +∞
L [f (t) ∗ g(t)] = g(τ ) f (t)e−sτ e−st dt dτ (2.83)
0 0

All that is left is taking outside integrals terms that do not depend on the
corresponding variables:
Z +∞ Z +∞ 
L [f (t) ∗ g(t)] = g(τ )e−sτ f (t)e−st dt dτ
0 0
Z +∞ Z +∞
= f (t)e−st dt g(τ )e−sτ dτ (2.84)
0 0

and these integrals are the denitions of F (s) and G(s).

Example 2.23. From L⁻¹[1/s] = H(t) we get

L⁻¹[1/s²] = L⁻¹[(1/s)(1/s)] = ∫_0^t H(t-τ) H(τ) dτ = ∫_0^t 1 dτ = t    (2.85)

Remark 2.9. The function obtained is known as the unit slope ramp:

f(t) = t if t ≥ 0, 0 if t < 0    (2.86)
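Result (2.85) can also be checked numerically: if two signals are sampled with step dt,
conv approximates their convolution integral when scaled by dt. This is only a numerical
sketch; the step of 0.01 s and the horizon of 5 s are arbitrary choices.

>> dt = 0.01; t = 0:dt:5;
>> h = ones(size(t));      % samples of H(t) for t >= 0
>> c = conv(h, h)*dt;      % approximates the convolution H(t)*H(t)
>> c = c(1:numel(t));      % keep the part corresponding to the time vector t
>> max(abs(c - t))         % should be small, of the order of dt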

Table 2.2 gives a list of important properties of the Laplace transform.

2.7 The Fourier transform


Definition 2.3. If F(s) is the Laplace transform of f(t), then the Fourier
transform of f(t), denoted by F[f(t)], is the restriction of F(s) to purely
imaginary values of s, i.e. to the imaginary axis of the complex plane, and

F[f(t)] = L[f(t)]|_{s=jω} = F(jω), ω ∈ R    (2.87)

See Figure 2.1.

Remark 2.10. Notice that:



Table 2.2: Laplace transform properties

      x(t)                                                   X(s)
  1   A x1(t) + B x2(t)                                      A X1(s) + B X2(s)
  2   a x(at)                                                X(s/a)
  3   e^{at} x(t)                                            X(s-a)
  4   x(t-a) for t > a, 0 for t < a                          e^{-as} X(s)
  5   dx(t)/dt                                               s X(s) - x(0)
  6   d²x(t)/dt²                                             s² X(s) - s x(0) - x'(0)
  7   dⁿx(t)/dtⁿ                                             sⁿ X(s) - s^{n-1} x(0) - ... - x^{(n-1)}(0)
  8   -t x(t)                                                dX(s)/ds
  9   t² x(t)                                                d²X(s)/ds²
 10   (-1)ⁿ tⁿ x(t)                                          dⁿX(s)/dsⁿ
 11   ∫_0^t x(u) du                                          (1/s) X(s)
 12   ∫_0^t ··· ∫_0^t x(u) du = ∫_0^t ((t-u)^{n-1}/(n-1)!) x(u) du    (1/sⁿ) X(s)
 13   x1(t) ∗ x2(t) = ∫_0^t x1(u) x2(t-u) du                 X1(s) X2(s)
 14   x(t)/t                                                 ∫_s^∞ X(u) du
 15   x(t) = x(t+T)                                          (1/(1-e^{-sT})) ∫_0^T e^{-su} x(u) du
 16   x(0)                                                   lim_{s→∞} s X(s)
 17   x(∞) = lim_{t→∞} x(t)                                  lim_{s→0} s X(s)


Figure 2.1: The imaginary axis in the complex plane.



• f(t) is a real-valued function that depends on a real variable: f(t) ∈ R,
and t ∈ R;

• the Laplace transform of f(t), which is F(s) = L[f(t)], is a complex-valued
function that depends on a complex variable: F(s) ∈ C, and s ∈ C;

• the Fourier transform of f(t), which is F(jω) = F[f(t)], is a complex-valued
function that depends on a real variable, that is the coordinate
along the imaginary axis: F(jω) ∈ C, and ω ∈ R.

Example 2.24. Let f(t) = e^{-t} - e^{-10t}. Then

F(s) = 9/((s+1)(s+10))    (2.88)

F(jω) = 9/((jω+1)(jω+10)) = 9/((10-ω²) + j11ω)
      = 9((10-ω²) - j11ω) / (((10-ω²) + j11ω)((10-ω²) - j11ω))
      = (9(10-ω²) - j99ω) / ((10-ω²)² + 121ω²)
      = (90-9ω²)/(ω⁴+101ω²+100) + j(-99ω)/(ω⁴+101ω²+100)    (2.89)
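Since F(jω) is just a complex-valued function of the real variable ω, expressions such as
(2.89) are easy to evaluate numerically with ordinary complex arithmetic, with no toolbox
needed; the frequency grid below is an arbitrary choice, and the last line checks that the
direct evaluation of (2.88) at s = jω matches the real and imaginary parts in (2.89):

>> w = logspace(-2, 3, 500);                        % frequency grid, rad/s
>> F = 9 ./ ((1j*w + 1).*(1j*w + 10));              % F(jw) evaluated from (2.88)
>> Fre = (90 - 9*w.^2)./(w.^4 + 101*w.^2 + 100);    % real part in (2.89)
>> Fim = -99*w./(w.^4 + 101*w.^2 + 100);            % imaginary part in (2.89)
>> max(abs(F - (Fre + 1j*Fim)))                     % should be negligible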

Example 2.25. Let F(jω) = ω0/(ω0²-ω²), where ω0 is a real constant. The function
f(t) of which F(jω) is the Fourier transform is the inverse Fourier transform
of F(jω), and is given by

f(t) = F⁻¹[F(jω)] = F⁻¹[ω0/(ω0²-ω²)] = F⁻¹[ω0/(ω0²+(jω)²)]
     = L⁻¹[ω0/(ω0²+s²)] = sin(ω0 t)    (2.90)

While it should now be clear what we need Laplace transforms for, we will
only see what we need Fourier transforms for in chapter 10. A more detailed
study of this transform is found in Chapter 38.

Glossary
I said it in Hebrew  I said it in Dutch 
I said it in German and Greek:
But I wholly forgot (and it vexes me much)
That English is what you speak!

Lewis Carroll (1832  „1898), The hunting of the Snark, 4

bilateral Laplace transform transformada de Laplace bilateral


convolution convolução
differential equation equação diferencial
exponential order function função de ordem exponencial
Fourier transform transformada de Fourier
integral transform transformada integral
Laplace transform transformada de Laplace
Laplace transformation transformação de Laplace
partial fraction expansion expansão em frações parciais
pole polo
polynomial division divisão de polinómios
residue resíduo

Exercises
1. Find the Laplace transforms of the following functions:

(a) f (t) = 80e−0.9t


(b) f (t) = 1000 − e−6t
(c) f (t) = 97.5 × 10−3 sin(0.2785t) + 546.9 × 10−3 e0.9575t cos(0.9649t)
(d) f(t) = sin(5t + π/6). Hint: remember that sin(a+b) = sin a cos b + cos a sin b.

2. Find the inverse Laplace transforms of the following functions:

(a) F(s) = 1/(3s²+15s+18)
(b) F(s) = 1/(5s²+6s+5)
(c) F(s) = (8s²+34s-2)/(s³+3s²-4s)
(d) F(s) = (s²+2s+8)/(2s+4)
(e) F(s) = (-s²+5s-2)/(s³-2s²-4s+8)

3. Consider differential equation (2.50) from Example 2.17, but now with the
initial condition y(0) = 2.

(a) Show that y(t) = L⁻¹[2s/(s+1)² + 3/(s+1)²].
(b) Show from (2.41) that L[e^{-t}(1-t)] = s/(s+1)².
(c) Find y(t) and check that it verifies both the differential equation
(2.50) and the new initial condition.

4. Solve the following differential equations:

(a) y''(t) + y(t) = t e^{-t}, y(0) = 0, y'(0) = 0
(b) y''(t) + y(t) = t e^{-t}, y(0) = 1/2, y'(0) = -1/2
(c) y''(t) + y(t) = 10t - 20, y(0) = 0, y'(0) = 0
(d) 3y''(t) + 7y'(t) + 2y(t) = 0, y(0) = -5, y'(0) = 10

5. Use the final value and initial value theorems to find the initial and final
values of the inverse Laplace transforms of the functions of Exercise 2.

6. Find the Fourier transforms of the functions of exercises 1 and 2, putting
them in the form F(jω) = ℜ[F(jω)] + jℑ[F(jω)].

7. Prove the result in line 7 of Table 2.1. Hint: use (2.78) together with the
result in line 5.

8. Prove the result in line 8 of Table 2.1. Hint: use mathematical induction.
Chapter 3

Examples of mechatronic
systems and signals
Eia comboios, eia pontes, eia hoteis à hora do jantar,
Eia aparelhos de todas as espécies, férreos, brutos, mínimos,
Instrumentos de precisão, aparelhos de triturar, de cavar,
Engenhos, brocas, máquinas rotativas !
Eia ! eia ! eia !
Eia electricidade, nervos doentes da Matéria !
Eia telegrafia-sem-fios, simpatia metálica do Inconsciente!

Álvaro de Campos, heteronym of Fernando Pessoa (1888 – †1935), Ode
triunfal, Orpheu, I 1, January–March 1915

In this chapter we discuss different types of mechatronic signals and systems,
and present examples of each.

3.1 Systems
In chapter 1 we have already defined system as the part of the Universe we want
to study.
A system made up of physical components may be called a plant. A system
which is a combination of operations may be called a process.

Example 3.1. WECs, mentioned in Example 1.1, are plants. Figures 3.1, 3.2
and 3.3 show three dierent WECs; many other such devices exist.

Example 3.2. If we want to study the wave elevation at a certain onshore


location as a function of the weather on the middle of the ocean, we will be
studying a process.

The variables describing the characteristics of the system that we are interested
in are its outputs. The variables on which the outputs depend are the
system's inputs.

Figure 3.1: The Pelamis, a floating near-shore Wave Energy Converter, at
Aguçadoura, Portugal (source: left, Wikimedia; right, DOI 10.1155/2013/186056).
Waves cause an angular movement of the several sections of the device. This
movement pumps oil in a closed circuit; high pressure oil is then used to run a
turbine driving a usual rotational generator.


Figure 3.2: The Archimedes Wave Swing, a submerged offshore Wave Energy
Converter before submersion, at Viana do Castelo, Portugal. The device is filled
with air which is compressed when wave crests pass and expands during wave
troughs. The heaving movement of the AWS upper part moves an electrical
linear generator.

Figure 3.3: The Pico Power Plant (concluded in 1999, decommissioned in 2018),
an onshore Wave Energy Converter of the Oscillating Water Column (OWC)
type (source: left, WavEC; right, DOI 10.3390/en11112939). In an OWC, the
heaving movement of the water inside a chamber compresses and expands the
air, which can flow in and out of the chamber through a turbine designed to always
rotate in the same direction irrespective of the sense of the flow. The turbine
drives a usual rotational generator.

Figure 3.4: A Panhard & Levassor Type A motorcar, the first mass-produced
car in the world, driven by the French priest Jules Gavois (1863 – †1946) in
1891 (source: Wikimedia). This car still does not have a steering wheel (first
introduced in 1894), but only a tiller.

Figure 3.5: A window unit air conditioning system (source: Wikimedia). There
are many other types of AC units.

Example 3.3. An internal combustion engine motorcar (see Figure 3.4) is a
plant. We are usually interested in its position, velocity, and attitude. We also
may want to know the rotation speed of the motor, the temperature of the oil,
the fuel consumption, or other such values. All these are outputs. They depend
on the position of the steering wheel, the position of the accelerator and brake
pedals, the gear selected, the condition and inclination of the road the car is
running on, the direction and speed of the wind, the outside temperature, and
other such values. These are the inputs. Not all the outputs depend on all the
inputs.

A control system is one devised to make one or more of the system's
outputs follow some reference.

Example 3.4. An air conditioning (AC) unit (see Figure 3.5) is a mechatronic
system that heats or cools a room to a temperature set by the user. It is
consequently a control system. The value of the temperature selected by the
AC user is the reference. The room's temperature is the output of the plant
that has to follow this reference.

Example 3.5. The wind at the location of a wind turbine is related to the
temperature, the solar exposition, and the atmospheric pressure, among other
variables. This is a process we cannot control. It is not a control system.

For a control system to exist, it must be possible to modify one or more
of the inputs, so as to affect the desired outputs and thereby cause them to
follow the reference. Such inputs are called manipulated variables or inputs
in the strict sense. The inputs of the system that cannot be modified are
called disturbances. When studying control systems, it is usual to call simply

Figure 3.6: A lever, an example of a linear SISO system without dynamics


(source: Wikimedia).

inputs to the inputs in the strict sense, and to call outputs only to the variable
or variables that have to follow a reference.
Example 3.6. In the case of the OWC from Example 3.1 and Figure 3.3, the
sea waves are disturbances, since we cannot control them. The rotation speed
of the turbine is an input in the strict sense, since we can manipulate it (e.g.
varying the resistance of the electrical generator). If the OWC chamber has
a relief valve, the pressure in the chamber will be also an input, since we can
change it opening or closing the relief valve.
Example 3.7. In the case of the car from Example 3.3, the positions of the
steering wheel and of the pedals are inputs. If the car has a manual gear box, the
gear selected is an input too; if the gear box is automatic, it is not. The gusts
of wind are a disturbance, since we cannot modify them. If we are studying the
temperature of the motor of the car, this will depend on the outside temperature,
which we cannot control and is therefore a disturbance.
A system with only one input and only one output is a Single-Input, Single-Output
(SISO) system. A system with more than one input and more than one
output is a Multi-Input, Multi-Output (MIMO) system. It is of course possible
to have Single-Input, Multiple-Output (SIMO) systems, and Multiple-Input,
Single-Output (MISO) systems. These are usually considered as particular
cases of MIMO systems.
Example 3.8. Both the OWC of Example 3.1 and the car of Example 3.3 are
MIMO plants.
Example 3.9. The lever in Figure 3.6 is a SISO system: if the extremities are
at heights x(t) and y(t), and the first is actuated, then y(t), the output, depends
on position x(t), the input, and nothing more.
Remark 3.1. We can sometimes model a part of a MIMO system as a separate
SISO system. A car is a MIMO system, as we said, but we can model the
suspension of one wheel separately, as a SISO system relating the position of the
wheel to the position of the vehicle frame, neglecting the influence that all the
other inputs have on this particular output. Such a model is an approximation,
but for many purposes it is good enough (as we will see in Example 4.2, in
Chapter 4 below).
A system's model is the mathematical relation between its outputs, on the
one hand, and its inputs in the general sense (inputs in the strict sense and
disturbances), on the other.
A system is linear if its exact model is linear, and non-linear if its exact
model is non-linear. Of course, exact non-linear models can be approximated
by linear models, and often are, to simplify calculations.

Example 3.10. The lever of Figure 3.6 is a linear plant, since, if its arm lengths
are Lx and Ly for the extremities at heights x(t) and y(t) respectively,

y(t) = (Ly/Lx) x(t).    (3.1)

Example 3.11. A Cardan joint (see Figure 3.7) connecting two rotating shafts,
with a bend corresponding to angle β, is a non-linear plant, since a rotation of
θ1(t) in one shaft corresponds to a rotation of the other shaft given by

θ2(t) = arctan(tan θ1(t) / cos β).    (3.2)

Figure 3.7: A Cardan joint, a non-linear mechanical system without dynamics


(source: Wikimedia).

If β ≈ 0, (3.2) can be approximated by

θ2(t) = arctan(tan θ1(t) / 1) = θ1(t).    (3.3)

The error incurred in approximating (3.2) by (3.3) depends on how close cos β
is to 1. There will be no error at all if the two shafts are perfectly aligned
(β = 0).
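The quality of the approximation (3.3) is easy to quantify numerically in Matlab;
the sketch below uses an arbitrary misalignment of β = 10° and keeps the shaft angle
inside (-π/2, π/2), so that no quadrant corrections of the arctangent are needed:

>> beta = 10*pi/180;                       % misalignment, rad (arbitrary choice)
>> theta1 = linspace(-pi/3, pi/3, 1000);   % input shaft angle, rad
>> theta2 = atan(tan(theta1)/cos(beta));   % exact relation (3.2)
>> max(abs(theta2 - theta1))               % worst-case error of the approximation (3.3), rad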

Example 3.12. A car is also an example of a non-linear plant, as any driver


knows.

A system is time-varying if its exact model changes with time, and time-invariant
otherwise.

Example 3.13. An airplane consumes enormous amounts of fuel. Thus its
mass changes significantly from take-off to landing. Any reasonable model of a
plane will have to have a time-varying mass. But it is possible to study a plane,
for a short period of time, using an approximation consisting of a time-invariant
model, as the mass variation is negligible in that case.

Example 3.14. A drone powered by a battery will not have a similar variation
of mass. It is a time-invariant system (unless e.g. its mass changes because it is
a parcel-delivering drone).

Example 3.15. WECs can have time-varying parameters due to the effects of
tides. This is the case of the AWS in Figure 3.2, which is submerged and fixed
to the ocean bottom. Consequently, the average height of sea water above it
varies from low tide to high tide, even if the sea waves remain the same. Other
WECs are time-invariant, at least with respect to tides. That is the case of the
floating OWC in Figure 3.8, which, precisely because it floats, is not affected by
tides.

A system has no dynamics if its outputs in a certain time instant do not
depend on past values of the inputs or on past values of the disturbances. Otherwise,
it is a dynamic system. A system without dynamics is called a static
system, which does not mean that it never changes; it means that, if its inputs
do not change, neither do the outputs.

Example 3.16. Both mechanical systems in Figures 3.6 and 3.7 have no dynamics, since the output only depends on the current value of the input. Past values of the input are irrelevant.

Example 3.17. Consider a pipe with a tap (or a valve) that delivers a flow rate Q(t) given by
Q(t) = kQ f(t) (3.4)
where f(t) ∈ [0, 1] is a variable that tells us whether the tap is open (f(t) = 1) or closed (f(t) = 0). This system is static. But a tap placed far from the point where the flow exits the pipe will deliver a flow given by
Q(t) = kQ f(t − τ) (3.5)
Here, τ is the time the water takes from the tap to the exit of the pipe. This is an example of a dynamic plant, since its output at time instant t depends on a past value of f(t).
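The difference can be visualised with a short Matlab sketch (the values kQ = 2 and τ = 3 s below are hypothetical, as is the choice of opening the tap at t = 1 s): the flow of the delayed tap (3.5) is simply that of the static tap (3.4) shifted in time by τ.
kQ = 2; tau = 3;                        % hypothetical gain and transport delay [s]
t = 0 : 0.01 : 10;
f = double(t >= 1);                     % tap closed until t = 1 s, open afterwards
Q_static  = kQ * f;                     % static model (3.4)
Q_delayed = kQ * double(t - tau >= 1);  % dynamic model (3.5): f(t - tau)
figure, plot(t, Q_static, t, Q_delayed)
xlabel('t [s]'), ylabel('Q')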

Figure 3.8: A floating OWC (source: DOI 10.1016/j.energy.2016.06.054).

A system is deterministic if the same inputs starting from the same initial condition always lead to the same output. A system is stochastic if its outputs are not necessarily the same when it is subject to the same inputs beginning with the same initial conditions, or, in other words, if its output is random.

Example 3.18. The process from Example 3.5 is stochastic. Even though we
may know all those variables, it is impossible to precisely predict the wind speed.
The same happens with the process from Example 3.2, and even more so.

Example 3.19. Figure 3.9 shows a laboratory setup to test controllers for the
lithography industry (which produces microchips with components positioned
with precisions of the order of 1 nm). This is a deterministic system. If lithog-
raphy plants and processes were not deterministic, it would be far more difficult
to mass produce microchips.

In this course we will only address deterministic, SISO, linear time-invariant (LTI) systems.

3.2 Signals
In chapter 1 we have already defined signal as a function of time or space that
conveys information about a system. In other words, it is the evolution with
time or with space of some variable that conveys information about a system.
Most of the signals we will meet depend on time but not on space.

Example 3.20. An image given by a camera is a signal that depends on space, but not on time. A video is a signal that depends on both space and time.

Some signals can only take values in a discrete set; they are called quantised signals. Others can take values in a continuous set; they are called analogical signals.
Example 3.21. Consider a turbine, such as the turbine in Figure 3.10, of the
Wells type, installed in the Pico Power Plant (shown above in Figure 3.3). Its
rotation speed is real valued; it takes values in a continuous set. So the signal
consisting in the turbine's rotation speed as a function of time is an analogical
signal.

Figure 3.9: Left: precision positioning system used at the Delft University of
Technology (source: DOI 10.1007/s11071-019-05130-2). Coil actuators 1 move
masses 2, which are connected through flexures to mass 3, the position of which
is measured using sensors (encoders) 4. Mass 2 can be positioned with a preci-
sion of 1 µm or less. Right: NASA clean room for lithography (source: Wiki-
media).

Figure 3.10: The Wells turbine of the Pico Power Plant OWC in Figure 3.3
(source: DOI 10.1016/j.renene.2015.07.086).

Figure 3.11: Three-speed manual gearbox, typical of cars in the 1930s (source:
Wikimedia).

Example 3.22. Consider the gearbox of a car (see Figure 3.11). The signal
consisting in the speed engaged as a function of time (neutral, reverse, 1st, 2nd,
etc.) takes values in a discrete set. It is a quantised signal.
Remark 3.2. It is possible, and sometimes desirable, to approximate a quan-
tised signal by an analogical signal, and vice-versa.
Example 3.23. The rotation of a shaft θ(t) is an analogical signal; it is of course possible to rotate the shaft by an angle as small as desired. But it is often useful to replace it by a quantised signal ϑ(t) which is the number of revolutions (i.e. the number of 360° rotations of the shaft). This corresponds to an approximation given by ϑ(t) = ⌊θ(t)/360°⌋. Figure 3.12 shows a mechanical revolution counter.

Example 3.24. A population, be it the number of persons in a country, the number of rabbits in a field, or the number of bacteria on a Petri dish, is a quantised signal. It always increases or decreases in multiples of one, since it is impossible that half a child be born, or that 3/4 of a rabbit dies. However, if the population is large enough, a variation of one individual is so small that it is possible to assume that it is an analogical signal, and write equations such as
dp(t)/dt = b(t)p(t) − d(t)p(t), (3.6)
where p(t) is the population, b(t) is the birth rate, and d(t) is the death rate. (Terms for immigration and emigration rates must be included if the population is not isolated.) Such models (and others more complicated, that we will mention in passing below in Chapter 43) are used for instance in Bioengineering and in many other areas.
Example 3.25. Strictly speaking, variables such as the fuel admitted to one of the cylinders of an internal combustion engine are also quantised, since the number of molecules of fuel admitted is an integer. Of course, in practice an analogical value is assumed.

Figure 3.12: Mechanical 19th century revolution counter, from the former Barbadinhos water pumping station (currently the Water Museum), Lisbon. Nowadays mechanical revolution counters are still used, though electronic ones exist.

Some signals take values for all time instants: they are said to be continuous. Others take values only at some time instants: they are said to be discrete in time, or, in short, discrete. The time interval between two consecutive values of a discrete signal is the sampling time. The sampling time may be variable (if it changes between different samples) or constant. In the latter case, which makes mathematical treatment far simpler, the inverse of the sampling time is the sampling frequency.

Example 3.26. The air pressure inside the chamber of an OWC is a continuous
signal: it takes a value for every time instant.

Example 3.27. The number of students attending the several classes of this
course along the semester is a discrete signal: there is a value for each class, and
the sampling time is the time between consecutive classes. The sampling time
may be constant (if there is e.g. one laboratory class every Monday) or variable
(if there are e.g. two lectures per week on Mondays and Wednesdays).

Example 3.28. One of the controllers used with the laboratory setup from
Example 3.19 in Figure 3.9 provided a discrete control action with sampling
frequency 20 kHz. So the sampling time was
Ts = 1/(20 × 10³) = 50 × 10⁻⁶ s = 50 µs, (3.7)

or, in other words, every 50×10−6 s the control action for the coil actuators was
updated; or, again, the control action was updated 20 × 103 times per second.
The sampling frequency could also be given as

ωs = 2π/(50 × 10⁻⁶) = 2π × 20 × 10³ = 125.7 × 10³ rad/s. (3.8)
Remark 3.3. Mind the numerical difference between the value of the sampling
frequency in Hertz and in radians per second. It is a common source of mistakes
in calculations.
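One simple way to avoid this mistake is to compute the three quantities explicitly, as in the following Matlab sketch, which reproduces (3.7) and (3.8):
fs = 20e3;       % sampling frequency, in hertz
Ts = 1/fs        % sampling time: 50e-6 s, as in (3.7)
ws = 2*pi*fs     % sampling frequency in rad/s: about 125.7e3 rad/s, as in (3.8)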

Remark 3.4. It is possible, and sometimes desirable, to approximate a discrete signal by a continuous signal, and vice-versa. Approximating a continuous signal by a discrete one is an operation called discretisation. We will study this issue in more detail below in Chapters 12 and 25.

Example 3.29. The control action from Example 3.28 had in fact to be con-
verted into a continuous signal to be applied by the coil actuators. As described,
this was done by keeping the control action signal constant between sampling
times. The operation corresponds to converting a discrete signal as seen in
the left of Figure 3.13 into a continuous signal as seen in the right diagram of
that Figure. The conversion of a digital signal into an analogical signal will be
addressed in detail in Chapter 25.
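A minimal Matlab sketch of this hold operation, for a hypothetical discrete signal, is given below; stairs plots the piecewise-constant signal obtained by keeping each sample until the next sampling time, as in Figure 3.13.
Ts = 50e-6;                                  % sampling time
k  = 0 : 20;                                 % sample indices
u  = sin(2*pi*2e3*k*Ts);                     % hypothetical discrete control action
figure, stairs(k*Ts, u)                      % value held constant between sampling times
hold on, stem(k*Ts, u, 'filled'), hold off   % the original discrete samples
xlabel('t [s]')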

Figure 3.13: Left: discrete signal; right: continuous signal obtained from the
discrete signal by keeping the previous value between sampling times (source:
Wikimedia, modied).

Figure 3.14: Discretising a signal (source: Wikimedia), i.e. approximating a continuous signal (grey) by a discrete one (red).

Example 3.30. Figure 3.14 illustrates the operation of discretisation.

A signal which is both discrete and quantised is a digital signal.
A system, too, is said to be continuous, discrete, or digital, if all its inputs and outputs are respectively continuous, discrete, or digital.
Electronic components are nowadays ubiquitous. As a result of sensors, actuators, controllers, etc. being electronic, most signals are digital. Likewise, systems that incorporate such components are digital, inasmuch as their inputs and outputs are all digital.

Example 3.31. Consider an industrial oven, seen in Figure 3.15, with a control
system to regulate its temperature. The output of this system is the actual
temperature inside the oven, and the input is the desired temperature (i.e. the
reference of the control system). The oven is heated by gas, and so the gas flow is
the manipulated variable that allows controlling the oven. This is a continuous
system, since all variables exist in all time instants. But, in all likelihood, a
digital sensor will be used for the temperature, and changes in gas flow will also
take place at sampling times, after the temperature reading is compared with
the reference and processed to nd the control action that will better eliminate
the error between actual and desired temperatures. So in practice the system
will probably be digital.

Example 3.32. A flush tank for a toilet equipped with a float valve as seen
in the top scheme of Figure 3.16 is a control system devoid of any electronic
component, and for which all signals are continuous. (See also Figure 3.17.)
This is a continuous control system.

A signal is bounded if it can only assume values in a bounded interval. In engineering, most signals (if not all) are bounded.

Example 3.33. The wave elevation at given coordinates cannot be less than
the depth of the sea there. Similarly, the rotation speed of a turbine, or the
linear velocity of a shaft, or a voltage in a circuit, are always limited by physical
constraints.

Remark 3.5. Bounded analogical signals can assume an infinite number of values, but bounded quantised signals can only assume a finite number of values.

Figure 3.15: Industrial oven for aircraft component manufacture (source: Wiki-
media).

Figure 3.16: Top: float valve mechanism, well known by its use in flush tanks (source: Wikimedia). Bottom: flush tank with a float valve (notice that the lever has two arms, to increase the speed with which the water flow is interrupted as soon as the float raises the lever from the lower end of stroke).

Figure 3.17: Top: a float valve (see Figure 3.16) was also used in water meters devised by António Pinto Bastos in the 1850s, which were used in Lisbon until the 1960s in spite of being obsolescent for a long time by then (source: Wikimedia). These meters were purely mechanical. Bottom: electromagnetic flow meters have no mechanical components; the reading can be sent elsewhere rather than having to be read in the dials in loco (source: Wikimedia). We will address sensors for flow measurements in Chapter 13.

3.3 Models
In Section 3.1 we have already defined a system's model as a mathematical relation between its inputs and outputs. There are basically two ways of modelling a system:
1. A model based upon first principles is a theoretical construction, resulting from the application of physical laws to the components of the plant.
2. A model based upon experimental data results from applying identification methods to data experimentally obtained with the plant.
It is also possible to combine both these methods.
In this course we will concentrate on models based upon first principles, and you will find abundant examples thereof in Chapters 4 through 8. They can be obtained whenever the way the system works is known. They are the only possibility if the system does not exist yet because it is still being designed and built, or if no experimental data is available. They may be quite hard to obtain if the system comprises many complicated interacting sub-parts. Simplifications can bring down the model to more manageable configurations, but its theoretical origin may mean that results will differ significantly from reality if parameters are wrongly estimated, if too many simplifications are assumed, or if many phenomena are neglected.
The models of dynamic continuous LTI systems are given by linear differential equations. The models of dynamic digital LTI systems are given by linear difference equations. The models of static LTI systems are linear and have neither derivatives nor time differences.
Example 3.34. The static model of the lever (3.1) includes neither differential nor difference equations. It is irrelevant whether x(t) and y(t) are discretised or not. The same happens with the non-linear static model of the Cardan joint (3.2).
Example 3.35. Continuous model (3.6) is a differential equation. Suppose that the model is applied to the population of a country, where immigration and emigration are negligible, and for which population data is available on a yearly basis. Also suppose that birth and death rates are constant and given by b = 0.03/year and d = 0.02/year. So
dp(t)/dt = 0.01 p(t) (3.9)
Because the sampling time is Ts = 1 year, we can perform the following approximation:
dp(t)/dt |t = year k ≈ (pk − pk−1)/(1 year), (3.10)
where pk is the population in year k, and pk−1 is the population in the year before. Notice that this is a first-order approximation for the derivative in year k, in which we use the value of the year before, and is consequently called a backward approximation. So we end up with the following difference equation:
pk − pk−1 = 0.01 pk ⇔ pk = pk−1/0.99, (3.11)
which is an approximation of differential equation (3.9); approximations other than (3.10) could have been used instead. We will address this subject further below in Part V.
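The following Matlab sketch (for a hypothetical initial population of 5 million, not the one of the exercises below) iterates difference equation (3.11) and compares the result with a numerical solution of differential equation (3.9) obtained with ode45; for such a small growth rate the two evolutions are very close.
p0 = 5e6;                    % hypothetical initial population
N  = 21;                     % number of years simulated
p  = zeros(1, N); p(1) = p0;
for k = 2 : N
    p(k) = p(k-1) / 0.99;    % difference equation (3.11)
end
[tc, pc] = ode45(@(t, p) 0.01*p, [0 N-1], p0);   % differential equation (3.9)
figure, plot(0:N-1, p, 'o', tc, pc)
xlabel('t [years]'), ylabel('p')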
Example 3.36. Differential equation (2.53) can be approximated by difference equation

3yk = 0.4yk−1 + 0.2yk−2 + 0.8eTs k + 1.6eTs (k−1) + 0.8eTs (k−2) (3.12)

for sampling time Ts . Once more, we will see how to arrive at this result below
in Part V.

Experimental data should, whenever available, be used to confirm, and if necessary modify, models based upon first principles. This often means that first principles are used to find a structure for a model (the orders of the derivatives in a differential equation, or the number of delays in a difference equation), and then the values of the parameters are found from experimental data: feeding the model the inputs measured, checking the results, and tuning the parameters until the model outputs are equal (or at least close) to the measured outputs. This can sometimes be done using least squares; sometimes other optimisation methods, such as genetic algorithms, are resorted to. If the outputs of experimental data cannot be made to agree with those of the model, when the inputs are the same, then another model must be obtained; this often happens just because too many simplifications were assumed when deriving the model from first principles. It may be possible to find, from experimental data itself, what modifications to model structure are needed. This area is known as identification, and will be addressed below in Parts VI to VIII.
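As a minimal illustration of fitting a parameter by least squares (with made-up data, and assuming the structure of (3.11), pk = a pk−1, with the coefficient a unknown), one could proceed as follows in Matlab:
p   = [10.00 10.11 10.21 10.31 10.42 10.52];   % hypothetical yearly population data (millions)
y   = p(2:end)';        % p_k
phi = p(1:end-1)';      % regressor p_{k-1}
a   = phi \ y           % least-squares estimate of a (close to 1.01 for this data)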
Models based upon first principles can be called white box models, since the reason why the model has a particular structure is known. If experimental data requires changing the structure of the model, a physical interpretation of the new parameters may still be possible. The resulting model is often called a grey box model.
There are methods to find a model from experimental data that result in something that has no physical interpretation, nor is it expected to have one. Still the resulting mathematical model fits the data available, providing the correct outputs for the inputs used in the experimental plant. Such models are called black box models, in the sense that we do not understand how they work. Such models include, among others, neural network (NN) models (see an example in Figure 3.18) and models based upon fuzzy logic, known as fuzzy models (see Figure 3.19). These modelling techniques are increasingly important, but we will not study them in these Lecture Notes.

Glossary
Lascia stare, cerchiamo un libro greco!
Questo? chiedevo io mostrandogli un'opera dalle pagine coperte di
caratteri astrusi. E Guglielmo: No, questo è arabo, sciocco! Aveva
ragione Bacone che il primo dovere del sapiente è studiare le lingue!
Ma l'arabo non lo sapete neppure voi! ribattevo piccato, al che
Guglielmo mi rispondeva: Ma almeno capisco quando è arabo!

Umberto Eco (1932 – †2016), Il nome della rosa, Quinto giorno, Sesta

black box model modelo de caixa negra


bounded limitado
control system sistema de controlo
continuous contínuo
deterministic determinístico
difference equation equação às diferenças
digital digital
discrete discreto
disturbance perturbação
dynamic dinâmico
first principles primeiros princípios
grey box model modelo de caixa cinzenta
identification identificação
manipulated variable variável manipulada
mechatronic mecatrónico
mechatronics mecatrónica
multiple input entradas múltiplas
multiple output saídas múltiplas
input entrada
model modelo
output saída

Figure 3.18: Top: scheme of an example of an artificial neural network (source: DOI 10.1016/j.apor.2008.11.002). It is made of several neurons, arranged in layers. These neurons are oversimplified models of biological neurons, seen in the bottom scheme (source: Wikimedia), which are arranged in far more complex patterns. The parameters of an artificial neural network are the configuration of its interconnections and the parameters of each neuron. Neuron parameters can be optimised from experimental data using numerical methods. The NN shown can be used to model a static MIMO system with two inputs and two outputs, or a dynamic system with one input and two outputs if x2(t) = x1(t − Ts), in which case it provides a non-linear difference equation model with sampling time Ts. NNs are typically black box models, and parameters are not expected to have any physical meaning at all; recently, however, significant efforts in neural network interpretability have been advanced. We will not study NNs in these Lecture Notes.

Figure 3.19: In Boolean logic, propositions are either true or false. These two
cases correspond respectively to logical values 1 and 0. In fuzzy logic, all inter-
mediate logical values can be used. The plot above shows an example of this
(source: Wikimedia). For the temperature shown by the grey line, proposition
temperature is hot has the logical value 0, proposition temperature is warm
has the logical value 0.2, and proposition temperature is cold has the logical
value 0.8. This type of logic can then be used to build models, both static and
dynamic. We will not study fuzzy logic or fuzzy models in these Lecture
Notes.

Figure 3.20: The Stansted Airport Transit System conveys passengers between
Terminals 1 and 2 of Stansted Airport, United Kingdom (source: Wikimedia).
Vehicles have no driver. They stop, open doors, close doors, and move between
terminals automatically.

plant planta
process processo
reference referência
sampling frequency frequência de amostragem
sampling time tempo de amostragem
signal sinal
single input entrada única
single output saída única
static estático
stochastic estocástico
white box model modelo de caixa branca

Exercises
1. Answer the following questions for each of the mechatronic systems below:

• What are its outputs?
• What are its inputs?
• Is the system SISO or MIMO?
• Which of the inputs can be manipulated, if any?
• Is it a static or a dynamic system?
• Is it time varying or time invariant?
• Is the system continuous or digital?

(a) An automated train system, as seen in Figure 3.20.


(b) A power wheelchair, as seen in Figure 3.21.
(c) A motorboat, as seen in Figure 3.22.
(d) A rigged ship, as seen in Figure 3.22.
(e) A submarine, as seen in Figure 3.23.
(f) A space probe, as seen in Figure 3.24.
(g) A robotic arm, as seen in Figure 3.25.

2. Use the Laplace transform to solve (3.9) for the situation starting at a
time when the country's population is 10 million inhabitants. Find the
population for t = 1, 2, 3 . . . years. Then use (3.11) to nd the evolution
of the population starting with year k = 1 (corresponding to t = 0 years)
when the country's population is 10 million inhabitants. Find the popu-
lation for k = 2, 3, 4 . . . and compare the results with those obtained with
(3.9).

Figure 3.21: Physicist Stephen Hawking (1942 – †2018) attending a scientific conference in 2001 (source: Wikimedia).

Figure 3.22: Left: a motorboat with an outboard motor at Zanzibar, Tanzania (source: Wikimedia). Right: Portuguese Navy school ship Sagres (formerly Brazilian school ship Guanabara, formerly German school ship Albert Leo Schlageter; source: Wikimedia).

Figure 3.23: The Portuguese Navy submarine Tridente, of the Tridente class,
propelled by a low noise skew back propeller and powered by hydrogenoxygen
fuel cells (source: Wikimedia).

Figure 3.24: Astronomer Carl Sagan (1934 – †1996) with a model of one of the
two Viking landers, space probes that descended on Mars in 1976 and worked
until 1980 and 1982 (source: Wikimedia). Descent speed was controlled by
deploying a parachute and launching three retrorockets (one on each leg) to
ensure a soft landing. The descent control system employed an inertial reference
unit, four gyroscopes, a radar altimeter, and a landing radar.

Figure 3.25: Two KUKA LWR IV robotic arms extant at the Control, Automa-
tion and Robotics Laboratory of Instituto Superior Técnico, Universidade de
Lisboa, Portugal. Each robot has seven rotational joints. (Source: Professor
Jorge Martins.)
Chapter 4

Modelling mechanical systems


Lex I.
Corpus omne perseverare in statu suo quiescendi vel movendi unifor-
miter in directum, nisi quatenus a viribus impressis cogitur statum
illum mutare. (. . . )
Lex II.
Mutationem motus proportionalem esse vi motrici impressæ, & eri
secundum lineam rectam qua vis illa imprimitur. (. . . )
Lex III.
Actioni contrariam semper & æqualem esse reactionem: sive corpo-
rum duorum actiones in se mutuo semper esse æquales & in partes
contrarias dirigi.

Isaac Newton (1643 – †1727), Philosophiæ Naturalis Principia Mathematica, Axiomata sive Leges Motus

Ut tensio sic vis ; That is, The Power of any Spring is in the same
proportion with the Tension thereof: That is, if one power stretch
or bend it one space, two will bend it two, and three will bend it
three, and so forward.

Robert Hooke (1635 – †1703), Lectures de Potentia Restitutiva, Or of Spring Explaining the Power of Springing Bodies

In this and the following chapters, we will pass in review the basic concepts
of system modelling, for different types of components. In this chapter, we
concentrate upon mechanical components. Surely you will have already learned,
if not all, at least most of these subjects in other courses. However, there are
two reasons why a brief review is convenient at this point of your studies:
1. We will systematically resort to the Laplace transform to study dynamic
systems, and after seeing too many equations with variable s it is easy to
forget that we keep talking about real things: namely, in this course,
mechatronic systems that are all around us in our daily life.
2. This is a good time to stress the similarities between apparently very
different systems that can be described by the very same equations. We
will see that thinking of any system as an energy converter helps to see
those parallels.

4.1 Modelling the translation movement


Mechanical systems with movement along a straight line can usually be modelled
using three components with the respective three equations:
1. A mass. This component stores energy under the form of kinetic energy. To model it, apply Newton's second law (which you can read in Latin at the beginning of this chapter):
ΣF = d/dt [m(t) ẋ(t)] (4.1)


Figure 4.1: Usual translation springs. Left: helical or coil spring; centre: volute
spring; right: leaf spring. (Source: Wikimedia.)

Here, ΣF is the sum of all forces applied on the mass m(t), which is at position x(t). (Product m(t) ẋ(t), as you know, is called momentum.) Because we are assuming a movement of translation, we need not bother to use vectors, but the forces must be applied along the direction considered; if not, their projection onto the said direction must be used. And, as we said in Section 3.1, we will only consider LTI systems; since the mass is, in (4.1), a parameter, this restriction means that it will not change with time, and so we are left with
ΣF = m ẍ(t) (4.2)

A mass is usually represented by m or M .

2. A spring. This is a mechanical device that stores energy under the form of elastic potential energy (see Figure 4.1). A translation spring usually follows Hooke's law (which you can read in Latin and English at the beginning of this chapter):

F = k (x1 − x2 ) (4.3)

Here, F is the force exerted by the spring, x1 and x2 are the positions of
the extremities of the spring (defined so that x1 − x2 is the variation in
length of the spring measured from the repose length), and k is the spring
constant. This constant is usually represented by k or K , and its SI units
are N/m. Force F opposes the relative movement of the extremities that
is its cause (i.e. force F contracts the spring when it is stretched, and
stretches the spring when it is compressed).

3. A damper. This is a mechanical device that dissipates energy (see Figure 4.2). The most usual model for dampers is viscous damping:

F = c (ẋ1 − ẋ2 ) (4.4)

Here, F is the force exerted by the damper, ẋ1 and ẋ2 are the velocities of the extremities of the damper (defined so that ẋ1 − ẋ2 is the relative velocity of the extremities of the damper), and c is the damping constant.
This constant is usually represented by c, C , b or B , and its SI units are
N s/m. Force F opposes the relative movement of the extremities that is
its cause.
Model (4.4) can also be used to model unintended energy dissipation, such as that due to friction. Notice that, since energy dissipation is ubiquitous, even a mechanical system consisting only of a mass and a spring will be more exactly modelled by a mass, a spring, and a damper, the latter to account for energy dissipation.

Remark 4.1. Unlike (4.2), Hooke's law (4.3) is often only an approximate
model of the phenomenon it addresses. There are three ways in which reality
usually deviates from (4.3).

1. The relation between force and variation in length can be non-linear. In any case, as long as the relation is continuous and has a continuous derivative, a linear approximation will be valid in a limited range of length variations (see Figure 4.3).

Figure 4.2: Dashpot damper (source: Wikimedia). There are other types of
dampers. This one, because it contains a viscous uid, follows (4.4) rather
closely.

Figure 4.3: A linear approximation of a continuous function with a continuous derivative provides good results in some limited range (source: Wikimedia).

2. Springs that have a different behaviour for positive variations of length (x > 0, extension) and negative variations of length (x < 0, compression) are not uncommon.

3. In any case, Hooke's law is obviously valid only for a limited range of
length variations.

Example 4.1. A stainless steel helicoidal spring is 10 cm long. When a traction force of 10 N is applied, its length increases to 12 cm. What force must be applied so that its length increases to 15 cm? What force must be applied so that its length increases to 40 × 10³ km?
A length increase of 2 × 10⁻² m corresponds to a 10 N force, so k = 10/(2 × 10⁻²) = 500 N/m. The answer to the first question, when x = 5 cm = 5 × 10⁻² m, is F = 500 × 5 × 10⁻² = 25 N. Or, alternatively, since the length increase is to be (15 − 10)/(12 − 10) = 2.5 times larger than in the original situation, the force should also be 2.5 times larger, i.e. 2.5 × 10 = 25 N.
In the second case, it should be obvious that a 10 cm helicoidal spring cannot be stretched to a length which is roughly the perimeter of the Earth. You should not have to calculate the ludicrous result F = 500 × 40 × 10⁶ = 2 × 10¹⁰ N = 20 GN obtained applying the linear relation (4.3) to realise that the spring will surely break well before such a force is applied. You should have by now seen a sufficient number of diagrams such as the one in Figure 4.4 to realise this at once, without even having to look for the yield strength of stainless steel and coming up with an educated guess for the spring's cross-sectional area.

Remark 4.2. Our models are approximations of reality. They are valid only for
limited ranges of parameters. These important truths cannot be overstated.

Remark 4.3. Viscous damping (4.4) is another model of reality that very often
is only a rough approximation. Dashpot dampers such as the ones in Figure 4.2
follow this law more closely than other damping phenomena, where damping
may be non-linear or, if linear, proportional to another derivative of position
x (some damping models even use fractional orders of differentiation). In any case, it is obvious that after a while x will reach its end of stroke, and (4.4) will no longer apply.

Combining (4.2)–(4.4) with Newton's third law (which states that when a body exerts a force on another, this latter body exerts an equal force, but opposite in direction, on the first body), it is possible to find the differential equations that model translation mechanical systems.

Figure 4.4: Schematic stress-strain curve of steel (source: Wikimedia).

Example 4.2. One of the simplest, but also most useful, mechanical models is the so-called mass-spring-damper system, which can be used to model the behaviour of a mass, on which a force is applied, connected to an inertial reference frame by a spring and a damper (remember that any real spring also has some damping, so, even in the absence of a dashpot damper or a similar device, energy dissipation must be accounted for). See Figure 4.5. This model can be applied to many systems, among which the vertical behaviour of a car's suspension (for which of course far more accurate, and complex, models can also be used; see Figure 4.6).
Since one of the extremities of the spring is fixed, the force that it exerts on
M is

FK (t) = −K x(t) (4.5)

or, omitting the dependence on time, FK = −K x. There is a minus sign because, when x increases, the force on M opposes the increase of x. Similarly, the force exerted on M by the damper is
the force exerted on M by the damper is

FB (t) = −B ẋ(t) (4.6)

or FB = −B ẋ to simplify. There is a minus sign because, when x increases, ẋ is positive, and the force on M opposes the increase of x. Thus

F (t) − K x(t) − B ẋ(t) = M ẍ(t) (4.7)

We will now assume that initial conditions are zero. Applying the Laplace
transform,

F (s) − KX(s) − BsX(s) = M s2 X(s) (4.8)

which we can rearrange as


X(s)/F(s) = 1/(M s² + B s + K) (4.9)

The form (4.9) in which the model of the mass-spring-damper system was put is called transfer function. It is very practical for the resolution of dynamic models.
Denition 4.1. Given a SISO system modelled by a dierential equation, its
transfer function is the ratio of the Laplace transform of the output (in the nu-
merator) and the Laplace transform of the input (in the denominator), assuming
that all initial conditions are zero.
Remark 4.4. When you see a transfer function, never forget that it is nothing
but a dierential equation under disguise. The transfer function is a rational
function in s, which conceals a dynamic relation in time (or a relation in space,
if the dierential equation has derivatives in space rather than in time).

Figure 4.5: A mass-spring-damper system, with mass M, spring constant K, and damping coefficient B.

Figure 4.6: Independent suspension of a car's wheel (source: Wikimedia). The spring is clearly visible. The damper can be seen inside the coils of the spring. Even when the suspension does not consist of one spring and one damper, a mass-spring-damper model can be used, as nearly all suspensions have a spring-like restitution force and some sort of energy dissipation.

Remark 4.5. Notice that it is necessary to assume zero initial conditions to obtain a transfer function. Otherwise, additional terms would appear, and it would be impossible to isolate on one side of the equation the ratio of the Laplace transforms of the output and the input. We will further study this subject in Chapter 8.

Example 4.3. Let us find the transfer functions corresponding to (4.2)–(4.4), i.e. to a mass, to a spring, and to a damper. For the spring and the damper, consider that extremity x2 is fixed. Assume that the output is position X(s), and force F(s) is the input. Then
X(s)/F(s) = 1/(m s²) (4.10)
X(s)/F(s) = 1/k (4.11)
X(s)/F(s) = 1/(c s) (4.12)

Since transfer functions are functions of s, they are usually represented by one capital letter, such as F or G; when F(s) is used to represent a transfer function, care must be taken not to use the same letter to represent the Laplace transform F(s) of a force f(t).

Example 4.4. Suppose that force F (t) = sin(t) is applied to the system in
Figure 4.5, in which M = 1 kg, B = 3.5 N s/m, K = 1.5 N/m. What is the
output x(t)?
The system's transfer function is

G(s) = X(s)/F(s) = 1/(s² + 3.5s + 1.5) = 1/[(s + 3)(s + 0.5)] (4.13)

We have
F(s) = 1/(s² + 1) (4.14)
and thus
X(s) = 1/(s² + 1) × 1/[(s + 3)(s + 0.5)] (the first factor being the Laplace transform of the input, and the second the transfer function)
= (as + b)/(s² + 1) + c/(s + 3) + d/(s + 0.5) (4.15)
= [(as + b)(s² + 3.5s + 1.5) + c(s² + 1)(s + 0.5) + d(s² + 1)(s + 3)] / [(s² + 1)(s + 3)(s + 0.5)]
= [s³(a + c + d) + s²(3.5a + b + 0.5c + 3d) + s(1.5a + 3.5b + c + d) + (1.5b + 0.5c + 3d)] / [(s² + 1)(s + 3)(s + 0.5)]

whence
a + c + d = 0, 3.5a + b + 0.5c + 3d = 0, 1.5a + 3.5b + c + d = 0, 1.5b + 0.5c + 3d = 1
⇔ c + d = −a, 3.5a − 0.5b = −1, 0.5a + 3.5b = 0, c + 6d = 2 − 3b
⇔ c + d = −a, 50b = 2, a = −7b, c + 6d = 2 − 3b
⇔ c + d = 7/25, b = 1/25, a = −7/25, c + 6d = 47/25
⇔ c = 7/25 − d, b = 1/25, a = −7/25, 5d = 40/25
⇔ c = −1/25, b = 1/25, a = −7/25, d = 8/25 (4.16)
  

Finally,
x(t) = L⁻¹[−(7/25) s/(s² + 1) + (1/25) 1/(s² + 1) − (1/25) 1/(s + 3) + (8/25) 1/(s + 0.5)]
= −(7/25) cos(t) + (1/25) sin(t) − (1/25) e^(−3t) + (8/25) e^(−0.5t) (4.17)

Figure 4.7: The system from Example 4.5, modelled by (4.20).

Example 4.5. Generalise the mass-spring-damper system of Example 4.2 to include three masses connected by springs and dampers as seen in Figure 4.7. The forces exerted by these components will be
fK1 = K1 (x2 − x1)
fK2 = K2 (x3 − x2)
fB1 = B1 (ẋ2 − ẋ1)
fB2 = B2 (ẋ3 − ẋ2) (4.18)

Applying Newton's law to the masses, we get
K1 (x2 − x1) + B1 (ẋ2 − ẋ1) = M1 ẍ1
K2 (x3 − x2) + B2 (ẋ3 − ẋ2) − K1 (x2 − x1) − B1 (ẋ2 − ẋ1) = M2 ẍ2 (4.19)
−K2 (x3 − x2) − B2 (ẋ3 − ẋ2) = M3 ẍ3

Finally, the mathematical model of the system is



M1 ẍ1 + K1 (x1 − x2 ) + B1 (ẋ1 − ẋ2 ) = 0

M2 ẍ2 + K1 (x2 − x1 ) + B1 (ẋ2 − ẋ1 ) + K2 (x2 − x3 ) + B2 (ẋ2 − ẋ3 ) = 0

M3 ẍ3 + K2 (x3 − x2 ) + B2 (ẋ3 − ẋ2 ) = 0

(4.20)
To find a transfer function from the equations above, we would have to know
which of the three positions x1 , x2 and x3 is the input and which is the output.

Remark 4.6. Remember that it is somewhat irrelevant if positive displacements are assumed to be in one direction or the other. In the example above, positive displacements were arbitrarily assigned to the direction from the left to the right; the opposite could have been assumed, and signs would then be changed in such a way that the resulting model would still be correct.

4.2 Simulating transfer functions in Matlab


There are two ways of creating a transfer function with Matlab:

• tf creates a transfer function, represented by two vectors with the coefficients of the polynomials in the numerator and in the denominator (in decreasing order of the exponent);
• s = tf('s') creates the Laplace transform variable s, which can then be manipulated using algebraic operators.

Example 4.6. Transfer function (4.13) from Example 4.4 can be created as
>> G = tf(1,[1 3.5 1.5])

G =

1
-----------------
s^2 + 3.5 s + 1.5

Continuous-time transfer function.



Figure 4.8: Results of Example 4.4 (position x [m] as a function of time t [s]).

or else as
>> s = tf('s')

s =

Continuous-time transfer function.

>> G = 1/(s^2+3.5*s+1.5)

G =

1
-----------------
s^2 + 3.5 s + 1.5

Continuous-time transfer function.

Command lsim (linear simulation) uses numerical methods to solve the differential equation represented by a transfer function for a given input. In other words, it simulates the LTI system represented by the transfer function.
Example 4.7. The output found in Example 4.4 can be obtained and displayed
as follows, if transfer function G(s) has been created as above:
>> t = 0 : 0.01 : 50;
>> f = sin(t);
>> x = lsim(G, f, t);
>> figure,plot(t,x, t,-7/25*cos(t)+1/25*sin(t)-1/25*exp(-3*t)+8/25*exp(-0.5*t))
>> xlabel('t [s]'), ylabel('x [m]')
See Figure 4.8. Notice that we plotted two curves: the first was created with lsim, the second is (4.17). As expected, they coincide (there is a small numerical difference, too small to show up in the plot), and only one curve can be seen.
Remark 4.7. The result (4.17) from Example 4.4 is exact. So the second curve
in Figure 4.8 only has those numerical errors resulting from the implementation
of the functions. The first curve has the errors resulting from the numerical method with which the differential equation was solved. Of course, both curves
are based upon the same transfer function, and thus will suer from any errors
that there may be in that model (e.g. imprecise values of parameters M , B ,
and K , or neglected non-linearities in the spring or the damper). Do you still
remember Remark 4.2?
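The partial fraction expansion computed by hand in Example 4.4 can also be checked numerically with Matlab's residue function; the sketch below is only a verification, not a replacement for knowing how to carry out the expansion.
num = 1;
den = conv([1 0 1], conv([1 3], [1 0.5]));   % (s^2+1)(s+3)(s+0.5)
[r, p, k] = residue(num, den)
% r lists the residues at the poles in p: -1/25 at s = -3, 8/25 at s = -0.5, and a
% complex-conjugate pair at s = +-1i that corresponds to the cosine and sine terms of (4.17)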

Figure 4.9: Tangential force F for a rotation of radius r applied on a point-like mass.

4.3 Modelling the rotational movement


Mechanical systems with movement of rotation can usually be modelled using
three components with the respective three equations:
1. A moment of inertia. For these components, apply Newton's second law for rotation:
Theorem 4.1.
Στ = d/dt [J(t) ω̇(t)] (4.21)
Here, Στ is the sum of all torques applied on the moment of inertia J(t), which is at angular position ω(t).

Proof. Let r be the radius of rotation at which the tangential forces F that cause the torque are applied (see Figure 4.9). Because x = rω, then ẋ = rω̇, and Newton's second law (4.1) becomes
ΣF = d/dt [m(t) r ω̇(t)] ⇔ r ΣF = d/dt [m(t) r² ω̇(t)] (4.22)
Because the torque τ of a force F is rF, and the moment of inertia J of mass m is mr², (4.21) follows for a point-like mass. In the case of a distributed mass, integrating (4.22) over the volume occupied will then yield the desired result.
Corollary 4.1. Considering an LTI system, J will not change with time, and so we are left with
Στ = J ω̈(t) (4.23)

A moment of inertia is usually represented by J or I (the latter letter is avoided when it can be confounded with an electrical current); its SI units are kg m². A torque is usually represented by τ or T; its SI units are N m.
2. A torsion spring. This is a mechanical device that stores energy (see Figure 4.10) and usually follows the angular form of Hooke's law:
τ = κ (ω1 − ω2 ) (4.24)

Here, τ is the torque exerted by the spring, ω1 and ω2 are the angular
positions of the extremities of the spring, and κ is the spring constant.
This constant is usually represented by the Greek character κ or K (to
avoid confusion with a translation spring, for which a Latin character is
used), and its SI units are N/rad.
3. A rotary damper, or torsional damper. The most usual model for this mechanical device that dissipates energy is viscous damping:

τ = c (ω̇1 − ω̇2 ) (4.25)

Here, τ is the torque exerted by the damper, ω̇1 and ω̇2 are the angular
velocities of the extremities of the damper, and c is the damping constant.

Figure 4.10: A torsion spring mounted on a mousetrap (source: Wikimedia). Notice that Hooke's law for torsion springs will only apply in the range of angles comprised within the ends of stroke (say, from 0 rad to π rad).

Figure 4.11: A mechanical system comprising a moment of inertia J, a torsion spring with constant κ, and a rotational damper with constant B.

This constant is usually represented by c, C, b or B, just like for the translation case, but its SI units are N s/rad.
Also like (4.4), model (4.25) can be used to model unintended energy dissipation, such as that due to friction.
Remark 4.8. When dealing with rotation, take care with angular units. Confu-
sion about values in degrees, radians, and rotations is a common source of error.
This is true also for angular speed, angular velocity, angular spring constants,
etc.
Example 4.8. Consider the system in Figure 4.11. The torque exerted on J
by the spring is

τκ (t) = −κ ω(t) (4.26)

where ω is the rotation of J in the sense of rotation in which the applied torque
τ is positive. The torque exerted by the damper is

τB (t) = −B ω̇(t) (4.27)

Thus

τ (t) − κ ω(t) − B ω̇(t) = J ω̈(t) (4.28)

Applying the Laplace transform (and assuming, once more, that all initial con-
ditions are zero),

T(s) − κΩ(s) − BsΩ(s) = Js²Ω(s) ⇔ Ω(s)/T(s) = 1/(Js² + Bs + κ) (4.29)
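A minimal Matlab sketch simulating (4.29), using the commands from Section 4.2 and hypothetical values J = 2 kg m², B = 0.5 N s/rad and κ = 8 N/rad, for a constant applied torque of 1 N m:
s = tf('s');
J = 2; B = 0.5; kappa = 8;         % hypothetical parameter values
G = 1/(J*s^2 + B*s + kappa);       % Omega(s)/T(s), as in (4.29)
t = 0 : 0.01 : 30;
tau = ones(size(t));               % constant torque of 1 N m
omega = lsim(G, tau, t);
figure, plot(t, omega)
xlabel('t [s]'), ylabel('\omega [rad]')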

4.4 Energy, effort and flow


A comparison of transfer functions (4.9) and (4.29) shows us that different
systems can have similar models. Actually, if the numerical values of M and J ,

and of B (in translation) and B (in rotation), and of K and κ, are the same,
then the model will be the same.
As you surely know by now, this happens not only with mechanical systems,
but also with systems of other, different types, as we shall see in the following chapters. One of the best ways of studying this parallelism is to see systems as energy converters, and energy E as the integral of the product of two variables, called effort variable e and flow variable f:
E(t) = ∫₀ᵗ e(t) f(t) dt (4.30)
In other words, the product e(t) × f(t) is the instantaneous power Ė(t).
In the case of a translation movement,
Ė(t) = F(t) ẋ(t) ⇔ E(t) = ∫₀ᵗ F(t) ẋ(t) dt (4.31)

In the case of a rotation movement,
Ė(t) = τ(t) ω̇(t) ⇔ E(t) = ∫₀ᵗ τ(t) ω̇(t) dt (4.32)

We will consider force F and torque τ as the flow variable, and velocity ẋ and angular velocity ω̇ as the effort variable. But notice that it would make no difference if it were the other way round. In any case, their product will be the power. Both choices can be found in published literature.
The components of a system are the effort accumulator, the flow accumulator, and the energy dissipator, as seen in Table 4.1. For both accumulators, energy is the integral of accumulated flow or accumulated effort: elastic potential energy in the case of effort, and kinetic energy in the case of flow. The dissipator dissipates energy and it makes no difference whether it is kinetic or potential energy that it dissipates. Table 4.1 also includes the relations between these quantities.
Definition 4.2. A transfer function of a system that has the flow as input and the effort as output is called the impedance of that system. A transfer function of a system that has the effort as input and the flow as output is called the admittance of that system. Consequently, the admittance is the inverse of the impedance.

Transfer functions (4.10)–(4.12) can be rewritten so as to give the mechanical impedance of a mass, a spring, and a damper:
sX(s)/F(s) = 1/(ms) (4.33)
sX(s)/F(s) = s/k (4.34)
sX(s)/F(s) = 1/c (4.35)
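These impedances can also be created as transfer function objects in Matlab; a minimal sketch, with hypothetical values of m, k and c:
s = tf('s');
m = 2; k = 100; c = 5;       % hypothetical values
Z_mass   = 1/(m*s)           % (4.33)
Z_spring = s/k               % (4.34): an improper transfer function (a differentiator)
Z_damper = tf(1/c)           % (4.35): a pure gain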

4.5 Other components


Among the several other components that may be found in mechanical systems,
the following ones, because of their general use and of their linearity, deserve a
passing mention:
• Cogwheels, or gears. These wheels convert rotation movement into another rotation movement, but the wheels need not be in the same plane. For two external cogwheels, the sense of rotation is inverted, whereas for one external and one internal cogwheel it is not (see Figure 4.12).
Theorem 4.2. Let ω1 and ω2 be the angular positions of the two cogwheels, and r1 and r2 the respective radii. Then
ω1/ω2 = ±r2/r1 (4.36)

Table 4.1: Effort, flow, accumulators and dissipators in mechanical systems (for each quantity, the translation mechanical system entry is given first, then the rotation mechanical system entry, with SI units in brackets).
• effort e: velocity ẋ [m s⁻¹]; angular velocity ω̇ [rad s⁻¹]
• flow f: force F [N]; torque τ [N m]
• effort accumulator: spring, with spring constant K [N/m]; angular spring, with spring constant κ [N/rad]
• accumulated effort ea = ∫e dt: position x = ∫ẋ dt [m]; angular position ω = ∫ω̇ dt [rad]
• relation between accumulated effort and flow ea = ϕ(f): position x = F/K; angular position ω = τ/κ
• accumulated energy Ee = ∫ea df: elastic potential energy Ee = F²/(2K) [J]; elastic potential energy Ee = τ²/(2κ) [J]
• flow accumulator: mass M [kg]; moment of inertia J [kg m²]
• accumulated flow fa = ∫f dt: momentum p = ∫F dt [kg m s⁻¹]; angular momentum h = ∫τ dt [kg m² s⁻¹]
• relation between accumulated flow and effort fa = ϕ(e): momentum p = M ẋ; angular momentum h = J ω̇
• accumulated energy Ef = ∫fa de: kinetic energy Ef = (1/2) M ẋ² [J]; kinetic energy Ef = (1/2) J ω̇² [J]
• dissipator: damper, with damping constant b [N s/m]; rotary damper, with damping constant b [N s/rad]
• relation between effort and flow e = ϕ(f): ẋ = F/b; ω̇ = τ/b
• dissipated energy Ed = ∫f de: Ed = (1/2) b ẋ² [J]; Ed = (1/2) b ω̇² [J]

Figure 4.12: Left: two external cogwheels (source: Wikimedia). Right: two
cogwheels, the outside one being an internal cogwheel (because the cogs are on
the inside), and the inside one being an external cogwheel (because the cogs are
on the outside; source: https://etc.usf.edu/clipart).

Figure 4.13: Proof of Theorem 4.3.

Figure 4.14: Transmission belts in a Diesel engine (source: Wikimedia).



Proof. This is a straightforward consequence of the linear movement of the point of contact between the two cogwheels being the same:
ω1 r1 = ω2 r2 (4.37)

Corollary 4.2. Let n1 and n2 be the numbers of cogs, or teeth, in the cogwheels. Then
ω1/ω2 = ω̇1/ω̇2 = ω̈1/ω̈2 = ±r2/r1 = ±n2/n1 (4.38)

Proof. The spacing of the teeth in both cogwheels has to be the same,
otherwise they would not match. Thus the perimeters of the two wheels,
2πr1 and 2πr2 , are directly proportional to n1 and n2 .
As to the angular velocity and acceleration, it suffices to differentiate (4.36).
Theorem 4.3. Let τ1 and τ2 be the torques of the two cogwheels. Then
τ1/τ2 = r1/r2 = n1/n2 (4.39)

Proof. See Figure 4.13. The forces that each wheel exerts on the other are
equal. Thus
τ1 = F r1 ⇒ F = τ1/r1 (4.40)
τ2 = F r2 ⇒ F = τ2/r2 (4.41)
and the result follows.

• A transmission belt. It also converts rotation movement into another rotation movement:
ω1/ω2 = ω̇1/ω̇2 = ω̈1/ω̈2 = r2/r1 (4.42)
Here ω1 and ω2 are the angular positions of the two wheels connected by the belt, and r1 and r2 are the respective radii. See Figure 4.14.
• A rack and pinion. It converts rotation movement into translation movement and vice-versa:
x = ωr ⇔ ẋ = ω̇r ⇔ ẍ = ω̈r (4.43)
Here x is the distance of the translation movement, r the radius of the wheel, and ω the angle of the rotation movement. See Figure 4.15.
• A harmonic drive. It converts rotation movement into another rotation movement, using an outside internal circular gear, inside which there is an external elliptical gear, to which an elliptical shaft is connected through a rolling bearing:
ω1/ω2 = ω̇1/ω̇2 = ω̈1/ω̈2 = −(n1 − n0)/n0 (4.44)
Here ω2 is the angular position of the elliptical shaft inside the elliptical gear, ω1 is the angular position of the shaft connected to the elliptical gear, n1 is the number of teeth of the said gear, and n0 is the number of teeth of the outside internal gear (which is fixed). See Figure 4.15.
Remark 4.9. Linear models (4.36)–(4.44) omit non-linear effects that may appear, such as backlash due to gaps between cogs (see Figure 4.16). These effects may sometimes be important but are not our concern here.
Remark 4.10. Models (4.36)–(4.44) are not only linear but also static (in the sense that outputs do not depend on past inputs, not in the sense that these components never move, of course). Non-linearities such as backlash make the components in fact dynamic.

Figure 4.15: Left: rack and pinion in a canal gate; the rack is the linear ele-
ment (in this case, vertical). Right: schematic of a harmonic drive. (Source:
Wikimedia.)

Figure 4.16: Backlash (source: Wikimedia).



Glossary
Als Storm weer bijkomt is het eerste wat hij ziet het vriendelijke
gezicht van de jager.
Wat wilt u dat ik voor u doe, god?
Ik. . . kan je verstaan, maar. . . maar spreek ik nu jouw taal of jij de
mijne? En wat bedoel je met god? Ik ben geen god!
Maar natuurlijk bent u dat! Alles wijst erop. U sprak wartaal en
begreep mij niet. Nou, het is duidelijk dat de goden de mensen
niet begrijpen, anders hadden ze ze allang uitgeroeid! En nu u van
de parel der kennis hebt gegeten, begrijpt u er nog niets van. Dat
bewijst dat u gek bent! En de goden moeten gek zijn anders hadden
ze de wereld nooit zo gemaakt als hij is.

Don Lawrence (1928 – †2003), Martin Lodewijk (1939 – ...), Storm, De kronieken van Pandarve 10, De piraten van Pandarve

accumulated effort potencial acumulado
accumulated flow fluxo acumulado
admittance admitância
cogwheel roda dentada
coil spring mola helicoidal
compression compressão
damper amortecedor
damping amortecimento
dashpot amortecedor viscoso
dissipator dissipador
effort accumulator acumulador de potencial
effort variable variável de potencial
end of stroke fim de curso
energy energia
extension extensão
flow accumulator acumulador de fluxo
flow variable variável de fluxo
gear roda dentada
harmonic drive redutor harmónico
helical spring mola helicoidal
impedance impedância
leaf spring mola de folhas, mola de lâminas
mass massa
mechanical impedance impedância mecânica
moment of inertia momento de inércia
momentum quantidade de movimento, momento linear
point-like mass massa pontual
power potência
rack and pinion pinhão e cremalheira
rotary damper amortecedor rotativo, amortecedor de torção
simulation simulação
spring mola
spring constant constante de mola
torque binário, torque (bras.)
torsion spring mola de torção
torsional damper amortecedor rotativo, amortecedor de torção
transfer function função de transferência
transmission belt correia de transmissão
volute spring mola de volutas, mola voluta

Exercises
1. Consider the system in Figure 4.17.

(a) Find the differential equations that model the system.



Figure 4.17: System of Exercise 1.

Figure 4.18: System of Exercise 2.

(b) From the result above, knowing that M1 = 1 kg, M2 = 0.5 kg, K1 = 10 N/m, K2 = 2 N/m, and B2 = 4 N s/m, find transfer function X2(s)/F(s).

2. Consider the system in Figure 4.18. The wheels have negligible mass and inertia; they are included to show that the masses move without friction.
(a) Find the differential equations that model the system.
(b) From the result above, knowing that M1 = 100 kg, M2 = 10 kg, K = 50 N/m, and B = 25 N s/m, find transfer function X2(s)/X1(s).

3. Consider the system in Figure 4.19. The wheels have negligible mass and inertia; they are included to show that the masses move without friction.
(a) Find the differential equations that model the system.
(b) From the result above, knowing that M1 = M2 = 1 kg, K = 5 N/m, and B = 10 N s/m, find transfer function X1(s)/F(s).
(c) For the same constants, find transfer function X2(s)/F(s).

4. Consider the system in Figure 4.20.
(a) Find the differential equations that model the system.
(b) From the result above, knowing that M1 = 1 kg, M2 = 2 kg, M3 = 3 kg, K1 = 10 N/m, K2 = 20 N/m, K3 = 30 N/m, B1 = 5 N s/m, B2 = 10 N s/m, and B3 = 15 N s/m, find transfer function X1(s)/F(s).

Figure 4.19: System of Exercise 3.



Figure 4.20: System of Exercise 4.

Figure 4.21: System of Exercise 5.

(c) For the same constants, find transfer function X2(s)/F(s).
(d) For the same constants, find transfer function X3(s)/F(s).

5. Consider the system in Figure 4.21. The cogwheels have negligible moments of inertia, when compared to J. Let Nu be the number of cogs in the upper cogwheel, and Nl the number of cogs in the lower cogwheel.
(a) Find the differential equations that model the system.
(b) From the result above, knowing that J = 50 kg m², Nu = 20, Nl = 30, and B = 40 N s/m, find transfer function Ω(s)/T(s).

6. Consider the system in Figure 4.22. The pinion's centre is fixed, and its moment of inertia I includes the lever actuated by force F.
(a) Find the differential equations that model the system.
(b) From the result above, knowing that r = 0.2 m, I = 0.8 kg m², m = 20 kg, k = 1000 N/m, and b = 480 N s/m, find transfer function X(s)/F(s).

7. Consider the system in Figure 4.23. Force F is applied through a bar of negligible mass, connected by the spring and the damper to mass m, affected by friction force Fa that follows the law of viscous damping with constant ba. The bar has velocity vF; mass m has velocity v.
(a) Find the differential equations that model the system.
(b) From the result above, find transfer function VF(s)/F(s).

8. Consider the system in Figure 4.24. The position of mass M is x(t).
(a) Find the differential equations that model the system.
(b) From the result above, find transfer function X(s)/T(s).

9. For all the systems in the exercises above, find:
(a) the effort variables;
(b) the effort accumulators;
(c) the flow variables;
(d) the flow accumulators;

Figure 4.22: System of Exercise 6.

Figure 4.23: System of Exercise 7.

Figure 4.24: System of Exercise 8.



Figure 4.25: From left to right: two springs in series; two springs in parallel; two dampers in series; two dampers in parallel.

Figure 4.26: Mechanical system from Exercise 11.

(e) the dissipators.

10. Find the X(s)/F(s) transfer functions, as in (4.10)–(4.12), of the following systems (see Figure 4.25):
(a) two springs, with constants K1 and K2, in series;
(b) two springs, with constants K1 and K2, in parallel;
(c) two dampers, with constants B1 and B2, in series;
(d) two dampers, with constants B1 and B2, in parallel.

11. Consider the mechanical system in Figure 4.26 comprising a mass M, a damper with viscous friction coefficient B and a spring with stiffness K. These elements are connected to a fixed pulley of radius r with inertia J. The system is driven by a torque T. Let θ be the rotation angle of the pulley.

(a) Model the system, writing the equations that describe its dynamics.
(b) Obtain the transfer function considering the torque T as input and
the mass position x as output.
Chapter 5

Modelling electrical systems


By that time Mordor was deservedly being called smithy of the na-
tions, and it could trade its manufactured goods for any amounts of
food from Khand and Umbar. Trading caravans went back and forth
through the Ithilien Crossroads day and night, and more and more
voices in Barad-dúr were saying that the country has had enough
tinkering with agriculture, which was nothing but a net loss anyway,
and the way to go was to develop what nobody else had  namely,
metallurgy and chemistry. Indeed, the industrial revolution was well
underway; steam engines toiled away in mines and factories, while
the early aeronautic successes and experiments with electricity were
the talk of the educated classes.
Kirill Yeskov (1956  . . . ), The Last Ringbearer, I 3 (transl. Yisroel Markov)
This chapter addresses the modelling of electrical systems.

5.1 Passive components


The three simplest elements in an electrical circuit are:
1. A resistor. This component (see Figure 5.1) dissipates energy according
   to Ohm's law:

   R(t) = U(t)/I(t)                                                    (5.1)

   Here R is the resistance, U is the voltage (or tension, or electric potential
   difference), and I is the current. Notice that U = U1 − U2, where U1 and
   U2 are the voltages at the resistor's terminals.
   The resistance R of a uniform conductor is directly proportional to its
   length L and inversely proportional to its cross-section A:

   R = ρ L/A                                                           (5.2)

   Proportionality constant ρ is called resistivity. This variation with length
   is used to build variable resistances, shown in Figure 5.2.

2. A capacitor. This component (see Figure 5.4) stores energy and its most
   usual model is

   U(t) = Q(t)/C                                                       (5.3)

   where Q(t) is the electrical charge stored, and C is the capacity. Since
   I(t) = dQ(t)/dt,

   U(t) = (1/C) ∫_0^t I(t) dt                                          (5.4)

   Differentiating, we get

   I(t) = C dU(t)/dt                                                   (5.5)


Figure 5.1: Different types of resistors. Left: individual resistors for use in
electronics; centre: many resistors in one encasing; right: wirewound resistors
for high tensions and currents in a train. (Source: Wikimedia.) There are still
other types of resistors.

Figure 5.2: Potentiometers (or variable resistors, or rheostats) have a slider or


a screw to move the position of a terminal, and thus the length of the resistor
which is actually employed; resistance is proportional to this length (source:
Wikimedia). See Figure 5.3.

Figure 5.3: Top: a linear potentiometer with length xmax and resistance Rmax,
connected on the left and on the movable terminal at position x, has a resistance
R proportional to the position: x/xmax = R/Rmax ⇔ R = Rmax x/xmax. Bottom:
this potentiometer is used as a tension divider, grounded on the left and at Vmax
on the right; the voltage at the terminal is given by V/Vmax = R/Rmax = x/xmax
⇒ V = Vmax x/xmax ⇔ x = xmax V/Vmax.

Figure 5.4: Left: 100 nF capacitor; right: 20 µF capacitor.

Figure 5.5: Inductor for electronic circuits.

3. An inductor. This component (see Figure 5.5) also stores energy and its
   most usual model is

   I(t) = λ(t)/L                                                       (5.6)

   where λ(t) = ∫_0^t U(t) dt is the flux linkage, and L is the inductance.
   Differentiating, we get

   U(t) = L dI(t)/dt                                                   (5.7)

The transfer functions of the resistor, the capacitor, and the inductor, cor-
responding to (5.1), (5.5), and (5.7), considering always tension U(s) as the
output and current I(s) as the input, are

   U(s)/I(s) = R                                                       (5.8)
   U(s)/I(s) = 1/(Cs)                                                  (5.9)
   U(s)/I(s) = Ls                                                      (5.10)

Remark 5.1. Notice that Ohm's law (5.1) or (5.8) corresponds to a static
system.

Remark 5.2. It cannot be overstated that relations (5.8)–(5.10) are not fol-
lowed by many components:

ˆ Many resistances do not follow a linear relation between U and I such as
  (5.1), and are thus called non-ohmic resistors. Still, Ohm's law can be a
  good approximation in a limited range of values (see Figure 4.3 again).

Table 5.1: Effort, flow, accumulators and dissipators in electrical systems

                                                          Electrical system                SI
effort e                                                  voltage U                        V
flow f                                                    current I                        A
effort accumulator                                        inductor with inductance L       H
accumulated effort ea = ∫ e dt                            flux linkage λ = ∫ U dt          Wb
relation between accumulated effort and flow ea = ϕ(f)    flux linkage λ = LI
accumulated energy Ee = ∫ ea df                           inductive energy Ee = ½ L I²     J
flow accumulator                                          capacitor with capacity C        F
accumulated flow fa = ∫ f dt                              charge Q = ∫ I dt                C
relation between accumulated flow and effort fa = ϕ(e)    charge Q = CU
accumulated energy Ef = ∫ fa de                           capacitive energy Ef = ½ C U²    J
dissipator                                                resistance R                     Ω
relation between effort and flow e = ϕ(f)                 U = RI
dissipated energy Ed = ∫ f de                             Ed = ½ R I²                      J

ˆ Many capacitors have variable capacity C, depending on the voltage ap-
  plied. Others follow differential equations of fractional order (which we
  will study in Chapter 35).

ˆ Inductances always have some resistance, which is often not neglectable.
  So their transfer function would more accurately be R + Ls.

ˆ Even when (5.8)–(5.10) are accurately followed, this only happens for a
  limited range of values. Increase U or I too much, and any electrical
  component will cease to function (burn, melt...). What is too much
  depends on the particular component: there are components that cannot
  stand 1 V while others work at 10⁴ V and more.

5.2 Energy, effort and flow

Because Ė(t) = U(t)I(t) and E(t) = ∫_0^t U(t)I(t) dt, effort and flow variables
are U and I. While either can once more play each of the roles, by universal
convention,

ˆ U is the effort variable,

ˆ I is the flow variable, and thus

ˆ the inductor is the effort accumulator,

ˆ the capacitor is the flow accumulator,

ˆ the resistor is the dissipator.

Table 5.1 sums up the passing information and relations.

Remark 5.3. Transfer functions (5.8)–(5.10) have the flow as input and the
effort as output. They consequently give the impedance of the corresponding
components.

To model electric systems with these components, (5.8)–(5.10) are combined
with Kirchhoff's laws:

ˆ The current law states that the sum of the currents at a circuit's node is
  zero.

ˆ The voltage law states that the sum of the voltages around a circuit's
  closed loop is zero.

Example 5.1. Consider the system in Figure 5.6 known as voltage divider.
The input is Vi(t) and the output is Vo(t). Applying the current law at point

Figure 5.6: Left: voltage divider. Right: RC circuit.

B, we see that the current flowing from A to B must be the same that flows
from B to C. Applying Ohm's law (5.1) to the two resistances, we see that

   R1 = (VB(t) − VA(t))/I(t) = (Vo(t) − Vi(t))/I(t) ⇒ I = (Vo − Vi)/R1     (5.11)
   R2 = (VC(t) − VB(t))/I(t) = (0 − Vo(t))/I(t) ⇒ I = −Vo/R2               (5.12)

In the last equalities above, we dropped the dependence on t to alleviate the
notation. Consequently,

   (Vo − Vi)R2 = −Vo R1 ⇔ Vo(R1 + R2) = Vi R2 ⇔ Vo/Vi = R2/(R1 + R2)       (5.13)

Notice that this system is static, and from Vo(t)/Vi(t) = R2/(R1 + R2) we get
Vo(s)/Vi(s) = R2/(R1 + R2).

Remark 5.4. Remember that, similarly to what happens with the positive
direction of displacements in mechanical systems, it is irrelevant if a higher
tension is presumed to exist to the left or to the right of a component. Current
is always assumed to flow from higher to lower tensions; as long as equations are
coherently written, if in the end the current turns out to be negative, this only
means that it will flow the other way round.

Example 5.2. The transfer function of the system in Figure 5.6 known as RC
circuit can be found in almost the same manner, thanks to impedances:

   R = (VB(s) − VA(s))/I(s) = (Vo(s) − Vi(s))/I(s) ⇒ I = (Vo − Vi)/R       (5.14)
   1/(Cs) = (VC(s) − VB(s))/I(s) = (0 − Vo(s))/I(s) ⇒ I = −Vo Cs           (5.15)

In the last equalities above, we dropped the dependence on s to alleviate the
notation. Consequently,

   Vo − Vi = −Vo RCs ⇔ Vo(1 + RCs) = Vi ⇔ Vo/Vi = 1/(1 + RCs)              (5.16)

Notice that this system is dynamic, and from Vo(s)(1 + RCs) = Vi(s) we get
Vo(t) + RC dVo(t)/dt = Vi(t).
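The dynamic behaviour of (5.16) is easy to visualise in Matlab, as in Section 4.2.
The component values below are not taken from any particular circuit; they are
assumed only for illustration.

>> R = 1e3; C = 100e-6;    % assumed values: 1 kOhm and 100 uF
>> G = tf(1, [R*C 1]);     % Vo(s)/Vi(s) = 1/(1+RCs), from (5.16)
>> step(G)                 % response of Vo to a unit step in Vi

The output rises exponentially to the final value with time constant RC = 0.1 s,
as expected for a first order system.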

Example 5.3. Both systems above are particular cases of the generic system
in Figure 5.7 with two impedances:

   Z1(s) = (VB(s) − VA(s))/I(s) = (Vo(s) − Vi(s))/I(s) ⇒ I = (Vo − Vi)/Z1  (5.17)
   Z2(s) = (VC(s) − VB(s))/I(s) = (0 − Vo(s))/I(s) ⇒ I = −Vo/Z2            (5.18)

Figure 5.7: Left: generic electrical system with two impedances, of which the
voltage divider (Figure 5.6), the RC circuit (Figure 5.6) and the CR circuit (to
the right) are particular cases. Right: CR circuit.

Figure 5.8: Two impedances in parallel.

Consequently,

   (Vo − Vi)Z2 = −Vo Z1 ⇔ Vo/Vi = Z2/(Z1 + Z2)                             (5.19)

Replacing Z1(s) = R1 and Z2(s) = R2 in (5.19), we obtain (5.13).
Replacing Z1(s) = R and Z2(s) = 1/(Cs) in (5.19), we obtain (5.16).
We can also obtain the transfer function of the case where the resistor and
the capacitor are switched, as also shown in Figure 5.7: when Z1(s) = 1/(Cs)
and Z2(s) = R, we have Vo(s)/Vi(s) = R/(R + 1/(Cs)) = RCs/(1 + RCs). This
is known as a CR circuit.

Remark 5.5. We have incidentally shown that two impedances Z1 and Z2 in
series correspond to a single impedance Z = Z1 + Z2, i.e. two impedances in
series are summed, just as two resistances in series are. You know that two
resistances R1 and R2 in parallel are equivalent to resistance R = 1/(1/R1 + 1/R2),
and something similar happens to two impedances in parallel (see Figure 5.8):

   Z1 = (VB − VA)/I1 ⇔ I1 = (VB − VA)/Z1                                   (5.20)
   Z2 = (VB − VA)/I2 ⇔ I2 = (VB − VA)/Z2                                   (5.21)
   Z = (VB − VA)/(I1 + I2) = (VB − VA)/((VB − VA)/Z1 + (VB − VA)/Z2)
     = 1/(1/Z1 + 1/Z2)                                                     (5.22)

Because of the parallelism between systems of different types, this is true for me-
chanical impedances as well (see Exercise 10 from Chapter 4), and for impedances
of other types of systems we will study in the next chapters.
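If the Symbolic Math Toolbox is available, such combinations of impedances can
be checked symbolically. The sketch below, for instance, combines a resistor and
a capacitor in parallel; the result reappears in Example 5.9.

>> syms R C s                              % requires the Symbolic Math Toolbox
>> Z1 = R; Z2 = 1/(C*s);                   % impedances (5.8) and (5.9)
>> Zseries = Z1 + Z2;                      % two impedances in series
>> Zparallel = simplify(1/(1/Z1 + 1/Z2))   % two impedances in parallel, as in (5.22)

Matlab returns Zparallel = R/(C*R*s + 1), i.e. R/(1 + RCs).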

Example 5.4. Consider the system in Figure 5.9 known as RLC circuit. The
input is Vi(t) and the output is Vo(t). Applying the current law, we see that
the current flowing from A to B must be the same that flows from B to C and

Figure 5.9: RLC circuit.

the same that flows from C to D. Then

   R = (VB(s) − VA(s))/I(s) = (VB(s) − Vi(s))/I(s) ⇒ VB = RI + Vi
   Ls = (VC(s) − VB(s))/I(s) = (Vo(s) − VB(s))/I(s) ⇒ ILs = Vo − VB        (5.23)
   1/(Cs) = (VD(s) − VC(s))/I(s) = (0 − Vo(s))/I(s) ⇒ I = −Vo Cs

We now replace the first equation in the second, and use it together with the
third to get

   ILs = Vo − RI − Vi ⇔ I(R + Ls) = Vo − Vi                                (5.24)
   ⇒ (Vo − Vi)/(R + Ls) = −Vo Cs ⇔ Vo + Vo CRs + Vo CLs² = Vi
   ⇔ Vo/Vi = 1/(CLs² + CRs + 1)

From Vo(s) + Vo(s)CRs + Vo(s)CLs² = Vi(s) we get

   Vo(t) + CR dVo(t)/dt + CL d²Vo(t)/dt² = Vi(t)                           (5.25)

Remark 5.6. We could have established (5.25) first, without using impedances:

   R = (VB(t) − VA(t))/I(t) ⇒ I(t) = (1/R) VB(t) − (1/R) Vi(t)
   VC(t) − VB(t) = L dI(t)/dt ⇒ Vo(t) − VB(t) = (L/R) dVB(t)/dt − (L/R) dVi(t)/dt   (5.26)
   VD(t) − VC(t) = (1/C) ∫ I(t) dt ⇒ −dVo(t)/dt = (1/C) I(t)

Replacing the first equation in the third, and then the result in the second,

   −dVo(t)/dt = (1/(RC)) VB(t) − (1/(RC)) Vi(t) ⇒ VB(t) = Vi(t) − RC dVo(t)/dt      (5.27)

   Vo(t) − Vi(t) + RC dVo(t)/dt = (L/R) (dVi(t)/dt − RC d²Vo(t)/dt²) − (L/R) dVi(t)/dt   (5.28)

Rearranging terms in the last equality gives (5.25). Applying the Laplace trans-
form, we then obtained transfer function (5.24). The results are of course the
same. Notice that in both cases zero initial conditions were implicitly assumed
(i.e. integrals were assumed to be zero at t = 0; in the case of the Laplace
transform, this means that there is no f(0) term in (2.41)). We will address
this further in Chapter 9.
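The step response of (5.24) can be plotted in Matlab as in Section 4.2; the
component values below are merely assumed for illustration.

>> R = 10; L = 0.1; C = 1e-3;       % assumed values
>> G = tf(1, [C*L C*R 1]);          % Vo(s)/Vi(s) = 1/(CLs^2 + CRs + 1), from (5.24)
>> step(G), grid on

With these values the response is underdamped and overshoots before settling,
just like the mass-spring-damper systems of Chapter 4.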

Remark 5.7. Transfer function (5.24) is similar to transfer functions (4.9) and
(4.11). As you know, this is one case of a so-called electrical equivalent of a
mechanical system, or of a mechanical equivalent of an electrical system. The
notions of effort and flow make clear why this parallel between models exists:
both consist of an effort accumulator, a flow accumulator, and a dissipator.
But notice that the parallel is not complete: (4.9) has a flow as input and
an accumulated effort as output; both the input and the output of (5.24) are
efforts.

Figure 5.10: Left: an integrated circuit with a 741 OpAmp, one of the most usual
types of OpAmps (source: Wikimedia). Other OpAmp types are manufactured
in integrated circuits that have several OpAmps each, sharing the same power
supply. Right: the symbol of the OpAmp (source: Wikimedia). Power supply
tensions are often omitted in diagrams for simplicity, but never forget that an
active component without power supply does not work.

Figure 5.11: The output of an OpAmp.

5.3 The operational amplifier (OpAmp), an active component

The resistor, the capacitor and the inductor are called passive components be-
cause they do not need a source of energy to function. Components that need
a source of energy to function are called active components. Among them are
diodes and transistors, together with sensors that we will study in Chapter 13.
A component we will study right away because of its importance is the opera-
tional amplifier, or in short the OpAmp.
An OpAmp is an electronic component that presents itself as an integrated
circuit (see Figure 5.10) and amplifies the difference between its two inputs V−
and V+:

   Vout = K (V+ − V−)                                                      (5.29)

The output Vout is limited to the power supply tensions:

   VS− ≤ Vout ≤ VS+                                                        (5.30)

As can be expected from the fact that the OpAmp is an active component, if
no power is supplied, i.e. if the corresponding pins of the integrated circuit are
disconnected and thus VS+ = VS− = 0 V, then Vout = 0 V, i.e. there is no
output. The gain of the OpAmp K should ideally be infinite; in practice it is
very large, e.g. 10⁵ or 10⁶. See Figure 5.11.
The other important characteristic of the OpAmp is that the impedance
between its two inputs V− and V+ is very large. Ideally it should be infinite;
in practice it is 2 MΩ or more.

Figure 5.12: Inverting OpAmp with two generic impedances.

Example 5.5. The OpAmp can be used as a comparator, to compare two tensions
connected to the two inputs V− and V+. Because K is very large, if V+ > V−,
even if only by a very small margin, the output will saturate at tension VS+.
Likewise, if V+ < V−, even if only by a very small margin, the output will
saturate at tension VS−.
Only if V+ and V− are equal, and equal to a great precision, will the output
be 0 V. Consider the case of a 741 OpAmp, typically supplied with VS± =
±15 V. Suppose that K = 10⁵. Then the output Vout will not saturate at
either +15 V or −15 V only if |V+ − V−| < 15 × 10⁻⁵ V.
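This saturating behaviour can be reproduced numerically from (5.29)–(5.30). The
sketch below assumes a gain K = 10⁵ and supply tensions of ±15 V, and compares
a sinusoid with a constant tension.

>> K = 1e5; VSp = 15; VSm = -15;               % assumed gain and power supply
>> t = 0 : 1e-3 : 4;
>> Vp = sin(2*pi*t); Vm = 0.5*ones(size(t));   % the two tensions being compared
>> Vout = min(max(K*(Vp - Vm), VSm), VSp);     % (5.29) followed by the saturation (5.30)
>> plot(t, Vp, t, Vm, t, Vout)
>> legend('V^+', 'V^-', 'V_{out}')

Vout jumps between +15 V and −15 V whenever the sinusoid crosses 0.5 V;
Exercise 11 explores more cases.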

Example 5.6. OpAmps are very usually employed in the configuration shown
in Figure 5.12, known as inverting OpAmp or inverter. In this case, because
the OpAmp's input impedance is very large, the current I will flow from input
Vi to output Vo, as shown in Figure 5.12. Consequently,

   Vo = K(V+ − V−) ⇔ V− = −Vo/K
   Z2 = (Vo − V−)/I ⇔ I = (Vo − V−)/Z2                                     (5.31)
   Z1 = (V− − Vi)/I ⇔ I = (V− − Vi)/Z1

Eliminating I and V−, we get

   (Vo + Vo/K)/Z2 = (−Vo/K − Vi)/Z1
   ⇔ Vo Z1 + Vo Z1/K + Vo Z2/K = −Vi Z2 ⇔ Vo/Vi = −Z2 K/(Z1 K + Z1 + Z2)   (5.32)

Because K is large, (5.32) reduces to

   Vo/Vi = −Z2/Z1                                                          (5.33)

Remark 5.8. Notice how (5.33) shows that we can assume


V+ =V− (5.34)

(and so in this case V − = 0). This is because of the high input impedance.
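It is instructive to check numerically how little the finite gain matters. The values
below are assumed: K = 10⁵ and two resistive impedances.

>> K = 1e5; Z1 = 1e3; Z2 = 1e4;        % assumed values
>> exact = -Z2*K/(Z1*K + Z1 + Z2)      % gain with finite K, from (5.32)
>> ideal = -Z2/Z1                      % ideal gain, from (5.33)

The two gains differ by roughly 0.01 %, which is why (5.33) is used in practice.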

Example 5.7. If in Figure 5.12 we make Z1 = R1 and Z2 = R2, we obtain the
circuit in Figure 5.13, known as inverting amplifier, for which

   Vo/Vi = −R2/R1                                                          (5.35)

Notice that, because R1, R2 > 0 (there are no negative resistances!), in this
circuit the signs of Vi and Vo are always opposite. In spite of the circuit's name,
it can

ˆ amplify the input (i.e. |Vo| > |Vi|, if R2 > R1), or



Figure 5.13: Inverting amplifier.

Figure 5.14: Inverting summer or summing circuit.

ˆ attenuate the input (i.e. |Vo| < |Vi|, if R1 > R2).

Example 5.8. Consider the circuit in Figure 5.14, which is another variation
of the negative feedback OpAmp. From (5.34) and the Kirchhoff law of nodes,
implicit in the currents shown in Figure 5.14, we get

   Z1 = (V1 − 0)/I1 ⇔ I1 = V1/Z1
   Z2 = (V2 − 0)/I2 ⇔ I2 = V2/Z2                                           (5.36)
   Z3 = (0 − Vo)/(I1 + I2) ⇔ V1/Z1 + V2/Z2 = −Vo/Z3 ⇔ Vo = −(Z3/Z1) V1 − (Z3/Z2) V2

Consider what happens when all the impedances are resistors:

ˆ If Z1 = Z2 = Z3 = R this circuit is called inverting summer or inverting
  summing circuit. The output Vo is the sum of the two inputs V1 and V2,
  but with the sign inverted.

ˆ If Z1 = Z2 = R and Z3 = R3 this will be an inverting amplifying summer
  or inverting summing amplifier. The amplifying ratio is −R3/R (and can
  correspond to amplification or attenuation).

ˆ If all the resistances are different, we will have an inverting weighted
  summer. If R3/R1 + R3/R2 = 1 there is no amplification or attenuation;
  otherwise there is.

Remark 5.9. Notice that the circuit in Figure 5.14 is a MISO system.

Figure 5.15: Inverting RC circuit with an OpAmp.

Figure 5.16: Inverting CR circuit with an OpAmp.

Example 5.9. If in Figure 5.12 we have Z1 = R and Z2 consists in a resistor
R and a capacitor C in parallel, we obtain the circuit in Figure 5.15, with

   Z2 = 1/(1/R + Cs) = R/(1 + RCs)                                         (5.37)

   Vo/Vi = −1/(1 + RCs)                                                    (5.38)

and similar to the RC circuit from Example 5.2 with transfer function (5.16).
If in Figure 5.12 we have Z2 = R and Z1 consists in a resistor R and a
capacitor C in series, we obtain the circuit in Figure 5.16, with

   Z1 = R + 1/(Cs) = (1 + RCs)/(Cs)                                        (5.39)

   Vo/Vi = −RCs/(1 + RCs)                                                  (5.40)

and similar to the CR circuit from Example 5.3.
Example 5.10. Other than the inverter configuration in Figure 5.12, the most
usual configuration with which OpAmps are used is the one in Figure 5.17,
known as the non-inverting OpAmp or non-inverter. Because of the very large
input impedance, current flows as shown, and

   Vo = K(Vi − V−) ⇔ V− = Vi − Vo/K
   Z2 = (Vo − V−)/I ⇔ I = (Vo − V−)/Z2                                     (5.41)
   Z1 = (V− − 0)/I ⇔ I = V−/Z1

Eliminating I and V−, we get

   (Vo − Vi + Vo/K)/Z2 = (Vi − Vo/K)/Z1
   ⇔ Vo Z1 − Vi Z1 + Vo Z1/K = Vi Z2 − Vo Z2/K
   ⇔ Vo/Vi = (Z1 + Z2)/(Z1 + Z1/K + Z2/K)                                  (5.42)

Figure 5.17: Non-inverting OpAmp with two generic impedances.

Figure 5.18: Non-inverting amplifier.

Because K is large, (5.42) reduces to

   Vo/Vi = (Z1 + Z2)/Z1                                                    (5.43)

Remark 5.10. (5.43) shows once again that we can assume (5.34). Had we
assumed it from the start, we would have arrived at the same result sooner.

Example 5.11. If in Figure 5.17 we make Z1 = R1 and Z2 = R2, we obtain
the circuit in Figure 5.18, known as non-inverting amplifier, for which

   Vo/Vi = (R1 + R2)/R1                                                    (5.44)

Notice that, because R1, R2 > 0, not only are the signs of Vi and Vo in this
circuit always the same, but the input is always amplified (i.e. |Vo| > |Vi|): it is
impossible to attenuate the input.

Example 5.12. Suppose that we want to amplify a tension 4 times. We can
use the non-inverting amplifier of Figure 5.18 with R2 = 3R1. As an alternative,
we can use two inverting amplifiers as in Figure 5.19.

Remark 5.11. When we want to attenuate a tension without inverting its
sign, a non-inverting amplifier cannot be used, since it must be always true
that Vo/Vi > 1; two inverting amplifiers in series must be used instead, as in the
previous example.

Figure 5.19: Two inverting amplifiers, that amplify the input 4 times.

Figure 5.20: Voltage buffer.

Figure 5.21: Subtractor.

Example 5.13. Consider the circuit in Figure 5.20. Because of (5.34), we have
V+ = V− = Vi, and Vo = V−; hence

   Vo = Vi                                                                 (5.45)

While at first sight this may seem a good candidate for the prize of the most
useless circuit, it is in reality a most useful one. We can be sure that Vo = Vi
and that whatever components are connected to Vo will not affect Vi, because
there is no current flowing between Vi and Vo. (The source of energy is the
OpAmp's power supply.) If it were not for the OpAmp, anything connected to
Vo would modify the value of Vi. This circuit is known as voltage buffer or
tension buffer.

Example 5.14. The MISO system in Figure 5.21 is known as subtractor, be-
cause

   R = (V2 − V±)/I2
   R = (V± − Vo)/I2
   R = (V1 − V±)/I1                                                        (5.46)
   R = (V± − 0)/I1

From the last two equations, we get 2V± = V1. From the first two equations,
and replacing this last result,

   V2 = 2V± − Vo ⇔ Vo = V1 − V2                                            (5.47)

5.4 Other components

Among the several other components that may be found in electrical systems,
we will study the model of the transformer, shown in Figure 5.22:

   VP/VS = NP/NS                                                           (5.48)

Here VP and VS are the tensions in the two windings, and NP and NS are
the corresponding numbers of turns in each winding. This is an ideal model; in
practice, there are losses, but we will not need to use a more accurate expression.

Figure 5.22: Transformer (source: Wikimedia commons).

Glossary
Et le professeur Lidenbrock devait bien s'y connaître, car il passait
pour être un véritable polyglotte. Non pas qu'il parlât couramment
les deux mille langues et les quatre mille idiomes employés à la sur-
face du globe, mais enn il en savait sa bonne part.

Jules Verne (1828  „1905), Voyage au centre de la Terre, 2

active component componente ativo
amplification amplificação
attenuation atenuação
capacitance capacidade elétrica, capacitância (bras.)
capacitor condensador, capacitor (bras.)
current corrente, intensidade (de corrente elétrica)
electric potential difference voltagem, tensão, diferença de potencial elétrico
flux linkage fluxo magnético total
impedance impedância
inductance indutância
inductor bobina, indutor
inverting amplifier amplificador inversor
inverter inversor
inverting amplifying summer somador amplificador inversor
inverting OpAmp AmpOp inversor
inverting summer somador inversor
inverting summing amplifier amplificador somador inversor
inverting summing circuit circuito somador inversor
inverting weighted summer somador inversor ponderado
non-inverter não-inversor
non-inverting OpAmp AmpOp não-inversor
OpAmp AmpOp
operational amplifier amplificador operacional
passive component componente passivo
potentiometer potenciómetro, resistência variável, reóstato
resistance resistência
resistor resistência, resistor (bras.)
rheostat potenciómetro, resistência variável, reóstato
subtractor subtrator
tension voltagem, tensão, diferença de potencial elétrico

(a) (b)

(c) (d)

(e) (f)

(g) (h)

Figure 5.23: Systems of Exercises 1 and 2.

tension buffer AmpOp seguidor de tensão, buffer de tensão
variable resistor potenciómetro, resistência variável, reóstato
voltage voltagem, tensão, diferença de potencial elétrico
voltage buffer AmpOp seguidor de tensão, buffer de tensão

Exercises
1. Find the equations describing the dynamics of the systems in Figure 5.23,
   and apply the Laplace transform to the equations to find the corresponding
   transfer function.
2. Again for the systems in Figure 5.23, find the transfer function directly
   from the impedances of the components, and apply the inverse Laplace
   transform to the transfer functions to find the corresponding equations.
3. Show that the differential equations modelling the circuit in Figure 5.24
   are similar to those of the mechanical system of Exercise 1 in Chapter 4.
   Explain why, using the concepts of effort and flow.
4. Find the mechanical systems equivalent to the circuits in Figure 5.25.
5. Find the transfer function of the circuit in Figure 5.12 from the impedance
   of the components for the following cases:
(a) Impedance Z1 is a resistor, impedance Z2 is a capacitor.

Figure 5.24: Circuit of Exercise 3.

(a) (b)

Figure 5.25: Systems of Exercise 4.

(b) Impedance Z1 is a capacitor, impedance Z2 is a resistor.


(c) Impedance Z1 is a resistor, impedance Z2 is an inductor.
(d) Impedance Z1 is an inductor, impedance Z2 is a resistor.
(e) Both impedances Z1 and Z2 are capacitors.
(f) Both impedances Z1 and Z2 are inductors.
(g) Impedance Z1 consists in a resistor and a capacitor in series, impedance
Z2 is a resistor.
(h) Impedance Z1 consists in a resistor and a capacitor in parallel, impedance
Z2 is a resistor.
(i) Impedance Z1 is a resistor, impedance Z2 consists in a resistor and
a capacitor in series.
(j) Impedance Z1 is a resistor, impedance Z2 consists in a resistor and
a capacitor in parallel.
(k) Both impedances Z1 and Z2 consist in a resistor and a capacitor in
series.
(l) Both impedances Z1 and Z2 consist in a resistor and a capacitor in
parallel.
(m) Impedance Z1 consists in a resistor and a capacitor in series, impedance
Z2 consists in a resistor and a capacitor in parallel.
(n) Impedance Z1 consists in a resistor and a capacitor in parallel, impedance
Z2 consists in a resistor and a capacitor in series.
(o) Impedance Z1 consists in a resistor and an inductor in series, impedance
Z2 is a resistor.
(p) Impedance Z1 consists in a resistor and an inductor in parallel,
impedance Z2 is a resistor.
(q) Impedance Z1 is a resistor, impedance Z2 consists in a resistor and
an inductor in series.
(r) Impedance Z1 is a resistor, impedance Z2 consists in a resistor and
an inductor in parallel.
(s) Both impedances Z1 and Z2 consist in a resistor and an inductor in
series.
(t) Both impedances Z1 and Z2 consist in a resistor and an inductor in
parallel.
(u) Impedance Z1 consists in a resistor and an inductor in series, impedance
Z2 consists in a resistor and an inductor in parallel.
(v) Impedance Z1 consists in a resistor and an inductor in parallel,
impedance Z2 consists in a resistor and an inductor in series.

Figure 5.26: Circuit of Exercise 6.

6. Find the transfer function of the circuit in Figure 5.26. Assume that all
resistors are equal.

7. How could you perform operation Vo = V1 − V2 using two OpAmps and


without resorting to the subtractor in Figure 5.21?

8. Design a circuit to perform the operation Vo = V1 + V2 − V3 + 2V4 − 3V5 .


Use only one OpAmp.

9. Design a circuit to perform the operation Vo = 10(V1 + V2 + (1/2)V3). Use


two OpAmps.

10. Modify the subtractor in Figure 5.21 so as to give:

(a) Vo = V1 − (1/3)V2
(b) Vo = 5(V1 − V2 )

11. Suppose you are using an OpAmp with power supply V S± = ±20 V as
comparator. Use Matlab to plot the expected output Vo for 0 s≤ t ≤ 10 s
and the following inputs:

(a) V + = sin(tπ) V and V − = 5 V


(b) V + = 5 V and V − = sin(tπ) V
(c) V + = 10 sin(tπ) V and V − = 5 V
(d) V + = 5 V and V − = 10 sin(tπ) V
Chapter 6

Modelling fluidic systems


Suppose a solid held above the surface of a liquid and partially im-
mersed: a portion of the liquid is displaced, and the level of the
liquid rises. But, by this rise of level, a little bit more of the solid is
of course immersed, and so there is a new displacement of a second
portion of the liquid, and a consequent rise of level. Again, this sec-
ond rise of level causes a yet further immersion, and by consequence
another displacement of liquid and another rise. It is self-evident
that this process must continue till the entire solid is immersed,
and that the liquid will then begin to immerse whatever holds the
solid, which, being connected with it, must for the time be consid-
ered a part of it. If you hold a stick, six feet long, with its end in
a tumbler of water, and wait long enough, you must eventually be
immersed. The question as to the source from which the water is
supplied – which belongs to a high branch of mathematics, and is
therefore beyond our present scope – does not apply to the sea. Let
us therefore take the familiar instance of a man standing at the edge
of the sea, at ebb-tide, with a solid in his hand, which he partially
immerses: he remains steadfast and unmoved, and we all know that
he must be drowned.
Lewis Carroll (1832  „1898), A tangled tale, Knot IX
In this chapter we are concerned with fluid flow in pipes (not with fluid
flow with a free surface). Fluidic systems can be accurately modelled using the
Navier-Stokes equations, which you learn in a different course. Fortunately, in
many cases of fluid flow in pipes it is possible to use simplified equations as
follows.

6.1 Energy, effort and flow

Energy E is given by the integral over distance x of the force F exerted by the
fluid:

   E = ∫_0^x F dx                                                          (6.1)

The force is equal to the product of the pressure p and the cross-sectional area
A, so

   F = pA ⇒ E = ∫_0^x pA dx                                                (6.2)

The volume flow rate (or volumetric flow rate) Q is the derivative of the
volume V = A x, given by

   Q = A dx/dt                                                             (6.3)

where we assume a constant A. With some abuse of notation, we can write
A = Q dt/dx and replace this in (6.2) to rewrite the integral in (6.1) as

   E = ∫_0^t pQ dt                                                         (6.4)


Table 6.1: Effort, flow, accumulators and dissipators in fluidic systems

                                                          Fluidic system                            SI
effort e                                                  pressure p                                Pa
flow f                                                    volume flow rate Q                        m³/s
effort accumulator                                        fluidic inductance with inertance L       kg m⁻⁴
accumulated effort ea = ∫ e dt                            fluidic moment Γ = ∫ p dt                 Pa s
relation between accumulated effort and flow ea = ϕ(f)    fluidic moment Γ = LQ
accumulated energy Ee = ∫ ea df                           kinetic energy of the flow Ee = ½ L Q²    J
flow accumulator                                          reservoir with capacitance C              m³/Pa
accumulated flow fa = ∫ f dt                              volume V = ∫ Q dt                         m³
relation between accumulated flow and effort fa = ϕ(e)    volume V = Cp
accumulated energy Ef = ∫ fa de                           potential energy of the flow Ef = ½ C p²  J
dissipator                                                fluidic resistance R                      kg s⁻¹ m⁻⁴
relation between effort and flow e = ϕ(f)                 p = RQ
dissipated energy Ed = ∫ f de                             Ed = ½ R Q²                               J

So pressure p and volume flow rate Q can be used as effort and flow. By an
understandable universal convention, Q is always considered as the flow, and p
as the effort.
Table 6.1 sums up the passing information and relations. The next section
presents the basic components mentioned in that Table.

6.2 Basic components of a fluidic system

The basic components of a fluidic system are the following:

1. A reservoir or tank, which may either have a free surface (see Figure 6.1)
   or not. Tanks of the first case are often used with liquids; closed tanks
   are the only option when the fluid is a gas, since the gas might otherwise
   escape, even if its density is higher than that of the air.
   In the case of a tank with a free surface, there must be a pipe at the
   bottom (otherwise the tank could not be emptied). The pressure p at
   that point, as you know, is

   p = ρgh                                                                 (6.5)

   where ρ is the fluid density, g is the acceleration of gravity, and h is the
   height of the fluid in the tank. But the volume of fluid in the tank is given
   by V = A h, and so, replacing h = V/A in (6.5) and solving in order to V,

   V = (A/(ρg)) p                                                          (6.6)

   where A/(ρg) is the capacitance C.
   In the case of reservoirs without a free surface, it can also be shown that
   V = pC, where the value of capacitance C will depend on whether the
   fluid is a liquid, a gas undergoing an isothermal compression or expansion,
   or a gas undergoing an adiabatic compression or expansion. We need not
   worry with that, as long as the value of C is known.

2. A fluidic inductance. This is in fact one of the two phenomena that
   take place in a pipe. Its model is an application of Newton's second law
   (4.1) to the fluid contained in a length ℓ of pipe (see Figure 6.2):

   A p = ρAℓ d²x/dt²                                                       (6.7)

   where A p is the force and ρAℓ the mass. The force is the product of the
   cross-sectional area and the pressure (or rather the difference of pressures
   at the two extremities of the fluid separated by length ℓ). Integrating both
   sides, and introducing the fluidic moment Γ = ∫ p dt,

   Γ = ρℓ dx/dt                                                            (6.8)

Figure 6.1: A water reservoir at the Évora train station. (Source: Wikimedia.)

Figure 6.2: A pipe with cross-sectional area A.

From (6.3) we know that dx/dt = Q/A, so

   Γ = (ρℓ/A) Q                                                            (6.9)

   where ρℓ/A is the inertance L.

3. A pressure drop. This is the other phenomenon taking place in any pipe,
   due to the resistance of viscous forces both between the fluid and the
   wall of the pipe and between fluid particles themselves. In another course
   you learn about the difference between laminar flow (i.e. a situation in
   which fluid particles move essentially in the direction of the flow only)
   and turbulent flow (i.e. a situation in which fluid particles move in a far
   more chaotic manner).
   Here it suffices to notice that in laminar flow the Hagen-Poiseuille equa-
   tion applies:

   p = (8µℓ/(πr⁴)) Q                                                       (6.10)

   where 8µℓ/(πr⁴) is the fluidic resistance R. Here p is the pressure drop
   over length ℓ of the pipe, µ is the dynamic viscosity, and r is the pipe
   radius. (If the cross-section of the pipe is not circular, then r = √(A/π).)
   This expression was first determined experimentally, and then proved
   from the Navier-Stokes equations; all that we need to worry about is the
   value of the fluidic resistance.
   The pressure drop is always higher for turbulent flow than for laminar
   flow, and the relation between p and Q is no longer linear. However, it
   may be linearised around a convenient point, so as to find an approximate
   value of resistance R = p/Q valid in some range of values of these variables.
   (See Figure 4.3 again.)
   The pressure also drops when the flow crosses valves, bends (or shouldered
   fittings), orifices, diameter reducers, etc. Model p = RQ is usually a good
   fit for those situations as well.
Remark 6.1. Pipes have both inertance and resistance. Of course, it may be
that one of the two is neglectable when compared to the other; but in reality
both are present.

Figure 6.3: System of Example 6.1.

Remark 6.2. The impedances of these components are as follows:

   P(s)/Q(s) = 1/(Cs)                                                      (6.11)
   P(s)/Q(s) = Ls                                                          (6.12)
   P(s)/Q(s) = R                                                           (6.13)

Pipe flow can be modelled putting together the equations describing these
components with the conservation of mass.
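Before the examples, it may help to get a feeling for the orders of magnitude
involved. The sketch below evaluates (6.6), (6.9) and (6.10) for an open water
tank and a water-filled pipe; all dimensions are assumed, not taken from any
real installation.

>> rho = 1000; g = 9.8; mu = 1e-3;     % water density, gravity, dynamic viscosity (assumed)
>> Atank = 2;                          % tank cross-section [m^2] (assumed)
>> ell = 10; r = 0.05;                 % pipe length and radius [m] (assumed)
>> C = Atank/(rho*g)                   % capacitance of the tank, from (6.6)
>> L = rho*ell/(pi*r^2)                % inertance of the pipe, from (6.9)
>> R = 8*mu*ell/(pi*r^4)               % laminar fluidic resistance, from (6.10)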
Example 6.1. Consider the system in Figure 6.3, supplied with water by a
pump providing a pressure p, that flows through a long pipe with inertance L
and neglectable resistance, and either fills a tank with capacitance C or leaves
the system through a valve with resistance R. We want to know the pressure
below the tank pt.
Let q(t) be the volume flow rate through the long pipe, that is then divided
into the flow feeding the tank qt(t) and the flow through the valve qv(t). Using
the impedances, we get

   Q(s) = (P(s) − Pt(s))/(Ls)
   Q(s) = Qt(s) + Qv(s)              ⇒   P(s) − Pt(s) = Ls Pt(s) (1/R + Cs)
   Qt(s) = Pt(s) Cs                      Q(s) = Pt(s) (1/R + Cs)           (6.14)
   Qv(s) = Pt(s)/R

The first equation then gives the desired answer:

   P(s) = Pt(s) (1 + (L/R)s + LCs²) ⇔ Pt(s)/P(s) = 1/(1 + (L/R)s + LCs²)   (6.15)

Remark 6.3. Notice that (6.15) is similar to the model of a mass-spring-
damper system (4.9) or the model of an RLC system (5.24).
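The analogy can also be seen in simulation. In the sketch below the values of L,
R and C are once more assumed only for illustration.

>> L = 1e5; R = 1e4; C = 1e-4;         % inertance, resistance, capacitance (assumed)
>> G = tf(1, [L*C L/R 1]);             % Pt(s)/P(s), from (6.15)
>> step(G)                             % pressure below the tank after a step in p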
Remark 6.4. Liquids can be presumed to be incompressible, so ρ is constant
and independent from p. Thus (6.5) shows that there is a one-to-one relation
between p and h, where h is the hydraulic head. So, when the fluid is a liquid,
p is often replaced by ρgh. To do this, of course, the density of the liquid used
in the system must be fixed in advance; the most usual cases are water, brine,
and crude oil.
Model (6.9) of a fluidic inductance tells us that ∫ p dt = Γ = LQ; applying
the Laplace transform, this becomes

   p/s = LQ ⇔ ρgh/s = LQ ⇔ h = (L/(ρg)) sQ                                 (6.16)

Here, L* = L/(ρg) is the inertance relating hydraulic head and flow; L is the
inertance relating pressure and flow. Notice that the SI units of L are kg m⁻⁴;
those of L* are s² m⁻².
Likewise, model (6.10) of fluidic resistance tells us that p = RQ; so

   ρgh = RQ ⇔ h = (R/(ρg)) Q                                               (6.17)

Here, R* = R/(ρg) is the resistance relating hydraulic head and flow; R is the
resistance relating pressure and flow. Notice that the SI units of R are
kg s⁻¹ m⁻⁴; those of R* are s m⁻².

Figure 6.4: System of Example 6.2.

Example 6.2. Consider the system in Figure 6.4, with two water reservoirs
fed by a pump that delivers a flow q, and connected by a pipe with neglectable
inertance and resistance R. We have

   q(t) = q1(t) + q2(t)                    Q(s) = A1 H1(s) s + A2 H2(s) s
   q1(t) = A1 ḣ1(t)                 ⇒     Q1(s) = A1 H1(s) s
   q2(t) = A2 ḣ2(t)                       Q2(s) = A2 H2(s) s               (6.18)
   (h2(t) − h1(t))/R = q1(t)               H2(s) − H1(s) = R A1 H1(s) s

Thus

   Q(s) = A1 H1(s) s + A2 H1(s)(R A1 s + 1) s ⇔ Q(s) = H1(s)(R A1 A2 s² + A1 s + A2 s)
   H2(s) = H1(s)(R A1 s + 1)

   ⇔ H1(s)/Q(s) = 1/(R A1 A2 s² + (A1 + A2) s)
     H2(s)/Q(s) = (R A1 s + 1)/(R A1 A2 s² + (A1 + A2) s)                  (6.19)
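Transfer functions (6.19) can be simulated in Matlab; the tank areas, the
resistance and the pump flow below are assumed values used only to illustrate
the behaviour.

>> A1 = 1; A2 = 2; R = 100;            % tank areas [m^2] and resistance (assumed)
>> den = [R*A1*A2, A1 + A2, 0];        % common denominator in (6.19)
>> G1 = tf(1, den);                    % H1(s)/Q(s)
>> G2 = tf([R*A1 1], den);             % H2(s)/Q(s)
>> t = 0 : 1 : 2000;
>> q = 0.01*(t <= 500);                % pump delivers 0.01 m^3/s during 500 s, then stops
>> plot(t, lsim(G1, q, t), t, lsim(G2, q, t)), legend('h_1', 'h_2')

Because of the pole at the origin the levels do not return to zero: once the pump
stops, the two tanks simply equalise and keep that level.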

Remark 6.5. When modelling pipe flows, pay special attention to non-linearities.
All pipes have some maximum value that the flow can take. So in a pipe like
that in Figure 6.4 we have −Qmax ≤ Q ≤ Qmax. But in some systems pipes
feed tanks from above: see for instance Figure 6.8 below, corresponding to Ex-
ercise 2. In that case, 0 ≤ Q3 ≤ Q3 max. In other words, the flow can fill up the
tank below, but it cannot empty it.
Neglecting non-linearities leads to absurd models and ludicrous conclusions,
such as those mocked by mathematician Charles Dodgson in the quote at the
beginning of this chapter.

6.3 Other components

Among the several other components that may be found in pipe flow systems,
the hydraulic press deserves a passing mention. Its principle is shown in
Figure 6.5. Because of (6.2), a similar pressure on both sides means that

   F1/A1 = F2/A2 ⇔ F2 = (A2/A1) F1                                         (6.20)

where F1 and F2 are the forces exerted on the two pistons, with areas A1 and
A2. This principle is used in presses such as that in Figure 6.6.

Glossary
Había en el puerto gran multitud de buques de todas clases y tamaños,
resplandeciendo entre ellos, llamando la atención y hasta excitando
la admiración y la envidia de los españoles, un enorme y hermosísimo
navío, construido con tal perfección, lujo y elegancia, que era una
maravilla.
Los españoles, naturalmente, tuvieron la curiosidad de saber quién
era el dueño del navío y encargaron al secretario que, sirviendo de
intérprete, se lo preguntase a algunos alemanes que habían venido a

Figure 6.5: Principle of the hydraulic press (source: Wikimedia).

Figure 6.6: Hydraulic press (source: Wikimedia).



Figure 6.7: System of Exercise 2.

bordo.
Lo preguntó el secretario y dijo luego a sus paisanos y camaradas:
 El buque es propiedad de un poderoso comerciante y naviero de
esta ciudad en que estamos, el cual se llama el señor Nichtverstehen.

Juan Valera (1824  „1905), Cuentos y chascarrillos andaluces,El señor


Nichtverstehen

brine água salgada (lit. salmoura)
fluidic inductance indutância fluídica
fluidic moment momento fluídico
fluidic resistance resistência fluídica
hydraulic head altura de coluna de fluido (de água, de água salgada, de crude)
hydraulic press prensa hidráulica
inertance indutância fluídica
pressure pressão
pressure drop perda de carga
reservoir reservatório, tanque
tank reservatório, tanque
volume flow rate caudal volumétrico
volumetric flow rate caudal volumétrico

Exercises
1. Consider the system from Example 6.1, shown in Figure 6.3. Find its
mechanical equivalent.

2. Consider the system in Figure 6.7, fed by a pump delivering volume flow
   rate q1(t). Tanks 1 and 2 are connected by a pipe with neglectable iner-
   tance and fluidic resistance R1; tanks 2 and 3 are emptied through valves
   with resistances R2 and R3 respectively. Find transfer functions H1(s)/Q1(s),
   H2(s)/Q1(s) and H3(s)/Q1(s).

3. Consider the system in Figure 6.8, fed by a pump delivering volume flow
   rate q1(t). Find transfer functions H1(s)/Q1(s), H2(s)/Q1(s), H3(s)/Q1(s)
   and Q5(s)/Q1(s).

Figure 6.8: System of Exercise 3.


Chapter 7

Modelling thermal systems


 Mais qu'entends-tu par le vide ? demanda Michel, est-ce le vide
absolu ?
 C'est le vide absolument privé d'air.
 Et dans lequel l'air n'est remplacé par rien ?
 Si. Par l'éther, répondit Barbicane.
 Ah ! Et qu'est-ce que l'éther ?
 L'éther, mon ami, c'est une agglomération d'atomes impondéra-
bles, qui, relativement à leurs dimensions, disent les ouvrages de
physique moléculaire, sont aussi éloignés les uns des autres que
les corps célestes le sont dans l'espace. Leur distance, cependant,
est inférieure à un trois-millionièmes de millimètre. Ce sont ces
atomes qui, par leur mouvement vibratoire, produisent la lumière
et la chaleur, en faisant par seconde quatre cent trente trillions
d'ondulations, n'ayant que quatre à six dix-millièmes de millimètre
d'amplitude.

Jules Verne (1828  „1905), Autour de la Lune, 5

This chapter concerns thermal systems. You will study heat transfer in
depth in another course, but simple cases can be modelled with a formulation
similar to that employed to the systems in the previous chapters.

7.1 Energy, effort and flow

There is, however, an important difference. By convention:

ˆ temperature T is the effort variable;

ˆ heat flow rate q is the flow variable.

However, heat is energy; it is in fact kinetic energy of molecules and atoms.
(As you may know, an omnipresent æther was postulated for some centuries
to explain the propagation of heat and especially of light, but this hypothesis,
of which you may find a popular explanation at the beginning of this chapter,
has been abandoned for about one century, having been by then contested for
decades because of experimental results with which it could not be reconciled.)
Consequently, it is not true in thermal systems that energy is the integral of the
product of effort and flow – see (4.30) –, as the variable used for flow is an
energy rate itself. And, as a result, the parallels with the other types of systems
we have been studying are not perfect.

7.2 Basic components of a thermal system

There is only one type of accumulator, that of heat, i.e. of flow:

   H(t) = m Cp (T(t) − T(0))                                               (7.1)

Here H(t) = ∫_0^t q(t) dt is the accumulated heat in mass m, which has a specific
heat Cp and is heated from temperature T(0) to temperature T(t).
Dissipation can take place in three different ways:

ˆ In conduction there is no macroscopic movement of the solid or fluid
  that undergoes the process. Heat is transmitted, or rather diffused, at
  the molecular and atomic levels. The heat flow, which we assume to be
  positive in the sense of increasing values of position x, is

   q(t) = −kA ∂T(x,t)/∂x                                                   (7.2)

  where A is the cross-sectional area and k is the thermal conductivity.
  Assuming that the temperature distribution over x is linear over a distance
  L, (7.2) becomes

   q(t) = (kA/L) (T(0,t) − T(L,t))                                         (7.3)

  where kA/L is the conduction heat transfer coefficient hc.
  Notice that the minus sign is gone because ∂T(x,t)/∂x = lim_{L→0} (T(L,t) − T(0,t))/L.
ˆ In convection there is macroscopic movement of the fluid where heat
  transfer is taking place. In solids matter cannot move in this way and
  consequently there can be no convection. If the fluid movement is due
  solely to the temperature gradients, this is called free convection; if
  fluid movement is due at least in part to some other reason (like fluid flow
  in pipes, or a blower), this is called forced convection. In any case,
  the heat flow, again assumed positive in the sense of increasing values of
  position x, is

   q(t) = hA (T(0,t) − T(L,t))                                             (7.4)

  where the product hA is the convection heat transfer coefficient hh. Here h
  is the convection heat transfer coefficient, and A is the cross-sectional area
  over which heat transfer takes place.

ˆ In radiation heat is propagated by the emission of photons. It is the only
  heat transmission that can take place in a vacuum. It is also the only one
  corresponding to a non-linear law:

   q(t) = Cr A (T1⁴ − T2⁴)                                                 (7.5)

  Here Cr is a proportionality constant that we need not delve into. This
  law, of course, can be linearised around some point, and the result will be
  approximately valid in a vicinity thereof; for instance:

   q(t) = Cr A (T1 + T2)(T1² + T2²) (T1 − T2)                              (7.6)

  where Cr A (T1 + T2)(T1² + T2²) is the radiation heat transfer coefficient hr.

Notice that all cases – conduction, convection, and linearised radiation –
can be reduced to the following form:

   ∆T = Rq                                                                 (7.7)

Here R is the thermal resistance and ∆T is the temperature difference.
The relations are summed up in Table 7.1. The corresponding impedances
are

   ∆T(s)/Q(s) = 1/(mCp s)                                                  (7.8)
   ∆T(s)/Q(s) = R                                                          (7.9)
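As a quick numerical illustration (all values below are assumed), these impedances
combine into a first order time constant, anticipating Example 7.1:

>> k = 1; A = 1e-4; L = 1e-3;          % thermal conductivity, area, thickness (assumed)
>> R = L/(k*A)                         % conduction resistance: (7.3) and (7.7) give R = L/(kA)
>> m = 1e-3; Cp = 140;                 % mass and specific heat (assumed)
>> tau = R*m*Cp                        % time constant of the first order model (7.8)-(7.9)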
Table 7.1: Effort, flow, accumulators and dissipators in thermal systems

                                                          Thermal system                                      SI
effort e                                                  temperature T                                       K or °C
flow f                                                    heat flow rate q                                    W
effort accumulator                                        –
accumulated effort ea = ∫ e dt                            –
relation between accumulated effort and flow ea = ϕ(f)    –
accumulated energy Ee = ∫ ea df                           –
flow accumulator                                          heat accumulator with mass m and specific heat Cp   kg × J kg⁻¹ K⁻¹
accumulated flow fa = ∫ f dt                              heat H = ∫ q dt                                     J
relation between accumulated flow and effort fa = ϕ(e)    heat H = m Cp T
accumulated energy Ef = ∫ fa de                           –
dissipator                                                thermal resistance with resistance R                K/W
relation between effort and flow e = ϕ(f)                 T = Rq
dissipated energy Ed = ∫ f de                             –



Figure 7.1: A thermometer immersed in a fluid.

Figure 7.2: Electrical circuit equivalent to the thermal system in Figure 7.1.


Figure 7.3: Evolution of temperature Tt (t) in the system Figure 7.1 when Tt (t) <
Tf (left) and when Tt (t) > Tf (right).

Example 7.1. The reading of a mercury or alcohol thermometer at time t = 0
is Tt(0). At that instant, it is immersed in a fluid at temperature Tf (see
Figure 7.1). Let the thermal resistance of the glass be R, and the specific heat
of mass m of mercury or alcohol be Cp. How does Tt(t) evolve?
This can be seen using an equivalent electrical circuit. Both temperatures
become tensions; the thermal resistance of the glass becomes a resistance R and
the heat accumulator becomes a capacitor C. So the equivalent circuit is the
one in Figure 7.2. The corresponding transfer function – remember (5.16) –
is

   Vo(s)/Vi(s) = 1/(1 + RCs)                                               (7.10)

Temperature Tf is constant and is applied from t = 0 on, so the Laplace
transform of the input is

   L[(Tf − Tt(0)) H(t)] = (Tf − Tt(0))/s                                   (7.11)

Notice that the amplitude of the temperature change is Tf − Tt(0); since the
Heaviside function begins at 0 and ends in 1, we must multiply it by Tf − Tt(0)
and take into account the different initial value also when calculating the output
we want to know:

   Tt(t) − Tt(0) = L⁻¹[ (Tf − Tt(0))/s × 1/(1 + RmCp s) ]
                 = (Tf − Tt(0)) L⁻¹[ (1/(RmCp)) / (s (1/(RmCp) + s)) ]
                 = (Tf − Tt(0)) (1 − e^(−t/(RmCp)))                        (7.12)

We conclude that temperature Tt(t) begins at Tt(0), ends at Tf, and changes
exponentially with time. This is illustrated in Figure 7.3.
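The exponential evolution (7.12) can be plotted in Matlab; the resistance, mass,
specific heat and temperatures below are assumed values.

>> R = 10; m = 1e-3; Cp = 140;         % assumed values
>> Tt0 = 20; Tf = 37;                  % initial reading and fluid temperature (assumed)
>> G = tf(1, [R*m*Cp 1]);              % first order model, as in (7.10)
>> t = 0 : 0.01 : 10;
>> Tt = Tt0 + lsim(G, (Tf - Tt0)*ones(size(t)), t);   % (7.12)
>> plot(t, Tt), xlabel('t [s]'), ylabel('T_t')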

Glossary
Aber in Phantásien waren fast alle Wesen, auch die Tiere, min-
destens zweier Sprachen mächtig: Erstens der eigenen, die sie nur
mit ihresgleichen redeten und die kein Auÿenstehender verstand, und
zweitens einer allgemeinen, die man Hochphantäsisch oder auch die
Groÿe Sprache nannte. Sie beherrschte jeder, wenngleich manche sie
in etwas eigentümlicher Weise benützten.

Michael Ende (1929  „1995), Die unendliche Geschichte von A biz Z, 2

conduction condução
convection convecção
forced convection convecção forçada
free convection convecção livre
heat accumulator acumulador de calor
heat flow rate fluxo de calor
radiation radiação
specific heat calor específico
thermal resistance resistência térmica
temperature temperatura

Exercises
1. Consider the Wave Energy Converter of Figure 3.2, when submerged in
   sea water at constant temperature Tsea. Assume that the air inside the
   device has a homogeneous temperature Tair(t) and is heated by the de-
   vice's Power Take-Off (PTO) mechanism, at temperature TPTO(t), that
   delivers an electrical power P(t) to the electrical grid with an efficiency
   η. (See Figure 7.4.) Heat transfer from the PTO to the air inside the
   device takes place by convection with a constant coefficient h and for area

Figure 7.4: The Wave Energy Converter of Exercise 1.

   Cair and CPTO are the thermal capacities of the air and the PTO, that
   have masses mair and mPTO. Let T(t) = Tair(t) − TPTO(t). Find transfer
   function T(s)/P(s).

2. Explain which of the electrical systems in Figure 5.23 have a thermal


equivalent and which do not, and why.
Chapter 8

Modelling interconnected and


non-linear systems
You are right in thinking that he is under the British government.
You would also be right in a sense if you said that occasionally he
is the British government.

My dear Holmes!


I thought I might surprise you. Mycroft draws four hundred and
fifty pounds a year, remains a subordinate, has no ambitions of any
kind, will receive neither honour nor title, but remains the most
indispensable man in the country.
But how?
Well, his position is unique. He has made it for himself. There
has never been anything like it before, nor will be again. He has
the tidiest and most orderly brain, with the greatest capacity for
storing facts, of any man living. The same great powers which I
have turned to the detection of crime he has used for this particular
business. The conclusions of every department are passed to him,
and he is the central exchange, the clearinghouse, which makes out
the balance. All other men are specialists, but his specialism is
omniscience. We will suppose that a minister needs information as
to a point which involves the Navy, India, Canada and the bimetallic
question; he could get his separate advices from various departments
upon each, but only Mycroft can focus them all, and say offhand how
each factor would affect the other.

Sir Arthur Conan Doyle (1859  „1930), His last bow,


The Adventure of
the BrucePartington Plans

This chapter presents an overview of the modelling process.

8.1 Energy, effort and flow

Table 8.1 presents the impedances of all the flow accumulators, effort accumu-
lators, and energy dissipators, summing up Tables 4.1, 5.1, 6.1, and 7.1, and
showing clearly the existing parallelism between systems of different types.
This is the place to notice that effort variables are measured in relation to
an arbitrary value that serves as zero:

ˆ In Table 8.1 this is explicit for thermal systems, since temperature is
  denoted as ∆T, as what matters is the temperature difference.

ˆ In the case of electrical systems, what matters is always the electrical
  tension at the extremities of the component.

ˆ In the case of pipe flow, resistance and inductance depend on the pressure
  difference at the extremities. Reservoirs with a free surface also depend
  on a pressure difference, between the pressure of the liquid at the bottom
  and the atmospheric pressure.

Table 8.1: Effort, flow, accumulators and dissipators in different types of systems

Type of system      Mechanical, translation   Mechanical, rotation   Electrical           Fluidic               Thermal
effort e            velocity ẋ                angular velocity ω     voltage U            pressure P            temperature T
flow f              force F                   torque τ               current I            volume flow rate Q    heat flow rate q
effort accumulator  spring                    angular spring         inductor             fluidic inductance    –
  impedance         sX(s)/F(s) = s/K          sΩ(s)/T(s) = s/κ       U(s)/I(s) = Ls       P(s)/Q(s) = Ls        –
flow accumulator    mass                      moment of inertia      capacitor            reservoir             heat accumulator
  impedance         sX(s)/F(s) = 1/(Ms)       sΩ(s)/T(s) = 1/(Js)    U(s)/I(s) = 1/(Cs)   P(s)/Q(s) = 1/(Cs)    ∆T(s)/Q(s) = 1/(mCp s)
dissipator          damper                    rotary damper          resistor             fluidic resistance    thermal resistance
  impedance         sX(s)/F(s) = 1/b          sΩ(s)/T(s) = 1/b       U(s)/I(s) = R        P(s)/Q(s) = R         ∆T(s)/Q(s) = R

ˆ In the case of mechanical systems, the energy dissipated by a damper
  depends on the relative velocities of its extremities, and the energy accu-
  mulated by a spring depends on the relative position of its extremities.

Notice that there may be values of these variables that we can think of as ab-
solute zeros, such as temperature −273.15 °C = 0 K, pressure 0 Pa of complete
vacuum, or position and velocity measured in an inertial frame of reference.
Still, it is often far more practical to use other values, such as atmospheric
pressure, room temperature, or resting position, as zero.

Example 8.1. A 300 kg dirigible balloon flies at constant altitude z = 200 m,
because its impulsion cancels its weight. It can move vertically thanks to two
electrical propulsors, each of which provides a force given by Fp(t) = γU(t),
where U(t) is the tension applied (control input) and the gain is γ = 15 N/V
(the force is upwards when U > 0). When the balloon moves, there is a viscous
drag force with coefficient c = 30 N s/m. How does the altitude change with
time when a 20 V tension is applied during 10 s?
A balance of forces shows that

   300 z̈(t) = 2 × 15 U(t) − 30 ż(t)                                        (8.1)

where 300 z̈(t) is mass times acceleration, 2 × 15 U(t) is the force of the two
propulsors, and 30 ż(t) is the drag force.
We know that initial conditions are z(0) = 200 and ż(0) = 0, so we could be
tempted to apply the Laplace transform as

   300 (Z(s)s² − 200s) = 30 U(s) − 30 (Z(s)s − 200)                        (8.2)

Rearranging terms,

   (300s² + 30s) Z(s) = 30 U(s) + 6×10⁴ s + 6×10³                          (8.3)
   ⇔ Z(s) = U(s)/((10s + 1)s) + (2×10³ s + 2×10²)/((10s + 1)s)
           = U(s)/((10s + 1)s) + 2×10²/s

Notice that it is impossible to find a transfer function Z(s)/U(s) relating the
(Laplace transforms of) the input and the output. To obtain a transfer function,
make z*(t) = z(t) − z(0), and then

   300 Z*(s)s² = 30 U(s) − 30 Z*(s)s ⇔ Z*(s)/U(s) = 1/((10s + 1)s)         (8.4)

The result will of course be the same, but this allows us to use many results
established for transfer functions, such as those in Chapters 9 and 11. It also
allows us to use Matlab to find the answer as follows:

>> G = tf(1, [10 1 0]);
>> Ts = 0.001; t = 0 : Ts : 50;
>> U = zeros(size(t)); U(1:10/Ts) = 20*ones(1, 10/Ts);
>> z = lsim(G, U, t); z = z + 200;
>> figure, plot(t,z)
>> xlabel('t [s]'), ylabel('z [m]')

Notice how we had to add 200 to the result (or else we would have to bear in
mind that the plot would show the variation around 200 m). See Figure 8.1.

Remark 8.1. Remember that we already did something similar in Example 7.1.

8.2 System interconnection

Transfer functions are of great aid when modelling several interconnected sys-
tems, of the same or of different types.

Example 8.2. Consider the system in Figure 8.2. The force exerted by the
inductance in the handle that undergoes displacement x2 is given by F2(t) =
αi(t), where i(t) is the current in the inductance. Find X1(s)/Vi(s).

Figure 8.1: Results of Example 8.1.

Figure 8.2: System of Example 8.2.

We can of course write all the equations, and obtain the desired result with
successive replacements. Transfer functions allow us to model each system sep-
arately, making such replacements easier.
As to the electrical system, remembering (5.48),

   R + Ls = (n2/n1) Vi(s) / I(s) ⇔ I(s)/Vi(s) = (n2/n1) / (R + Ls)         (8.5)

As to the lever, and letting F1 be the force exerted on mass m1,

   F1(t) a = F2(t) b ⇔ F1(s)/F2(s) = b/a                                   (8.6)

As to the mass,

   F1(t) − K x1(t) = m1 ẍ1(t) ⇔ X1(s)/F1(s) = 1/(m1 s² + K)                (8.7)

Finally,

   X1(s)/Vi(s) = [X1(s)/F1(s)] [F1(s)/F2(s)] [F2(s)/I(s)] [I(s)/Vi(s)]
               = (b/a) α (n2/n1) / ((m1 s² + K)(R + Ls))                   (8.8)

This way, we are also able to study each transfer function separately, analysing
its influence in the final result.
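In Matlab, each block of (8.8) can be created separately and the cascade obtained
by multiplication; all numerical values below are assumed, just to show the
procedure.

>> n2n1 = 2; R = 10; L = 0.5;          % transformer ratio n2/n1, resistance, inductance (assumed)
>> a = 0.2; b = 0.4; alpha = 5;        % lever arms and force constant (assumed)
>> m1 = 1; K = 100;                    % mass and spring constant (assumed)
>> Ge = tf(n2n1, [L R]);               % I(s)/Vi(s), from (8.5)
>> Gm = tf(1, [m1 0 K]);               % X1(s)/F1(s), from (8.7)
>> G = (b/a)*alpha*Gm*Ge               % X1(s)/Vi(s), the cascade (8.8)

Each block can also be analysed on its own (with step, for instance), which is
the point of modelling the subsystems separately.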
Bond graphs are another tool that can be used to assist the modelling
of interconnected systems. They consist in a graphical representation of what
happens with energy in a system, based upon the concepts of effort and flux.
These are written above and below arrows (of which, by convention, only half the

Figure 8.3: Two bond graphs of systems where elements have the same flux and
there is an effort junction. Top: electrical circuit. Notice that V1 = (V1 − V2) +
(V2 − V3) + V3. Bottom: fluidic system. Since the pipe has both resistance and
inductance, the pressure change from the pump delivering a constant pressure
P to the bottom of the reservoir where the hydraulic head is h and the pressure
is ρgh is split into two, as if the fluid would first go through an inductance
without resistance and then through a resistance without inductance, so that
P = (P − P1) + (P1 − ρgh) + ρgh.

tip is drawn). Figure 8.3 shows two examples of bond graphs in which several elements have the same flux and different efforts; the corresponding junction of the several efforts in one system is by convention denoted by number 1. Figure 8.4 shows two examples of bond graphs in which several elements have the same effort and different flows; the corresponding junction of the several flows in one system is by convention denoted by number 0. Also notice how sources of energy are denoted by SE. Finally, the product of what is above and below each arrow (the effort and the flow) will be that element's instantaneous power, showing how energy is distributed over the system. We will not study bond graphs more complicated than these, nor further explore the ability of this graphical tool to assist in the modelling.

8.3 Dealing with non-linearities


Non-linearities can be classified as hard or soft, as they are respectively more or less severe. Though no uniform definition is universally accepted, we will say that

ˆ a soft non-linearity is one that is continuous and differentiable,

ˆ a hard non-linearity is not differentiable, or even not continuous.

Figure 8.5 presents two examples.


Non-linearities are very common. They may be part of the design of a system, even of a control system. In Chapter 28 we will learn how to deal with non-linearities in control systems. What is important here is to notice that soft non-linearities can be approximated by a first order approximation around the operating point. Estimating how large the approximation error may be is important; we will do that in Chapters 12–14.
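In other words, for a soft non-linearity y = f(x) and an operating point x̄, the linear model used is the first order Taylor approximation

f(x) ≈ f(x̄) + (df/dx)|_{x=x̄} (x − x̄)

so that the deviation of the output around f(x̄) is proportional to the deviation of the input around x̄. This is what is done in the following example and in the exercises at the end of the chapter.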

Example 8.3. In Figure 8.6, mass m = 10 kg rests on a non-linear spring and is pulled by force F applied simultaneously on a linear spring with k = 10³ N/m and on a linear damper with b = 500 N s/m. The non-linear force of the spring is given by Fk = 5000 − 500/(∆y + 0.1) (SI), where ∆y is the variation of length

Figure 8.4: Two bond graphs of systems where elements have the same effort and there is a flux junction. Top: electrical circuit. Notice that I = I1 + I2 + I3. Bottom: mechanical system. Notice that F − Kx − Bẋ = Mẍ ⇔ F = Kx + Bẋ + Mẍ.


Figure 8.5: Left: hard non-linearities (dead zone and saturation; in practice limits need not be symmetric for positive and negative values, though this assumption is frequent). Right: soft non-linearity and one of its linear approximations.

Figure 8.6: System of Example 8.3.




Figure 8.7: Non-linear force of Example 8.3.

around the uncompressed value. We want a linear model for this system around
nominal conditions of rest when F = 0.
Figure 8.7 shows the non-linear force. When F = 0, the non-linear spring is
compressed by the weight of m, which is −9.8 × 10 N (notice the minus sign,
since the weight is downwards and the positive sign of y corresponds to an
upwards direction), corresponding to
−98 = 5000 − 500/(∆y + 0.1)  ⇔  ∆y = −1.9 × 10⁻³ m    (8.9)
The linearised law is
Fk ≈ [dFk/d(∆y)]|_{∆y=−1.9×10⁻³ m} y = [500/(∆y + 0.1)²]|_{∆y=−1.9×10⁻³ m} y = 5.2 × 10⁴ y (SI)    (8.10)

where y = ∆y + 1.9 × 10−3 m, or, if you prefer, the variation of length around
∆y = −1.9 × 10−3 m. Furthermore, the linear components are assumed to have
no mass, and hence transmit force F to mass m. Thus

m ÿ = F − 5.2 × 10⁴ y (SI)    (8.11)

It should be stressed that linear model (8.11) is only an approximation.
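The operating point and the linearised coefficient can also be obtained numerically. A minimal sketch, where the function handle Fk is simply the non-linear spring law given above:

>> Fk = @(dy) 5000 - 500./(dy + 0.1);       % non-linear spring force
>> dy0 = fzero(@(dy) Fk(dy) + 98, 0)        % operating point, where Fk = -98 N
>> k_lin = 500/(dy0 + 0.1)^2                % derivative of Fk at the operating point

dy0 should be approximately −1.9 × 10⁻³ m and k_lin approximately 5.2 × 10⁴ N/m, as in (8.9) and (8.10).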

Glossary
Desejoso ainda o Fucarãdono, como mais douto q  os outros, de leuar
a sua auante cõ preg utas q
 embaraçasse o padre, lhe veyo arguindo
de nouo q  porq razão punha nomes torpes ao Criador de todas as
cousas, & aos Sãtos q  no ceo assistião em louuor seu, infamãdoo de
metiroso, pois elle, como todos criaõ, era Deos de toda a verdade ?
& para q  se entenda dõde naceo a este dizer isto, se ha de saber q 
na lingoa do Iapaõ se chama a metira diusa, & por q o padre quãdo
pregaua dezia q  a
qella ley q
 elle vinha denuciar era a verdadeira ley
de Deos, o qual nome elles pela grossaria da sua lingoa não podião
pronuciar taõ claro como nos & por dizere Deos dezião diùs, daquy
veyo que estes seruos do diabo tomaraõ motiuo de dizere aos seus
que o padre era demonio em carne q  vinha infamar a Deos põdo-lhe
nome de mentiroso: (. . . ) E porque tambem se saiba a razaõ porque
lhe este bonzo disse que punha nomes torpes aos santos, foy, porque
tinha o padre por custume quando acabaua de dizer missa rezar com
todos hua Ladaynha para rogar a N. Senhor pela augmetação da fé
Catholica, & nesta ladainha dezia sempre, como nella se custuma,
Sancte Petre ora pro nobis, Sancte Paule ora pro nobis, & assi dos

Figure 8.8: System of Exercise 1.

Figure 8.9: System of Exercise 3.

mais Santos. E por q tambem este vocablo santi na lingoa Iapoa


he torpe & infame, daquy veyo arguyr este ao padre q  punha maos
nomes aos Sãtos, (. . . ) & daly por diãte mãdou o padre q  se naõ
dissesse mais sancte, senaõ beate Petre, beate Paule, & assi aos outros
Santos, porque já dantes tinhaõ os bonzos todos perante el Rey feito
peçonha disto.

Fernão Mendes Pinto (*1509? – †1583), Peregrinaçam, CCXIII

bond graph grafo de ligação


hard non-linearity não-linearidade severa
linearisation, linearization (US) linearização
soft non-linearity não-linearidade suave

Exercises
1. Draw the bond graph of the system in Figure 8.8.

2. Draw the bond graph of the balloon from Example 8.1.

3. The system in Figure 8.9 is fed by a water pump with a characteristic curve given by P(t) = 10⁵ − 2 × 10⁶ Q(t), where P and Q are the pressure (Pa) and the volumetric flow (m³/s) provided.
The pipe has a 0.01 m² cross section and a length of 50 m. Its flow resistance is neglectable; its inertance is not.
The tank has a free surface and 1 m² cross-section.
The valve is non-linear and verifies relation

Qv(t) = 0.3 × 10⁻⁴ N(t) √(Pv(t))    (8.12)

where Qv is the flow through the valve (m³/s), N is the opening of the valve (dimensionless), and Pv is the pressure (Pa) at the entrance of the valve, which is also the pressure at the bottom of the tank.
In nominal conditions, Pv = 8 × 10⁴ Pa and Qv = 0.01 m³/s.

(a) Show that the pipe's inertance is L = 5 × 10⁶ kg m⁻⁴.


(b) Show that in nominal conditions the height of water in the tank is
h̄ = 8.16 m.

Figure 8.10: System of Exercise 4.

(c) Show that the non-linear relation of the valve (8.12) can be linearised as

Qv(t) = Q̄v + 0.085 (N(t) − N̄) + 6.25 × 10⁻⁷ (Pv(t) − P̄v)    (8.13)

(d) Show that the system can be modelled by (8.13) together with

Pv(t) − P̄v = ρg (h − h̄)
(Q(t) − Q̄) − (Qv(t) − Q̄v) = A d(h − h̄)/dt          (8.14)
(Pb(t) − P̄b) − (Pv(t) − P̄v) = L d(Q(t) − Q̄)/dt

(e) Find transfer function ∆Pv(s)/∆N(s), relating variations around nominal conditions.
4. In Figure 8.10, the lever with inertia I oscillates around the horizontal
position (i.e. θ(t) = 0) and is moved by torque τm . Mass m moves verti-
cally, at distance d from the fulcrum of the lever, inside a cylinder with
two springs of constant k, filled with incompressible oil. The pressure difference ∆p(t) between the two chambers of the cylinder moves the oil through fluidic resistance R. Thanks to oil lubrication, friction inside the cylinder is neglectable.
(a) Write linearised equations for the dynamics of the system.
(b) Find transfer function Θ(s)/Tm(s).

5. In Figure 8.11, the lever with inertia I is actuated by force F (t) and
supported on the other side by a spring and a damper. On the lever
there is a car with mass m, moving sideways due to gravity, without friction. When F = 0 and the car is on the fulcrum (i.e. its position is x = 0), the lever remains in the horizontal position. There is no friction at the fulcrum.
(a) Write linearised equations for the dynamics of the system.
(b) Find transfer function X(s)/F(s).

6. The lever in Figure 8.12, with neglectable mass, is moved by a torque τ


applied on the fulcrum, in the absence of which the lever is horizontal (i.e.
θ = 0). F is the force exerted on the fulcrum.

Figure 8.11: System of Exercise 5.

Figure 8.12: System of Exercise 6.

It is known that m1 = 1.5 kg, m2 = 2.0 kg, d1 = 0.6 m, d2 = 0.4 m, and b = 20 N s/m. The spring obeys the non-linear law in Figure 8.13, where δ is the length variation in mm (with δ > 0 corresponding to compression), and Fm is the resulting force in N.

(a) Show that, in nominal conditions, Fm = 1.63 N.
(b) Show from the plot in Figure 8.13 that the force of the spring can be linearised as Fm = 1.63 + 3.26 × 10³ x1.
(c) Write linearised equations for the dynamics of the system.
(d) Find transfer function F(s)/T(s).

7. In Figure 8.14, the fresh water (ρ = 1000 kg/m³) tank on the left is big enough to keep a constant liquid height h0 = 5 m, while the tank on the right has a 10 m² cross-section and a variable liquid height h1(t).
Flow qc(t) bleeds this tank and does not depend on pressure; flow q1(t) passes through a non-linear valve that verifies

q1(t) = 0.15 xv(t) √(∆p(t)) (SI)    (8.15)

where ∆p(t) is the pressure difference on both sides of the valve and xv(t) is mechanically actuated by h1(t) through a rigid lever with a = 0.4 m and b = 4 m.
In nominal conditions, qc(t) = q1(t) = 0.2 m³/s and h1(t) = 3 m.
(a) Show that the model of the flow through the valve (8.15) can be linearised around nominal conditions as

q1(t) = 21 xv(t) + 5.09 × 10⁻⁶ ∆p(t) (SI)    (8.16)

(b) Write linearised equations for the dynamics of the system.
(c) Find transfer function ∆H1(s)/Qc(s).

Figure 8.13: Non-linear law of the spring of Exercise 6.

Figure 8.14: System of Exercise 7.


Part II

Systems theory


To study and not think is a waste. To think and not study is dan-
gerous.

Confucius (*551 c. BC – †479 c. BC), Analects, II 15 (transl. A. Charles


Muller)

In this part of the lecture notes:

Chapter 9 develops the very important notion of transfer function of a system through
the representation of interconnected systems as blocks, in so-called block
diagrams.

Chapter 10 is dedicated to the study of time and frequency responses in general.

Chapter 11 studies the time and frequency responses of different systems.

Here is what you need to know beforehand to follow these chapters:

ˆ The Laplace and Fourier transforms, from Chapter 2;

ˆ Transfer functions, from Sections 4.1 and 4.2 of Chapter 4.


Chapter 9

Transfer functions and block


diagrams
In this chapter we will show how transfer functions can be used together with
a graphic representation of system interconnection called block diagram. We
conclude using block diagrams as a tool for a short introduction to control.

9.1 More on transfer functions


Remember Denition 4.1 about what is a transfer function of a SISO system
modelled by a dierential equation: it is the ratio between the Laplace transform
of the output and the Laplace transform of the input, assuming initial conditions
equal to zero. Also remember that behind each transfer function there is a
dierential equation, and that dierential equations are models of real things. Transfer functions are dif-
Using transfer functions, we can easily study the behaviour of a system ferential equations
abstracting from its physical reality. This is the approach we will take from now
on. This said, notice that remembering the actual system that is being studied
can be useful to check if results are possible or not. Remember that models
approximate reality, not the other way round. Also remember Example 4.1
about the spring stretched to an impossible length (it breaks, of course), or
Remark 6.5 about pipes where the ow cannot be negative. We would not have
found that just by looking at our models, which are linear.
Theorem 9.1. The transfer function of a SISO LTI continuous in time can be Ratio of polynomials in s
expressed as the ratio of two polynomials in s.
Proof. Let the input of the SISO system be u(t) and its output be y(t). Because
the system is LTI and continuous in time, it is modelled by a linear dierential
equation:
a0 y(t) + a1 dy(t)/dt + a2 d²y(t)/dt² + a3 d³y(t)/dt³ + ... = b0 u(t) + b1 du(t)/dt + b2 d²u(t)/dt² + b3 d³u(t)/dt³ + ...
⇔ Σ_{k=0}^{n} a_k d^k y(t)/dt^k = Σ_{k=0}^{m} b_k d^k u(t)/dt^k    (9.1)

In the last expression, n and m are the highest derivative orders of the equa-
tion. Assuming zero initial conditions and applying the Laplace transform, this
becomes
a0 Y(s) + a1 Y(s)s + a2 Y(s)s² + a3 Y(s)s³ + ... = b0 U(s) + b1 U(s)s + b2 U(s)s² + b3 U(s)s³ + ...
⇔ Σ_{k=0}^{n} a_k Y(s) s^k = Σ_{k=0}^{m} b_k U(s) s^k    (9.2)

Rearranging terms,
Y(s)/U(s) = (b0 + b1 s + b2 s² + b3 s³ + ...)/(a0 + a1 s + a2 s² + a3 s³ + ...) = Σ_{k=0}^{m} b_k s^k / Σ_{k=0}^{n} a_k s^k    (9.3)


Remark 9.1. Some authors change the order of the coefficients, and instead of (9.3) write

Y(s)/U(s) = Σ_{k=0}^{m} b_{m−k} s^k / Σ_{k=0}^{n} a_{n−k} s^k    (9.4)

This is a mere detail of notation.


Remark 9.2. (9.3) corresponds to an innite number of representations of a
same transfer function. It suces to multiply both numerator and denominator
Normalising transfer func- by a constant. But it is common to normalise coecients so that a0 = 1, or
tion coecients an = 1, or b0 = 1.
Example 9.1. Consider the microprecision control setup test in Figure 3.9 from Example 3.19. The transfer function from one of the actuators to the position of the mass has been identified as

G(s) = 9602/(s² + 4.27s + 7627)    (9.5)

This was normalised so that a2 = 1, n = 2. We could also normalise a0 or b0:

G(s) = 1.2589/(131.11 × 10⁻⁶ s² + 559.85 × 10⁻⁶ s + 1) = 1/(104.14 × 10⁻⁶ s² + 444.70 × 10⁻⁶ s + 0.7943)    (9.6)
It is easy to find the differential equation from a transfer function. When the transfer function is represented merely by a letter, meaning that it is a function of s, as in (9.5) above, it still corresponds to the ratio of the Laplace transforms of output and input.
Example 9.2. (9.5) can be rewritten as

Y(s)/U(s) = 9602/(s² + 4.27s + 7627)  ⇔  Y(s)(s² + 4.27s + 7627) = 9602 U(s)  ⇔  Y(s)s² + 4.27 Y(s)s + 7627 Y(s) = 9602 U(s)    (9.7)

which is the Laplace transform of the differential equation governing the plant:

y''(t) + 4.27 y'(t) + 7627 y(t) = 9602 u(t)    (9.8)

Proper transfer function Denition 9.1. A transfer function is proper if the order of the polynomial
in the numerator is equal to or less than the order of the polynomial in the
denominator.
Strictly proper transfer A transfer function is strictly proper if the order of the polynomial in the
function numerator is less than the order of the polynomial in the denominator.
In the notation of (9.3), the transfer function is proper if m ≤ n, and strictly
proper if m < n.
For reasons we shall address in Chapters 11 and 25, we will be working almost
always with proper transfer functions, and most of the times with strictly proper
transfer functions.
Order of a transfer func- Denition 9.2. The order of a transfer function is the highest order of the
tion polynomials in the numerator and the denominator. If the transfer function is
proper, its order is the order of the denominator.
Remark 9.3. The order of a transfer function is also the order of the dierential
equation from which it was formed. In fact, sk corresponds to a derivative of
order k .
Remark 9.4. Notice that some transfer functions can be simplied because
numerator and denominator have common factors. Eliminating them reduces
the order of the transfer function.

Example 9.3. Here are examples of proper transfer functions of:

ˆ Order 0

Ga(s) = 20    (9.9)

ˆ Order 1

Gb(s) = 19/(s + 18)    (9.10)
Gc(s) = (17s + 16)/(s + 15)    (9.11)
Gd(s) = 14/s    (9.12)
Ge(s) = (13s + 12)/s    (9.13)

ˆ Order 2

Gf(s) = 11/(s² + 10s + 9)    (9.14)
Gg(s) = (8s + 7)/(s² + 6s + 5)    (9.15)
Gh(s) = (4s² + 3s + 2)/(s² + s − 1)    (9.16)
Gi(s) = (s² − 2s + 1)/s²    (9.17)
Gj(s) = (s² − 3s − 4)/(s² − 5s − 6)    (9.18)

They have all been normalised so that the coefficient of the highest order monomial in the denominator is 1 (i.e. an = 1). Transfer functions Gb(s), Gd(s), Gf(s), Gg(s), and Gi(s) are strictly proper; the other ones are not.
Gj(s) is of order 2 but can be simplified and become of order 1:

Gj(s) = (s² − 3s − 4)/(s² − 5s − 6) = ((s − 4)(s + 1))/((s − 6)(s + 1)) = (s − 4)/(s − 6)    (9.19)
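In Matlab, this kind of pole–zero cancellation can be performed with function minreal, which removes common factors up to a numerical tolerance. A minimal sketch for Gj(s):

>> s = tf('s');
>> Gj = (s^2 - 3*s - 4)/(s^2 - 5*s - 6);
>> zpk(minreal(Gj))       % the common factor (s+1) is cancelled, leaving (s-4)/(s-6)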
Transfer functions are often put in the following form, that explicitly shows
the zeros of the transfer function (i.e. the zeros of the polynomial in the nu- Zeros
merator) and the poles of the transfer function (i.e. the zeros of the polynomial Poles
in the denominator):
Y(s)/U(s) = bm (s − z1)(s − z2)(s − z3)··· / (an (s − p1)(s − p2)(s − p3)···) = bm ∏_{k=1}^{m} (s − zk) / (an ∏_{k=1}^{n} (s − pk))    (9.20)
Here the zeros are zk , k = 1, 2, . . . m and the poles are pk , k = 1, 2, . . . n. Be-


cause both inputs and outputs are real, transfer function coecients are real,
and consequently the poles and zeros are either real or pairs of complex conju-
gates. (Remember Remark 2.6.) So in (9.20) it is usual to multiply such pairs,
presenting a second order term instead of two complex terms.
Example 9.4. The second order transfer functions in Example 9.3 can be rewritten as

Gf(s) = 11/((s + 9)(s + 1))    (9.21)
Gg(s) = (8s + 7)/((s + 5)(s + 1))    (9.22)
Gh(s) = 4 (s + (3 + √23 j)/8)(s + (3 − √23 j)/8) / ((s + (1 + √5)/2)(s + (1 − √5)/2)) = (4s² + 3s + 2)/((s + (1 + √5)/2)(s + (1 − √5)/2))    (9.23)
Gi(s) = (s − 1)²/s²    (9.24)

For Gj (s), see (9.19). Notice that, in the case of Gh (s), only the second expres-
sion is usual; the rst one, explicitly showing the two complex conjugate zeros,
is not.

Remark 9.5. From Denition 9.2 results that the order of a proper transfer
function is the number of its poles.

The following Matlab functions use transfer functions in this form:

ˆ zpk creates a transfer function from its zeros, poles, and the ratio bm/an in (9.20), here called gain k, and also converts a transfer function created with tf into this form;

ˆ pole finds the poles of a transfer function;

ˆ tzero finds the zeros of a transfer function.

Example 9.5. Transfer function (9.15) or (9.22)

ˆ has one zero, 8s + 7 = 0 ⇔ s = −7/8 = −0.875,

ˆ has two poles, (s + 5)(s + 1) = 0 ⇔ s = −5 ∨ s = −1,

ˆ verifies k = bm/an = 8/1 = 8.

Matlab's command zpk It can be created, converted to a ratio of two polynomials as in (9.3), and
converted back to the (9.20) form as follows:

>> G_g = zpk(-7/8, [-5 -1], 8)

G_g =

8 (s+0.875)
-----------
(s+5) (s+1)

Continuous-time zero/pole/gain model.

>> G_g = tf(G_g)

G_g =

8 s + 7
-------------
s^2 + 6 s + 5

Continuous-time transfer function.

>> G_g = zpk(G_g)

G_g =

8 (s+0.875)
-----------
(s+5) (s+1)

Continuous-time zero/pole/gain model.


Matlab's commands Its poles and zeros can be found as follows:
pole and tzero
>> tzero(G_g)
ans =
-0.8750
>> pole(G_g)
ans =
-5
-1

Figure 9.1: Generic block.

It does not matter whether a transfer function was created with tf or with
zpk (or with any other function to create transfer functions that we did not
study yet): pole and tzero work just the same.
Another way of nding the poles and the zeros is to access the numerator and
the denominator, and then using roots to nd the roots of these polynomials.
The transfer function must be in the tf form this time, the only one that has
the num and den elds:
>> G_g = tf(G_g);
>> G_g.num{1}
ans =
0 8 7
>> roots(ans)
ans =
-0.8750
>> G_g.den{1}
ans =
1 6 5
>> roots(ans)
ans =
-5
-1
Notice that the {1} is necessary since Matlab presumes that the transfer
function is MIMO and thus has many transfer functions relating the many inputs
with the many outputs. The cell array index accesses the rst transfer function,
which, as the system is SISO, is the only one.
A very important property of transfer functions for the rest of this chapter has already been mentioned in Section 8.2 and illustrated in Example 8.2: if two systems G1(s) = y1(s)/u1(s) and G2(s) = y2(s)/u2(s) are interconnected so that the output of one is the input of the other, u2(s) = y1(s), then the resulting transfer function is

y2(s)/u1(s) = (y2(s)/u2(s)) (y1(s)/u1(s)) = G1(s) G2(s)    (9.25)
Remark 9.6. Remember that the multiplication of two Laplace transforms Multiplication of L is con-
does not correspond to the multiplication of the original functions, but rather volution in t
to their convolution, as we have shown in (2.78). Operation convolution is
dened in (2.76). (This is sometimes a source of confusion, because the sum of
two Laplace transforms is the sum of the original functions, as L is linear.)

9.2 Block diagrams


Block diagrams are graphical representations of the relations between vari-
ables and functions. In our case, functions will be systems, and variables will
be signals (which are themselves, as you remember, functions of time, or space).
Figure 9.1 shows a generic system (represented by a block) relating two signals
(represented by lines with arrows).
The practical thing to do for LTI systems is to represent them using their
transfer functions, and consequently to represent signals by their Laplace trans-
forms. The block in Figure 9.2 means that Y (s) = G(s)U (s). This is yet another
advantage of using the Laplace transform: the (Laplace transform of the) out-
put is the product of the (transfer function of the) system and the (Laplace
transform of the) input.

Figure 9.2: Linear block.

Figure 9.3: Block diagram of Example 9.6, corresponding to the mechatronic


system in Figure 8.2 from Example 8.2.

Example 9.6. The mechatronic system in Example 8.2 had four transfer func-
tions, as follows:
G1(s) = I(s)/Vi(s) = (n2/n1)/(R + Ls)    (9.26)
G2(s) = F2(s)/I(s) = α    (9.27)
G3(s) = F1(s)/F2(s) = b/a    (9.28)
G4(s) = X1(s)/F1(s) = 1/(m1 s² + K)    (9.29)
The corresponding block diagram is shown in Figure 9.3. In fact,
I(s) = G1 (s)Vi (s) (9.30)
F2 (s) = G2 (s)I(s) (9.31)
F1 (s) = G3 (s)F2 (s) (9.32)
X1 (s) = G4 (s)F1 (s) (9.33)

The Example above shows that several interconnected systems correspond


to a sequence of blocks. By similarity with electrical circuits, blocks in such a
Blocks in series sequence are said to be in series or in cascade. This is because of the property
Blocks in cascade of transfer functions illustrated in (9.25). Clearly, two blocks A and B in series
are equivalent to one block AB .
Adding signals is represented as shown in Figure 9.4, where
y = y1 + y2 = Au + Bu = (A + B)u (9.34)
Blocks in parallel By similarity with electrical circuits, blocks A and B are said to be in parallel.
Clearly, they are equivalent to one block A + B . Signal subtraction is indicated
similarly.
Feedback The block congurations in Figure 9.5, wherein the input of a block depends
on its output, is called feedback loop or just feedback: feedback, because
Loop the output is fed back to the block it originates from; and loop, because of the
Direct branch conguration of the diagram. In that Figure, A is called direct branch and
Feedback branch B feedback branch. The two block diagrams only dier because of the sign
aecting signal d(s):
Negative feedback ˆ when b(s) = a(s) − d(s), there is negative feedback;
Positive feedback ˆ when b(s) = a(s) + d(s), there is positive feedback.
Negative feedback is far more common; when feedback is mentioned without
specifying whether it is positive or negative, you can safely presume it is nega-
tive. Notice that, for both:

Figure 9.4: Block diagram with two blocks in parallel.

Figure 9.5: Block diagrams with feedback loops. Left: negative feedback. Right:
positive feedback.

ˆ the input of the loop is a(s);

ˆ the output of the loop is c(s); Output of the feedback loop

ˆ the input of the direct branch is b(s) = a(s) ∓ d(s);

ˆ the output of the direct branch is a(s);

ˆ the input of the feedback branch is c(s);

ˆ the output of the feedback branch is d(s).

Consequently, for negative feedback,

c = Ab = A(a − d) = A(a − Bc) = Aa − ABc  ⇒  c + ABc = Aa  ⇒  c = A/(1 + AB) a    (9.35)

and, for positive feedback,

c = Ab = A(a + d) = A(a + Bc) = Aa + ABc  ⇒  c − ABc = Aa  ⇒  c = A/(1 − AB) a    (9.36)
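These two expressions can be checked numerically: Matlab's feedback assumes negative feedback by default, and a third argument +1 selects positive feedback. A minimal sketch with two arbitrary blocks:

>> s = tf('s');
>> A = 1/(s + 1); B = 2;
>> zpk(feedback(A, B))        % negative feedback, A/(1+AB)
>> zpk(feedback(A, B, +1))    % positive feedback, A/(1-AB)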

Example 9.7. The centrifugal governor (see Figure 9.6) is a control system Centrifugal governor
which had widespread use to control the pressure in boilers. It rotates because
of the pressure of the steam. The faster it rotates, the more the two spheres
go up, thereby opening a valve relieving steam pressure. Consequently the
regulator spins slower, the balls go down, and this closes the valve, so pressure
is no longer relieved and goes up again. This is negative feedback: an increase
of any variable has as consequence the decrease of another variable that caused
the original increase, and vice-versa.

Example 9.8. Audio feedback (or howl) is an example of positive feedback.


Surely you must have heard it often, whenever there is a sound system amplify-
ing the sound detected by a microphone which is too close to the loudspeakers,
so that even background noise is amplied to the point of being received again
by the microphone and amplied further  see Figure 9.7. The amplitude of
the resulting sound does not become innite because at some point the amplier
and/or the loudspeakers saturate, but the howl can damage the equipment or,
more importantly, the listeners' auditory systems.

Figure 9.6: Centrifugal governor of a boiler in the former Barbadinhos water


pumping station (currently the Water Museum), Lisbon.

Figure 9.7: How audio feedback occurs.



Example 9.9. Biological processes provide numerous examples of both positive


and negative feedback. We will go back to this in Chapter 43.
The best way to simplify block diagrams is to write the corresponding equa-
tions and do so analytically.
Example 9.10. In the block diagram of Figure 9.8 we make
G1(s) = 2    (9.37)
G2(s) = (s + 10)/(s² + 0.5s + 5)    (9.38)
G3(s) = 1/(s + 1)    (9.39)
G4(s) = 20(s − 0.5)/((s − 1)(s − 3))    (9.40)
G5(s) = 1/s    (9.41)

(The block for G1 (s) is triangular because Simulink, which we will mention
below, uses triangles for constants, but this convention is unusual; when drawing
blocks by hand, they are all usually rectangles.) Then

e = G2 c = G2 G1 b = G2 G1 (a − d) = G1 G2 (a − G3 e) ⇒
G1 G2
⇒ (1 + G1 G2 G3 )e = G1 G2 a ⇒ e = a (9.43)
1 + G1 G2 G3
   
G1 G2 G1 G2 G5
h = G5 g = G5 (e + f ) = G5 a + G4 a = a + G4 G5
1 + G1 G2 G3 1 + G1 G2 G3
(9.44)
Finally, the whole block diagram corresponds to transfer function
s+10 1
h(s) 2 s2 +0.5s+5 s 20(s − 0.5)
= s+10 1 + (9.45)
a(s) 1 + 2 s2 +0.5s+5 s+1 s(s − 1)(s − 3)

It is usually a good idea to put the result in one of the forms (9.3) or (9.20).
Since calculations are rather complicated, we can use Matlab:
>> s = tf('s');
>> (2/s*(s+10)/(s^2+0.5*s+5))/(1+2/(s+1)*(s+10)/(s^2+0.5*s+5))+...
20*(s-0.5)/((s-1)*(s-3)*s)

ans =

22 s^7 + 45 s^6 + 200 s^5 + 617.5 s^4 + 380.5 s^3 + 1960 s^2 - 950 s

----------------------------------------------------------------------

s^9 - 2 s^8 + 8.25 s^7 - 10.75 s^6 - 55.25 s^5 + 33.75 s^4 - 350 s^3

+ 375 s^2

Continuous-time transfer function.

>> zpk(ans)

ans =

22 s (s+2.931) (s-0.4226) (s^2 + 0.5s + 5) (s^2 - 0.9629s + 6.972)


------------------------------------------------------------------
s^2 (s-3) (s+2.5) (s-1) (s^2 + 0.5s + 5) (s^2 - s + 10)

Continuous-time zero/pole/gain model.



Figure 9.8: Block diagram of Example 9.10.

As you can see from the last result, it is possible to eliminate s and s² + 0.5s + 5 from both the numerator and the denominator. So h(s)/a(s) is of sixth order.

In this way it is possible to nd several generic equivalences in block dia-


grams, such as those of Figure 9.9, which may be used in block diagram sim-
plication. The analytical simplication of block diagrams is however normally
easier and less prone to errors.
Matlab has commands to combine transfer functions:

ˆ operators + and * add and multiply transfer functions (remember that two
blocks in series correspond to the product of their transfer functions);

ˆ feedback receives the direct and the feedback branches and gives the
transfer function of the negative feedback loop.

Matlab's command Example 9.11. We can verify our calculations of Example 9.10 as follows:
feedback
>> G1 = 2;
>> G2 = (s+10)/(s^2+0.5*s+5);
>> G3 = 1/(s+1);
>> G4 = 20*(s-0.5)/((s-1)*(s-3));
>> G5 = 1/s;
>> loop_from_a_to_e = feedback(G1*G2, G3)

loop_from_a_to_e =

2 s^2 + 22 s + 20
--------------------------
s^3 + 1.5 s^2 + 7.5 s + 25

Continuous-time transfer function.

>> from_a_to_g = loop_from_a_to_e + G4

from_a_to_g =

22 s^4 + 34 s^3 + 73 s^2 + 411 s - 190


-----------------------------------------------
s^5 - 2.5 s^4 + 4.5 s^3 - 0.5 s^2 - 77.5 s + 75

Continuous-time transfer function.

>> from_a_to_h = from_a_to_g * G5

from_a_to_h =

22 s^4 + 34 s^3 + 73 s^2 + 411 s - 190


---------------------------------------------------
s^6 - 2.5 s^5 + 4.5 s^4 - 0.5 s^3 - 77.5 s^2 + 75 s

Figure 9.9: Block diagram simplication.



Figure 9.10: Commonly used blocks of Simulink.

Continuous-time transfer function.

>> zpk(from_a_to_h)

ans =

22 (s+2.931) (s-0.4226) (s^2 - 0.9629s + 6.972)


-----------------------------------------------
s (s+2.5) (s-3) (s-1) (s^2 - s + 10)

Continuous-time zero/pole/gain model.

This is the same transfer function we found above, with the poles and zeros
common to the numerator and denominator eliminated.

Simulink Matlab's most powerful tool for working with block diagrams is Simulink.
All the block diagrams above have been created with Simulink, and then
cropped so as not to show what Simulink calls source and sink, which are
not part of what is shown in standard block diagrams (you must not include
them when drawing block diagrams by hand). To use Simulink, access its
library in Matlab by clicking the corresponding button or typing simulink.
The library looks like very dierent in dierent versions of Matlab, but its
organisation is similar: the most commonly used blocks are in one of the several
subsets of the Simulink library; then there are libraries corresponding to the
Commonly used Simulink toolboxes you have installed. Figure 9.10 shows the blocks you will likely need:
blocks
ˆ The Transfer Fcn block, from the Continuous subset of the Simulink
library, creates a transfer function like function tf.

ˆ The Zero-Pole block, from the Continuous subset of the Simulink li-
brary, creates a transfer function like function zpk.

ˆ The LTI System block, from the Control System Toolbox library, cre-
ates a transfer function using function tf or function zpk. It is also pos-
sible just to put there a variable with a transfer function, created in the
command line.

ˆ The Sum block (name hidden by default), from the Math operations sub-
set of the Simulink library, is a sum point.

ˆ The Gain block, from the Math operations subset of the Simulink li-
brary, multiplies a signal by a constant.

ˆ The From Workspace block, from the Sources subset of the Simulink
library, provides a signal to run a simulation. The signal is either a struc-
ture (see the block's dialogue for details) or a matrix with time instants in
the rst column and the corresponding values of the signal in the second
column (these will be interpolated).

ˆ The Scope block, from the Sinks subset of the Simulink library, plots
the signal it receives. It can be congured to record the data to a variable
in the workspace, which is most practical to reuse it later.

ˆ The Mux block (from multiplexer, name hidden by default), from the
Signal Routing subset of the Simulink library, joins two (or more) sig-
nals into one. The result is a vector-valued signal. If two real-values signals
are multiplexed, the result is a vector-valued signal with dimension 2.

Figure 9.11: Simulink le of Example 9.12.


Figure 9.12: Output of Example 9.12.

To use a block, create an empty Simulink le and drag it there. Connect blocks
with arrows by clicking and dragging from one block's output to another's input.
Double-click a block to see a dialogue where you can ll in the arguments you
would use in a Matlab command written in the command line (i.e. after the
>>). Most of the times you can use numbers or variables; you just have to
create the variables before you create the model. Right-clicking a block shows a
context menu with many options, among which those of showing or hiding the
block's name, or rotating it. You can edit a block's name by clicking it, and
add a label to a signal by double-clicking it.
To run a simulation, choose its duration in the box on the top of the window,
or go to Simulation > Model Configuration Parameters. Then click the
Play button, or use command sim with the name of the (previously saved) le
with the block diagram model.

Example 9.12. Let us simulate the mechatronic system of Examples 8.2 and 9.6,
given by (9.26)(9.29). The Simulink le is as shown in Figure 9.11 and its
running time was set to 3 s; variables have been used and must be dened before
running the simulation, but this means that they are easier to change. Block
From Workspace has matrix [0 1], meaning that at time 0 it will output value
1, and since no other value is provided this one will be kept. So we are nding
the response of the system to a Heaviside function (2.5), or rather to a tension
of 1 V being applied when the simulation begins. Block Scope is congured to
save data to variable Data. The following commands create the variables, run
the simulation, and plot again the results which you could also see in the Scope
itself:

>> n2 = 200; n1 = 100; L = 1e-2; R = 100; alpha = 100; b = 0.3; a = 0.1;


m1 = 1; K = 100;
>> sim('prob3_ficha3_2011_modif_2')
>> figure, plot(Data.time,Data.signals.values(:,2)), xlabel('time [s]'),
ylabel('x_1 [m]')
See Figure 9.12. We could have expected these oscillations with constant am-
plitude, and you will know why in Chapter 11.

Figure 9.13: Left: open loop control. Right: closed loop control.

Remark 9.7. Notice that the input signal was specied in time and the out-
put variable was obtained as a function of time, but the dierential equations
were specied as transfer functions, i.e. not as relations in variable t but in the
Laplace transform variable s. This is the way Simulink works. However, do not
forget that, since in a block diagram system dynamics is indicated by transfer
functions, signals too must be given by their Laplace transforms, as functions
of s. It is correct to say that y(s) = G(s)u(s); it makes no sense at all to write
y(t) = G(s)u(t) mixing t and s.
The dialogue Model Configuration Parameters, which can also be ac-
cessed through a button, allows specifying many other things, among which:
ˆ the numerical method used to solve the dierential equations;

ˆ the maximum and minimum time steps used by the numerical method;

ˆ a tolerance that will not be exceeded by the numerical method's estimate


of the errors incurred.
Notice that some numerical methods use fixed time steps. These may be used with differential equations, but are the only ones that can be used with difference equations (corresponding to digital models).

9.3 Control in open-loop and in closed-loop


There are two generic congurations for control systems: open-loop control
and closed-loop control, shown in Figure 9.13. Every control system is a
variation of one of these two congurations, or a combination thereof. Both add,
to the system we want to control, another system called controller, intended
to make the controlled system's output y(t) follow some specied reference, or
desired output, r(t) (remember Section 3.1). In a perfectly controlled system,
y(t) = r(t), ∀t. The output of the controller is the system's input in the strict
sense (the input must be a manipulated variable).
Open-loop control In open-loop control the controller receives the reference that the system
should follow, and decides from this desired output what control action to take.
This control action will be the input of the system. It is not checked whether or
not the system's output does follow the reference. So, if there is some unexpected
deviation from the reference, this does not change the control action. Open-loop
control only uses blocks in series.
In open-loop control,

y(s) = G(s)u(s) = G(s)C(s)r(s) (9.46)

and since we want y(s) = r(s) then we should have C(s) = G−1 (s), i.e. the
controller should be an inverse model of the system to control. Notice that
if the model of the system is proper then the controller is not proper; you will
learn why this brings problems in Chapter 11.
Closed-loop control Closed-loop control uses negative feedback. The reference is compared with
the system output. Ideally, the error should be zero. What the controller
receives is this error, so the control action is based on the error.
The simplest closed-loop controller is proportional: C(s) = K ∈ R. With
Proportional control proportional control, if the error is small, the control action is small too; if
the error is large, the control action is also large. There are techniques to choose
an appropriate value of K , and also to develop more complex controllers, with
poles and zeros, which will be addressed in Part IV.

Figure 9.14: Closed-loop control with disturbances and sensor dynamics.

Figure 9.15: The same as Figure 9.14, but with MIMO systems.

Actually, no control system is that simple. Figure 9.14 shows a more realistic
situation, including the following additions:

ˆ H(s) is the sensor that measures output y . A perfect sensor measures Sensor dynamics
the output exactly: ŷ(t) = y(t), ∀t; and hence H(s) = 1. No sensor is
perfect, but it is often possible to assume H(s) = 1 even so (in which case
the block does not need to be there). If this is not the case, H(s) must be
explicitly taken into account.

ˆ du (t) is a disturbance that aects the control action. This means that Control action disturbance
the control action is not precisely received by the controlled system. For
instance, if the control action is a force, this means that there are other
forces acting upon the system. Or, if the control action is a current, there
are unintended uctuations of the value determined by the controller.

ˆ dy (t) is a disturbance that aects the system output. This means that the System output disturbance
output is aected by something else other than the system. For instance,
if the output is a ow, there is some other source of uid, or some bleeding
of uid somewhere, that must be added or subtracted. Or, if the output
is a position, there may be vibrations that have to be superimposed.

ˆ dŷ (t) is a disturbance that aects the sensor's measurement of the system Output measurement dis-
output. Just like u(t) can suer a disturbance, so can ŷ(t). turbance

Remark 9.8. Disturbances in Figure 9.14 follow what is called an additive


model, since the disturbance is added to the signal it disturbs. Other models Additive disturbances
use multiplicative disturbances, that are multiplied rather than summed. Here Multiplicative disturbances
we will stick to additive disturbances, which result in linear models.

Remark 9.9. We saw in Chapter 3 that MIMO systems may have some inputs
in the general sense that are disturbances and others that are manipulated
variables. Figure 9.15 represents disturbances using MISO systems. The block
diagram in Figure 9.14 reects the same situation using only SISO systems. The

Figure 9.16: The same as Figure 9.14, but using transfer functions (9.48)(9.51).

price to pay for using SISO systems is less freedom in establishing mathematical
relations between disturbances and outputs.

The output of the block diagram in Figure 9.14 is

ỹ = dy + y = dy + G ũ = dy + G(du + u) = dy + G du + G C e
= dy + G du + G C (r − dŷ − ŷ)
= dy + G du + G C r − G C dŷ − G C H ỹ
⇒ (1 + GCH) ỹ = dy + G du + GC r − GC dŷ    (9.47)
⇒ ỹ = 1/(1 + GCH) dy + G/(1 + GCH) du + GC/(1 + GCH) r − GC/(1 + GCH) dŷ
Because of the linearity of the relations involved, (9.47) gives the same result as
if four transfer functions were involved as seen in Figure 9.16:
G1 = ỹ/r = GC/(1 + GCH)    (9.48)
G2 = ỹ/du = G/(1 + GCH)    (9.49)
G3 = ỹ/dy = 1/(1 + GCH)    (9.50)
G4 = ỹ/dŷ = −GC/(1 + GCH)    (9.51)

Notice that each of the four transfer functions above can be obtained assuming
that all inputs but one of them are zero. If the system were not linear, that
would not be the case.
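Expressions (9.48)–(9.51) are easy to check numerically. A minimal sketch, with arbitrary choices of G, C and H (not taken from any example above):

>> s = tf('s');
>> G = 1/(s^2 + s + 1); C = 5; H = 1/(0.1*s + 1);
>> G1 = minreal(G*C/(1 + G*C*H))     % from the reference r to the output
>> G2 = minreal(G/(1 + G*C*H))       % from the disturbance du to the output
>> G3 = minreal(1/(1 + G*C*H))       % from the disturbance dy to the output
>> G4 = minreal(-G*C/(1 + G*C*H))    % from the measurement disturbance to the output

G1 should coincide, up to pole–zero cancellations, with feedback(G*C, H).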

Glossary
D'altra parte gli aveva detto la sera prima che lui possedeva un'dono:
che gli bastava udire due che parlavano in una lingua qualsiasi, e
dopo un poco era capace di parlare come loro. Dono singolare, che
Niceta credeva fosse stato concesso solo agli apostoli.

Umberto Eco (1932  „2016), Baudolino, 2

block diagram diagrama de blocos


blocks in cascade blocos em cascata
blocks in parallel blocos em paralelo
closed-loop anel fechado, malha fechada
direct branch ramo direto
disturbance perturbação
feedback retroação
feedback branch ramo de retroação
feedback loop anel de retroação, malha de retroação
inverse model modelo inverso

open-loop anel aberto, malha aberta


order ordem
proper transfer function função de transferência própria
proportional control controlo proporcional
strictly proper transfer function função de transferência estritamente própria

Exercises
1. For each of the transfer functions below, answer the following questions:
ˆ What are its poles?
ˆ What are its zeros?
ˆ What is its order?
ˆ Is it a proper transfer function?
ˆ Is it a strictly proper transfer function?
ˆ What is the dierential equation it corresponds to?

(a) s/(s² + 12s + 20)
(b) (s − 5)/(s + 1)
(c) (s² + 2s + 10)/(s³ − 5s² + 15.25s)
(d) 10/((s + 1)²(s² + 5s + 6))
(e) (s² + 2)/(s²(s + 3)(s + 50))
(f) (s⁴ + 6s³ + 8.75s²)/((s² + 4s + 4)²)

2. Find the following transfer functions for the block diagram in Figure 9.17:
(a) y(s)/d(s)
(b) y(s)/r(s)
(c) y(s)/m(s)
(d) y(s)/n(s)
(e) u(s)/d(s)
(f) u(s)/r(s)
(g) u(s)/m(s)
(h) u(s)/n(s)
(i) e(s)/d(s)
(j) e(s)/r(s)
(k) e(s)/m(s)
(l) e(s)/n(s)

3. Figure 9.18 shows a variation of closed-loop control called internal model


control (IMC). It has this name because it requires knowing a model of the system to control, as well as an inverse model of the system to control. In that block diagram:

Figure 9.17: Block diagram of Exercise 2.

Figure 9.18: Internal model control (IMC).

ˆ G(s) is the plant to control,


ˆ G∗ (s) is the model of the plant to control,
ˆ G−1 (s) is the inverse model of the plant to control.

(a) Show that, if the model is perfect, i.e. if


G∗ (s) = G(s), then the error is given by E(s) = R(s) − D(s).
(b) Show that, if, additionally, the inverse model is perfect, i.e. G−1 (s)G(s) =
1, then the output is Y (s) = R(s).
(c) Show that, whether the models are perfect or not, the block diagram of IMC in Figure 9.18 is equivalent to the block diagram of closed-loop control in Figure 9.13, if C(s) = G⁻¹(s)/(1 − G⁻¹(s)G*(s)).

4. Figure 9.19 shows a variation of closed-loop control called cascade control


(or masterslave control, though that designation is out of favour nowa-
days). In that block diagram, the plant to control is G(s) = G1 (s)G2 (s),
and it possible to measure both Y1 (s) and Y2 (s). Each of the two parts of
the system to control is controlled separately.
(a) Find transfer function Y1(s)/U2(s).
(b) Use that result to find transfer function Y2(s)/R(s).

5. Redraw the block diagram of Figure 9.11 from Example 9.12 as follows:

ˆ use the values of the variables given in Example 9.12,

Figure 9.19: Cascade (or masterslave) control.



Figure 9.20: Block diagrams of Exercise 6.

ˆ let the input Vi (s) be a manipulated variable,


ˆ let there be some reference r(t) for x1 (t) to follow,
ˆ add proportional control K .
Then find transfer function X1(s)/R(s) as a function of K.

6. For each of the two block diagrams in Figure 9.20:

(a) Find transfer function Y(s)/R(s).
(b) Let A(s) = 1/s, B(s) = 10/(s + 1), C(s) = 2, D(s) = (s + 2)/(s + 0.1). Find the value of Y(s)/R(s).

7. Prove all the equivalences of block diagrams shown in Figure 9.9.


Chapter 10

Time and frequency responses


Schirm und Robert iegen dort
Durch die Wolken immer fort.
Und der Hut iegt weit voran,
Stöÿt zuletzt am Himmel an.
Wo der Wind sie hingetragen,
Ja! das weiÿ kein Mensch zu sagen.

Heinrich Hoffmann (*1809 – †1894), Der Struwwelpeter, Die Geschichte vom


iegenden Robert

We already know that we can use the Laplace transform (and its inverse)
to nd out the output of any transfer function for any particular input. In
this chapter we study several usual particular cases. This allows us to nd
approximate responses in many cases, and to characterise with simplicity more
complex responses. It also paves the way to the important concept of frequency
responses.

10.1 Time responses: steps and impulses as inputs
The following inputs are routinely used to test systems:
ˆ The impulse:

u(t) = δ(t)    (10.1)
L[u(t)] = 1    (10.2)

ˆ The step, with amplitude d:

u(t) = d H(t)    (10.3)
L[u(t)] = d/s    (10.4)

ˆ In particular, the unit step, with amplitude 1:

u(t) = H(t)    (10.5)
L[u(t)] = 1/s    (10.6)

ˆ The ramp, with slope d:

u(t) = d t    (10.7)
L[u(t)] = d/s²    (10.8)

ˆ In particular, the unit ramp, with slope 1:

u(t) = t    (10.9)
L[u(t)] = 1/s²    (10.10)

ˆ The parabola, with second derivative 2d:

u(t) = d t²    (10.11)
L[u(t)] = 2d/s³    (10.12)

ˆ In particular, the unit parabola, with second derivative 2:

u(t) = t²    (10.13)
L[u(t)] = 2/s³    (10.14)
You can either nd the Laplace transforms above in Table 2.1, or calculate them
yourself.
Remark 10.1. Notice that:

ˆ the unit step is the integral of the impulse: ∫₀ᵗ δ(t) dt = H(t);

ˆ the unit ramp is the integral of the unit step: ∫₀ᵗ H(t) dt = t;

ˆ the unit parabola is not the integral of the unit ramp: ∫₀ᵗ t dt = t²/2 ≠ t².
Remark 10.2. Remember that while the Heaviside function H(t) is a function, and so are t and t², the Dirac delta δ(t) is not. It is a generalised function, and the limit of the following family of functions:

f(t, ε) = 1/ε, if 0 ≤ t ≤ ε;  f(t, ε) = 0, if t < 0 ∨ t > ε    (10.15)
δ(t) = lim_{ε→0⁺} f(t, ε)    (10.16)

Since

∫_{−∞}^{+∞} f(t, ε) dt = ∫₀^ε f(t, ε) dt = ∫₀^ε (1/ε) dt = 1, ∀ε ∈ ℝ⁺    (10.17)

we also have

∫_{−∞}^{+∞} δ(t) dt = 1    (10.18)

Furthermore, for a continuous function g(t),

f(t, ε) min_{0≤t≤ε} g(t) ≤ f(t, ε) g(t) ≤ f(t, ε) max_{0≤t≤ε} g(t)
⇒ ∫₀^ε f(t, ε) min_{0≤t≤ε} g(t) dt ≤ ∫₀^ε f(t, ε) g(t) dt ≤ ∫₀^ε f(t, ε) max_{0≤t≤ε} g(t) dt
⇔ min_{0≤t≤ε} g(t) ∫₀^ε f(t, ε) dt ≤ ∫₀^ε f(t, ε) g(t) dt ≤ max_{0≤t≤ε} g(t) ∫₀^ε f(t, ε) dt
⇒ min_{0≤t≤ε} g(t) ≤ ∫₀^ε f(t, ε) g(t) dt ≤ max_{0≤t≤ε} g(t)    (10.19)

where we used (10.17). Making ε → 0⁺, we get

g(0) ≤ ∫₀^ε f(t, ε) g(t) dt ≤ g(0)  ⇔  ∫₀^ε f(t, ε) g(t) dt = g(0)    (10.20)

A consequence of this is that

∫ₐᵇ g(t) δ(t) dt = g(0)    (10.21)

as long as the integration interval includes and overtakes 0, i.e. a ≤ 0 < b; in particular,

L[δ(t)] = ∫₀^{+∞} δ(t) e^{−st} dt = e^{−s·0} = 1    (10.22)

The reasons why (10.1)–(10.13) are routinely used as inputs to test systems are:
ˆ They are simple to create.

ˆ Calculations are simple, given their Laplace transforms.

ˆ They can be used to model many real inputs exactly, and even more as
approximations.
Example 10.1. The following situations can be modelled as steps:
ˆ A metal workpiece is taken from an oven and quenched in oil at a lower
temperature.
ˆ A sluice gate is suddenly opened, letting water into an irrigation canal.

ˆ A switch is closed and a tension is thereby applied to the motor that


rotates the joint of a welding robot.
ˆ A nished part is dropped onto a conveyer belt.

ˆ A car advancing at constant speed descends a sidewalk onto the street


pavement.
Example 10.2. The following situations can be modelled as ramps:
ˆ A deep space probe moves out of the solar system at constant speed along
a straight line in an inertial system of coordinates, due to inertia, far from
the gravitational inuence of any close celestial body.
ˆ A high-speed train moves from one station to another at cruiser speed.

ˆ A welding robot creates a welding joint at constant speed, to ensure a


uniform thickness.
Notice that, save for the rst example, the ramp is limited in time: sooner or
later, the train and the welding robot will have to stop. In fact, unlimited ramps
are seldom found.
Remark 10.3. The impulse is in fact impossible to create: there are no physical quantities applied during no time at all, with an infinite intensity. However, the impulse is a good approximation of inputs that have a very short duration. Figure 10.1 shows two inputs in that situation: a sequence of two steps

u1(t) = k H(t − t0) − k H(t − t1) = { k, if t0 ≤ t ≤ t1; 0, if t < t0 ∨ t > t1 } ≈ k(t1 − t0) δ(t − t0)    (10.23)

and, even more realistically, a sequence of two ramps, approximated by

u2(t) ≈ δ(t − t0) ∫₀^{+∞} u2(t) dt = δ(t − t0) (½ k(t1 − t0) + k(t2 − t1) + ½ k(t3 − t2)) = (k/2) δ(t − t0) (t3 + t2 − t1 − t0)    (10.24)
Of course, any input with a form such as that of Figure 10.2 can be approximated
by an impulse (multiplied by the integral over time of the input).
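This approximation is easy to check numerically: the response to a short pulse is close to the impulse response scaled by the pulse's area. A minimal sketch, using an arbitrary first order plant:

>> s = tf('s'); G = 1/(s + 1);
>> Ts = 0.001; t = 0 : Ts : 6;
>> epsilon = 0.05;                                   % pulse duration, much shorter than the time constant
>> u = zeros(size(t)); u(t < epsilon) = 1/epsilon;   % rectangular pulse with unit area
>> figure, plot(t, lsim(G, u, t), t, impulse(G, t), '--')
>> xlabel('t [s]'), legend('short pulse', 'impulse')

The two curves should nearly coincide.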
Remark 10.4. Unit steps are almost exclusively used because amplitude 1
makes calculations easier. Since we are assuming linearity, if the amplitude of
the step is d instead of 1, the output will be that for the unit step, multiplied
by d. The same can be said for unit ramps and unit parabolas. When steps (or
ramps, or parabolas) are applied experimentally, amplitude 1 may be too big or
too small, and a dierent one will have to be used instead.
Example 10.3. Suppose you want to test a car's suspension, when the wheel
climbs or descends a step. Obviously nobody with a sound mind would apply a
1 m step for this purpose (see Figure 10.3). A 10 cm step would for instance be
far more reasonable. Of course, if our model is linear, we can apply a unit step,
knowing well that the result will be nonsense, and then simply scale down the
result.


Figure 10.1: Two functions that can be approximated by an impulse if t0 ≈ t1


(left) or t0 ≈ t3 (right).


Figure 10.2: General form of a function that can be approximated by an impulse


if t0 ≈ t1 .

Figure 10.3: Would you test a car's suspension like this? (Source: Wikimedia,
modied)

Figure 10.4: The Rasteirinho mobile robot, without the laptop computer with
which it is controlled.

Example 10.4. The Rasteirinho (see Figure 10.4) is a mobile robot, of which
about a dozen units are used at IST in laboratory classes of dierent courses.
It is controlled by a laptop computer, xed with velcro. Its maximum speed
depends on the particular unit; in most, it is around 80 cm/s. Consequently, it
is useless to try to make its position follow a unit ramp, which would correspond
to a 1 m/s velocity. Once more, we could simulate its behaviour with a linear
model for a unit ramp and then scale the output down.

Example 10.5. In the WECs of Figures 3.2 and 3.3, the air inside the device is
compressed by the waves. A change of air pressure of 1 Pa is ludicrously small;
it is useless even to try to measure it. But if our model of the WEC is linear we
can simulate how much energy it produces when a unit step is applied in the air
pressure and then scale the result up to a more reasonable value of the pressure
variation.

In what follows we will concentrate on the impulse and unit step responses,
and mention responses to unit ramps and steps with amplitudes which are not 1
whenever appropriate.

Theorem 10.1. The impulse response of a transfer function has a Laplace Impulse response of a sys-
transform which is the transfer function itself. tem

Proof. Since G(s) = Y(s)/U(s), where G(s) is a transfer function, Y(s) is the Laplace transform of the output, and U(s) is the Laplace transform of the input, and since the Laplace transform of an impulse is 1, the result is immediate.

Remark 10.5. This allows dening a system's transfer function as the Laplace
transform of its output when the input is an impulse. This denition is an
alternative to Denition 4.1 found in many textbooks.

Corollary 10.1. The output of a transfer function G(s) for any input u(t)
is equal to the convolution of the input with the transfer function's impulse
response g(t):
y(t) = g(t) ∗ u(t) = ∫₀ᵗ g(t − τ) u(τ) dτ    (10.25)

Proof. This is an immediate result of Theorem 10.1 and of (2.78).

Remark 10.6. It is usually easier to calculate the Laplace transform of the


input U (s) to nd the Laplace of the output as Y (s) = G(s)U (s) and then nally
the output as y(t) = L −1 [G(s)U (s)], than to calculate the output directly as
y(t) = g(t) ∗ u(t).
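Still, (10.25) can be verified numerically: Matlab's conv approximates the convolution integral if the result is multiplied by the sampling time. A minimal sketch, with G(s) = 1/(s + 1) and a unit step input:

>> s = tf('s'); G = 1/(s + 1);
>> Ts = 0.01; t = 0 : Ts : 6;
>> g = impulse(G, t);               % impulse response g(t), sampled
>> u = ones(length(t), 1);          % unit step input
>> y_conv = Ts * conv(g, u);        % discrete approximation of the convolution integral
>> y_conv = y_conv(1:length(t));    % keep only the part corresponding to 0..6 s
>> figure, plot(t, y_conv, t, lsim(G, u, t), '--'), xlabel('t [s]')
>> legend('convolution', 'lsim')

The two curves should be very close (the small difference comes from the numerical approximation of the integral).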

The following Matlab functions are useful to nd time responses:



ˆ step plots a system's response to a unit step (and can return the values
plotted in vectors);
ˆ impulse does the same for an impulse input;

ˆ lsim, already studied in Section 4.2, can be used for any input.

Just like lsim, both step and impulse use numerical methods to nd the re-
sponses, rather than analytical computations.
Example 10.6. The impulse, unit step and unit ramp responses of a plant are shown in Figure 10.5 and obtained as follows:
>> s = tf('s'); G = 1/(s+1);
>> figure, impulse(G), figure, step(G)
>> t = 0 : 0.01 : 6; figure, plot(t, lsim(G, t, t))
>> xlabel('t [s]'), ylabel('output')
>> title('response to a unit ramp')
The time range is chosen automatically by step and impulse.
Example 10.7. The response of the transfer function from Example 10.6 to a
step with amplitude 10 during 20 s can be found in two dierent manners, both
providing, of course, the same result:
>> [stepresp, timevector] = step(G, 20);
>> t = 0 : 0.01 : 20;
>> figure, plot(t, lsim(G, 10*ones(size(t)), t), timevector, 10*stepresp)
>> xlabel('t [s]'), ylabel('output'), title('Step response')
There is, in fact, a slight dierence in the two plots shown in Figure 10.6, because
function step chooses the sampling time automatically, and it is dierent from
the one explicitly fed to lsim.

10.2 Steady-state response and transient response


The impulse, unit step, and unit ramp responses of
G(s) = 1/(s + 1)    (10.26)
from Example 10.6, shown in Figure 10.5 as they are numerically calculated by
Matlab, can be found analytically as follows:
ˆ Impulse response:

yi(t) = L⁻¹[1/(s + 1)] = e⁻ᵗ    (10.27)

ˆ Unit step response:

ys(t) = L⁻¹[1/(s + 1) · 1/s] = 1 − e⁻ᵗ    (10.28)

ˆ Unit ramp response:

yr(t) = L⁻¹[1/(s + 1) · 1/s²] = L⁻¹[−1/s + 1/s² + 1/(s + 1)] = t − 1 + e⁻ᵗ    (10.29)

In each of them we can separate the terms that tend to zero as the time increases from those that do not. The first make up what we call the transient response. The latter make up what we call the steady-state response.

yi(t) = 0 + e⁻ᵗ    (10.30)
ys(t) = 1 − e⁻ᵗ    (10.31)
yr(t) = (t − 1) + e⁻ᵗ    (10.32)

where in each case the constant or growing part (0, 1, t − 1) is the steady-state and the exponential term is the transient.
Figure 10.5: Impulse, unit step and unit ramp responses of G(s) = 1/(s+1), from Example 10.6.
Figure 10.6: Response of G(s) = 1/(s+1) for a step with amplitude 10, from Example 10.7.

In other words, a time response y(t) can be separated into two parts, the transient response y_t(t) and the steady-state response y_{ss}(t), such that

y(t) = y_t(t) + y_{ss}(t)    (10.33)

\lim_{t\to+\infty} y_t(t) = 0    (10.34)

\lim_{t\to+\infty} y_{ss}(t) ≠ 0 \;∨\; y_{ss}(t) = 0, ∀t    (10.35)

We also call transient the period of time in which the response is dominated by the transient response, and steady-state the period of time in which the transient response is negligible and the response can be assumed equal to the steady-state response. Whether a transient response can or cannot be neglected depends on how precise our knowledge of the response has to be. Below in Sections 11.2 and 11.3 we will see the usual criteria for this.
The steady-state response can be:

ˆ zero, as the impulse response of (10.26), shown in Figure 10.5;

ˆ a non-null constant, as the unit step response of (10.26), shown in Figure 10.5;

ˆ an oscillation with constant amplitude, as the step response of 1/((s^2+1)(s+1)), shown in Figure 10.7;

ˆ infinity, with the output increasing or decreasing monotonously, as the unit ramp response of (10.26), shown in Figure 10.5;

ˆ infinity, with the output oscillating with increasing amplitude, as the impulse response of s/(s^2+1)^2, shown in Figure 10.7.

What the steady-state response is depends on what the system is and on what its input is.

Remark 10.7. Most systems never reach infinity. The probe of Example 10.2 can move away to outer space, but temperatures do not rise to infinite values (before that happens the heat source is exhausted, or something burns), robots reach the end of their workspace, high electrical currents will activate a circuit breaker, etc.; in other words, for large values of the variables involved, the linear model of the system usually ceases, in one way or another, to be valid.

Over the next sections we will learn several ways to calculate steady-state responses without having to find an explicit expression for the output and then calculating its limit. When the steady-state response is constant or infinity, it can be found from the final value theorem (Theorem 2.4), i.e. applying (2.72).
Figure 10.7: Time responses with oscillations: unit step response of 1/((s^2+1)(s+1)) (top) and impulse response of s/(s^2+1)^2 (bottom).

Example 10.8. The steady-states of the impulse, step and ramp responses (10.27)–(10.29) are as follows:

\lim_{t\to+\infty} y_i(t) = \lim_{t\to+\infty} e^{-t} = 0    (10.36)

\lim_{t\to+\infty} y_s(t) = \lim_{t\to+\infty} (1 − e^{-t}) = 1    (10.37)

\lim_{t\to+\infty} y_r(t) = \lim_{t\to+\infty} (t − 1 + e^{-t}) = +∞    (10.38)

They can be found without the inverse Laplace transform using (2.72):

\lim_{t\to+\infty} y_i(t) = \lim_{s\to 0} s\,\frac{1}{s+1} = 0    (10.39)

\lim_{t\to+\infty} y_s(t) = \lim_{s\to 0} s\,\frac{1}{s+1}\,\frac{1}{s} = 1    (10.40)

\lim_{t\to+\infty} y_r(t) = \lim_{s\to 0} s\,\frac{1}{s+1}\,\frac{1}{s^2} = +∞    (10.41)
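These limits can also be checked numerically, simply by simulating the responses over a long enough horizon. A minimal sketch, assuming the Control System Toolbox:
>> s = tf('s'); G = 1/(s+1);
>> t = 0 : 0.01 : 20;                  % long enough for the transient to vanish
>> yi = impulse(G, t);                 % tends to 0
>> ys = step(G, t);                    % tends to 1
>> yr = lsim(G, t, t);                 % unit ramp response, grows without bound
>> disp([yi(end), ys(end), yr(end)])   % approximately 0, 1 and t(end)-1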
Example 10.9. Remember that (2.72) applies when the limit in time exists. Figure 10.7 shows two cases where this limit clearly does not exist, because of oscillations with an amplitude that does not decrease. But the two corresponding limits are

\lim_{t\to+\infty} y(t) = \lim_{s\to 0} s\,\frac{1}{(s^2+1)(s+1)}\,\frac{1}{s} = 1    (10.42)

\lim_{t\to+\infty} y(t) = \lim_{s\to 0} s\,\frac{s}{(s^2+1)^2} = 0    (10.43)

In the first case we got the average value of the steady-state response; in the second, zero, the mean value of an oscillation that actually grows without bound. Neither case is a valid application of the final value theorem. We need to know first if the time limit exists.
Definition 10.1. The constant steady-state output of the unit step response of a stable system G(s) = Y(s)/U(s) is called the static gain of G(s):

\lim_{t\to+\infty} y(t) = \lim_{s\to 0} s\,\underbrace{\frac{b_0 + b_1 s + b_2 s^2 + b_3 s^3 + \ldots}{a_0 + a_1 s + a_2 s^2 + a_3 s^3 + \ldots}}_{G(s)}\,\underbrace{\frac{1}{s}}_{U(s)} = \frac{b_0}{a_0}    (10.44)
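In Matlab, the static gain of a transfer function can be obtained directly with dcgain (Control System Toolbox). A minimal check, with the plant of Example 10.6 and with another transfer function chosen merely as an illustration:
>> s = tf('s');
>> dcgain(1/(s+1))             % returns 1, i.e. b0/a0 = 1/1
>> dcgain(10/(s^2+21*s+20))    % returns 0.5, i.e. b0/a0 = 10/20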

10.3 Stability
The time responses from Section 10.2 illustrate the importance of the concept
of stability.
Definition 10.2. A signal x(t) is bounded if ∃K ∈ R+ : ∀t, |x(t)| < K.

Definition 10.3. A system is:

ˆ stable if, for every input which is bounded, its output is bounded too;

ˆ not stable if there is at least one bounded input for which its output is not bounded.

This definition of stability is known as bounded input, bounded output stability (BIBO stability).

Theorem 10.2. A transfer function is stable if and only if all its poles are on the left complex half-plane.
Proof. We will prove this in two steps:

ˆ A transfer function G(s) is stable if and only if its impulse response g(t) is absolutely integrable, i.e. iff ∃M ∈ R+ such that

\int_0^{+\infty} |g(t)|\, dt < M    (10.45)

ˆ A transfer function's impulse response is absolutely integrable if and only if all its poles are on the left complex half-plane.

Lemma 10.1. A transfer function is stable if and only if its impulse response is absolutely integrable.

Proof. Let us suppose that the impulse response g(t) is absolutely integrable, and that

\int_0^{+\infty} |g(τ)|\, dτ = K    (10.46)

Let us also suppose that the input u(t) is bounded, as required by the definition of BIBO stability:

|u(t)| ≤ U, ∀t    (10.47)
From (10.25) we get

|y(t)| = |g(t) ∗ u(t)| = \left|\int_0^t g(τ) u(t−τ)\, dτ\right| ≤ \int_0^t |g(τ) u(t−τ)|\, dτ ≤ \int_0^t |g(τ)|\, |u(t−τ)|\, dτ ≤ U \int_0^t |g(τ)|\, dτ ≤ U K    (10.48)

So the output is bounded, proving that the condition (impulse response absolutely integrable) is sufficient.
Reductio ad absurdum proves that it is also necessary. Suppose that the impulse response g(t) is not absolutely integrable; thus, there is a time instant T ∈ R+ such that

\int_0^T |g(τ)|\, dτ = +∞    (10.49)

Now let the input u(t) be given by

u(T − t) = sign(g(t))    (10.50)

This is a bounded input, −1 ≤ u(t) ≤ 1, ∀t, and so, if the transfer function were stable, the output would have to be bounded. But at time instant T

y(T) = \int_0^T g(τ) u(T−τ)\, dτ = \int_0^T |g(τ)|\, dτ = +∞    (10.51)

and thus y(t) is not bounded. This shows that the condition is not only sufficient but also necessary.

Lemma 10.2. A transfer function's impulse response is absolutely integrable if and only if all its poles are on the left complex half-plane.

Proof. A transfer function G(s) has an impulse response given by L^{-1}[G(s)]. Transfer function G(s) can be expanded into a partial fraction expansion, where the fractions have the poles of G(s) in the denominator. Poles can be divided into four cases.

ˆ The pole is real, p ∈ R, and simple. In this case the fraction k/(s−p) (where k ∈ R is some real numerator) has the inverse Laplace transform k e^{pt}.

 – If p = 0, then \lim_{t\to+\infty} k e^{pt} = k. In this case the impulse response is not absolutely integrable, since

\int_0^{+\infty} |k|\, dt = \lim_{t\to+\infty} |k| t = +∞    (10.52)

 – If p > 0, the exponential tends to infinity: \lim_{t\to+\infty} k e^{pt} = ±∞ (depending on the sign of k). If in the last case the response was not absolutely integrable, even more so in this one.

 – If p < 0, the exponential tends to zero: \lim_{t\to+\infty} k e^{pt} = 0. The impulse response is absolutely integrable, since

\int_0^{+\infty} |k e^{pt}|\, dt = |k| \int_0^{+\infty} e^{pt}\, dt = |k| \left[\frac{e^{pt}}{p}\right]_0^{+\infty} = \frac{|k|}{p}(0 − 1) = −\frac{|k|}{p} ∈ R+    (10.53)

ˆ The pole is real and its multiplicity n is 2 or higher. In this case there will be, in the expansion, fractions of the form k_n/(s−p)^n, k_{n−1}/(s−p)^{n−1}, k_{n−2}/(s−p)^{n−2}, ..., k_1/(s−p). (Here the k_i ∈ R, i = 1...n are the numerators in the expansion.) The corresponding inverse Laplace transforms are of the form \frac{k_i}{(i−1)!} t^{i−1} e^{pt}, i = 1...n.

 – If p = 0, then the exponential tends to 1, but the power diverges to infinity: \lim_{t\to+\infty} \frac{k_i}{(i−1)!} t^{i−1} e^{pt} = ±∞ (depending on the sign of k_i), ∀i ≥ 2. So in this case the impulse response is not absolutely integrable, as seen above.

 – If p > 0, then \lim_{t\to+\infty} \frac{k_i}{(i−1)!} t^{i−1} e^{pt} = ±∞, ∀i. Again, the impulse response is not absolutely integrable.

 – If p < 0, then \lim_{t\to+\infty} \frac{k_i}{(i−1)!} t^{i−1} e^{pt} = 0, ∀i, since the effect of the exponential prevails. For the same reason, the impulse response is absolutely integrable, just as in (10.53).

ˆ The pole is complex, p = a + bj ∈ C\R, a, b ∈ R, and simple. Remember once more that complex poles must appear in pairs of complex conjugates, since all polynomial coefficients are real (otherwise real inputs would cause complex outputs). In this case the fraction k/(s−p) = k/(s−(a+bj)) (where k ∈ C is some complex numerator) has the inverse Laplace transform

L^{-1}\left[\frac{k}{s−p}\right] = L^{-1}\left[\frac{k}{s−(a+bj)}\right] = k e^{pt} = k e^{at} e^{bjt} = k e^{at}(\cos bt + j \sin bt)    (10.54)

and the fraction \bar{k}/(s−\bar{p}) = \bar{k}/(s−(a−bj)) (where \bar{z} is the complex conjugate of z ∈ C) has the inverse Laplace transform

L^{-1}\left[\frac{\bar{k}}{s−\bar{p}}\right] = L^{-1}\left[\frac{\bar{k}}{s−(a−bj)}\right] = \bar{k} e^{\bar{p}t} = \bar{k} e^{at} e^{−bjt} = \bar{k} e^{at}(\cos bt − j \sin bt)    (10.55)

Their effect on the impulse response is their sum:

L^{-1}\left[\frac{k}{s−p}\right] + L^{-1}\left[\frac{\bar{k}}{s−\bar{p}}\right] = 2\Re(k) e^{at} \cos bt − 2\Im(k) e^{at} \sin bt = 2|k| e^{at} \cos(bt + ∠k)    (10.56)

Notice that the imaginary parts cancel out, and we are left with oscillations having:

 – period 2π/b, where b is the positive imaginary part of the poles;

 – amplitude 2|k| e^{at}, where a is the real part of the poles. The exponential is the important term, since it is the exponential that may cause this term to vanish or diverge.

So:

 – If a = 0, then the amplitude of the oscillations remains constant; they neither go to zero nor diverge to an infinite amplitude. This means that the impulse response is not absolutely integrable, since

\int_0^{+\infty} |2|k| \cos(bt + ∠k)|\, dt = 2|k| \lim_{n\to+\infty} n \int_0^{\frac{2π}{b}} |\cos bt|\, dt = 2|k| \lim_{n\to+\infty} n\,\frac{4}{b} = \lim_{n\to+\infty} \frac{8|k|}{b}\, n = +∞    (10.57)

(the phase shift ∠k does not change the integral of |cos| over a whole period).

 – If a > 0, the amplitude of the oscillations tends to infinity. Consequently the impulse response will not be absolutely integrable.

 – If a < 0, the exponential tends to zero, and so will the oscillations. In this case the impulse response is absolutely integrable, since

\left|\int_0^{+\infty} 2|k| e^{at} \cos(bt + ∠k)\, dt\right| ≤ 2|k| \int_0^{+\infty} e^{at}\, dt    (10.58)

and we end up with a case similar to (10.53).

ˆ The pole is complex and its multiplicity n is 2 or higher. This case is a mixture of the last two. There will be terms of the form \frac{k_i}{(s−(a+bj))^i} + \frac{\bar{k}_i}{(s−(a−bj))^i}, i = 1...n. The corresponding inverse Laplace transform is \frac{2|k_i|}{(i−1)!} t^{i−1} e^{at} \cos(bt + ∠k_i). So:

 – If a = 0, then e^{at} = 1 but the amplitude of the oscillations still grows to infinity, because of the power function, if i ≥ 2. So in this case the impulse response will not be absolutely integrable.

 – If a > 0, the amplitude of the oscillations tends to infinity. The same conclusion follows.

 – If a < 0, the exponential tends to zero, and for large times its effect prevails; so the impulse response will be absolutely integrable.

It is clear that one single term not tending exponentially to zero suffices to prevent the impulse response from being absolutely integrable. Consequently, the only way for the impulse response to be absolutely integrable is that all poles should have negative real parts; in other words, that all poles should lie on the left complex half-plane.

While some authors call unstable all systems that are not stable, the following distinction is current.

Definition 10.4. A system is:

ˆ unstable if its impulse response is not bounded;

ˆ marginally stable if it is not stable and its impulse response is bounded.

Theorem 10.3. Marginally stable systems have no poles on the right complex half-plane, and one or more simple poles on the imaginary axis.
Proof. It is clear from the proof of Lemma 10.2 that simple poles on the imaginary axis correspond to:

ˆ impulse responses which are bounded:

 – a pole at the origin has a constant impulse response;

 – a pair of complex conjugate imaginary poles has constant amplitude sinusoidal oscillations as impulse response;

ˆ responses to bounded inputs which are not bounded, since systems with such poles are not stable.

A single pole p on the right complex half-plane makes a system unstable, since, whatever the input may be, in the partial fraction expansion of the output there will be a fraction of the form k/(s−p), and the proof of Lemma 10.2 shows that such terms always diverge exponentially to infinity.
The same happens with multiple poles on the imaginary axis:

ˆ multiple poles at the origin cause a polynomial impulse response, which diverges to infinity (check Table 2.1, or (11.206) below);

ˆ pure imaginary multiple poles also cause the impulse response to diverge, as argued in the proof of Lemma 10.2.

The effect of each pole on the stability of a system justifies the following nomenclature.

Definition 10.5. Poles are:

ˆ stable, when located on the left complex half-plane;

ˆ marginally stable, when simple and located on the imaginary axis, i.e. s = jω, ω ∈ R; notice that this includes the origin, s = 0;

ˆ unstable, when multiple and located on the imaginary axis, or when located on the right complex half-plane.

A system is:

ˆ stable, when all its poles are stable;

ˆ marginally stable, when it has no unstable poles, and one or more of its poles are marginally stable;

ˆ unstable, when it has one or more unstable poles.


Example 10.10. From the location of the poles, we can conclude the following about the stability of these transfer functions:

ˆ (s+4)/((s+1)(s+2)(s+3)); poles: −1, −2, −3; stable transfer function

ˆ (s−5)/(s^2+6); poles: ±√6 j; marginally stable transfer function

ˆ (s+7)/(s^2+8)^2; poles: ±√8 j (double); unstable transfer function

ˆ (s−12)(s+13)/((s+9)^2 (s^2+20s+221)); poles: −9 (double), −10 ± 11j; stable transfer function

ˆ 14/(s−15); poles: 15; unstable transfer function

ˆ 16/s; poles: 0; marginally stable transfer function

ˆ −17/s^2; poles: 0 (double); unstable transfer function

ˆ 18/(s(s^2+19)); poles: 0, ±√19 j; marginally stable transfer function

ˆ 24/((s+20)(s+21)(s+22)(s−23)); poles: −20, −21, −22, 23; unstable transfer function
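These conclusions can be checked in Matlab with pole and isstable; note that isstable returns false both for unstable and for marginally stable transfer functions, since it tests BIBO stability. A minimal sketch for two of the cases above:
>> s = tf('s');
>> G1 = (s+4)/((s+1)*(s+2)*(s+3));   % expected: stable
>> G2 = 16/s;                        % expected: marginally stable (pole at the origin)
>> pole(G1)        % -1, -2, -3: all on the left complex half-plane
>> isstable(G1)    % returns true (logical 1)
>> pole(G2)        % 0: simple pole on the imaginary axis
>> isstable(G2)    % returns false, since the system is not BIBO stable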

Remark 10.8. Never forget that zeros have nothing to do with stability.

Figure 10.8: A weaving loom (source:Wikimedia).

10.4 Time responses: periodic inputs


Consider the weaving loom in Figure 10.8. The shuttle that carries the yarn that will become the weft thread moves without cease from the left to the right and then back. Meanwhile, half the warp threads are pulled up by a harness, which will then lower them while the other half goes up, and this too without cease. The corresponding references are similar to those in Figure 10.9. They are called square wave and triangle wave, and are examples of periodic signals.

Definition 10.6. A periodic signal is one for which ∃T ∈ R+ such that

f(t + T) = f(t), ∀t    (10.59)

The smallest such T is the period of the signal f(t).

Remark 10.9. Notice that the different values of T for which (10.59) holds are in fact the integer multiples of the period, i.e.

f(t + T) = f(t), ∀t ⇒ f(t + nT) = f(t), ∀t, n ∈ N    (10.60)

Triangle waves are also a useful alternative to ramps, since they avoid the inconvenience of an infinitely large signal. Square waves are useful in experimental settings for another reason: they allow seeing successive step responses, and consequently allow measuring parameters several times in a row. For this purpose, the period must be large enough for the transient regime to disappear.
Example 10.11. We can find the output of G(s) = 15/(s+20) to a square wave with period 1 s and amplitude 1 using Matlab as follows:
>> t = 0 : 0.001 : 3;
>> u = square(t*2*pi);
>> figure, plot(t,u, t,lsim(15/(s+20),u,t))
>> axis([0 3 -1.5 1.5])
>> xlabel('t [s]'), ylabel('input and output'), legend({'input','output'})
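A triangle wave can be obtained in a similar way; one possibility, a sketch assuming the Signal Processing Toolbox is available (square also belongs to it) and that s = tf('s') is defined as above, is the sawtooth function with a symmetry parameter of 0.5:
>> t = 0 : 0.001 : 3;
>> u = sawtooth(t*2*pi, 0.5);        % triangle wave with period 1 s and amplitude 1
>> figure, plot(t, u, t, lsim(15/(s+20), u, t))
>> xlabel('t [s]'), ylabel('input and output'), legend({'input','output'})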
Figure 10.9: A square wave and a triangle wave (both with period 1 and amplitude 1).

Notice that the amplitude of the first step is 1 and the amplitude of the following steps is the peak to peak amplitude, twice as big, viz. 2. Also notice that there is a step every half period, i.e. every 0.5 s.
The period was appropriately chosen since (as we shall see in Section 11.2) the transient response is practically gone after 0.5 s. A period four times smaller would not allow seeing a complete step response. Both cases are shown in Figure 10.10.
Another useful periodic signal is the sinusoid, which appears naturally with any phenomena that are the projection onto a plane of a circular movement on a perpendicular plane. In practice, sinusoids are found (at least as approximations) when working with such different things as tides, motor vibrations, or daily thermal variations.
Theorem 10.4. The steady-state response y(t) of a stable linear plant G(s) subject to a sinusoidal input u(t) = sin(ωt) is

y(t) = |G(jω)| sin(ωt + ∠G(jω))    (10.61)

where ∠z is the phase, or argument, of z ∈ C (also often notated arg z), so that z = |z| e^{j∠z}.
Proof. The output is

y(t) = L^{-1}[Y(s)]    (10.62)

and

Y(s) = G(s) U(s) = G(s) L[sin(ωt)] = G(s) \frac{ω}{s^2 + ω^2} = G(s) \frac{ω}{(s + jω)(s − jω)}    (10.63)

If all poles p_k, k = 1, ..., n of G(s) are simple, we can perform a partial fraction expansion of Y(s) as follows:

Y(s) = \frac{b_0}{s + jω} + \frac{\bar{b}_0}{s − jω} + \sum_{k=1}^{n} \frac{b_k}{s − p_k}
⇒ y(t) = \underbrace{b_0 e^{−jωt} + \bar{b}_0 e^{jωt}}_{\text{steady-state response } y_{ss}(t)} + \underbrace{\sum_{k=1}^{n} b_k e^{p_k t}}_{\text{transient response } y_t(t)}    (10.64)

We know that all terms in the transient response y_t(t) belong there because the exponentials are vanishing, since the poles are on the left complex half-plane.
Figure 10.10: Response of G(s) = 15/(s+20) to two square waves with different periods.

If there are multiple poles, the only difference is that there will be terms of the form \frac{b_k}{(i−1)!} t^{i−1} e^{p_k t}, i ∈ N in the transient response y_t(t), which will still, of course, vanish with time. In either case, the steady-state response is the same.
From (10.63) we know that Y(s) = G(s) \frac{ω}{(s+jω)(s−jω)}, and from (10.64) we know that Y(s) = \frac{b_0}{s+jω} + \frac{\bar{b}_0}{s−jω} + L[y_t(t)]. We can multiply both by s + jω and obtain

G(s) \frac{ω}{s − jω} = b_0 + \left(\frac{\bar{b}_0}{s − jω} + L[y_t(t)]\right)(s + jω)    (10.65)

Now we evaluate this equality at s = −jω:

G(−jω) \frac{ω}{−2jω} = b_0    (10.66)

Replacing b_0 = −G(−jω)\frac{1}{2j} and \bar{b}_0 = G(jω)\frac{1}{2j} in y_{ss}(t) = b_0 e^{−jωt} + \bar{b}_0 e^{jωt}, we obtain

y_{ss}(t) = −G(−jω)\frac{1}{2j} e^{−jωt} + G(jω)\frac{1}{2j} e^{jωt}
 = \frac{1}{2j}\left(|G(jω)| e^{j(∠G(jω)+ωt)} − |G(−jω)| e^{j(∠G(−jω)−ωt)}\right)
 = \frac{1}{2j}|G(jω)|\left(e^{j(∠G(jω)+ωt)} − e^{−j(∠G(jω)+ωt)}\right)
 = \frac{1}{2j}|G(jω)|\, 2j \sin(∠G(jω) + ωt)
 = |G(jω)| \sin(ωt + ∠G(jω))    (10.67)

where we have used the fact that, for a transfer function with real coefficients, |G(−jω)| = |G(jω)| and ∠G(−jω) = −∠G(jω).

Corollary 10.2. Since G(s) is not only stable but also linear, if the input is u(t) = A sin(ωt) instead, the steady-state output is

y(t) = A|G(jω)| sin(ωt + ∠G(jω))    (10.68)
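As a quick numerical illustration of (10.68), one can simulate a long sinusoidal input and compare the final oscillations with the amplitude and phase predicted by G(jω). A minimal sketch; the plant, frequency and amplitude are arbitrary choices for illustration:
>> s = tf('s'); G = 1/(s+1);
>> w = 2; A = 3;                                % frequency [rad/s] and amplitude
>> t = 0 : 0.001 : 30;                          % long enough for the transient to vanish
>> y = lsim(G, A*sin(w*t), t);
>> max(y(t>25))                                 % measured steady-state amplitude
>> A*abs(squeeze(freqresp(G, w)))               % predicted amplitude A|G(jw)|
>> rad2deg(angle(squeeze(freqresp(G, w))))      % predicted phase shift, in degrees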

Example 10.12. Figure 10.11 shows the simulated vertical position of the Wave
Energy Converter of Figure 3.2, the Archimedes Wave Swing, when subject to
sinusoidal waves of different amplitudes. The device is in steady-state, as is clear
both from the regularity of its movements and from the time already passed since
the beginning of the simulation. As the input is sinusoidal, if the model were
linear, the output should be sinusoidal too. But the shape of the output is not
sinusoidal; it is not even symmetrical around its mean value; its amplitude does
not increase linearly with the amplitude of the input. The model used to obtain
these simulation results is obviously non-linear.

10.5 Frequency responses and the Bode diagram


(10.68) shows that, if a stable system G(s) has a sinusoidal input, the steady-state output is related to the input through G(jω), which is the Fourier transform (2.87) of the differential equation describing the system's dynamics:

ˆ if the input is sinusoidal, the steady-state output is sinusoidal too;

ˆ if the input has frequency ω, the steady-state output has frequency ω too;
Figure 10.11: Vertical position of the AWS from Figure 3.2, simulated assuming sinusoidal sea waves of different amplitudes between 0.5 m and 2.0 m.

ˆ if the input has amplitude A (or peak-to-peak amplitude 2A), the steady-state output has amplitude A|G(jω)| (or peak-to-peak amplitude 2A|G(jω)|);

ˆ if the input has phase θ at some time instant t, the steady-state output has phase θ + ∠G(jω) at that time instant t.

Remember that:

ˆ the steady-state output is sinusoidal, but the transient is not: you must wait for the transient to go away to have a sinusoidal output;

ˆ unstable systems have transient responses that do not go away, so you will never have a sinusoidal output;

ˆ ω is the frequency in radians per second (rad/s).

Definition 10.7. Given a system G(s):

ˆ its frequency response is G(jω), a function of ω;

ˆ its gain at frequency ω is |G(jω)|;

ˆ its gain in decibel (denoted by the symbol dB) is 20 log10 |G(jω)| (the gain |G(jω)| is often called gain in absolute value, to avoid confusion with the gain in decibel);

ˆ its phase at frequency ω is ∠G(jω).

Remark 10.10. These definitions are used even if G(s) is not stable. If the system is stable:

ˆ the gain is the ratio between the amplitude of the steady-state output and the amplitude of the input;

ˆ the phase is the difference in phase between the steady-state output sinusoid and the input sinusoid.

Example 10.13. Figure 10.12 shows the output of G(s) = 300(s+1)/((s+10)(s+100)) for a sinusoidal input of frequency 1 rad/s, found as follows:

>> s = tf('s');
>> G = 300*(s+1)/((s+10)*(s+100));
>> t = 0 : 0.001 : 30;
>> figure, plot(t,sin(t), t,lsim(G,sin(t),t))
>> xlabel('time [s]'), ylabel('output'), grid
The amplitude of the input is 1, by construction; the amplitude of the output is 0.4219. So the gain at 1 rad/s is 0.4219/1 = 0.4219 in absolute value, or 20 log10 0.4219 = −7.50 dB. This maximum value takes place at 26 s, while the corresponding maximum of the input takes place later, at 4 × 2π + π/2 = 26.7 s. As the period is 2π = 6.28 s, the phase is (26.7 − 26)/6.28 × 360° = 40°.
Figure 10.12 also shows the output of G(s) when the frequency is 200 rad/s:
>> t = 0 : 0.0001 : 0.2;
>> figure, plot(t,sin(200*t), t,lsim(G,sin(200*t),t))
>> xlabel('time [s]'), ylabel('output'), grid
In that case, the amplitude of the input is still 1 and the amplitude of the output is 1.313. So the gain at 200 rad/s is 1.313/1 = 1.313 in absolute value, or 20 log10 1.313 = 2.37 dB. This maximum value takes place at 0.1703 s, while the corresponding maximum of the input takes place earlier, at 5 × 2π/200 + (π/2)/200 = 0.1649 s. As the period is 2π/200 = 0.0314 s, the phase is (0.1649 − 0.1703)/0.0314 × 360° = −62°.
In both cases, it is visible that the first oscillations are not sinusoidal, because of both their shape and their varying amplitudes. In other words, the transient has not yet disappeared by then.
In the example above, the amplitude of the output was larger than that of
the input in one case, and smaller in the other. Also in one case the extremes
of the output sinusoid took place earlier than those of the input sinusoid, while
in the other case it was the other way round.
Definition 10.8. Given

ˆ a stable system G(s),

ˆ with sinusoidal input of frequency ω and amplitude Au,

ˆ with steady-state sinusoidal output also of frequency ω and amplitude Ay = Au |G(jω)|,

then:

ˆ If the amplitude of the output is larger than the amplitude of the input, Ay > Au, the system is amplifying its input:

Ay > Au ⇒ |G(jω)| = Ay/Au > 1 ⇒ 20 log10 |G(jω)| > 0 dB    (10.69)

That is to say:

 – the gain in absolute value is larger than 1;
 – the gain in decibel is larger than 0 dB.

ˆ If the amplitude of the output is smaller than the amplitude of the input, Ay < Au, the system is attenuating its input:

Ay < Au ⇒ |G(jω)| = Ay/Au < 1 ⇒ 20 log10 |G(jω)| < 0 dB    (10.70)

That is to say:

 – the gain in absolute value is smaller than 1;
 – the gain in decibel is smaller than 0 dB.

ˆ If the amplitude of the output and the amplitude of the input are the same, Ay = Au, the system is neither amplifying nor attenuating its input:

Ay = Au ⇒ |G(jω)| = Ay/Au = 1 ⇒ 20 log10 |G(jω)| = 0 dB    (10.71)

That is to say:

 – the gain in absolute value is 1;
 – the gain in decibel is 0 dB.
Figure 10.12: Response of G(s) = 300(s+1)/((s+10)(s+100)) to two sinusoids with different periods.
Table 10.1: Gain values; Au is the amplitude of the input sinusoid and Ay is the amplitude of the steady-state output sinusoid

                                        Gain in absolute value   Gain in decibel               Amplitudes
Minimum value                           |G(jω)| = 0              20 log10 |G(jω)| = −∞ dB      Ay = 0
Attenuation                             0 < |G(jω)| < 1          20 log10 |G(jω)| < 0 dB       Ay < Au
Input and output with same amplitude    |G(jω)| = 1              20 log10 |G(jω)| = 0 dB       Ay = Au
Amplification                           |G(jω)| > 1              20 log10 |G(jω)| > 0 dB       Ay > Au

Furthermore:

ˆ If the extremes of the output take place earlier than the corresponding extremes of the input, the output leads in relation to the input; this means that

∠G(jω) > 0    (10.72)

ˆ If the extremes of the output take place later than the corresponding extremes of the input, the output lags in relation to the input; this means that

∠G(jω) < 0    (10.73)

ˆ If the extremes of the output and the corresponding extremes of the input take place at the same time, the output and the input are in phase; this means that

∠G(jω) = 0    (10.74)

ˆ If the maxima of the output and the minima of the input take place at the same time, and vice versa, the output and the input are in phase opposition; this means that

∠G(jω) = ±180° = ±π rad    (10.75)

Remark 10.11. Notice that, since sinusoids are periodic, the phase is defined up to 360° shifts: a 90° phase is indistinguishable from a −270° phase, or for that matter from a 3690° phase or any 90° + k360°, k ∈ Z phase. While each of these values can in principle be arbitrarily chosen, it is usual to make the phase vary continuously (as much as possible) with frequency, starting from values for low frequencies determined as we will see below in Section 11.4.

Definition 10.9. Frequencies ωgc at which the frequency response of a plant G(s) verifies

|G(jω)| = 1 ⇔ 20 log10 |G(jω)| = 0 dB    (10.76)

are called gain crossover frequencies.

Definition 10.10. Frequencies at which input and output are in phase opposition are called phase crossover frequencies.

Gain values can be summed up as shown in Table 10.1.
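Crossover frequencies can also be found numerically in Matlab; one possibility, a sketch assuming the Control System Toolbox, is allmargin, whose fields PMFrequency and GMFrequency list the gain and phase crossover frequencies, respectively (they are empty when no crossover exists):
>> s = tf('s');
>> G = 300*(s+1)/((s+10)*(s+100));
>> m = allmargin(G);
>> m.PMFrequency    % gain crossover frequencies, where |G(jw)| = 1
>> m.GMFrequency    % phase crossover frequencies, where the phase is -180 degrees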
Example 10.14. Consider the responses to sinusoidal inputs of G(s) = 1/(s^2 + 0.5s + 1) in Figure 10.13.
ˆ For ω = 0.5 rad/s:

 – The amplitude of the output is larger than that of the input, so we must have

|G(j0.5)| > 1 ⇔ 20 log10 |G(j0.5)| > 0 dB    (10.77)

 – In fact, the gain is

|G(j0.5)| = \left|\frac{1}{(j0.5)^2 + 0.5\, j0.5 + 1}\right| = \left|\frac{1}{0.75 + j0.25}\right| = \frac{1}{\sqrt{0.75^2 + 0.25^2}} = 1.26
⇒ 20 log10 |G(j0.5)| = 20 log10 1.26 = 2 dB    (10.78)

 – The output is delayed in relation to the input, so we must have ∠G(j0.5) < 0.

 – In fact, the phase is

∠G(j0.5) = ∠\frac{1}{0.75 + j0.25} = ∠1 − ∠(0.75 + j0.25) = 0° − \arctan\frac{0.25}{0.75} = −18°    (10.79)

ˆ For ω = 1 rad/s:

 – The amplitude of the output is even larger now, so

|G(j)| > |G(j0.5)| = 1.26 ⇔ 20 log10 |G(j)| > 20 log10 |G(j0.5)| = 2 dB    (10.80)

 – In fact, the gain is

|G(j)| = \left|\frac{1}{j^2 + 0.5j + 1}\right| = \left|\frac{1}{j0.5}\right| = \frac{1}{0.5} = 2
⇒ 20 log10 |G(j)| = 20 log10 2 = 6 dB    (10.81)

 – The output is delayed in relation to the input. Furthermore, the output crosses zero as the input is already at a peak or at a trough. So the phase is negative, and equal to −90°.

 – In fact,

∠G(j) = ∠\frac{1}{j0.5} = ∠1 − ∠(j0.5) = 0° − 90° = −90°    (10.82)

ˆ For ω = 2 rad/s:

 – The amplitude of the input is larger than that of the output, so we must have

|G(j2)| < 1 ⇔ 20 log10 |G(j2)| < 0 dB    (10.83)

 – In fact, the gain is

|G(j2)| = \left|\frac{1}{(2j)^2 + 0.5\, j2 + 1}\right| = \left|\frac{1}{−3 + j}\right| = \frac{1}{\sqrt{9 + 1}} = 0.316
⇒ 20 log10 |G(j2)| = 20 log10 10^{−1/2} = −10 dB    (10.84)

 – The output is delayed in relation to the input. Furthermore, input and output are almost in phase opposition, but not yet. So we must have −180° < ∠G(j2) < 0°, close to −180°.

 – In fact, the phase is

∠G(j2) = ∠\frac{1}{−3 + j} = ∠1 − ∠(−3 + j) = 0° − \left(180° − \arctan\frac{1}{3}\right) = −162°    (10.85)

The Bode diagram, or Bode plot, is a graphical representation of the frequency response of a system, as a function of frequency. This diagram comprises two plots:

ˆ a top plot, showing the gain in dB (y axis) as a function of frequency in a semi-logarithmic scale (x axis);

ˆ a bottom plot, showing the phase in degrees (y axis) as a function of frequency in a semi-logarithmic scale (x axis).

Frequency is usually given in rad/s, but sometimes in Hz.

Definition 10.11. A frequency variation corresponding to a 10-fold increase or decrease is called a decade. In a Bode diagram, since the frequency is shown in a logarithmic scale, decades are equally spaced.
Figure 10.13: Responses of G(s) = 1/(s^2 + 0.5s + 1) (blue) to input sinusoids (red) with 0.5 rad/s (top), 1 rad/s (centre) and 2 rad/s (bottom).
Figure 10.14: Bode diagram of G(s) = 300(s+1)/((s+10)(s+100)).

In the following sections we will learn how to plot by hand the Bode diagram of any plant (or at least a reasonable approximation thereof); meanwhile, the following Matlab commands can be used instead:

ˆ bode plots the Bode diagram of a system;

ˆ freqresp calculates the frequency response of a system.

Example 10.15. The Bode diagram in Figure 10.14 of G(s) = 300(s+1)/((s+10)(s+100)) from Example 10.13 is found as follows:
>> s = tf('s');
>> G = 300*(s+1)/((s+10)*(s+100));
>> figure, bode(G), grid
The gains and phases at ω = 1 rad/s and ω = 200 rad/s found in Example 10.13 can be observed in the diagram.
This way we first find the frequency response and then use it to plot the Bode diagram:
>> [Gjw, w] = freqresp(G); % Gjw returned as a 3-dimensional tensor...
>> Gjw = squeeze(Gjw); % ...must now be squeezed to a vector
>> figure, subplot(2,1,1), semilogx(w, 20*log10(abs(Gjw)))
>> grid, xlabel('frequency [rad/s]'), ylabel('gain [dB]'), title('Bode diagram')
>> subplot(2,1,2), semilogx(w, rad2deg(unwrap(angle(Gjw))))
>> grid, ylabel('phase [degrees]') % unwrap avoids jumps of 360 degrees
To find the gains and phases and confirm those found in Example 10.13:
>> Gjw = freqresp(G, [1 200])
Gjw(:,:,1) =
0.3294 + 0.2640i
Gjw(:,:,2) =
0.6525 - 1.1704i
>> gains = 20*log10(abs(Gjw))
gains(:,:,1) =
-7.4909
gains(:,:,2) =
2.5420
>> phases = rad2deg(unwrap(angle(Gjw)))
phases(:,:,1) =
38.7165
phases(:,:,2) =
-60.8590
Here is another way of finding the same values:

>> w = [1 200];
>> Gjw = 300*(1i*w+1)./((1i*w+10).*(1i*w+100));
Gjw =
0.3271 + 0.2676i 0.6186 - 1.2212i
>> gains = 20*log10(abs(Gjw))
gains =
-7.4909 2.5420
>> phases = rad2deg(unwrap(angle(Gjw)))
phases =
38.7165 -60.8590
Notice the small differences due to numerical errors.

Example 10.16. The Bode diagram in Figure 10.15 of G(s) = 1/(s^2 + 0.5s + 1) from Example 10.14 shows the gains and phases found in that example, which can also be found as follows:
>> G = tf(1,[1 .5 1])

G =

1
---------------
s^2 + 0.5 s + 1

Continuous-time transfer function.

>> figure,bode(G),grid on
>> Gjw = squeeze(freqresp(G, [.5 1 2]))
Gjw =
1.2000 - 0.4000i
0.0000 - 2.0000i
-0.3000 - 0.1000i
>> gains = 20*log10(abs(Gjw))
gains =
2.0412
6.0206
-10.0000
>> phases = rad2deg(unwrap(angle(Gjw)))
phases =
-18.4349
-90.0000
-161.5651

Example 10.17. From the Bode diagram in Figure 10.16, even without knowing what transfer function it belongs to, we can conclude the following:

ˆ At ω = 0.1 rad/s, the gain is 20 dB (i.e. 10^{20/20} = 10 in absolute value) and the phase is 0° = 0 rad. So, if the input is

u(t) = 5 sin(0.1t + π/6)    (10.86)

the steady-state output will be

y(t) = 5 × 10 sin(0.1t + π/6) = 50 sin(0.1t + π/6)    (10.87)

ˆ At ω = 10 rad/s, the gain is 17 dB (i.e. 10^{17/20} = 7.1 in absolute value) and the phase is −45° = −π/4 rad. So, if the input is

u(t) = 5 sin(10t + π/6)    (10.88)

the steady-state output will be

y(t) = 5 × 7.1 sin(10t + π/6 − π/4) = 35.5 sin(10t − π/12)    (10.89)
Figure 10.15: Bode diagram of 1/(s^2 + 0.5s + 1).

ˆ At ω = 100 rad/s, the gain is 0 dB (i.e. 10^{0/20} = 1 in absolute value) and the phase is −85° = −1.466 rad. So, if the input is

u(t) = 5 sin(100t + π/6)    (10.90)

the steady-state output will be

y(t) = 5 × 1 sin(100t + 0.524 − 1.466) = 5 sin(100t − 0.942)    (10.91)

ˆ At ω = 1000 rad/s, the gain is −20 dB (i.e. 10^{−20/20} = 0.1 in absolute value) and the phase is −90° = −π/2 rad. So, if the input is

u(t) = 5 sin(1000t + π/6)    (10.92)

the steady-state output will be

y(t) = 5 × 0.1 sin(1000t + π/6 − π/2) = 0.5 sin(1000t − π/3)    (10.93)

ˆ The system is linear. So, if the input is

u(t) = 0.5 sin(0.1t + π/6) + 25 sin(1000t + π/6)    (10.94)

the steady-state output will be

y(t) = 0.5 × 10 sin(0.1t + π/6) + 25 × 0.1 sin(1000t + π/6 − π/2) = 5 sin(0.1t + π/6) + 2.5 sin(1000t − π/3)    (10.95)

Notice how the frequency with the largest amplitude in the input now has the smallest.
Glossary
And he said: Behold, it is one people, and one tongue is to al: and
they haue begunne to doe this, neyther wil they leaue of from their
determinations, til they accomplish them indede. Come ye therfore,
let vs goe downe, and there confound their tongue, that none may
heare is neighbours voice.

Moses (attrib.; 13th c. BC?), Genesis, xi 67, Douay-Rheims version (1609)


Figure 10.16: Bode diagram of Example 10.17.

amplication amplicação
attenuation atenuação
bounded signal sinal limitado
decade década
frequency response resposta em frequência
gain ganho
gain crossover frequency frequência de cruzamento de ganho
impulse impulso
marginally stable marginalmente estável
phase fase
phase crossover frequency frequência de cruzamento de fase
phase lag atraso de fase
phase lead avanço de fase
phase opposition oposição de fase
ramp rampa
square wave onda quadrada
static gain ganho estacionário
steady-state estado estacionário
steady-state response resposta estacionária
stable estável
step degrau, escalão
transient response resposta transiente
triangle wave onda triangular
unit ramp rampa unitária
unit step degrau unitário
unstable instável

Exercises
1. For each of the following pairs of a transfer function and an input:

ˆ find the Laplace transform of the input;

ˆ find the Laplace transform of the output;

ˆ find the value of the output for t ≫ 1 without using the inverse Laplace transform;

ˆ find the output as a function of time;

ˆ separate that function of time into a transient and a steady state;

ˆ confirm the value of the output for t ≫ 1 found previously.

(a) G(s) = 10/(s^2 + 21s + 20) and u(t) = 0.4, t > 0

(b) G(s) = 5/(s + 0.1) and u(t) = 2t, t > 0

(c) G(s) = s/(s^2 + s + 1) and u(t) = δ(t)

(d) G(s) = s/(s^2 + s + 1) and u(t) = 0.4, t > 0

(e) G(s) = 7/s and u(t) = 0.4, t > 0
2. From the poles of the transfer functions of Exercise 1 of Chapter 9, explain
which of them are stable, unstable, or marginally stable.
3. Figure 10.17 shows the Bode diagrams of some transfer functions. For each
of them, read in the Bode diagram the values from which you can calculate
the transfer function's steady state response to the following inputs:
ˆ u(t) = sin(2t)
ˆ u(t) = sin(2t + π/2)
ˆ u(t) = sin(1000t)
ˆ u(t) = 10 sin(1000t)
ˆ u(t) = (1/3) sin(0.1t − π/4) + sin(2t + π/2) + 10 sin(1000t)

4. For each of the following transfer functions:

ˆ find the corresponding Fourier transform;

ˆ find the gain (both in absolute value and in decibel) and the phase (in radians or degrees, as you prefer) at the indicated frequencies.

(a) G(s) = 5/(s + 0.1) and ω = 0.01, 0.1, 1 rad/s

(b) G(s) = s/(s^2 + s + 1) and ω = 0.1, 1, 10 rad/s

(c) G(s) = 7/s and ω = 1, 10, 100 rad/s
5. In naval and ocean engineering it is usual to call Response Amplitude Operator (RAO) what we called gain. It is often represented in absolute value in a linear plot as a function of frequency. Figure 10.18 shows the RAO of four different heaving buoys. Suppose that each of them is subject to waves with an amplitude of 2 m and a frequency of 2π rad/s. What will be the amplitude of the oscillation of each buoy?
6. Consider the transfer function G(s) = 10/(10s + 1).
(a) When the input is a unit step, what will the steady state response
be?
(b) When the input is a step with amplitude 3, what will the steady state
response be?
(c) Without computing an expression for the output, give a rough esti-
mate of how long it takes for the output to reach 20, when the input
is a step with amplitude 3.
(d) Without computing an expression for the output, give a rough esti-
mate of the 2% settling time, when the input is a step with amplitude
3.
(e) Calculate the output as a function of time, using an inverse Laplace transform, and find the exact values of the estimates from the last two questions.
(f) Suppose that the input is now a unit step again. What will the
new value of the 2% settling time be? Hint: is the system linear or
non-linear?
7. Show that, in (10.21), if 0 ∉ [a, b] and function g(t) is bounded, the integrand is zero everywhere, and the integral is zero.
Figure 10.17: Bode diagrams of Exercise 3.


Figure 10.18: RAO of four heaving buoys of Exercise 5 (source: http://marine-eng.ir/article-1-80-en.pdf).
Chapter 11

Finding time and frequency responses
A rocket-driven spaceship can, obviously, only accelerate along its
major axis  that is, `forwards'. Any deviation from a straight
course demands a physical turning of the ship, so that the motors
can blast in another direction. Everyone knows that this is done by
internal gyros or tangential steering jets: but very few people know
just how long this simple manoeuvre takes. The average cruiser,
fully fuelled, has a mass of two or three thousand tons, which does
not make for rapid footwork. But things are even worse than this,
for it is not the mass, but the moment of inertia that matters here
 and since a cruiser is a long, thin object, its moment of inertia
is slightly colossal. The sad fact remains (though it is seldom men-
tioned by astronautical engineers) that it takes a good ten minutes to
rotate a spaceship through 180 degrees, with gyros of any reasonable
size. Control jets are not much quicker, and in any case their use is
restricted because the rotation they produce is permanent and they
are liable to leave the ship spinning like a slow-motion pin-wheel, to
the annoyance of all inside.
In the ordinary way, these disadvantages are not very grave. One
has millions of kilometres and hundreds of hours in which to deal
with such minor matters as a change in the ship's orientation. It is
definitely against the rules to move in ten-kilometre-radius circles,
and the commander of the Doradus felt distinctly aggrieved.

Arthur C. Clarke (1917 – †2008), Hide-and-seek, Astounding Science Fiction, September 1949

In this chapter we systematically study the time and frequency responses of different systems, beginning with the simplest cases.

11.1 Time and frequency responses of a pole at the origin
In this section we study the behaviour of transfer functions of the form

G(s) = \frac{b}{s}, \quad b > 0    (11.1)

which have one pole at the origin, s = 0.
As to time responses, it is clear that, for any input u(t), the output is

y(t) = L^{-1}\left[\frac{b}{s} U(s)\right] = b \int_0^t u(τ)\, dτ    (11.2)

where we have used (2.47). In particular, the unit step response of (11.1) is

y(t) = L^{-1}\left[\frac{b}{s^2}\right] = b t    (11.3)
Figure 11.1: Response of b/s to a step of amplitude K.

Figure 11.2: Bode diagram of b/s.

and in the more general case of a step with amplitude K the response is

y(t) = b K t    (11.4)

This response will go up if K > 0 and go down if K < 0, as seen in Figure 11.1.
As we know, (11.1) is marginally stable, since:

ˆ its response to a step, which is a bounded input, is not bounded;

ˆ its impulse response, y(t) = L^{-1}[b/s] = b H(t), is bounded.
The frequency response of (11.1) is

G(jω) = \frac{b}{jω}    (11.5)

|G(jω)| = \left|\frac{b}{jω}\right| = \frac{b}{ω}    (11.6)

20 \log_{10} |G(jω)| = 20 \log_{10} b − 20 \log_{10} ω    (11.7)

∠G(jω) = \underbrace{∠b}_{0°} − \underbrace{∠(jω)}_{90°} = −90°    (11.8)

As to the gain (11.6)–(11.7), notice that:

ˆ it is shown as a straight line in a Bode diagram, since it has a linear variation with log10 ω;

ˆ the slope of that straight line is always −20 dB per decade;

ˆ there is one gain crossover frequency, which is b rad/s;

ˆ when compared with the case b = 1, the gain is shifted up if b > 1 or down if 0 < b < 1, by 20 log10 b in either case.

11.2 Time and frequency responses of a first-order system without zeros

In this section we study the behaviour of transfer functions of the form

G(s) = \frac{b}{s+a}, \quad a ≠ 0, \; b > 0    (11.9)

which have one pole, s = −a ≠ 0.
The unit step response of (11.9) is, as can be seen in Table 2.1,

y(t) = \frac{b}{a}\left(1 − e^{−at}\right)    (11.10)

and, in the more general case of a step with amplitude K, the step response is

y(t) = \frac{bK}{a}\left(1 − e^{−at}\right)    (11.11)

As we know, (11.9) is

ˆ stable if a > 0 (pole at −a ∈ R−), in which case the exponential in (11.10)–(11.11) vanishes as t increases;

ˆ unstable if a < 0 (pole at −a ∈ R+), in which case the exponential in (11.10)–(11.11) keeps increasing as t increases.
Figure 11.3 shows the general shape of the step response. Notice that, if the system is stable:

ˆ the steady-state response is constant and given by y_{ss} = \frac{bK}{a};

ˆ the response will go up if K > 0 and down if K < 0;

ˆ the response is monotonous, and thus the output is never larger than the steady-state value;

ˆ the response changes with time as follows:

y\left(\frac{0.7}{a}\right) = \frac{bK}{a}\left(1 − e^{−0.7}\right) = 0.50\, y_{ss}    (11.12)
y\left(\frac{1}{a}\right) = \frac{bK}{a}\left(1 − e^{−1}\right) = 0.63\, y_{ss}    (11.13)
y\left(\frac{2}{a}\right) = \frac{bK}{a}\left(1 − e^{−2}\right) = 0.86\, y_{ss}    (11.14)
y\left(\frac{2.3}{a}\right) = \frac{bK}{a}\left(1 − e^{−2.3}\right) = 0.90\, y_{ss}    (11.15)
y\left(\frac{3}{a}\right) = \frac{bK}{a}\left(1 − e^{−3}\right) = 0.95\, y_{ss}    (11.16)
y\left(\frac{4}{a}\right) = \frac{bK}{a}\left(1 − e^{−4}\right) = 0.98\, y_{ss}    (11.17)
y\left(\frac{4.6}{a}\right) = \frac{bK}{a}\left(1 − e^{−4.6}\right) = 0.99\, y_{ss}    (11.18)

ˆ 1/a is called the time constant;

ˆ the slope of the response at t = 0 is given by

y'(0) = \frac{bK}{a}\, a\, e^{−a\cdot 0} = bK    (11.19)
Figure 11.3: Response of b/(s+a) to a step with amplitude K.

Definition 11.1. The x% settling time t_{s,x%} is the minimum value of time for which

∀t > t_{s,x%}, \quad y_{ss}\left(1 − \frac{x}{100}\right) ≤ y(t) ≤ y_{ss}\left(1 + \frac{x}{100}\right)    (11.20)

The most usual are the 5% settling time and the 2% settling time, though other values, such as 10%, 15%, or 1%, are sometimes found.

Since the step response (11.11) is monotonous, (11.12)–(11.18) show that

ˆ its 10% settling time is t_{s,10%} = 2.3/a;

ˆ its 5% settling time is t_{s,5%} = 3/a;

ˆ its 2% settling time is t_{s,2%} = 4/a;

ˆ its 1% settling time is t_{s,1%} = 4.6/a.
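These values can be checked in Matlab with stepinfo, which measures the settling time on the computed step response and whose threshold is configurable. A minimal sketch, assuming the Control System Toolbox; a = 2 and b = 6 are arbitrary illustration values:
>> a = 2; b = 6;
>> G = tf(b, [1 a]);                                   % G(s) = b/(s+a), time constant 1/a = 0.5 s
>> S2 = stepinfo(G, 'SettlingTimeThreshold', 0.02);    % 2% settling time
>> S2.SettlingTime                                     % approximately 4/a = 2 s
>> S5 = stepinfo(G, 'SettlingTimeThreshold', 0.05);    % 5% settling time
>> S5.SettlingTime                                     % approximately 3/a = 1.5 s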

The frequency response of (11.9) is

G(jω) = \frac{b}{jω + a} = \frac{b(a − jω)}{a^2 + ω^2}    (11.21)

|G(jω)| = \left|\frac{b}{jω + a}\right| = \frac{b}{\sqrt{a^2 + ω^2}}    (11.22)

20 \log_{10} |G(jω)| = 20 \log_{10} b − 10 \log_{10}\left(a^2 + ω^2\right) \; [dB]    (11.23)

∠G(jω) = 0 − ∠(jω + a) = −\arctan\frac{ω}{a}    (11.24)

and thus, if a > 0 (stable system):

ˆ for low frequencies,

ω ≪ a ⇒ G(jω) ≈ \frac{b}{a}    (11.25)
ω ≪ a ⇒ |G(jω)| ≈ \left|\frac{b}{a}\right| = \frac{b}{a}    (11.26)
ω ≪ a ⇒ 20 \log_{10} |G(jω)| ≈ 20 \log_{10} \frac{b}{a} \; [dB]    (11.27)
ω ≪ a ⇒ ∠G(jω) ≈ ∠\frac{b}{a} = 0°    (11.28)
ˆ for frequency ω = a,

G(ja) = \frac{b}{a(j + 1)}    (11.29)
|G(ja)| = \left|\frac{b}{a(j+1)}\right| = \frac{b}{a}\,\frac{1}{\sqrt{2}}    (11.30)
20 \log_{10} |G(ja)| = 20 \log_{10} \frac{b}{a} − 10 \log_{10} 2 ≈ 20 \log_{10} \frac{b}{a} − 3 \; [dB]    (11.31)
∠G(ja) = 0 − ∠(j + 1) = −45°    (11.32)

ˆ for high frequencies,

ω ≫ a ⇒ G(jω) ≈ \frac{b}{jω}    (11.33)
ω ≫ a ⇒ |G(jω)| ≈ \left|\frac{b}{jω}\right| = \frac{b}{ω}    (11.34)
ω ≫ a ⇒ 20 \log_{10} |G(jω)| ≈ 20 \log_{10} b − 20 \log_{10} ω \; [dB]    (11.35)
ω ≫ a ⇒ ∠G(jω) ≈ ∠\frac{b}{jω} = −90°    (11.36)
In other words:

ˆ for low frequencies, b/(s+a) is similar to the constant b/a, with a zero phase and a constant gain;

ˆ for high frequencies, b/(s+a) is similar to b/s, with a −90° phase and a linear gain with a slope of −20 dB per decade;

ˆ the phase goes down from 0° for low frequencies to −90° for high frequencies.

If a < 0, the gain is the same; only the phase changes:

ˆ for low frequencies,

ω ≪ |a| ⇒ ∠G(jω) ≈ ∠\frac{b}{a} = −180°    (11.37)

ˆ for frequency ω = |a|,

G(j|a|) = \frac{b}{a + j|a|} = \frac{b}{|a|(−1 + j)}    (11.38)
ω = |a| ⇒ ∠G(jω) = 0 − ∠(−1 + j) = −135°    (11.39)

ˆ for high frequencies,

ω ≫ |a| ⇒ G(jω) ≈ \frac{b}{jω} ⇒ ∠G(jω) ≈ ∠\frac{b}{jω} = −90°    (11.40)

So in this case the phase goes up from −180° for low frequencies to −90° for high frequencies. Remember that, since the system is unstable in this case, the output will not be a sinusoid in steady state (it will have diverged exponentially to infinity).
Figure 11.4 shows the Bode diagram of (11.9).

Remark 11.1. The Bode diagram of (11.9) in Figure 11.4 is often approximated by its asymptotes, shown in Figure 11.5.

Remark 11.2. The steady-state response of (11.9) to a unit step, b/a, can be found from the gain at low frequencies (11.25). In fact, a low frequency corresponds to a large time span.
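The asymptotic approximation of Remark 11.1 is easy to superpose on the exact gain plot; a minimal sketch (a = 2 and b = 6 are again arbitrary illustration values):
>> a = 2; b = 6;
>> w = logspace(-2, 3, 500);                        % frequency vector, rad/s
>> gain = 20*log10(abs(squeeze(freqresp(tf(b,[1 a]), w))));
>> asymp = 20*log10(b/a) - 20*log10(max(w/a, 1));   % constant up to w = a, -20 dB/decade afterwards
>> figure, semilogx(w, gain, w, asymp, '--'), grid
>> xlabel('frequency [rad/s]'), ylabel('gain [dB]'), legend({'exact','asymptotes'})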
Figure 11.4: Bode diagram of b/(s+a).

Figure 11.5: Asymptotes of the Bode diagram of b/(s+a) in Figure 11.4.

11.3 Time and frequency responses of a second-order system without zeros

In this section we study the behaviour of transfer functions of the form

G(s) = \frac{b}{s^2 + a_1 s + a_2}, \quad b > 0    (11.41)

which have two poles. It is usual to rewrite (11.41) as

G(s) = \frac{b}{s^2 + 2ξω_n s + ω_n^2} = \frac{b}{\left(s + ξω_n + ω_n\sqrt{ξ^2 − 1}\right)\left(s + ξω_n − ω_n\sqrt{ξ^2 − 1}\right)}, \quad b > 0    (11.42)

For reasons we will see later, ξ is called the damping coefficient and ω_n the natural frequency. The poles of (11.42) are

p_1 = −ξω_n + ω_n\sqrt{ξ^2 − 1}    (11.43)
p_2 = −ξω_n − ω_n\sqrt{ξ^2 − 1}    (11.44)

Notice that:

ˆ (11.41) can always be put in the form (11.42), save

 – when a_2 < 0, in which case the transfer function is unstable (as can be easily seen in a way we will learn below in Section 11.5),

 – or when a_2 = 0 ≠ a_1, in which case there is one pole at the origin, and the transfer function is marginally stable;

ˆ if either ξ or ω_n is negative, the real part of the poles is positive and (11.42) is unstable too;

ˆ if both ξ and ω_n are negative, (11.42) is just as if both were positive, so that case need not be considered;

ˆ if ω_n = 0 then G(s) = b/s^2, which is unstable;

ˆ if ξ ≥ 1 then the poles are real, in which case we can write

G(s) = \frac{b}{(s + p_1)(s + p_2)}, \quad b > 0, \; p_1, p_2 ∈ R    (11.45)

ˆ if, in particular, ξ = 1 then the two real poles are both equal to −ω_n:

G(s) = \frac{b}{(s + ω_n)^2}, \quad b, ω_n > 0    (11.46)

ˆ if 0 ≤ ξ < 1 the system is not unstable and the complex conjugate poles are given by

p_1 = −ξω_n + jω_n\sqrt{1 − ξ^2}    (11.47)
p_2 = −ξω_n − jω_n\sqrt{1 − ξ^2}    (11.48)

Five cases will be treated separately:

ˆ ξ > 1, i.e. the two poles are real and different, as in (11.45): in this case the system is called overdamped;

ˆ ξ = 1, i.e. there is a double pole, as in (11.46): in this case the system is called critically damped;

ˆ 0 < ξ < 1, i.e. the two poles are complex conjugate and stable: in this case the system is called underdamped;

ˆ ξ = 0, i.e. the two poles are complex conjugate and marginally stable: in this case the system is said to have no damping;

ˆ G(s) = \frac{b}{s^2 + a_1 s} = \frac{b}{s(s + a_1)}, b > 0, i.e. the transfer function is marginally stable, with one pole at the origin.

See Figure 11.6.

Figure 11.6: Location of the poles in the complex plane of second order systems. Notice that the bottom left undamped system is marginally stable, as well as the bottom centre one with a pole at the origin.
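The influence of ξ on the step response (anticipating Figure 11.7) can be visualised with a few lines of Matlab; a minimal sketch, with ω_n = 1 rad/s and b = 1 as arbitrary illustration values:
>> wn = 1; b = 1; t = 0 : 0.01 : 15;
>> figure, hold on
>> for xi = [0.2 0.5 0.7 1 2], step(tf(b, [1 2*xi*wn wn^2]), t), end   % one curve per damping coefficient
>> grid, legend({'\xi = 0.2','\xi = 0.5','\xi = 0.7','\xi = 1','\xi = 2'})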


If both poles of (11.41) are real, different from zero and different from each other, then its response to a step of amplitude K is, as can be seen in Table 2.1,

y(t) = \frac{bK}{p_1 p_2}\left(1 + \frac{1}{p_1 − p_2}\left(p_2 e^{−p_1 t} − p_1 e^{−p_2 t}\right)\right)    (11.49)

As expected, should either p_1 or p_2 be negative, (11.49) would diverge to infinity. Assuming that both p_1 and p_2 are positive, and writing p_1 = ξω_n − ω_n\sqrt{ξ^2−1} and p_2 = ξω_n + ω_n\sqrt{ξ^2−1} (the symmetric values of the poles (11.43)–(11.44)), we can rewrite (11.49) as follows:

p_1 p_2 = \left(ξω_n − ω_n\sqrt{ξ^2−1}\right)\left(ξω_n + ω_n\sqrt{ξ^2−1}\right) = ξ^2ω_n^2 − ω_n^2(ξ^2 − 1) = ω_n^2    (11.50)

p_2 − p_1 = 2ω_n\sqrt{ξ^2−1}    (11.51)

y(t) = \frac{bK}{ω_n^2}\left[1 − \frac{1}{2\sqrt{ξ^2−1}}\left(\left(ξ+\sqrt{ξ^2−1}\right) e^{−\left(ξ−\sqrt{ξ^2−1}\right)ω_n t} − \left(ξ−\sqrt{ξ^2−1}\right) e^{−\left(ξ+\sqrt{ξ^2−1}\right)ω_n t}\right)\right]    (11.52)

Then:

ˆ the steady-state response is constant and given by y_{ss} = \frac{bK}{p_1 p_2} = \frac{bK}{ω_n^2};

ˆ the response begins at 0 with a horizontal slope, since

y'(t) = \frac{bK}{p_1 p_2}\,\frac{1}{p_1 − p_2}\left(−p_1 p_2 e^{−p_1 t} + p_1 p_2 e^{−p_2 t}\right) = \frac{bK}{p_1 − p_2}\left(e^{−p_2 t} − e^{−p_1 t}\right)    (11.53)

and thus y'(0) = 0;

ˆ the response is monotonous, since y'(t) = \frac{bK}{p_1 − p_2}\left(e^{−p_2 t} − e^{−p_1 t}\right) = 0 admits only one solution, which is t = 0;

ˆ there is an inflection point, which can be found as follows:

y''(t) = \frac{bK}{p_1 − p_2}\left(p_1 e^{−p_1 t} − p_2 e^{−p_2 t}\right) = 0 ⇒ \log p_1 − p_1 t = \log p_2 − p_2 t ⇔ t = \frac{\log p_1 − \log p_2}{p_1 − p_2}    (11.54)

(11.52) shows that y(t) depends on t only in the arguments of the two exponentials, where t appears multiplied by ω_n. This product ω_n t has no dimensions. This step response can thus be plotted as a function of ω_n t, as in Figure 11.7, and then it varies only with ξ, as shown.
The frequency response in this case can be easily found thanks to the following result:

Theorem 11.1. The gain in dB and the phase of the product of two transfer functions are the sum of their separate gains and phases:

20 \log_{10} |G_1(s) G_2(s)| = 20 \log_{10} |G_1(s)| + 20 \log_{10} |G_2(s)|    (11.55)
∠[G_1(s) G_2(s)] = ∠[G_1(s)] + ∠[G_2(s)]    (11.56)

So we can just add the frequency responses of two first-order systems and obtain the responses in Figure 11.8. Again, it is usual to plot the corresponding asymptotes, shown in Figure 11.9, instead. Notice that:
ˆ For low frequencies, the gain is 20 \log_{10} \frac{b}{p_1 p_2} = 20 \log_{10} \frac{b}{ω_n^2} dB, and the phase is 0°. Indeed, when ω ≈ 0, the system is similar to a constant:

G(jω) = \frac{b}{−ω^2 + 2ξω_n jω + ω_n^2} ≈ \frac{b}{ω_n^2}    (11.57)

ˆ For high frequencies, the gain is linear with a slope of −40 dB per decade, and the phase is −180°. Indeed, when ω ≫ p_1, p_2, the system is similar to a double integrator:

G(jω) = \frac{b}{−ω^2 + 2ξω_n jω + ω_n^2} ≈ −\frac{b}{ω^2}    (11.58)

ˆ When the two poles do not have the same order of magnitude, it is possible to notice their effect on gain and phase separately. But, if the values of p_1 and p_2 are close to each other, their effects on the frequency response merge.
If both poles of (11.41) are equal, (11.50) shows that p_1 = p_2 = ω_n, and the system's response to a step of amplitude K is, as can be seen in Table 2.1,

y(t) = \frac{bK}{ω_n^2}\left(1 − e^{−ω_n t} − ω_n t\, e^{−ω_n t}\right)    (11.59)

As expected, should ω_n be negative, (11.59) would diverge to infinity. Assuming that ω_n is positive, then:

ˆ the steady-state response is constant and given by y_{ss} = \frac{bK}{ω_n^2};

ˆ the response begins at 0 with a horizontal slope, since

y'(t) = \frac{bK}{ω_n^2}\left(ω_n e^{−ω_n t} − ω_n e^{−ω_n t} + ω_n^2 t\, e^{−ω_n t}\right) = bK t\, e^{−ω_n t}    (11.60)

and thus y'(0) = 0;

ˆ the response is monotonous, since y'(t) = bK t\, e^{−ω_n t} = 0 admits only one solution, which is t = 0;

ˆ there is an inflection point, which can be found as follows:

y''(t) = bK\left(e^{−ω_n t} − ω_n t\, e^{−ω_n t}\right) = bK(1 − ω_n t) e^{−ω_n t} = 0 ⇔ t = \frac{1}{ω_n}    (11.61)

Figure 11.7: Response of b/(s^2 + 2ξω_n s + ω_n^2), b, ω_n > 0, to a step with amplitude K > 0. Top: ξ = 1, 1.2, 1.4, 1.6, 1.8, 2, 2.2, 2.4, 2.6, 2.8, 3. Bottom: ξ = 0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.

Figure 11.8: Bode diagram of b/(s^2 + 2ξω_n s + ω_n^2), b, ω_n > 0, ξ > 1, i.e. of b/((s+p_1)(s+p_2)), b, p_1, p_2 > 0. Top: ξ ≫ 1. Bottom: ξ > 1.

Figure 11.9: Asymptotes of the Bode diagram of b/(s^2 + 2ξω_n s + ω_n^2), b, ω_n > 0, ξ > 1, i.e. of b/((s+p_1)(s+p_2)), b, p_1, p_2 > 0, in Figure 11.8. Top: ξ ≫ 1. Bottom: ξ > 1. Notice how in the latter case the approximation is poorer.

Figure 11.10: Bode diagram of b/(s^2 + 2ξω_n s + ω_n^2), b, ω_n > 0, for ξ = 0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.

Once more, (11.59) shows that y(t), given in Figure 11.7, depends on t only through the product ω_n t.
The frequency response in this case can be found thanks to (11.55)–(11.56) and is shown in Figure 11.10. Again, it is usual to plot instead the corresponding asymptotes, shown in Figure 11.11. Notice that:

ˆ For low and high frequencies, the frequency response is the same as in the overdamped case.

ˆ For ω = ω_n, the gain and phase are

G(jω_n) = \frac{b}{(jω_n + ω_n)^2} = \frac{b}{ω_n^2 (j+1)^2} = \frac{b}{2jω_n^2}    (11.62)
20 \log_{10} |G(jω_n)| = 20 \log_{10} \frac{b}{ω_n^2} − 20 \log_{10} 2 ≈ 20 \log_{10} \frac{b}{ω_n^2} − 6 \; dB    (11.63)
∠G(jω_n) = −90°    (11.64)

Step response of the underdamped system b/(s² + 2ξωn s + ωn²): If the poles of (11.41) are complex conjugate, the system's response to a step of amplitude K is, as can be seen in Table 2.1,

y(t) = (bK/ωn²) [1 − (1/√(1 − ξ²)) e^(−ξωn t) sin(ωn√(1 − ξ²) t + arctan(√(1 − ξ²)/ξ))]   (11.65)

See Figure 11.7. As already mentioned, should the product −ξωn be positive (i.e. if either ξ or ωn is negative), (11.65) would diverge to infinity. Assuming that the system is stable, i.e. that 0 < ξ < 1, then:

ˆ the steady-state response is again constant and given by yss = bK/ωn²;

ˆ the response has oscillations caused by the sinusoid, and so is not monotonous;

ˆ the frequency of the oscillations, ωn√(1 − ξ²), is called the damped frequency (as opposed to the natural frequency ωn);

ˆ the oscillations have decreasing amplitudes thanks to the exponential;


Figure 11.11: Asymptotes of the Bode diagram of b/(s² + 2ξωn s + ωn²), b, ωn > 0, for ξ = 0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, and √2/2 ≤ ξ ≤ 1.

ˆ indeed, since −1 ≤ sin x ≤ 1, ∀x, the response y(t) is limited by two exponential curves ȳ(t) and y̲(t) that converge to yss (see Figure 11.12):

y̲(t) = (bK/ωn²)(1 − e^(−ξωn t)/√(1 − ξ²)) ≤ y(t) ≤ (bK/ωn²)(1 + e^(−ξωn t)/√(1 − ξ²)) = ȳ(t)   (11.66)

ˆ the oscillations take place around the steady-state, which means that the response will exceed that value at some instants: this is called overshoot (see Figure 11.12);

ˆ the response begins at 0 with a horizontal slope, since

y′(t) = −(bK/(ωn²√(1 − ξ²))) [−ξωn e^(−ξωn t) sin(ωn√(1 − ξ²) t + arctan(√(1 − ξ²)/ξ)) + ωn√(1 − ξ²) e^(−ξωn t) cos(ωn√(1 − ξ²) t + arctan(√(1 − ξ²)/ξ))]
     = (bK e^(−ξωn t)/(ωn√(1 − ξ²))) [ξ sin(ωn√(1 − ξ²) t + arctan(√(1 − ξ²)/ξ)) − √(1 − ξ²) cos(ωn√(1 − ξ²) t + arctan(√(1 − ξ²)/ξ))]   (11.67)

and thus, since sin(arctan(√(1 − ξ²)/ξ)) = √(1 − ξ²) and cos(arctan(√(1 − ξ²)/ξ)) = ξ,

y′(0) = (bK/(ωn√(1 − ξ²))) (ξ√(1 − ξ²) − √(1 − ξ²) ξ) = 0   (11.68)

ˆ that the response is not monotonous can also be seen equalling (11.67) to zero, so as to find the time instants at which (11.65) changes direction:

y′(t) = 0
⇔ ξ sin(ωn√(1 − ξ²) t + arctan(√(1 − ξ²)/ξ)) − √(1 − ξ²) cos(ωn√(1 − ξ²) t + arctan(√(1 − ξ²)/ξ)) = 0
⇔ tan(ωn√(1 − ξ²) t + arctan(√(1 − ξ²)/ξ)) = √(1 − ξ²)/ξ
⇒ ωn√(1 − ξ²) t + arctan(√(1 − ξ²)/ξ) = arctan(√(1 − ξ²)/ξ) + kπ
⇔ t = kπ/(ωn√(1 − ξ²)), k ∈ Z   (11.69)

ˆ because the upper exponential curve in (11.66) limiting the response decreases with time, the first overshoot is the largest, and its maximum value is thus the maximum overshoot;
ˆ the value Mp of this maximum overshoot is usually given as a percentage of the steady-state value:

Mp = (max y(t) − yss)/yss ⇔ max y(t) = yss (1 + Mp)   (11.70)

ˆ the time instant at which the maximum overshoot takes place is the peak time tp, which is consequently given by (11.69) for k = 1:

tp = π/(ωn√(1 − ξ²))   (11.71)

ˆ Mp can be found replacing (11.71) in (11.65) and then using (11.70):

y(tp) = (bK/ωn²) [1 − (1/√(1 − ξ²)) e^(−ξωn π/(ωn√(1 − ξ²))) sin(ωn√(1 − ξ²) π/(ωn√(1 − ξ²)) + arctan(√(1 − ξ²)/ξ))]
      = (bK/ωn²) [1 − (1/√(1 − ξ²)) e^(−ξπ/√(1 − ξ²)) sin(π + arctan(√(1 − ξ²)/ξ))]
      = (bK/ωn²) (1 + e^(−ξπ/√(1 − ξ²)))

since sin(π + arctan(√(1 − ξ²)/ξ)) = −sin(arctan(√(1 − ξ²)/ξ)) = −√(1 − ξ²); hence

Mp = [(bK/ωn²)(1 + e^(−ξπ/√(1 − ξ²))) − bK/ωn²] / (bK/ωn²) = e^(−ξπ/√(1 − ξ²))   (11.72)

ˆ (11.65) could be used to find settling times for different percentages of the steady-state value, but the sinusoid makes calculations difficult (the response is not monotonous) and numerical results have abrupt variations with ξ; it is easier to obtain approximate settling times (by excess) from the (monotonous) limiting exponential curves in (11.66), e.g. ȳ(t):

ȳ(ts,x%) = yss (1 + x/100)
⇔ (bK/ωn²)(1 + e^(−ξωn t)/√(1 − ξ²)) = (1 + x/100) bK/ωn²
⇔ e^(−ξωn t) = (x/100)√(1 − ξ²)
⇒ t = −log((x/100)√(1 − ξ²))/(ξωn) ≈ −log(x/100)/(ξωn)   (11.73)

(the last approximation neglects the term in log√(1 − ξ²), which is small unless ξ is close to 1)
Figure 11.12: Overshoots, maximum overshoot Mp, peak time tp, 5% settling time ts,5% approximated from the envelope exponentials.

and thus, replacing x with different values, we find that (compare these values with those of a first order system without zeros):

– the 10% settling time is ts,10% = 2.3/(ξωn);
– the 5% settling time is ts,5% = 3/(ξωn);
– the 2% settling time is ts,2% = 4/(ξωn);
– the 1% settling time is ts,1% = 4.6/(ξωn).
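These formulas are easy to check against a simulated step response. The Matlab sketch below (arbitrary illustrative values of b, ωn, ξ and K chosen here; Control System Toolbox assumed) compares tp, Mp and the approximate 5% settling time with the values measured by stepinfo:

% Peak time, maximum overshoot and 5% settling time: formulas vs. simulation.
b = 4; wn = 2; xi = 0.3; K = 1;             % illustrative values
G  = tf(b, [1 2*xi*wn wn^2]);
tp = pi/(wn*sqrt(1-xi^2));                  % peak time, (11.71)
Mp = exp(-xi*pi/sqrt(1-xi^2));              % maximum overshoot, (11.72)
ts = 3/(xi*wn);                             % approximate 5% settling time
S  = stepinfo(K*G, 'SettlingTimeThreshold', 0.05);
[tp  Mp*100; S.PeakTime  S.Overshoot]       % formulas (top row) vs. simulation
[ts  S.SettlingTime]                        % envelope-based value vs. simulation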

Frequency response of the underdamped system b/(s² + 2ξωn s + ωn²): The frequency response in this case cannot be found from that of first order transfer functions using Theorem 11.1. We can write

G(jω) = b/(−ω² + 2ξωn jω + ωn²)   (11.74)
|G(jω)| = b/√((ωn² − ω²)² + 4ξ²ωn²ω²)   (11.75)
20 log10 |G(jω)| = 20 log10 b − 20 log10 √((ωn² − ω²)² + 4ξ²ωn²ω²)   (11.76)
∠G(jω) = −arctan(2ξωn ω/(ωn² − ω²))   (11.77)

and see that for high and low frequencies the frequency response is the same as in the overdamped and critically damped cases. But this time the gain (11.75) may not be monotonous. To check this, because the numerator b is constant and the square root is monotonous, we only need to calculate the derivative

d/dω [(ωn² − ω²)² + 4ξ²ωn²ω²] = 2(ωn² − ω²)(−2ω) + 8ξ²ωn²ω   (11.78)


Equalling it to zero we find that the gain has a maximum at a frequency called the resonance frequency ωr, given by

ωr² − ωn² + 2ξ²ωn² = 0 ⇔ ωr² = ωn² − 2ξ²ωn² ⇒ ωr = ωn√(1 − 2ξ²)   (11.79)
Figure 11.13: Pole location in the complex plane, unit step responses and Bode diagrams of three transfer functions given by ωn²/(s² + 2ξωn s + ωn²), when ωn is constant.

as long as the square root is real, i.e. 1 − 2ξ² > 0 ⇔ ξ < 1/√2 ≈ 0.707. In this case, the corresponding maximum value of the gain is found from (11.76):

20 log10 |G(jωr)| = 20 log10 b − 20 log10 √((ωn² − ωn²(1 − 2ξ²))² + 4ξ²ωn⁴(1 − 2ξ²))
                 = 20 log10 b − 20 log10 √(ωn⁴(1 − 1 + 2ξ²)² + ωn⁴(4ξ² − 8ξ⁴))
                 = 20 log10 b − 20 log10 ωn²√(4ξ⁴ + 4ξ² − 8ξ⁴)
                 = 20 log10 b − 20 log10 ωn² − 20 log10 2ξ√(1 − ξ²)   (11.80)

where 20 log10 b − 20 log10 ωn² is the gain at low frequencies, and −20 log10 2ξ√(1 − ξ²) > 0 because 2ξ√(1 − ξ²) ∈ [0, 1]. (Notice that since 0 < ξ < √2/2 we have 2ξ√(1 − ξ²) < 2(√2/2)√(1 − 1/2) = 1.) This frequency behaviour is shown in Figure 11.10, and the corresponding asymptotes in Figure 11.11, where the resonant peak of −20 log10 2ξ√(1 − ξ²) dB is marked.
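The resonance frequency and the resonant peak can likewise be checked numerically; in the Matlab sketch below (illustrative values of b, ωn and ξ chosen here; Control System Toolbox assumed) the maximum of the gain is located on a fine frequency grid and compared with (11.79)–(11.80):

% Resonance frequency and resonant peak: formulas vs. computed gain curve.
b = 1; wn = 10; xi = 0.2;                           % illustrative values
G  = tf(b, [1 2*xi*wn wn^2]);
wr = wn*sqrt(1-2*xi^2);                             % resonance frequency, (11.79)
peak_dB = -20*log10(2*xi*sqrt(1-xi^2));             % peak above the low-frequency gain, (11.80)
w  = logspace(log10(wn)-2, log10(wn)+2, 10000);
mag_dB = 20*log10(squeeze(abs(freqresp(G, w))));
[m, i] = max(mag_dB);
[wr  w(i)]                                          % analytical vs. numerical resonance frequency
[peak_dB  m-20*log10(b/wn^2)]                       % analytical vs. numerical peak height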

Step response of the undamped system b/(s² + ωn²): For ξ = 0, system (11.41) is marginally stable, and step response (11.65) simplifies to

y(t) = (bK/ωn²) (1 − sin(ωn t + π/2))   (11.81)

and thus the steady-state response consists in oscillations with frequency ωn that are not damped; that is why ωn is called the natural frequency: it is the frequency of the system's step response (or impulse response; check Table 2.1 again) oscillations when the damping coefficient is 0. The reason why ξ is the damping coefficient, by the way, is that it is proportional to the coefficient of the damper in (4.9); more generically, it is related to the coefficient of the energy dissipator, as can be seen e.g. in (5.24).
Frequency response of the undamped system b/(s² + ωn²): The corresponding frequency response is given by (11.74) with ξ = 0:

G(jω) = b/(ωn² − ω²)   (11.82)

This is always a real number, and so the phase jumps from 0◦ (when ω < ωn) to −180◦ (when ω > ωn), as seen in Figure 11.10. At the resonance frequency ωr = ωn√(1 − 2ξ²) = ωn, the peak is

lim_{ω→ωr} |G(jω)| = lim_{ω→ωn} b/√((ωn² − ω²)²) = +∞   (11.83)
Figure 11.14: Pole location in the complex plane, unit step responses and Bode diagrams of three transfer functions given by ωn²/(s² + 2ξωn s + ωn²), when ξ is constant.

Of course, in practice systems have some residual damping, and in any case sinusoidal outputs never have infinite amplitudes (reread Example 4.1 and Remark 10.7 if you need).
We can now sum up what happens for the four different cases of (11.41) we have studied (check Figure 11.6 again):

ˆ step responses begin at 0 with a horizontal slope;

ˆ step responses have a bK/ωn² steady-state (where K is the amplitude of the step);

ˆ step responses have no overshoot if ξ ≥ 1, and have a Mp = e^(−ξπ/√(1 − ξ²)) overshoot if 0 ≤ ξ < 1;

ˆ frequency responses have a phase that goes from 0◦ (at low frequencies) to −180◦ (at high frequencies) as the frequency increases;

ˆ frequency responses have a gain that goes from b/ωn² (i.e. 20 log10 b/ωn² dB, at low frequencies) to 0 (i.e. −∞ dB, at high frequencies) as the frequency increases;

ˆ the slope of the gain is −40 dB per decade at high frequencies;

ˆ the gain has no resonant peak if ξ ≥ √2/2, and has a −20 log10 2ξ√(1 − ξ²) > 0 dB resonant peak if 0 ≤ ξ < √2/2 (so in this case, for a sinusoidal input with a frequency equal to or around ωr, the system's sinusoidal steady-state output will have an amplitude larger than the input's).

Figures 11.13–11.16 illustrate how step and frequency responses of (11.42) with complex conjugate poles change according to their position on the complex plane.
Step response of b/(s(s + a)): To conclude this section, the fifth and final case of a second order transfer function to be considered is that in which one of the poles is 0: (11.49) cannot be applied; (11.41) will integrate the output of the other pole.
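A quick Matlab sketch of this last case (arbitrary values of b and a chosen here; Control System Toolbox assumed) shows the output growing without bound, since the pole at the origin integrates the step response of the remaining first-order factor:

% Step response of b/(s(s+a)): ramp-like growth with slope b/a after the transient.
b = 2; a = 5;                 % illustrative values
G = tf(b, [1 a 0]);           % b/(s(s+a))
step(G, 10), grid on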
Figure 11.15: Pole location in the complex plane, unit step responses and Bode diagrams of three transfer functions given by p1 p2/((s + p1)(s + p2)), when the real part of the poles is constant.

Figure 11.16: Pole location in the complex plane, unit step responses and Bode diagrams of three transfer functions given by p1 p2/((s + p1)(s + p2)), when the imaginary part of the poles is constant.

11.4 Systems with more zeros and poles: frequency responses
The frequency response of systems with more than two poles, or with zeros, or
with a negative gain, can be found from the frequency response of

ˆ each real pole and zero,

ˆ each pair of complex conjugate poles and zeros, and

ˆ its gain

found separately. To nd the frequency response of any transfer function, write
it as a product of smaller transfer functions found therein, and sum the corre-
sponding gains and phases, according to Theorem 11.1.

Remark 11.3. It is often easier to write those smaller transfer functions so


that they have a 0 dB gain at low frequencies (excepting poles or zeros at the
origin, of course, which have no constant gain at low frequencies), and then add
the eect of a gain as needed. This is how we will proceed in what follows.
But it is possible to have dierent low-frequency gains for the dierent smaller
transfer functions.

We already know the following:

ˆ the frequency response of a pole at the origin 1/s was studied in section 11.1:

G(jω) = 1/(jω)   (11.84)
20 log10 |G(jω)| = −20 log10 ω dB   (11.85)
∠G(jω) = −90◦   (11.86)

ˆ the frequency response of a real pole in the left complex half-plane a/(s + a), a > 0, was studied in section 11.2:

G(jω) = a/(jω + a)   (11.87)
20 log10 |G(jω)| ≈ 0 dB, ω ≪ a   (11.88)
∠G(jω) ≈ 0◦, ω ≪ a   (11.89)
20 log10 |G(ja)| = 20 log10 a/√(a² + a²) = 20 log10 1/√2 = −3 dB   (11.90)
∠G(ja) = 0 − arctan a/a = −45◦   (11.91)
20 log10 |G(jω)| ≈ 20 log10 a − 20 log10 ω dB, ω ≫ a   (11.92)
∠G(jω) ≈ −90◦, ω ≫ a   (11.93)

ˆ the frequency response of a real pole in the right complex half-plane a/(s − a), a > 0, was also studied in section 11.2:

G(jω) = a/(jω − a)   (11.94)
20 log10 |G(jω)| ≈ 0 dB, ω ≪ a   (11.95)
∠G(jω) ≈ −180◦, ω ≪ a   (11.96)
20 log10 |G(ja)| = 20 log10 a/√(a² + a²) = 20 log10 1/√2 = −3 dB   (11.97)
∠G(ja) = 0 − arctan a/(−a) = −135◦   (11.98)
20 log10 |G(jω)| ≈ 20 log10 a − 20 log10 ω dB, ω ≫ a   (11.99)
∠G(jω) ≈ −90◦, ω ≫ a   (11.100)

ˆ the frequency response of a pair of complex conjugate poles in the left complex half-plane ωn²/(s² + 2ξωn s + ωn²), ωn > 0, 0 ≤ ξ < 1, was studied in section 11.3:

G(jω) = ωn²/(ωn² − ω² + j2ξωn ω)   (11.101)
20 log10 |G(jω)| ≈ 0 dB, ω ≪ ωn   (11.102)
∠G(jω) ≈ 0◦, ω ≪ ωn   (11.103)
20 log10 |G(jωn√(1 − 2ξ²))| = −20 log10 2ξ√(1 − ξ²) dB > 0 dB   (11.104)
20 log10 |G(jωn)| = −20 log10 2ξ dB   (11.105)
∠G(jωn) = ∠ 1/(j2ξ) = −90◦   (11.106)
20 log10 |G(jω)| ≈ 20 log10 ωn² − 40 log10 ω dB, ω ≫ ωn   (11.107)
∠G(jω) ≈ −180◦, ω ≫ ωn   (11.108)

We will now find the following frequency responses:

ˆ A positive gain k > 0:

G(jω) = k   (11.109)
20 log10 |G(jω)| = 20 log10 k dB   (11.110)
∠G(jω) = 0◦   (11.111)

ˆ A negative gain −k, k > 0:

G(jω) = −k   (11.112)
20 log10 |G(jω)| = 20 log10 |k| dB   (11.113)
∠G(jω) = −180◦   (11.114)

ˆ A zero at the origin s:

G(jω) = jω   (11.115)
20 log10 |G(jω)| = 20 log10 ω dB   (11.116)
∠G(jω) = 90◦   (11.117)

ˆ A zero in the left complex half-plane (s + b)/b, b > 0:

G(jω) = (jω + b)/b   (11.118)
20 log10 |G(jω)| ≈ 0 dB, ω ≪ b   (11.119)
∠G(jω) ≈ 0◦, ω ≪ b   (11.120)
20 log10 |G(jb)| = 20 log10 √(b² + b²)/b = 20 log10 √2 = 3 dB   (11.121)
∠G(jb) = arctan b/b = 45◦   (11.122)
20 log10 |G(jω)| ≈ 20 log10 ω − 20 log10 b dB, ω ≫ b   (11.123)
∠G(jω) ≈ 90◦, ω ≫ b   (11.124)

ˆ A zero in the right complex half-plane (s − b)/b, b > 0:

G(jω) = (jω − b)/b   (11.125)
20 log10 |G(jω)| ≈ 0 dB, ω ≪ b   (11.126)
∠G(jω) ≈ 180◦, ω ≪ b   (11.127)
20 log10 |G(jb)| = 20 log10 √(b² + b²)/b = 20 log10 √2 = 3 dB   (11.128)
∠G(jb) = arctan b/(−b) = 135◦   (11.129)
20 log10 |G(jω)| ≈ 20 log10 ω − 20 log10 b dB, ω ≫ b   (11.130)
∠G(jω) ≈ 90◦, ω ≫ b   (11.131)

ˆ A pair of complex conjugate poles in the right complex half-plane ωn²/(s² − 2ξωn s + ωn²), ωn > 0, 0 ≤ ξ < 1:

G(jω) = ωn²/(ωn² − ω² − j2ξωn ω)   (11.132)
20 log10 |G(jω)| ≈ 0 dB, ω ≪ ωn   (11.133)
∠G(jω) ≈ 0◦ ≡ −360◦, ω ≪ ωn   (11.134)
20 log10 |G(jωn√(1 − 2ξ²))| = −20 log10 2ξ√(1 − ξ²) dB > 0 dB   (11.135)
20 log10 |G(jωn)| = −20 log10 2ξ dB   (11.136)
∠G(jωn) = ∠ 1/(−j2ξ) = −270◦   (11.137)
20 log10 |G(jω)| ≈ 20 log10 ωn² − 40 log10 ω dB, ω ≫ ωn   (11.138)
∠G(jω) ≈ −180◦ ≡ 180◦, ω ≫ ωn   (11.139)

(Notice that the phase goes up from −360◦ to −180◦, or from 0◦ to 180◦.)

ˆ A pair of complex conjugate zeros in the left complex half-plane (s² + 2ξωn s + ωn²)/ωn², ωn > 0, 0 ≤ ξ < 1:

G(jω) = (ωn² − ω² + j2ξωn ω)/ωn²   (11.140)
20 log10 |G(jω)| ≈ 0 dB, ω ≪ ωn   (11.141)
∠G(jω) ≈ 0◦, ω ≪ ωn   (11.142)
20 log10 |G(jωn√(1 − 2ξ²))| = 20 log10 2ξ√(1 − ξ²) dB < 0 dB   (11.143)
20 log10 |G(jωn)| = 20 log10 2ξ dB   (11.144)
∠G(jωn) = ∠ j2ξ = 90◦   (11.145)
20 log10 |G(jω)| ≈ 40 log10 ω − 20 log10 ωn² dB, ω ≫ ωn   (11.146)
∠G(jω) ≈ 180◦, ω ≫ ωn   (11.147)

ˆ A pair of complex conjugate zeros in the right complex half-plane (s² − 2ξωn s + ωn²)/ωn², ωn > 0, ξ ≥ 0:

G(jω) = (ωn² − ω² − j2ξωn ω)/ωn²   (11.148)
20 log10 |G(jω)| ≈ 0 dB, ω ≪ ωn   (11.149)
∠G(jω) ≈ 0◦, ω ≪ ωn   (11.150)
20 log10 |G(jωn√(1 − 2ξ²))| = 20 log10 2ξ√(1 − ξ²) dB < 0 dB   (11.151)
20 log10 |G(jωn)| = 20 log10 2ξ dB   (11.152)
∠G(jωn) = ∠(−j2ξ) = −90◦   (11.153)
20 log10 |G(jω)| ≈ 40 log10 ω − 20 log10 ωn² dB, ω ≫ ωn   (11.154)
∠G(jω) ≈ −180◦, ω ≫ ωn   (11.155)

All the corresponding frequency responses are summed up in Figures 11.17–11.19.
Example 11.1. Find the Bode diagram of

G(s) = 100 (s + 100)/(s(s² + 0.5s + 1)) = 10⁴ × (1/s) × ((s + 100)/100) × (1/(s² + 0.5s + 1))   (11.156)

where G1(s) = 10⁴, G2(s) = 1/s, G3(s) = (s + 100)/100 and G4(s) = 1/(s² + 0.5s + 1) (the constant 10⁴ = 100 × 100 is what makes the two sides of (11.156) equal, since G3(s) and G4(s) are normalised to unit gain at low frequencies).
First plot the asymptotes of the Bode diagrams of the four transfer functions G1(s) to G4(s), in Figure 11.20. Add them to obtain the asymptotes of the Bode diagram of G(s) in the same Figure, which also shows the actual Bode diagram. The asymptotes are seen to be a rather fair approximation, especially when the resonance peak given by −20 log10(0.5√(1 − 0.25²)) = 6.3 dB is added.
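This construction can be cross-checked in Matlab (Control System Toolbox assumed) by plotting G(s) together with the four factors of the decomposition above; their gains and phases add up to those of G(s):

% Bode diagram of (11.156) and of its four factors.
G  = tf(100*[1 100], [1 0.5 1 0]);      % 100(s+100)/(s(s^2+0.5s+1))
G1 = tf(1e4, 1);          G2 = tf(1, [1 0]);
G3 = tf([1 100]/100, 1);  G4 = tf(1, [1 0.5 1]);
bode(G, G1, G2, G3, G4), grid on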
Figure 11.17: Bode diagrams of k, −k, 1/s and s, k > 0.

Remark 11.4. Since the phase is determined up to shifts of 360◦ , there are
dierent ways of choosing which particular values are used. In all of them the
phase is, of course, continuous with frequency ω whenever possible.
ˆ Choose the low frequency phase value to be closest to zero. In this way
a low frequency value of, say, −90◦ is preferred to 270◦ , no matter what.
This criterion presents no reason to choose among 180◦ and −180◦ , when
input and output are in phase opposition at low frequencies.
ˆ Choose the low frequency phase value that better shows the number of zeros and poles at the origin. In this way, transfer function s³N(s)/D(s), where polynomials N(s) and D(s) have no roots at the origin, will have a low frequency phase beginning at 270◦ and not at −90◦, since a zero pushes the phase up and so this phase shows that the three zeros are responsible for the low frequency behaviour. The phase of transfer function N(s)/(sD(s)) would begin at −90◦ for a similar reason. This criterion may or may not give the same result as the one above. It allows choosing among low frequency phases of 180◦ and −180◦ if there are two poles or two zeros at the origin, but not if there is a negative gain.
ˆ Choose the closest values to zero in the entire frequency range of concern,
as long as the phase is continuous with ω .
Example 11.2. Find the Bode diagram of

G(s) = (−s + 5)/(s²(s + 10)) = (−1/2) × (1/s²) × (10/(s + 10)) × ((s − 5)/5)   (11.157)

where G1(s) = −1/2, G2(s) = 1/s², G3(s) = 10/(s + 10) and G4(s) = (s − 5)/5.
First plot the asymptotes of the Bode diagrams of the four transfer functions G1(s) to G4(s), in Figure 11.21. Add them to obtain the asymptotes of the Bode diagram of G(s) in the same Figure, which also shows the actual Bode diagram.

Remark 11.5. Given the way Bode diagrams can be built from smaller transfer
functions, this can be used to identify a model for a plant from its frequency
response. The Bode diagram is split into the sum of several Bode diagrams
corresponding to frequency responses in Figures 11.1711.19; the model will be
the product of the respective transfer functions.
Figure 11.18: Bode diagrams (left) and corresponding asymptotes (right) of a/(s + a), a/(s − a), (s + a)/a and (s − a)/a, a > 0.
Figure 11.19: Bode diagrams (left) and corresponding asymptotes (right) of ωn²/(s² + 2ξωn s + ωn²), ωn²/(s² − 2ξωn s + ωn²), (s² + 2ξωn s + ωn²)/ωn² and (s² − 2ξωn s + ωn²)/ωn², ωn > 0, ξ ≥ 0.
Figure 11.20: Building the Bode diagram of (11.156) from Example 11.1.

Figure 11.21: Building the Bode diagram of (11.157) from Example 11.2.

Example 11.3. Given either Bode diagram of Examples 11.1 or 11.2, rst
draw its asymptotes, as shown in Figures 11.20 or 11.21; then divide them into
the separate asymptotes, also shown in the Figures, corresponding to transfer
functions from Figures 11.1711.19. The product of these transfer functions is
the desired transfer function, either (11.156) or (11.157).

Definition 11.2. A zero on the left complex half-plane (i.e. a zero with a negative real part) is a minimum phase zero. A zero on the right complex half-plane (i.e. a zero with a positive real part) is a non-minimum phase zero.
Remark 11.6. The reason for these names can be seen in Figure 11.18: the phase of (s + a)/a, a > 0, is between 0◦ and 90◦, while the phase of (s − a)/a, a > 0, is between 180◦ and 90◦. The latter is thus, for low frequencies and on average, larger than the former.

A system's frequency response at high frequencies depends on the difference between the number of poles n and the number of zeros m. More precisely, it is clear from Figures 11.17–11.19 that, at high frequencies, each pole will contribute to the gain with a slope of −20 dB/decade, and each zero will contribute to the gain with a slope of 20 dB/decade. Consequently, the slope of the gain in a Bode diagram will be 20(m − n) dB per decade. So, as the frequency increases:

ˆ the gain of strictly proper transfer functions goes to −∞ dB, i.e. to 0 in


absolute value;

ˆ the gain of transfer functions with the same number of poles and zeros
goes to a constant value (in both dB and absolute value);

ˆ the gain of transfer functions which are not proper goes to +∞ dB, i.e. to
+∞ in absolute value.

In the latter case, if the transfer function is stable, this means that inputs with oscillations of smaller and smaller periods correspond to steady-state output oscillations with larger and larger amplitudes (and with the same small period of the input). The velocity with which the output would be changing would become infinite. This is clearly impossible in practice, as it would require an arbitrarily large energy. (For instance, in the case of an output which is a position, there would be a frequency above which this position would be changing faster than light speed.) The same happens even in the case of a constant gain for high frequencies, since the amplitude may not be increasing, but the period decreases. The only models which are physically possible are those with more poles than zeros, i.e. strictly proper models. That is why most models are strictly proper. Of course, all models are only valid for a range of parameters. So we can use models with as many zeros as poles, or more zeros than poles, being aware that for frequencies high enough they cannot be valid: there have to be unmodelled poles changing the behaviour of the plant so that it does not require an impossibly infinite energy.
On the other hand, the slope of the gain at (very) low frequencies depends on the number of zeros or poles at the origin: it is clear from Figures 11.17–11.19 that only they cause a slope at such frequencies:

ˆ n poles at the origin cause a low frequency gain slope of −20n dB/decade;

ˆ m zeros at the origin cause a low frequency gain slope of 20m dB/decade.

Example 11.4. Consider the Bode diagram of Example 11.1 in Figure 11.20.
At low frequencies, the gain has a −20 dB/decade slope. Consequently, there
has to be a pole at the origin. At high frequencies, the gain has a −40 dB/decade
slope. Consequently, the number of poles exceeds the number of zeros by 2 (we
could have 0 zeros and 2 poles, or 1 zero and 3 poles, or 2 zeros and 4 poles,
etc.).

Example 11.5. Consider the Bode diagram of Example 11.2 in Figure 11.21.
At low frequencies, the gain has a −40 dB/decade slope. Consequently, there
have to be 2 poles at the origin. At high frequencies, the gain has a −40 dB/decade
slope. Consequently, the number of poles exceeds the number of zeros by 2.
Remark 11.7. Notice that the gain of a stable system at low frequencies is constant and equal to its static gain:

lim_{ω→0} G(jω) = lim_{ω→0} (b0 + b1 s + b2 s² + b3 s³ + . . .)/(a0 + a1 s + a2 s² + a3 s³ + . . .)|_{s=jω} = b0/a0   (11.158)

Definition 11.3. The type of a transfer function is its number of poles at the origin.
Example 11.6. Here are transfer functions of type 0, type 1, type 2 and type 3:

G0(s) = 1/(s + 1)   (11.159)
G1(s) = 1/(s(s + 1))   (11.160)
G2(s) = 1/(s²(s + 1))   (11.161)
G3(s) = 1/(s³(s + 1))   (11.162)
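The type shows up in the low-frequency behaviour: only G0(s) has a finite static gain, and each pole at the origin adds −20 dB/decade to the low-frequency slope. A quick Matlab check (Control System Toolbox assumed):

% Static gains and low-frequency slopes of the transfer functions of Example 11.6.
s  = tf('s');
G0 = 1/(s+1);  G1 = 1/(s*(s+1));  G2 = 1/(s^2*(s+1));  G3 = 1/(s^3*(s+1));
[dcgain(G0) dcgain(G1) dcgain(G2) dcgain(G3)]   % 1, Inf, Inf, Inf
bode(G0, G1, G2, G3), grid on                   % low-frequency slopes: 0, -20, -40, -60 dB/decade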

11.5 Systems with more zeros and poles: stability
We know that a stable plant's poles have negative real parts. If there are only one or two poles, this is not difficult to verify. Things change if there are three or more poles, since finding the roots of polynomials of such orders is not trivial. There are of course numerical algorithms to do so efficiently. But there is a way of knowing if all the roots of a polynomial are on the left complex half-plane without having to compute them: the Routh-Hurwitz criterion. It even lets us know how many poles there are on the right complex half-plane. Demonstrating the validity of this criterion is not trivial and we will not do that. We will only present it together with examples of application.
Consider a polynomial in s given by

p(s) = Σ_{k=0..n} ak s^k = an s^n + an−1 s^(n−1) + . . . + a1 s + a0   (11.163)

For each of its n roots, we want to know if it has a positive real part, a negative real part, or lies on the imaginary axis.
ˆ If a0 = 0, there is a root at the origin. Divide the polynomial by s and
start again.
ˆ If not all the ak have the same sign, there is at least one root with positive
real part. If all the ak are negative, we can multiply p(s) by −1, which
does not change its roots, and all coecients become positive; so we can
say instead that all the ak must be positive if all the roots are to have
negative real parts. This is a necessary condition, but it is not sucient (it
may happen that all the ak are positive and still some roots have positive
real parts).
ˆ The number of roots with positive real parts is equal to the number of sign changes of the first column of the Routh-Hurwitz table, which has n + 1 lines (one for each power of s, from s^n down to s^0) and about n/2 columns, and is built as follows:

s^n       an     an−2   an−4   ...   0
s^(n−1)   an−1   an−3   an−5   ...   0
s^(n−2)   b1     b2     ...
s^(n−3)   c1     c2     ...
...                                         (11.164)
s^2       d1     d2
s^1       e1
s^0       f1

Here,

b1 = (an−1 an−2 − an−3 an)/an−1 = an−2 − (an−3 an)/an−1   (11.165)
b2 = (an−1 an−4 − an−5 an)/an−1 = an−4 − (an−5 an)/an−1   (11.166)
...   (11.167)
c1 = (b1 an−3 − b2 an−1)/b1 = an−3 − (b2 an−1)/b1   (11.168)
c2 = (b1 an−5 − b3 an−1)/b1 = an−5 − (b3 an−1)/b1   (11.169)
...   (11.170)

This pattern goes on in each line until all further elements are necessarily zero, and goes down until all lines are filled in.

Example 11.7. Consider a transfer function given by N(s)/(s³ + 6s² + 11s + 6). We build the Routh-Hurwitz table

s³   1   11
s²   6   6
s    (6×11 − 6×1)/6 = 10
1    (10×6 − 0×6)/10 = 6                   (11.171)

and verify that all elements in the first column are positive. So the transfer function is stable (whatever the numerator may be, since, remember, zeros have nothing to do with stability).
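The construction of the table is easy to automate, which is convenient for checking tables built by hand. The following Matlab sketch (the function name routh_table is chosen here for illustration and is not part of these notes; save it as routh_table.m) builds the table without handling the special cases discussed next, and roots can be used as a cross-check:

% Minimal Routh-Hurwitz table builder; p = [an ... a1 a0], Matlab's convention.
% The special cases (a zero, or a whole line of zeros, in the first column) are NOT handled.
function R = routh_table(p)
    n = numel(p) - 1;                   % degree of the polynomial
    cols = ceil((n+1)/2);
    R = zeros(n+1, cols);
    R(1, :) = p(1:2:end);               % line of s^n
    r2 = p(2:2:end);
    R(2, 1:numel(r2)) = r2;             % line of s^(n-1)
    for i = 3:n+1                       % remaining lines, cf. (11.165)-(11.169)
        for j = 1:cols-1
            R(i,j) = (R(i-1,1)*R(i-2,j+1) - R(i-2,1)*R(i-1,j+1)) / R(i-1,1);
        end
    end
end
% Example 11.7: routh_table([1 6 11 6]) gives first column [1; 6; 10; 6] (no sign
% changes), and roots([1 6 11 6]) confirms the stable poles -1, -2, -3.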

If there is a zero in the left column, replace it with a vanishing ε. If when ε → 0⁺ all the coefficients below are positive, there is a pair of complex conjugate pure imaginary poles (which are marginally stable), and the other poles are stable. If some are negative, the transfer function is unstable.

Example 11.8. Consider a transfer function given by N(s)/(s³ + s² + 4s + 4). We build the Routh-Hurwitz table

s³   1   4
s²   1   4
s    (1×4 − 4×1)/1 = 0 → ε
1    (ε×4 − 0×1)/ε = 4                     (11.172)

and verify that when ε → 0⁺ the first column is entirely positive. So there are no unstable poles. The zero is caused by a pair of complex conjugate poles on the imaginary axis: s³ + s² + 4s + 4 = (s + 1)(s + 2j)(s − 2j).

Example 11.9. Consider a transfer function given by N(s)/(s⁶ + 4s⁵ + 9s⁴ + 13s³ − 17s − 10). Since there are both positive and negative coefficients in the denominator, we can tell right away that the transfer function is not stable. (Remember that if they were all positive the transfer function might still be unstable.) To know how many of the 6 poles are unstable, we build the Routh-Hurwitz table (only the values of the entries are shown)

s⁶   1         9         0     −10
s⁵   4         13        −17
s⁴   23/4      17/4      −10
s³   231/23    −231/23
s²   10        −10
s    0 → ε
1    −10                                   (11.173)

and verify that when ε → 0⁺ there is one sign change in the first column, from line s to line 1. So there are five stable poles, and one unstable pole (in fact s⁶ + 4s⁵ + 9s⁴ + 13s³ − 17s − 10 = (s − 1)(s + 1)²(s + 2)(s² + s + 5)).

Figure 11.22: Control system from Examples 11.12 and 11.13.

Example 11.10. Consider a transfer function given by N(s)/(s³ − 12s + 16). We build the Routh-Hurwitz table

s³   1        −12
s²   0 → ε    16
s    −12 − 16/ε
1    16                                    (11.174)

and verify that when ε → 0⁺ the element below in the first column is negative. So there are two sign changes, and two unstable poles (in fact s³ − 12s + 16 = (s − 2)²(s + 4)).
Remark 11.8. Notice that coecient a0 always turns up in the last line of the
table. If it does not, you got your calculations wrong.
If there is a whole line of zeros, build a polynomial with the coefficients of the line above, differentiate it, and use the coefficients of the derivative to replace the zeros.
Example 11.11. Consider a transfer function given by N(s)/(s⁶ − 4s⁴ + 4s² − 16). We begin the Routh-Hurwitz table

s⁶   1   −4   4   −16
s⁵   0   0    0                            (11.175)

and verify that the second line only has zeros. So we look at the line above, which is line s⁶, and use the coefficients of that line, with powers s⁶, s⁴, s² and s⁰, to obtain d/ds (s⁶ − 4s⁴ + 4s² − 16) = 6s⁵ − 16s³ + 8s.

s⁶   1       −4      4    −16
s⁵   6       −16     8
s⁴   −4/3    8/3     −16
s³   −4      −64
s²   24      −16
s    −200/3
1    −16                                   (11.176)

There are three sign changes, and so three unstable poles (in fact s⁶ − 4s⁴ + 4s² − 16 = (s − 1 + j)(s − 1 − j)(s + 1 + j)(s + 1 − j)(s + 2)(s − 2)).
The Routh-Hurwitz criterion is important not only because it allows nding
out by hand whether or not a system is stable, but above all because it lets us
nd analytical conditions for the stability of a closed-loop that depends on a
parameter.
Example 11.12. Plant

y(s)/u(s) = 1/((s − 1)(s + 2))   (11.177)

is controlled in closed loop with a proportional controller K, as seen in Figure 11.22. We first assume that the sensor is perfect, i.e. H(s) = 1. To know what values of K ensure that the closed loop is stable, we find the closed loop transfer function

y(s)/r(s) = [K/(s² + s − 2)] / [1 + K/(s² + s − 2)] = K/(s² + s + K − 2)   (11.178)

and build the corresponding Routh-Hurwitz table

s²   1   K − 2
s    1
1    K − 2                                 (11.179)

from which we see that all coefficients are positive in the left column if

K − 2 > 0 ⇔ K > 2   (11.180)
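A numerical check of this condition (Control System Toolbox assumed) consists in sweeping K and looking at the closed-loop poles:

% Closed-loop poles of Example 11.12 for several values of K (perfect sensor).
G = tf(1, conv([1 -1], [1 2]));             % 1/((s-1)(s+2))
for K = [1 2 3 10]
    T = feedback(K*G, 1);                   % closed loop with unit feedback
    fprintf('K = %4.1f   max real part of poles = %7.3f\n', K, max(real(pole(T))));
end
% only K > 2 yields poles with negative real parts, as (11.180) predicts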

Example 11.13. If in the last example we make K = 10 and then find out that the sensor has dynamics given by H(s) = a/(s + a), we want to know what values of pole a still let the closed loop be stable. So the closed loop transfer function becomes

y(s)/r(s) = [10/(s² + s − 2)] / [1 + 10a/((s² + s − 2)(s + a))] = 10(s + a)/(s³ + (a + 1)s² + (a − 2)s + 8a)   (11.181)

From the Routh-Hurwitz table

s³   1       a − 2
s²   a + 1   8a
s    ((a + 1)(a − 2) − 8a)/(a + 1) = (a² − 9a − 2)/(a + 1)
1    8a                                    (11.182)

we see that all coefficients are positive in the left column if

a + 1 > 0,   (a² − 9a − 2)/(a + 1) > 0,   8a > 0
⇒ a > −1,   (a + 0.22)(a − 9.22)/(a + 1) > 0,   a > 0   (11.183)

For the second condition, studying the sign of numerator and denominator,

                 a < −1   −1 < a < −0.22   −0.22 < a < 9.22   a > 9.22
a² − 9a − 2        +             +                −               +
a + 1              −             +                +               +
ratio              −             +                −               +       (11.184)

and so we conclude that the system is stable if a > 9.22.


Remark 11.9. In the example above, it would obviously have been wiser to
choose the sensor rst, and only then the proportional controller K . And there
are surely other requirements for the control system other than ensuring stabil-
ity; we will address them in Part IV.
Example 11.14. Plant

G(s) = (1.5s² + 22.5s + 66)/(s² − 8s − 9)   (11.185)

is controlled in closed loop with a proportional controller K and a perfect sensor. To know what values of K ensure that the closed loop is stable, we find the closed loop transfer function

y(s)/r(s) = K G(s)/(1 + K G(s)) = K(1.5s² + 22.5s + 66)/(s² − 8s − 9 + K(1.5s² + 22.5s + 66))
          = (1.5Ks² + 22.5Ks + 66K)/((1.5K + 1)s² + (22.5K − 8)s + (66K − 9))   (11.186)

and build the Routh-Hurwitz table

s²   1.5K + 1     66K − 9
s    22.5K − 8
1    66K − 9                               (11.187)

We are tempted to write

1.5K + 1 > 0 ⇔ K > −2/3 = −0.6667
22.5K − 8 > 0 ⇔ K > 8/22.5 = 0.3556          ⇒ K > 0.3556   (11.188)
66K − 9 > 0 ⇔ K > 9/66 = 0.1364

and conclude that only these values of K make the closed loop stable. But if we make K = −1 in (11.186) we get

y(s)/r(s) = (−1.5s² − 22.5s − 66)/(−0.5s² − 30.5s − 75) = (3s² + 45s + 132)/(s² + 61s + 150)   (11.189)

which has poles in −58.4330 and −2.5670 and is thus stable. Why is this so?
Because the number of unstable poles is the number of sign changes in the first column of the Routh-Hurwitz table; if they are all negative, there are no sign changes, and no unstable poles. Since G(s) is not strictly proper, the entire first column depends on K, while in all previous examples there was always a positive number in that column, meaning that all others had to be positive too for all poles to be stable. In this case, we can let the entire column be negative, i.e.

1.5K + 1 < 0 ⇔ K < −2/3 = −0.6667
22.5K − 8 < 0 ⇔ K < 0.3556                   ⇒ K < −0.6667   (11.190)
66K − 9 < 0 ⇔ K < 0.1364

and thereby conclude that if K ∈ ]−∞, −0.6667[ ∪ ]0.3556, +∞[ the closed loop is stable.
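The perhaps surprising part of this conclusion, stability for K < −0.6667, is easy to confirm numerically (Control System Toolbox assumed):

% Closed-loop poles of Example 11.14 for K = -1: both real and negative.
G = tf([1.5 22.5 66], [1 -8 -9]);
pole(feedback(-1*G, 1))        % about -58.43 and -2.57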

11.6 Systems with more zeros and poles: time responses
We know from the proof of Theorem 10.2, or from

L⁻¹[1/(s + a)] = e^(−at)   (11.191)

together with a reasoning similar to that of Remark 2.6, that each pole of a transfer function will originate an exponential in its time response:

ˆ if the pole is stable, its (negative) real part corresponds to how fast this part of the time response is vanishing;

ˆ if there is an imaginary part, it will correspond (together with that of the pole's complex conjugate) to oscillations in time (which, the pole being stable, are vanishing).

Consequently,

ˆ the more negative the real part of a stable pole is, the faster its effect in the time responses vanishes;

ˆ the closer a stable pole is to the imaginary axis, the slower its effect in the time responses vanishes.
When a transfer function is written as a partial fraction expansion, the residues show the weight that each pole will have in the time responses. Slower poles, even when weighted with a residue which is relatively small when compared with the others, have a lasting effect in time responses just for being slow, i.e. for the slowly vanishing effect of their contribution, and are thus called dominant poles.
When a transfer function has only one dominant pole, or one pair of complex conjugate dominant poles, clearly far from the other ones, its time response will be mostly determined by that pole or poles. This is not so when there are several dominant poles, or, rather, when there are none, since no pole has an effect clearly dominating that of the others.
Figure 11.23: Unit step responses of the transfer functions of Example 11.15.

Figure 11.24: Poles and zeros of the transfer functions of Example 11.15.

Figure 11.25: Bode plots of the transfer functions of Example 11.15.

Figure 11.26: Unit step responses of the transfer functions of Example 11.16.

Example 11.15. Transfer function

G1(s) = (2s + 11)/(s² + 11s + 10) = 1/(s + 1) + 1/(s + 10)   (11.192)

has a dominant pole, −1, since pole −10 is faster. Residues are the same, so we can expect the step response of G1(s) to be rather similar to that of

Ĝ1(s) = (11/10)/(s + 1)   (11.193)

(which has the same static gain), as is indeed the case: see Figure 11.23. But transfer function

G2(s) = (101s + 110)/(s² + 11s + 10) = 1/(s + 1) + 100/(s + 10)   (11.194)

which has the same poles has a much larger residue for the faster pole, and consequently its step response is not really that similar to that of

Ĝ2(s) = 11/(s + 1)   (11.195)

(with the same static gain), as seen in Figure 11.23. This can also be seen from the position of the zero in both cases, given in Figure 11.24. G1(s) has the zero far from the dominant pole, in a location where its effect is faster, and so it does not interfere significantly with its behaviour. G2(s) has the zero quite close to the dominant pole, thus interfering with its effect in time responses.
Remark 11.10. This can be more clearly understood looking at the Bode
diagrams of G1 (s) and G2 (s) in Figure 11.25 in comparison with those of Ĝ1 (s)
and Ĝ2 (s). The zero does not aect the frequency response of low frequency pole
−1 for G1 (s), but does so for G2 (s). Remember that low frequencies correspond
to larger time intervals, so in the case of G1 (s) step responses will be, for large
time intervals, similar to those of Ĝ1 (s).
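The partial fraction expansions and the step responses of Example 11.15 can be reproduced in Matlab with residue and step (Control System Toolbox assumed):

% Residues of G1(s) and G2(s), and step responses compared with first-order approximations.
[r1, p1] = residue([2 11], [1 11 10])        % residues 1, 1 at poles -10, -1
[r2, p2] = residue([101 110], [1 11 10])     % residues 100, 1 at poles -10, -1
G1 = tf([2 11], [1 11 10]);     G1hat = tf(1.1, [1 1]);
G2 = tf([101 110], [1 11 10]);  G2hat = tf(11, [1 1]);
step(G1, G1hat, G2, G2hat), grid on          % G1 follows G1hat closely; G2 does not follow G2hat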

Example 11.16. Transfer function

G(s) = (3s² + 6s + 4)/(s³ + 3s² + 4s + 2) = 1/(s + 1) + 1/(s + 1 + j) + 1/(s + 1 − j)   (11.196)

has three poles, none of which is dominant. So the step response of G(s) is in between those of

Ĝ1(s) = 2/(s + 1)   (11.197)
Ĝ2(s) = 2/(s + 1 + j) + 2/(s + 1 − j)   (11.198)

as seen in Figure 11.26.

Figure 11.27: Unit step and unit ramp responses of the transfer functions of
Example 11.17.

A time response can also be studied with respect to:

ˆ its steady-state response, using the final value theorem, as seen above in Section 10.2;

ˆ its transient response, from the difference between the number of zeros and poles, and the eventual presence of non-minimum phase zeros.

The latter cause an undershoot in time responses, i.e. the response begins with a negative derivative if the steady-state value is positive, or a positive derivative if the steady-state value is negative. In simpler terms, the response goes down before going up, or up before going down.
Example 11.17. Figure 11.27 shows the unit step and unit ramp responses of

G1(s) = (−s + 1)/((s + 1)(s + 2))   (11.199)
G2(s) = (s − 1)/(s² + 0.5s + 2)   (11.200)

The responses illustrate the following:

ˆ undershoots exist in all time responses of the plant;

ˆ there may or may not be further oscillations in the plant's response;

ˆ undershoots exist whether the steady-state corresponds to a positive or a negative value.
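These responses can be reproduced with the following Matlab sketch (Control System Toolbox assumed); the ramp responses are obtained as the step responses of the plants in series with an integrator:

% Step and ramp responses of the plants of Example 11.17, showing the undershoots.
G1 = tf([-1 1], conv([1 1], [1 2]));
G2 = tf([1 -1], [1 0.5 2]);
t  = 0:0.01:12;
subplot(2,1,1), step(G1, G2, t), grid on                          % step responses
subplot(2,1,2), step(G1*tf(1,[1 0]), G2*tf(1,[1 0]), t), grid on  % ramp responses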
Theorem 11.2. A non-minimum phase zero or pole causes an undershoot in
time responses.
11.6. SYSTEMS WITH MORE ZEROS AND POLES: TIME RESPONSES 199

Proof. Let us consider the case of a transfer function with a non-minimum phase
zero and a positive steady-state value; other cases are similar:
−s + b
G(s) = n (11.201)
Y
(s + pk )
k=1

The beginning of the transient response corresponds to very small time values,
i.e. to very high frequencies. At high frequencies,
−jω + b −jω
G(jω) = n ≈ (11.202)
Y (jω)n
(jω + pk )
k=1

which is also the frequency response of


1
Ĝ(jω) = − (11.203)
sn−1
So, for small values of t, G(s) and Ĝ(s) behave similarly. Ĝ(s) has negative time
responses to positive inputs (and positive time responses to negative inputs),
and consequently so does G(s) for t ≈ 0.
Theorem 11.3. A transfer function with n poles and m zeros will have step responses y(t) with n − m − 1 continuous derivatives, i.e.

ˆ if n = m, then n − m − 1 = −1: the step response y(t) itself will be discontinuous;

ˆ if n = m + 1, then n − m − 1 = 0: the step response y(t) is continuous, but its first derivative dy(t)/dt is discontinuous (and thus so are higher order derivatives);

ˆ if n = m + 2, then n − m − 1 = 1: the step response y(t) and its first derivative dy(t)/dt are continuous, but its second derivative d²y(t)/dt² is discontinuous (and thus so are higher order derivatives);

ˆ if n = m + 3, then n − m − 1 = 2: the step response y(t), its first derivative dy(t)/dt and its second derivative d²y(t)/dt² are continuous, but its third derivative d³y(t)/dt³ is discontinuous (and thus so are higher order derivatives);

ˆ and so on.

Proof. The line of reasoning is similar to that of the theorem above, and because the system is linear we can consider the unit step response of a transfer function with an arbitrary gain:

y(t) = L⁻¹[(1/s) ∏_{k=1..m}(s + zk) / ∏_{k=1..n}(s + pk)]   (11.204)

Again we are interested in the beginning of the transient response, which corresponds to very small time values, i.e. to very high frequencies, at which

Y(jω) ≈ (1/(jω)) (jω)^m/(jω)^n = 1/(jω)^(n−m+1)   (11.205)

This is the Fourier transform corresponding to 1/s^(n−m+1). It is easily proved by mathematical induction, from the Laplace transform of the Heaviside function and the Laplace transform of the integral, that

L⁻¹[1/s^k] = t^(k−1)/(k − 1)!, t ≥ 0, k ∈ N   (11.206)

Thus, for small values of t, y(t) and t^(n−m)/(n − m)! will behave similarly. The number of continuous derivatives of t^(n−m), n − m ∈ N, is as stated.

Figure 11.28: Step responses of three dierent transfer functions, for which the
dierence between the number of poles and zeros is 0, 1 and 2.

Remark 11.11. It is straightforward to find, in a similar manner, how many continuous derivatives the response of a system to a ramp, a parabola, or another input given by a polynomial will have. For inputs which are not polynomial, a polynomial approximation of the input may be used.

Example 11.18. Consider these transfer functions, with the following differences between the number of poles and zeros:

G0(s) = (s + 1)/(s + 2), n − m = 0   (11.207)
G1(s) = 1/(s + 1), n − m = 1   (11.208)
G2(s) = (s + 1)/((s + 0.5)(s + 1.5)(s + 2)), n − m = 2   (11.209)

According to Theorem 11.3 and its proof, at t = 0 their outputs will be similar to

y0(t) = L⁻¹[1/s] = H(t)   (11.210)
y1(t) = L⁻¹[1/s²] = t   (11.211)
y2(t) = L⁻¹[1/s³] = t²/2   (11.212)

in what concerns the number of continuous derivatives. Figure 11.28 shows visually that it is so.
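The behaviour near t = 0 can be seen zooming in on the step responses (Control System Toolbox assumed): G0(s) jumps at t = 0, G1(s) starts with a non-zero slope, and G2(s) leaves the origin with zero slope, like t²/2:

% Step responses of Example 11.18 near t = 0.
s  = tf('s');
G0 = (s+1)/(s+2);
G1 = 1/(s+1);
G2 = (s+1)/((s+0.5)*(s+1.5)*(s+2));
step(G0, G1, G2, 0:1e-3:0.5), grid on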

Glossary
`To you I may seem a vulgar robber, but I bear on my shoulders
the destiny of the human race. Your tribal life with its stone-age
weapons and beehive huts, its primitive coracles and elementary
social structure, has nothing to compare with our civilization 
with our science, medicine and law, our armies, our architecture, our
commerce, and our transport system which is rapidly annihilating
space and time. Our right to supersede you is the right of the higher
over the lower. Life '
`Half a moment,' said Ransom in English. `That's about as much
as I can manage at one go.' Then, turning to Oyarsa, he began
translating as well as he could. The process was dicult and the
11.6. SYSTEMS WITH MORE ZEROS AND POLES: TIME RESPONSES 201

result  which he felt to be rather unsatisfactory  was something


like this:
`Among us, Oyarsa, there is a kind of hnau who will take other
hnaus ' food and  and things, when they are not looking. He says
he is not an ordinary one of that kind. He says what he does now
will make very dierent things happen to those of our people who
are not yet born. He says that, among you, hnau of one kindred all
live together and the hrossa have spears like those we used a very
long time ago and your huts are small and round and your boats
small and light and like our old ones, and you have one ruler. He
says it is dierent with us. He says we know much. There is a thing
happens in our world when the body of a living creature feels pains
and becomes weak, and he says we sometimes know how to stop it.
He says we have many bent people and we kill them or shut them
in huts and that we have people for settling quarrels between the
bent hnau about their huts and mates and things. He says we have
many ways for the hnau of one land to kill those of another and
some are trained to do it. He says we build very big and strong huts
of stones and other things  like the pfifltriggi. And he says we
exchange many things among ourselves and can carry heavy weights
very quickly a long way. Because of all this, he says it would not be
the act of a bent hnau if our people killed all your people.'

C. S. Lewis (*1898, †1963), Out of the silent planet, 20

critically damped criticamente amortecido


damped frequency frequência amortecida
damping amortecimento
dominant pole polo dominante
maximum overshoot máximo sobreimpulso
minimum phase fase mínima
natural frequency frequência natural
non-minimum phase fase não-mínima
overdamped sobreamortecido
overshoot sobreimpulso
peak time tempo de pico
resonance frequency frequência de ressonância
resonant peak pico de ressonância
settling time tempo de estabelecimento
time constant constante de tempo
underdamped subamortecido
undamped sem amortecimento
undershoot subimpulso

Exercises
1. Sketch the following step responses, marking, whenever they exist,
ˆ the settling time according to the 5% criterion,
ˆ the settling time according to the 2% criterion,
ˆ the steady-state value.

15
(a) G(s) = , for input u(t) = 4H(t)
s+5
10
(b) G(s) = , for input u(t) = H(t)
s−1
1
(c) G(s) = , for input u(t) = −H(t)
2s + 1
−2
(d) G(s) = , for input u(t) = 10H(t)
4s + 1
10
(e) G(s) = , for input u(t) = H(t)
s

Table 11.1: Unit step response of Exercise 4.


time output time output
0.0 0.0000 0.8 0.0950
0.1 0.0000 0.9 0.0982
0.2 0.0000 1.0 0.0993
0.3 0.0000 1.1 0.0998
0.4 0.0000 1.2 0.0999
0.5 0.0000 1.3 0.1000
0.6 0.0632 1.4 0.1000
0.7 0.0865 1.5 0.1000

2. Sketch the Bode diagrams of the following transfer functions, indicating

ˆ the gain for low frequencies,


ˆ the frequency at which the gain is 3 dB below the gain for low fre-
quencies,
ˆ the slope of the gain for high frequencies,
ˆ the phase for low frequencies,
ˆ the phase for high frequencies,
ˆ the frequency at which the phase is the average of those two values.

Hint: draw the asymptotes of the Bode diagram. Mark the frequency at
which the gain has decreased 3 dB.
15
(a) G(s) =
s+5
1
(b) G(s) =
s + 10
1
(c) G(s) =
2s + 1
2
(d) G(s) =
4s + 1
10
(e) G(s) =
s
1
3. Let G(s) = .
s+1
(a) Consider the unit step response of G(s). What is the settling time,
according to the 5% criterion?
(b) Find analytically the unit ramp response y(t) of G(s).
(c) Find the analytical expression of the steady-state yss (t) of that re-
sponse y(t).

(d) How long does it take for (yss(t) − y(t))/y(t) to be less than 5%? In other words, find how long it takes for the unit ramp response to be within a 5% wide band around its steady state.

4. A first order system K/(s + p) has the response tabulated in Table 11.1, when its input is a unit step applied at instant t = 0.5 s. Find the gain K and the pole p. Hint: subtract the response from the steady state value; you should now have an exponential with a negative power. Plot its logarithm and adjust a straight line.

5. For each of the transfer functions below, and for the corresponding step
input, nd, if they exist:

ˆ the natural frequency ωn and the damping factor ξ ,


ˆ the steady state value yss ,
ˆ the peak time tp and the maximum overshoot Mp (expressed in per-
centage),
ˆ the 5% and the 2% settling times (use the expressions for the exponential envelope of the oscillations),
ˆ the location of the poles,
ˆ the time instant at which the response first reaches yss/2,
and sketch the step response.


7
(a) G(s) = , for input u(t) = 0.1H(t)
s2 + 0.4s + 1
1
(b) G(s) = 2
, for input u(t) = 18H(t)
s + 5.1s + 9
1
(c) G(s) = , for input u(t) = H(t)
2s2 + 8
10
(d) G(s) = 2
, for input u(t) = 2H(t)
s −s+1
0.3
(e) G(s) = , for input u(t) = 15H(t)
s2 + 4s − 1
6. Sketch the Bode diagrams of the following transfer functions, indicating

ˆ the gain for low frequencies,


ˆ the resonant peak value, and the frequency at which it is located, if
indeed there is one,
ˆ the slope of the gain for high frequencies,
ˆ the phase for low frequencies,
ˆ the phase for high frequencies,
ˆ the frequency at which the phase is the average of those two values.

Hint: draw the asymptotes of the Bode diagram. If there is a resonant


peak mark it in your plot.
1
(a) G(s) =
s2 + 20s + 100
7
(b) G(s) = 2
s + 0.4s + 1
1
(c) G(s) = 2
s + 5.1s + 9
1
(d) G(s) = 2
2s + 8
7. Find the second order transfer functions that, for a unit step input, have:

(a) tp = 0.403 s, Mp = 16.3%, yss = 0.8


(b) tp = 0.907 s, y(tp ) = 11.63, yss = 10
(c) tr = 0.132 s, ts2% = 2.0 s, yss = 0.5

8. Consider the mechanical system in Figure 11.29. When f (t) = 8.9 N,


t ≥ 0, the output has tp = 1 s, Mp = 9.7%, and yss = 3 × 10−2 m.

(a) Find the values of mass M , viscous damping coecient B , and spring
stiness K .
(b) Suppose we want the same steady-state regime and the same settling
time, but a maximum overshoot of 0.15%. What should the new
values of M , B and K be?

9. Plot the Bode diagrams of the following plants:


−4s + 20
(a) G(s) =
s3
+ 0.4s2 + 4s
3
d y(t) d2 y(t) dy(t) du(t)
(b) + 16 + 65 + 50y(t) = 100 + 50u(t)
dt3 dt2 dt dt
120(s + 1)
(c) G(s) =
s(s + 2)2 (s + 3)

Figure 11.29: System of Exercise 8.

s2
(d) G(s) =
(s + 0.5)(s + 10)
10s
(e) G(s) =
(s + 10)(s2 + s + 2)
(s + 4)(s + 20)
(f) G(s) =
(s + 1)(s + 80)
10. Establish a correspondence between the three Bode diagrams and the three
unit step responses in Figure 11.30.
11. Establish a correspondence between the three Bode diagrams and the three
unit step responses in Figure 11.31.
12. Find the transfer functions corresponding to the Bode diagrams in Fig-
ure 11.32.
13. Use the Routh-Hurwitz criterion to nd how many unstable poles each of
the following transfer functions has, and classify each system as stable,
marginally stable, or unstable.
5
s2 + s − 10
(a) 7
s4 − 2s3 − 13s2 + 14s + 24
(b) s+2
s4 − 2s3 − 13s2 + 14s + 24
(c) s+2 Hint: can you put anything in evi-
s6 − 2s5 − 13s4 + 14s3 + 24s2
dence in the denominator?
(d) s3 + 2s2 + s
s + 4s3 + 4s + 5
4

(e) s3 + 2s2 + s
s5 + 4s4 + 4s2 + 5s
(f) s3 + 2s2 + s
2s3 − 6s + 4
14. Find the ranges of values of K1 , K2 ∈ R for which the systems having
transfer functions with the following denominators are stable.
(a) s3 + 3s2 + 10s + K1
(b) s3 + K2 s2 + 10s + 5
(c) s3 + 2s2 + (K1 + 1)s + K2
15. How many unstable, marginally stable, and stable poles do the following
transfer functions have?
s−2
(a) G1 (s) =
s4 + 2s3 + 5s2 + 8s + 4
Figure 11.30: Bode diagrams and unit step responses of Exercise 10.

Figure 11.31: Bode diagrams and unit step responses of Exercise 11.

Figure 11.32: Bode diagrams of Exercise 12 (magnitude in dB and phase in degrees, versus frequency in rad/s).


Figure 11.33: The Sopwith Camel, a fighter aircraft from World War I.

s4 + 20s3 + 150s2 + 500s + 625


(b) G2 (s) =
s5 − 2s4 + 5s3 − 8s2 + 4s
s2 − 4
(c) G3 (s) = 3
s + 5s2 + 6s + 8
100
16. Find analytically the unit step responses of G1 (s) = and
s + 10
s + 100
G2 (s) = . Sketch them both in the same plot, marking the 5%
s + 10
settling time for each. Plot separately the dierence between them. Then
8 s+8
do the same for G3 (s) = and G4 (s) = .
s + 12 s + 12
17. Consider the following transfer functions:
5050s + 10000
G1 (s) = (11.213)
s2 + 101s + 100
101s + 10000
G2 (s) = 2 (11.214)
s + 101s + 100
(a) Find their poles.
(b) Which pole is faster? Why?
(c) Which of the two transfer functions will respond faster to a unit step?
Why?

18. The roll angle φ (rotation around the x-axis) of the aircraft in Figure 11.33
is given by
ṗ = −2p + 12δA (11.215)
where p = φ̇ and δA is the aileron deection. All variables are given in SI
units.
p(s)
(a) Find transfer function Gp (s) = .
δA (s)
φ(s)
(b) Find transfer function Gφ (s) = .
δA (s)
(c) Sketch p(t) when the ailerons undergo a 1◦ = π
180 rad step.
(d) What is p(t) when t = 0.5 s?
(e) Sketch φ(t) for the same input.
(f) Suppose you want φ = 30◦ after 20 s. Without calculating φ(t), nd
an approximate value for the necessary aileron deection.

120 (s + 1)
19. Plot the Bode diagram of transfer function G(s) = 2 .
s (s + 2) (s + 3)
Part III

Sensors and actuators


Then Dick gave a cry. It's just come into sight. Look, when we
passed those tall trees on the bank yonder, Tall Stone came into
view. It was behind them before that.
Good, said Julian. Now I'm going to stop paddling and keep Tall
Stone in sight. If it goes out of sight I'll tell you and you must back-
paddle. Dick, can you possibly paddle and look out for something
that could be Tock Hill on the opposite side? I daren't take my eyes
o Tall Stone in case it disappears.
Right, said Dick, and paddled while he looked earnestly for Tock
Hill.
Got it! he said suddenly. It must be it! Look, over there  a
funny little hill with a pointed top. Julian, can you still see Tall
Stone?
Yes, said Julian. Keep your eyes on Tock Hill. Now it's up to the
girls. George, paddle away and see if you can spot Steeple.
I can see it now, already! said George, and for one moment the
boys took their eyes o Tall Stone and Tock Hill and looked where
George pointed. They saw the steeple of a faraway church glinting
in the morning sun.
Good, good, good, said Julian. Now Anne  look for Chimney
 look down towards the end of the lake  where the house is. Can
you see its one chimney?
Not quite, said Anne. Paddle just a bit to the left  the left, I
said, George! Yes  yes, I can see the one chimney. Stop paddling
everyone. We're here!
They stopped paddling but the raft drifted on, and Anne lost the
chimney again! They had to paddle back a bit until it came into
sight. By that time George had lost her steeple!
At last all four things were in view at once, and the raft seemed to
be still and unmoving on the quiet waters of the lake.

Enid Blyton (*1897, †1968), Five on a hike together, XVIII

In this part of the lecture notes:

Chapter 12 introduces the basic concepts of measuring chains and control loops.

Chapter 13 presents the technology of the most common types of sensors, and the
criteria to choose them.

Chapter 14 presents the technology of the most common types of actuators, and the
criteria to choose them.

Here is what you need to know beforehand to follow these chapters:

ˆ The Laplace and Fourier transforms, from Chapter 2;

ˆ Transfer functions, from Sections 4.1 and 4.2 of Chapter 4;

ˆ System theory, from Part II.


Chapter 12

Measuring chains and actuation chains

In this chapter we take a closer look at sensors and actuators.

12.1 What are measuring chains and actuation chains
Let us take another look at the simplest representation of a feedback control system, already known to us from Figure 9.13 and repeated in Figure 12.1. We saw in chapter 9 that, to feed back the output to the controller, we need a sensor that measures the output y. The sensor is a device that converts the variable to be read into another which can be read directly. This conversion is called transduction. While we can often assume a perfect sensor H(s) = 1, in which case the measured output ŷ is equal to the output itself (i.e. ŷ = y), the sensor must exist. Sensor technology will be addressed in more detail in chapter 13.

Example 12.1. The analogical ammeter in Figure 12.2 transduces current into
an angle. The angle can be read directly by sight on a scale. This sensor does
not record the reading.

Example 12.2. The seismograph in Figure 12.3 transduces a position (corre-


sponding to the amplitude of the vibration of the ground) again into an angle.
The reading is recorded on paper; time is transduced into a position thanks to
a rotation of the drum obtained through clockwork.

Likewise, the control action u provided by the controller must somehow be implemented in the plant G(s); this is done through an actuator: for instance, a motor in a mechanical plant, or a heater in a thermal plant. This is shown in Figure 12.1. Once more, we can have such a good actuator Ga(s) that we can assume the control action actually applied ũ to be equal to the control action u that should be applied, the one given by the controller  that is to say, Ga(s) = 1 or ũ = u. Even if this is not assumed, the (imperfect) behaviour of the actuator may be considered simply as part of the plant to be controlled G(s). This makes sense if there is no way of changing the actuator, but if the actuator can be replaced by a different one (hopefully a better one, though poor replacement parts and actuator malfunctions are also part of life) it is easier to have separate models and change only the actuator model. Actuator technology will be addressed in more detail in chapter 14.
It is perfectly possible to have plants where no more elements need to be modelled: think of the flush tank from Figure 3.16, where the sensor is the floater, and the actuator the lever that rotates on a hinge and is connected to the floater. But nowadays most sensors and actuators are electronic, and perform a transduction of the measured variable to an electrical quantity (voltage or current); the controllers that use measurements to decide the control actions to apply are also electronic. Consequently, ŷ and u are digital signals (i.e. both discrete in time and quantised in amplitude, as we saw in section 3.2); while y is analogical, and ũ must be, if not analogical, at least continuous in time. That is why it is necessary to have a digital to analogical converter, or simply a DA


Figure 12.1: Closed loop control. Top: simplest representation (see Figure 9.13).
Bottom: sensor and actuator explicitly shown.

Figure 12.2: Ammeter from the Tagus power plant (currently the Museum of
Electricity), Lisbon. Notice that the scale is not linear.

Figure 12.3: Seismograph from the Santa Marta lighthouse (currently a museum), Cascais.

Figure 12.4: Closed loop control with sensor, actuator and AD/DA conversion
explicitly shown.

Figure 12.5: Open loop control with actuation chain explicitly shown.

The function of the DA converter is to receive a digital signal as input and provide an analog output (remember the example in Figure 3.13). Likewise, if the sensor provides a measurement ŷ which is not digital after all, an analog to digital converter, or AD converter for short, is needed after the sensor. Even if the sensor itself already provides a digital output, it is because it incorporates the function of an AD converter. The function of the AD converter is to discretise (in time) and quantise (in amplitude) its input.
The AD and the DA conversions may be carried out by the same device, which will have:

• analog inputs, and the corresponding digital outputs;

• digital inputs, and the corresponding analog outputs.
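
As a toy illustration of what the AD converter does (not tied to any particular hardware; the sampling period and the resolution below are arbitrary assumptions), the following Matlab sketch samples a signal in time and quantises it in amplitude:

Ts = 0.05;                                % sampling period, s (assumed value)
t  = 0:Ts:1;                              % discretisation in time
y  = sin(2*pi*1*t);                       % "analog" signal, assumed to lie in [-1, 1]
nbits  = 4;                               % resolution of the converter, bits (assumed)
levels = 2^nbits;                         % number of quantisation levels
codes  = round((y + 1)/2*(levels - 1));   % integer codes 0 ... levels-1: the digital signal
yhat   = codes/(levels - 1)*2 - 1;        % amplitudes actually represented by the codes
stairs(t, yhat), hold on, plot(t, y)      % compare the quantised and the original signal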

Figure 12.4 shows yet another element after the sensor: a filter, that is, a system designed to eliminate noise from the measurement. This noise can be present in the output of the plant, or be a result of the sensor itself. The filter can be applied to the analog measurement before AD conversion, or be applied to the digital signal after AD conversion (this last option is the only one possible if the sensor provides a digital reading).
Figure 12.4 also indicates what, in a digital control system, constitutes

• the measuring chain: sensor, filter, AD conversion;

• the actuation chain: DA conversion, actuator.

If these chains are ideal, they can be modelled by ŷ/y = 1 and ũ/u = 1.
An open-loop control system will also have an actuation chain, as seen in Figure 12.5, but no measuring chain. Conversely, a system which is measured but not controlled has a measuring chain, but no actuation chain.

12.2 Filters
We have just mentioned filters in connection with the measuring chain, as a means of eliminating noise from sensor measurements. In general, a filter is a system that eliminates some frequency components of a signal. These components are said to be cut. Those that are not eliminated are said to pass the filter. Figure 12.6 shows the ideal behaviour of four types of filters:

Figure 12.6: Ideal filters.

• low-pass filters eliminate high frequencies and pass low frequencies;

• high-pass filters eliminate low frequencies and pass high frequencies;

• band-pass filters eliminate both low and high frequencies and pass intermediate frequencies;

• band-stop or band-reject filters eliminate intermediate frequencies and pass both high and low frequencies.

In terms of periods of oscillations:

• low-pass filters eliminate small periods and pass large periods;

• high-pass filters eliminate large periods and pass small periods;

• band-pass filters eliminate both small and large periods and pass intermediate periods;

• band-stop filters eliminate intermediate periods and pass both small and large periods.

What low and high frequencies (or large and small periods) are depends on the application. The cut-off frequency ωc is the limit separating the pass-band (the interval of frequencies that pass the filter) from the stop-band (the interval of frequencies that the filter eliminates).
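
As an illustration, the Matlab sketch below (as in Section 4.2, assuming the Control System Toolbox is available) gives simple low order transfer functions whose Bode diagrams approximate each of the four behaviours; the cut-off frequency, centre frequency and bandwidth are arbitrary example values, and these are only the simplest possible realisations, not the only ones.

wc = 10;                                   % cut-off frequency, rad/s (assumed value)
lowpass  = tf(wc, [1 wc]);                 % passes w << wc, attenuates w >> wc
highpass = tf([1 0], [1 wc]);              % passes w >> wc, attenuates w << wc
w0 = 10; B = 5;                            % centre frequency and bandwidth, rad/s (assumed)
bandpass = tf([B 0], [1 B w0^2]);          % passes frequencies around w0
bandstop = tf([1 0 w0^2], [1 B w0^2]);     % attenuates frequencies around w0
bode(lowpass, highpass, bandpass, bandstop)
legend('low-pass', 'high-pass', 'band-pass', 'band-stop')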
No filter can be as good as what Figure 12.6 shows. Figure 12.7 pictures a more realistic behaviour. Notice in particular:

• that both the pass-band and the stop-band are defined within some tolerance (in Figure 12.7, δp is the tolerance of the pass-band and δs is the tolerance of the stop-band), which can be larger or narrower depending on what is needed for each particular application;

• that there may be ripples in each of those intervals of frequencies;

• that, because the gain of a filter within the stop-band will never reach 0 (i.e. −∞ dB), no frequencies of the filtered signal are ever eliminated: only attenuated;

• that, because the gain within the pass-band will not always be 1 (i.e. 0 dB), neither will the phase of the filter in that interval always be 0°: and so, in practice, all filters always introduce some distortion in the signal that passes;

Figure 12.7: Behaviour of filters, closer to reality than the ideal behaviour in Figure 12.6. Notice that the band-stop filter shown has a larger value of δs, and larger ripples in the stop-band, than the other filters.

• that there is, between the pass-band and the stop-band, a transition band, i.e. a range of frequencies where the signal no longer passes (since the gain is below the tolerance δp of the pass-band) and where it is not sufficiently attenuated (since the gain is above the tolerance δs of the stop-band), lying between the cut-off frequency ωc which delimits the pass-band and the frequency ωs which delimits the stop-band.

Example 12.3. A 1 Hz signal is measured with an electronic sensor, and the reading is corrupted with 50 Hz noise from the electric grid. To filter out this 2π × 50 = 314 rad/s noise without distorting the 2π × 1 = 6.28 rad/s signal, we will use a first order system, with the pole at the same distance from the frequencies of signal and noise in a Bode diagram. A Bode diagram has a logarithmic scale for frequencies, so the pole should be at the geometric mean of 6.28 rad/s and 314 rad/s, which is √(6.28 × 314) = 44.4 rad/s:

F1(s) = 44.4/(s + 44.4)    (12.1)

Figure 12.8 shows the Bode diagram of F1(s), and the gain and the phase at the frequencies of both signal and noise:

• |F1(j6.28)| = 10^(−0.0861/20) = 0.99, which means that 99% of the signal passes: it undergoes a 1% attenuation;

• ∠F1(j6.28) = −8°, which means that distortion due to a delay in the phase is not very significant;

• |F1(j314)| = 10^(−17.1/20) = 0.14, which means that 14% of the noise passes: it undergoes an 86% attenuation;

• ∠F1(j314) = −82°, which means that noise is not only attenuated: its phase is also significantly distorted.
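
These numbers can be checked numerically. The sketch below is a minimal Matlab example (as in Section 4.2, assuming the Control System Toolbox) that builds F1(s) and evaluates its gain and phase at the frequencies of signal and noise:

wc = sqrt((2*pi*1)*(2*pi*50));       % geometric mean of the two frequencies, ~44.4 rad/s
F1 = tf(wc, [1 wc]);                 % first order low-pass filter (12.1)
w  = [2*pi*1, 2*pi*50];              % frequencies of signal and noise, rad/s
H  = squeeze(freqresp(F1, w));       % complex frequency response at those frequencies
gains  = abs(H)                      % ~0.99 for the signal, ~0.14 for the noise
phases = rad2deg(angle(H))           % ~-8 deg for the signal, ~-82 deg for the noise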

Example 12.4. If this 86% attenuation of the noise is not enough, a second order filter can be used instead. Let us put two poles at 44.4 rad/s:

F2(s) = 44.4²/(s + 44.4)²    (12.2)

Figure 12.9 shows the Bode diagram of F2(s), and the gain and the phase at the frequencies of both signal and noise:

Figure 12.8: Bode diagram of filter F1(s) given by (12.1).

Figure 12.9: Bode diagram of filter F2(s) given by (12.2).

• |F2(j6.28)| = 10^(−0.172/20) = 0.98, which means that 98% of the signal passes: it undergoes a 2% attenuation;

• ∠F2(j6.28) = −16°, which means that distortion due to a delay in the phase is now larger;

• |F2(j314)| = 10^(−34.2/20) = 0.02, which means that 2% of the noise passes: it undergoes a 98% attenuation;

• ∠F2(j314) = −164°, which means that noise is now even more significantly distorted.

Notice how we got a better attenuation of the noise, at the price of distorting the signal more.
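
Again as a sketch (under the same assumptions as before), the second order filter is just the first order one squared, and its gains at each frequency are the squares of those of F1(s):

wc = 44.4;
F2 = tf(wc, [1 wc])^2;               % two real poles at 44.4 rad/s, as in (12.2)
w  = [2*pi*1, 2*pi*50];              % frequencies of signal and noise, rad/s
H2 = squeeze(freqresp(F2, w));
gains  = abs(H2)                     % ~0.98 for the signal, ~0.02 for the noise
phases = rad2deg(angle(H2))          % ~-16 deg for the signal, ~-164 deg for the noise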

Remark 12.1. In the previous example we used a second order system with damping coefficient ξ = 1 (double real pole). Notice that:

• it does not make sense to have an overdamped system, ξ > 1, with two real poles which do not coincide: the transition band would be larger without any advantage (see Figures 11.8 and 11.9 again);

• it does not make sense to have an underdamped system with ξ < √2/2: there would be a resonance peak, meaning that the input would be amplified near the end of the pass-band, which does not make sense for a filter (see Figures 11.10 and 11.11 again).

As a consequence, second order filters only make sense if critically damped or lightly underdamped.
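
To visualise this remark, the sketch below (again a minimal Matlab example assuming the Control System Toolbox; the natural frequency and damping values are just examples) compares the magnitude of second order filters with different damping coefficients: only the case with ξ < √2/2 shows the undesirable resonance peak.

wn = 44.4;                                       % natural frequency, rad/s (example value)
F  = @(xi) tf(wn^2, [1 2*xi*wn wn^2]);           % second order low-pass filter
bodemag(F(1), F(sqrt(2)/2), F(0.2))              % only xi = 0.2 shows a resonance peak
legend('\xi = 1', '\xi = 0.71', '\xi = 0.2')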
Remark 12.2. In both examples above, filter design is rather easy, because signal and noise have well defined frequencies, which are significantly apart from each other. Whenever frequency ranges of signal or noise are not well known, or whenever they are close to each other, filter design becomes far mo