Analog and Digital Signal Processing
Ashok Ambardar
Michigan Technological University
Pacific Grove Albany Belmont Bonn Boston Cincinnati Detroit Johannesburg London
Madrid Melbourne Mexico City New York Paris Singapore Tokyo Toronto Washington
CONTENTS
LIST OF TABLES xi
PREFACE xiii
FROM THE PREFACE TO THE FIRST EDITION xv
1 OVERVIEW 1
1.0 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 The Frequency Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 From Concept to Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2 ANALOG SIGNALS 8
2.0 Scope and Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.1 Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2 Operations on Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3 Signal Symmetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.4 Harmonic Signals and Sinusoids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.5 Commonly Encountered Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.6 The Impulse Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.7 The Doublet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.8 Moments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3 DISCRETE SIGNALS 39
3.0 Scope and Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.1 Discrete Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.2 Operations on Discrete Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.3 Decimation and Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.4 Common Discrete Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.5 Discrete-Time Harmonics and Sinusoids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.6 Aliasing and the Sampling Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.7 Random Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4 ANALOG SYSTEMS 68
4.0 Scope and Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.2 System Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.3 Analysis of LTI Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
4.4 LTI Systems Described by Differential Equations . . . . . . . . . . . . . . . . . . . . . . . . . 76
4.5 The Impulse Response of LTI Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
4.6 System Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
4.7 Application-Oriented Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
5 DISCRETE-TIME SYSTEMS 96
5.0 Scope and Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
5.1 Discrete-Time Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
5.2 System Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
5.3 Digital Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
5.4 Digital Filters Described by Difference Equations . . . . . . . . . . . . . . . . . . . . . . . . 103
5.5 Impulse Response of Digital Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
5.6 Stability of Discrete-Time LTI Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
5.7 Connections: System Representation in Various Forms . . . . . . . . . . . . . . . . . . . . . 116
5.8 Application-Oriented Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
10 MODULATION 300
10.0 Scope and Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
10.1 Amplitude Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
10.2 Single-Sideband AM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
10.3 Angle Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
10.4 Wideband Angle Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
10.5 Demodulation of FM Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
10.6 The Hilbert Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
REFERENCES 798
INDEX 801
LIST OF TABLES
PREFACE

In keeping with the goals of the first edition, this second edition of Analog and Digital Signal Processing
is geared to junior and senior electrical engineering students and stresses the fundamental principles and
applications of signals, systems, transforms, and filters. The premise is to help the student think clearly in
both the time domain and the frequency domain and switch from one to the other with relative ease. The
text assumes familiarity with elementary calculus, complex numbers, and basic circuit analysis.
This edition has undergone extensive revision and refinement, in response to reviewer comments and to
suggestions from users of the first edition (including students). Major changes include the following:
1. At the suggestion of some reviewers, the chapters have been reorganized. Specifically, continuous and
discrete aspects (that were previously covered together in the first few chapters) now appear in separate
chapters. This should allow instructors easier access to either sequential or parallel coverage of analog
and discrete signals and systems.
2. The material in each chapter has been pruned and streamlined to make the book more suited as a
textbook. We highlight the most important concepts and problem-solving methods in each chapter by
including boxed review panels. The review panels are reinforced by discussions and worked examples.
Many new figures have been added to help the student grasp and visualize critical concepts.
3. New application-oriented material has been added to many chapters. The material focuses on how the
theory developed in the text finds applications in diverse fields such as audio signal processing, digital
audio special effects, echo cancellation, spectrum estimation, and the like.
4. Many worked examples in each chapter have been revised and new ones added to reinforce and extend
key concepts. Problems at the end of each chapter are now organized into Drill and Reinforcement,
Review and Exploration, and Computation and Design and include a substantial number of new
problems. The computation and design problems, in particular, should help students appreciate the
application of theoretical principles and guide instructors in developing projects suited to their own
needs.
5. The Matlab-based software supplied with the book has been revised and expanded. All the routines
have been upgraded to run on the latest version (currently, v5) of both the professional edition and
student edition of Matlab, while maintaining downward compatibility with earlier versions.
6. The Matlab appendices (previously at the end of each chapter) have been consolidated into a separate
chapter and substantially revamped. This has allowed us to present integrated application-oriented
examples spanning across chapters in order to help the student grasp important signal-processing
concepts quickly and effectively. Clear examples of Matlab code based on native Matlab routines,
as well as the supplied routines, are included to help accelerate the learning of Matlab syntax.
xiii
xiv Preface
7. A set of new self-contained, menu-driven, graphical user interface (GUI) programs with point-and-click
features is now supplied for ease of use in visualizing basic signal processing principles and concepts.
These GUIs require no experience in Matlab programming, and little experience with its syntax,
and thus allow students to concentrate their efforts on understanding concepts and results. The
programs cover signal generation and properties, time-domain system response, convolution, Fourier
series, frequency response and Bode plots, analog filter design, and digital filter design. The GUIs are
introduced at the end of each chapter, in the Computation and Design section of the problems. I
am particularly grateful to Craig Borghesani, Terasoft, Inc. (http://world.std.com/!borg/) for his
help and Matlab expertise in bringing many of these GUIs to fruition.
This book has profited from the constructive comments and suggestions of the following reviewers:
Professor Khaled Abdel-Ghaffar, University of California at Davis
Professor Tangul Basar, University of Illinois
Professor Martin E. Kaliski, California Polytechnic State University
Professor Roger Goulet, Universite de Sherbrooke
Professor Ravi Kothari, University of Cincinnati
Professor Nicholas Kyriakopoulos, George Washington University
Professor Julio C. Mandojana, Mankato State University
Professor Hadi Saadat, Milwaukee School of Engineering
Professor Jitendra K. Tugnait, Auburn University
Professor Peter Willett, University of Connecticut
Here, at Michigan Technological University, it is also our pleasure to acknowledge the following:
Professor Clark R. Givens for lending mathematical credibility to portions of the manuscript
Professor Warren F. Perger for his unfailing help in all kinds of TEX-related matters
Professor Tim Schulz for suggesting some novel DSP projects, and for supplying several data files
Finally, at PWS Publishing, Ms Suzanne Jeans, Editorial Project Manager, and the editorial and production
staff (Kirk Bomont, Liz Clayton, Betty Duncan, Susan Pendleton, Bill Stenquist, Jean Thompson, and
Nathan Wilbur), were instrumental in helping meet (or beat) all the production deadlines.
We would appreciate hearing from you if you find any errors in the text or discover any bugs in the
software. Any errata for the text and upgrades to the software will be posted on our Internet site.
FROM THE PREFACE TO THE FIRST EDITION

This book on analog and digital signal processing is intended to serve both as a text for students and as a
source of basic reference for professionals across various disciplines. As a text, it is geared to junior/senior
electrical engineering students and details the material covered in a typical undergraduate curriculum. As
a reference, it attempts to provide a broader perspective by introducing additional special topics towards
the later stages of each chapter. Complementing this text, but deliberately not integrated into it, is a set of
powerful software routines (running under Matlab) that can be used not only for reinforcing and visualizing
concepts but also for problem solving and advanced design.
The text stresses the fundamental principles and applications of signals, systems, transforms and filters.
It deals with concepts that are crucial to a full understanding of time-domain and frequency-domain rela-
tionships. Our ultimate objective is that the student be able to think clearly in both domains and switch
from one to the other with relative ease. It is based on the premise that what might often appear obvious
to the expert may not seem so obvious to the budding expert. Basic concepts are, therefore, explained and
illustrated by worked examples to bring out their importance and relevance.
Scope
The text assumes familiarity with elementary calculus, complex numbers, basic circuit analysis and (in a few
odd places) the elements of matrix algebra. It covers the core topics in analog and digital signal processing
taught at the undergraduate level. The links between analog and digital aspects are explored and emphasized
throughout. The topics covered in this text may be grouped into the following broad areas:
1. An introduction to signals and systems, their representation and their classification.
2. Convolution, a method of time-domain analysis, which also serves to link the time domain and the
frequency domain.
3. Fourier series and Fourier transforms, which provide a spectral description of analog signals, and
their applications.
4. The Laplace transform, which forms a useful tool for system analysis and its applications.
5. Applications of Fourier and Laplace techniques to analog filter design.
6. Sampling and the discrete-time Fourier transform (DTFT) of sampled signals, and the DFT and
the FFT, all of which reinforce the central concept that sampling in one domain leads to a periodic
extension in the other.
7. The z-transform, which extends the DTFT to the analysis of discrete-time systems.
8. Applications of digital signal processing to the design of digital filters.
We have tried to preserve a rational approach and include all the necessary mathematical details, but we
have also emphasized heuristic explanations whenever possible. Each chapter is more or less structured as
follows:
1. A short opening section outlines the objectives and topical coverage and points to the required back-
ground.
2. Central concepts are introduced in early sections and illustrated by worked examples. Special topics
are developed only in later sections.
3. Within each section, the material is broken up into bite-sized pieces. Results are tabulated and sum-
marized for easy reference and access.
4. Whenever appropriate, concepts are followed by remarks, which highlight essential features or limita-
tions.
5. The relevant software routines and their use are outlined in Matlab appendices to each chapter.
Sections that can be related to the software are specially marked in the table of contents.
6. End-of-chapter problems include a variety of drills and exercises. Matlab code to generate answers
to many of these appears on the supplied disk.
A solutions manual for instructors is available from the publisher.
Software
A unique feature of this text is the analog and digital signal processing (ADSP) software toolbox for signal
processing and analytical and numerical computation designed to run under all versions of Matlab. The
routines are self-demonstrating and can be used to reinforce essential concepts, validate the results of ana-
lytical paper and pencil solutions, and solve complex problems that might, otherwise, be beyond the skills
of analytical computation demanded of the student.
The toolbox includes programs for generating and plotting signals, regular and periodic convolution,
symbolic and numerical solution of differential and difference equations, Fourier analysis, frequency response,
asymptotic Bode plots, symbolic results for system response, inverse Laplace and inverse z-transforms, design
of analog, IIR and FIR filters by various methods, and more.
Since our primary intent is to present the principles of signal processing, not software, we have made no
attempt to integrate Matlab into the text. Software related aspects appear only in the appendices to each
chapter. This approach also maintains the continuity and logical flow of the textual material, especially for
users with no inclination (or means) to use the software. In any case, the self-demonstrating nature of the
routines should help you to get started even if you are new to Matlab. As an aside, all the graphs for this
text were generated using the supplied ADSP toolbox.
We hasten to provide two disclaimers. First, our use of Matlab is not to be construed as an endorsement
of this product. We just happen to like it. Second, our routines are supplied in good faith; we fully expect
them to work on your machine, but provide no guarantees!
Acknowledgements
This book has gained immensely from the incisive, sometimes provoking, but always constructive, criticism
of Dr. J. C. Mandojana. Many other individuals have also contributed in various ways to this effort. Special
thanks are due, in particular, to
Drs. R.W. Bickmore and R.T. Sokolov, who critiqued early drafts of several chapters and provided
valuable suggestions for improvement.
Dr. A.R. Hambley, who willingly taught from portions of the final draft in his classes.
Drs. D.B. Brumm, P.H. Lewis and J.C. Rogers, for helping set the tone and direction in which the
book finally evolved.
Mr. Scott Ackerman, for his invaluable computer expertise in (the many) times of need.
At PWS Publishing, the editor Mr. Tom Robbins, for his constant encouragement, and Ms. Pam
Rockwell for her meticulous attention to detail during all phases of editing and production, and Ken
Morton, Lai Wong, and Lisa Flanagan for their behind-the-scenes help.
The students, who tracked down inconsistencies and errors in the various drafts, and provided extremely
useful feedback.
The Mathworks, for permission to include modified versions of a few of their m-files with our software.
We would also like to thank Dr. Mark Thompson, Dr. Hadi Saadat and the following reviewers for their
useful comments and suggestions:
Campus lore has it that students complain about texts prescribed by their instructors as being too
highbrow or tough and not adequately reflecting student concerns, while instructors complain about texts
as being low-level and, somehow, less demanding. We have consciously tried to write a book that both the
student and the instructor can tolerate. Whether we have succeeded remains to be seen and can best be
measured by your response. And, if you have read this far, and are still reading, we would certainly like to
hear from you.
Chapter 1
OVERVIEW
1.0 Introduction
I listen and I forget,
I see and I remember, I do and I learn.
A Chinese Proverb
This book is about signals and their processing by systems. This chapter provides an overview of the
terminology of analog and digital processing and of the connections between the various topics and concepts
covered in subsequent chapters. We hope you return to it periodically to fill in the missing details and get
a feel for how all the pieces fit together.
1.1 Signals
Our world is full of signals, both natural and man-made. Examples are the variation in air pressure when we
speak, the daily highs and lows in temperature, and the periodic electrical signals generated by the heart.
Signals represent information. Often, signals may not convey the required information directly and may
not be free from disturbances. It is in this context that signal processing forms the basis for enhancing,
extracting, storing, or transmitting useful information. Electrical signals perhaps offer the widest scope for
such manipulations. In fact, it is commonplace to convert signals to electrical form for processing.
The value of a signal, at any instant, corresponds to its (instantaneous) amplitude. Time may assume
a continuum of values, t, or discrete values, nts , where ts is a sampling interval and n is an integer.
The amplitude may also assume a continuum of values or be quantized to a finite number of discrete levels
between its extremes. This results in four possible kinds of signals, as shown in Figure 1.1.
Figure 1.1 The four possible kinds of signals
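The distinction between the four signal types can be made concrete by sampling and quantizing a cosine. The sketch below is illustrative Python (the helper name sample_and_quantize is ours, not the book's; the book's own companion software is Matlab-based): sampling makes time discrete, and rounding to a fixed set of levels makes the amplitude discrete.

```python
import numpy as np

def sample_and_quantize(x, t_end, ts, levels):
    """Sample the analog signal x(t) every ts seconds, then quantize the
    sample amplitudes to a finite number of uniformly spaced levels."""
    n = np.arange(0, int(round(t_end / ts)) + 1)   # integer sample index
    sampled = x(n * ts)                            # sampled signal x[n] = x(n*ts)
    lo, hi = sampled.min(), sampled.max()
    step = (hi - lo) / (levels - 1)                # quantizer step size
    quantized = lo + step * np.round((sampled - lo) / step)
    return sampled, quantized

# A 1-Hz cosine sampled at ts = 0.05 s and quantized to 8 levels
sampled, quantized = sample_and_quantize(lambda t: np.cos(2 * np.pi * t), 1.0, 0.05, 8)
```

The quantization error is at most half the step size, which is why finer quantization (more levels) gives a digital signal closer to the sampled original.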
The music you hear from your compact disc (CD) player due to changes in the air pressure caused by
the vibration of the speaker diaphragm is an analog signal because the pressure variation is a continuous
function of time. However, the information stored on the compact disc is in digital form. It must be processed
and converted to analog form before you can hear the music. A record of the yearly increase in the world
population describes time measured in increments of one (year), and the population increase is measured in
increments of one (person). It is a digital signal with discrete values for both time and population.
Few other technologies have revolutionized the world as profoundly as those based on digital signal
processing. For example, the technology of recorded music was, until recently, completely analog from end
to end, and the most important commercial source of recorded music used to be the LP (long-playing) record.
The advent of the digital compact disc has changed all that in the span of just a few short years and made
the long-playing record practically obsolete. Signal processing, both analog and digital, forms the core of
this application and many others.
1.2 Systems
Systems may process analog or digital signals. All systems obey energy conservation. Loosely speaking, the
state of a system refers to variables, such as capacitor voltages and inductor currents, which yield a measure
of the system energy. The initial state is described by the initial value of these variables or initial conditions.
A system is relaxed if initial conditions are zero. In this book, we study only linear systems (whose
input-output relation is a straight line passing through the origin). If a complicated input can be split into
simpler forms, linearity allows us to find the response as the sum of the response to each of the simpler
forms. This is superposition. Many systems are actually nonlinear. The study of nonlinear systems often
involves making simplifying assumptions, such as linearity.
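Superposition is easy to verify numerically. In this illustrative Python sketch (the running-sum and squarer systems are our examples, not the book's), scaling and adding the inputs before or after the system gives the same output only for the linear system:

```python
import numpy as np

def running_sum(x):
    # A simple linear system: y[n] is the cumulative sum of x[0..n]
    return np.cumsum(x)

def squarer(x):
    # A nonlinear system: y[n] = x^2[n]; superposition fails here
    return np.asarray(x) ** 2

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(50), rng.standard_normal(50)
a1, a2 = 2.0, -3.0

# Superposition holds for the linear system...
linear_ok = np.allclose(running_sum(a1 * x1 + a2 * x2),
                        a1 * running_sum(x1) + a2 * running_sum(x2))

# ...but not for the squarer.
nonlinear_ok = np.allclose(squarer(a1 * x1 + a2 * x2),
                           a1 * squarer(x1) + a2 * squarer(x2))
```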
The responses of an RC circuit (input vi(t), output v0(t) taken across the capacitor C, time constant τ = RC, initially relaxed) may be summarized as follows:

Input vi(t): A, t ≥ 0 (step)
Response: A(1 − e^(−t/τ)), t ≥ 0

Input vi(t): A cos(ω0t)
Response: [A/(1 + ω0²τ²)^(1/2)] cos(ω0t + φ), where φ = −tan⁻¹(ω0τ)

Input vi(t): A cos(ω0t), t ≥ 0 (switched cosine)
Response: [A/(1 + ω0²τ²)^(1/2)] cos(ω0t + φ) − [A/(1 + ω0²τ²)] e^(−t/τ), t ≥ 0

(a) Input cos(ω0t) and response (dark)  (b) Input cos(ω0t), t ≥ 0 and response (dark)
It is not our intent here to see how the solutions arise but how to interpret the results in terms of system
performance. The cosine input yields only a sinusoidal component as the steady-state response. The
response to the suddenly applied step and the switched cosine also includes a decaying exponential term
representing the transient component.
Figure 1.4 Step response of an RC circuit for various τ and the concept of rise time
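The closed-form responses can be checked against a direct numerical integration of the circuit equation τ v0′(t) + v0(t) = vi(t). The following Python sketch (forward-Euler integration, with illustrative parameter values of our own choosing) reproduces both the step response and the switched-cosine response, including the decaying transient:

```python
import numpy as np

def rc_response(x, t, tau):
    """Forward-Euler integration of the RC-circuit equation tau*y' + y = x(t)
    from rest (y(0) = 0)."""
    dt = t[1] - t[0]
    y = np.zeros_like(t)
    for k in range(len(t) - 1):
        y[k + 1] = y[k] + dt * (x[k] - y[k]) / tau
    return y

A, tau = 1.0, 0.5
t = np.linspace(0, 5, 20001)

# Step input: closed form A*(1 - e^{-t/tau})
y_step = rc_response(A * np.ones_like(t), t, tau)
step_exact = A * (1 - np.exp(-t / tau))

# Switched cosine: steady-state sinusoid plus a decaying transient
w0 = 2 * np.pi
phi = -np.arctan(w0 * tau)
gain = A / np.sqrt(1 + (w0 * tau) ** 2)
y_cos = rc_response(A * np.cos(w0 * t), t, tau)
cos_exact = gain * np.cos(w0 * t + phi) - (A / (1 + (w0 * tau) ** 2)) * np.exp(-t / tau)
```

Note that the transient term is exactly what is needed to make the total response start from rest at t = 0.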
Figure: a periodic signal as a sum of harmonics ck cos(2πkf0t + θk); its magnitude spectrum (ck versus f) and phase spectrum (θk versus f) exist only at the harmonic frequencies kf0.
Figure: cosine input (dashed) and the system response for τ = 10, 1, and 0.1.
If the input consists of unit cosines at dierent frequencies, the magnitude and phase (versus frequency)
of the ratio of the output describes the frequency response, as shown in Figure 1.7. The magnitude
spectrum clearly shows the eects of attenuation at high frequencies.
Figure 1.7 The magnitude (in dB) and phase of the frequency response, plotted against frequency f
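As an illustration (the component values are ours, not the book's), the frequency response of the RC lowpass circuit with time constant τ is H(f) = 1/(1 + j2πfτ). The sketch below computes its magnitude and phase and confirms the high-frequency attenuation:

```python
import numpy as np

tau = 0.5
f = np.linspace(0, 10, 1001)                 # frequency axis in Hz

# Frequency response of the RC lowpass circuit: H(f) = 1 / (1 + j*2*pi*f*tau)
H = 1.0 / (1.0 + 1j * 2 * np.pi * f * tau)
magnitude_db = 20 * np.log10(np.abs(H))      # magnitude in decibels
phase_deg = np.degrees(np.angle(H))          # phase in degrees
```

The magnitude is 0 dB at dc and falls monotonically with frequency, while the phase approaches −90° well beyond f = 1/(2πτ).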
There are measures analogous to bandwidth that describe the time duration of a signal over which much
of the signal is concentrated. The time constant provides one such measure.
The relation τB = 1 clearly brings out the reciprocity in time and frequency. The smaller the duration τ
or the more localized the time signal, the larger is its bandwidth B or frequency spread. The quantity τB is
a measure of the time-bandwidth product, a relation analogous to the uncertainty principle of quantum
physics. We cannot simultaneously make both duration and bandwidth arbitrarily small.
(a) Cosines at different frequencies (b) Sum of 100 cosines (c) Limiting form is impulse
The time-domain response to an impulse is called the impulse response. A system is completely char-
acterized in the frequency domain by its frequency response or transfer function. A system is completely
characterized in the time domain by its impulse response. Naturally, the transfer function and impulse
response are two equivalent ways of looking at the same system.
1.3.4 Convolution
The idea of decomposing a complicated signal into simpler forms is very attractive for both signal and system
analysis. One approach to the analysis of continuous-time systems describes the input as a sum of weighted
impulses and finds the response as a sum of weighted impulse responses. This describes the process of
convolution. Since the response is, in theory, a cumulative sum of infinitely many impulse responses, the
convolution operation is actually an integral.
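The convolution integral y(t) = ∫ x(λ) h(t − λ) dλ can be approximated numerically by a scaled discrete convolution. A short Python sketch (the pulse and exponential are our illustrative choices):

```python
import numpy as np

# Approximate the convolution integral by sampling both signals on a fine
# grid and scaling the discrete convolution by the grid spacing dt.
dt = 0.001
t = np.arange(0, 5, dt)

x = np.where(t < 1, 1.0, 0.0)   # unit-height pulse of width 1
h = np.exp(-t)                  # impulse response e^{-t}, t >= 0

y = np.convolve(x, h)[: len(t)] * dt

# Closed form for comparison: y(t) = 1 - e^{-t} on 0 <= t < 1,
# and y(t) = (e - 1) e^{-t} for t >= 1.
```

Each output sample is a cumulative sum of many weighted impulse-response samples; scaling by dt turns that sum into an approximation of the integral.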
operation of multiplication when we move to a transformed domain, but there is a price to pay. Since the
response is evaluated in the transformed domain, we must have the means to remap this response to the
time domain through an inverse transformation. Examples of this method include phasor analysis (for
sinusoids and periodic signals), Fourier transforms, and Laplace transforms. Phasor analysis only allows
us to find the steady-state response of relaxed systems to periodic signals. The Fourier transform, on the
other hand, allows us to analyze relaxed systems with arbitrary inputs. The Laplace transform uses
a complex frequency to extend the analysis both to a larger class of inputs and to systems with nonzero
initial conditions. Dierent methods of system analysis allow dierent perspectives on both the system and
the analysis results. Some are more suited to the time domain, others oer a perspective in the frequency
domain, and yet others are more amenable to numerical computation.
Chapter 2

ANALOG SIGNALS
2.1 Signals
The study of signals allows us to assess how they might be processed to extract useful information. This is
indeed what signal processing is all about. An analog signal may be described by a mathematical expression
or graphically by a curve or even by a set of tabulated values. Real signals, alas, are not easy to describe
quantitatively. They must often be approximated by idealized forms or models amenable to mathematical
manipulation. It is these models that we concentrate on in this chapter.
Piecewise continuous signals possess different expressions over different intervals. Continuous sig-
nals, such as x(t) = sin(t), are defined by a single expression for all time.
Periodic signals are infinite-duration signals that repeat the same pattern endlessly. The smallest
repetition interval is called the period T and leads to the formal definition x(t) = x(t ± nT), where n is an integer.
All time-limited functions of finite amplitude have finite absolute area. The criterion of absolute integrability
is often used to check for system stability or justify the existence of certain transforms.
The area of x²(t) is tied to the power or energy delivered to a 1-Ω resistor. The instantaneous power
pi(t) (in watts) delivered to a 1-Ω resistor may be expressed as pi(t) = x²(t), where the signal x(t) represents
either the voltage across it or the current through it. The total energy E delivered to the 1-Ω resistor is
called the signal energy (in joules) and is found by integrating the instantaneous power pi(t) for all time:
E = ∫ pi(t) dt = ∫ |x(t)|² dt   (integrals taken over all time)   (2.3)
The absolute value |x(t)| allows this relation to be used for complex-valued signals. The energy of some
common signals is summarized in the following review panel.
The signal power P equals the time average of the signal energy over all time. If x(t) is periodic with
period T , the signal power is simply the average energy per period, and we have
P = (1/T) ∫T |x(t)|² dt   (for periodic signals)   (2.4)
Notation: We use ∫T to mean integration over any convenient one-period duration.
The average value can never exceed the rms value and thus xav ≤ xrms. Two useful results pertaining to the
power in sinusoids and complex exponentials are listed in the following review panel.
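These power results are easy to confirm numerically. In the Python sketch below (signal_power is our helper name, and the amplitude is illustrative), the power of A cos(2πf0t) works out to A²/2 and the power of the complex exponential A e^(j2πf0t) works out to A²:

```python
import numpy as np

def signal_power(x):
    """Average power of a periodic signal sampled uniformly over one full
    period: P = (1/T) * integral of |x(t)|^2 over the period, here a mean."""
    return np.mean(np.abs(x) ** 2)

A, T = 3.0, 1.0
t = np.linspace(0, T, 100001)

P_cos = signal_power(A * np.cos(2 * np.pi * t / T))    # expect A^2 / 2
P_exp = signal_power(A * np.exp(2j * np.pi * t / T))   # expect A^2
rms_cos = np.sqrt(P_cos)                               # expect A / sqrt(2)
```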
If x(t) is a nonperiodic power signal, we can compute the signal power (or average value) by averaging its
energy (or area) over a finite stretch T0, and letting T0 → ∞ to obtain the limiting forms

P = lim(T0→∞) (1/T0) ∫ |x(t)|² dt    xav = lim(T0→∞) (1/T0) ∫ x(t) dt   (for nonperiodic signals)   (2.6)

where each integral is taken over the interval (−0.5T0, 0.5T0).
We emphasize that these limiting forms are useful only for nonperiodic signals.
Figure E2.1A The signals for Example 2.1(a)
2.1 Signals 11
Comment: The third term describes twice the area of x(t)y(t) (and equals 12).
(b) The signal x(t) = 2e^(−t) − 6e^(−2t), t > 0 is an energy signal. Its energy is

Ex = ∫ x²(t) dt = ∫ (4e^(−2t) − 24e^(−3t) + 36e^(−4t)) dt = 2 − 8 + 9 = 3 J

with the integrals taken over (0, ∞), using the result ∫ e^(−αt) dt = 1/α over (0, ∞).
Comment: As a consistency check, ensure that the energy is always positive!
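The energy can also be evaluated numerically as a further check. A short Python sketch (grid and tolerance are our choices):

```python
import numpy as np

# Numerical consistency check for the energy found above:
# x(t) = 2e^{-t} - 6e^{-2t}, t > 0, should have Ex = 3 J.
t = np.linspace(0, 20, 800001)          # the integrand is negligible beyond t = 20
dt = t[1] - t[0]
x = 2 * np.exp(-t) - 6 * np.exp(-2 * t)
Ex = np.sum(x ** 2) * dt                # rectangular-rule approximation of the integral
```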
(c) Find the signal power for the periodic signals shown in Figure E2.1C.
Figure E2.1C The signals for Example 2.1(c)
We use the results of Review Panel 2.2 to find the energy in one period.
For x(t): The energy Ex in one period is the sum of the energy in each half-cycle. We compute
Ex = (1/2)A²(0.5T) + (1/2)(−A)²(0.5T) = 0.5A²T.
The power in x(t) is thus Px = Ex/T = 0.5A².
For y(t): The energy Ey in one period of y(t) is Ey = 0.5A²τ.
Thus Py = Ey/T = 0.5A²τ/T = 0.5A²D, where D = τ/T is the duty ratio.
For a half-wave rectified sine, D = 0.5 and the signal power equals 0.25A².
For a full-wave rectified sine, D = 1 and the signal power is 0.5A².
For f(t): The energy Ef in one period is Ef = (1/3)A²(0.5T) + (1/3)(−A)²(0.5T) = A²T/3.
The signal power is thus Pf = Ef/T = A²/3.
(d) Let x(t) = Ae^(jω0t). Since x(t) is complex valued, we work with |x(t)| (which equals A) to obtain

Px = (1/T) ∫T |x(t)|² dt = (1/T) ∫T A² dt = A²
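The duty-ratio results for the rectified sines are easy to confirm numerically (illustrative Python sketch with amplitude and period of our own choosing):

```python
import numpy as np

A, T = 2.0, 1.0
t = np.linspace(0, T, 200001)
s = A * np.sin(2 * np.pi * t / T)

half_wave = np.where(s > 0, s, 0.0)   # half-wave rectified sine (D = 0.5)
full_wave = np.abs(s)                 # full-wave rectified sine (D = 1)

P_half = np.mean(half_wave ** 2)      # expect 0.25 * A^2
P_full = np.mean(full_wave ** 2)      # expect 0.5 * A^2
```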
x(t), f(t) = 1 + x(t − 1), g(t) = x(1 − t), h(t) = x(0.5t + 0.5), w(t) = x(−2t + 2)
To generate f(t) = 1 + x(t − 1), we delay x(t) by 1 and add a dc offset of 1 unit.
To generate g(t) = x(1 − t), we fold x(t) and then shift right by 1.
Consistency check: With t = 1 − tn, the edge of x(t) at t = 2 translates to tn = 1 − t = −1.
To generate h(t) = x(0.5t + 0.5), first advance x(t) by 0.5 and then stretch by 2 (or first stretch by 2
and then advance by 1).
Consistency check: With t = 0.5tn + 0.5, the edge of x(t) at t = 2 translates to tn = 2(t − 0.5) = 3.
To generate w(t) = x(−2t + 2), advance x(t) by 2 units, then shrink by 2 and fold.
Consistency check: With t = −2tn + 2, the edge of x(t) at t = 2 translates to tn = −0.5(t − 2) = 0.
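All of these consistency checks follow one rule: a feature of x located at t = t0 appears in y(t) = x(αt + β) at the time tn satisfying αtn + β = t0, that is, tn = (t0 − β)/α. A small Python sketch (feature_location is our name, not the book's):

```python
import numpy as np

def feature_location(t0, alpha, beta):
    """If y(t) = x(alpha*t + beta), a feature of x located at t = t0 appears
    in y at the time tn that satisfies alpha*tn + beta = t0."""
    return (t0 - beta) / alpha

# Consistency checks from the worked example: the edge of x(t) at t = 2 maps to...
edge_g = feature_location(2, -1.0, 1.0)   # g(t) = x(1 - t)      -> tn = -1
edge_h = feature_location(2, 0.5, 0.5)    # h(t) = x(0.5t + 0.5) -> tn = 3
edge_w = feature_location(2, -2.0, 2.0)   # w(t) = x(-2t + 2)    -> tn = 0
```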
(b) Express the signal y(t) of Figure E2.2B in terms of the signal x(t).
x(t): a pulse of height 2 over (−1, 1); y(t): a pulse of height 4 over (−1, 5)
Figure E2.2B The signals x(t) and y(t) for Example 2.2(b)
We note that y(t) is amplitude scaled by 2. It is also a folded, stretched, and shifted version of x(t).
If we fold 2x(t) and stretch by 3, the pulse edges are at (−3, 3). We need a delay of 2 to get y(t), and
thus y(t) = 2x[−(t − 2)/3] = 2x(−t/3 + 2/3).
Alternatively, with y(t) = 2x(αt + β), we use t = αtn + β and solve for α and β by noting that t = −1
corresponds to tn = 5 and t = 1 corresponds to tn = −1. Then
−1 = 5α + β and 1 = −α + β, so that α = −1/3 and β = 2/3.
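The two point correspondences form a pair of linear equations in α and β that can also be solved numerically (Python sketch; each pair (tn, t) gives one equation t = αtn + β):

```python
import numpy as np

# Solve for alpha and beta in y(t) = 2*x(alpha*t + beta) from the two
# correspondences t = -1 <-> tn = 5 and t = 1 <-> tn = -1.
M = np.array([[5.0, 1.0],
              [-1.0, 1.0]])
rhs = np.array([-1.0, 1.0])
alpha, beta = np.linalg.solve(M, rhs)   # expect alpha = -1/3, beta = 2/3
```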
For an even symmetric signal, the signal values at t = α and t = −α are equal. The area of an even
symmetric signal is twice the area on either side of the origin. For an odd symmetric signal, the signal values
at t = α and t = −α are equal but opposite in sign, and the signal value at the origin equals zero. The area
of an odd symmetric signal over symmetric limits (−α, α) is always zero.
Combinations (sums and products) of symmetric signals are also symmetric under certain conditions as
summarized in the following review panel. These results are useful for problem solving.
To find xe(t) and xo(t) from x(t), we fold x(t) and invoke symmetry to get

    xe(t) = 0.5[x(t) + x(−t)]        xo(t) = 0.5[x(t) − x(−t)]

Figure E2.3A(1) The signals for Example 2.3(a)
For x(t), we create 0.5x(t) and 0.5x(−t), then add the two to give xe(t) and subtract to give xo(t), as shown in Figure E2.3A(2). Note how the components get added (or subtracted) when there is overlap.
Figure E2.3A(2) The process for finding the even and odd parts of x(t)
The process for finding the even and odd parts of y(t) is identical and shown in Figure E2.3A(3).
Figure E2.3A(3) The process for finding the even and odd parts of y(t)
In either case, as a consistency check, make sure that the even and odd parts display the appropriate
symmetry and add up to the original signal.
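The decomposition xe(t) = 0.5[x(t) + x(−t)], xo(t) = 0.5[x(t) − x(−t)] and its consistency checks are easy to verify numerically. A minimal Python sketch with a hypothetical one-sided signal (Python stands in for Matlab):

```python
# Even/odd decomposition: xe(t) = 0.5[x(t) + x(-t)], xo(t) = 0.5[x(t) - x(-t)].
def even_part(x, t):
    return 0.5 * (x(t) + x(-t))

def odd_part(x, t):
    return 0.5 * (x(t) - x(-t))

x = lambda t: t + 1.0 if 0 <= t <= 2 else 0.0  # a hypothetical one-sided signal
for t in (-1.5, -0.5, 0.5, 1.5):
    xe, xo = even_part(x, t), odd_part(x, t)
    assert abs((xe + xo) - x(t)) < 1e-12       # parts add up to the original
    assert abs(even_part(x, -t) - xe) < 1e-12  # even symmetry
    assert abs(odd_part(x, -t) + xo) < 1e-12   # odd symmetry
print("decomposition verified")
```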
(b) Let x(t) = (sin t + 1)². To find its even and odd parts, we expand x(t) to get
x(t) = sin²t + 2 sin t + 1 = 1.5 − 0.5 cos(2t) + 2 sin t. Thus xe(t) = 1.5 − 0.5 cos(2t) and xo(t) = 2 sin t.
The complex exponential form requires two separate plots (its real part and imaginary part, for example)
for a graphical description.
If we write xp(t) = A cos(ω₀t + φ) = A cos[ω₀(t − tp)], the quantity tp = −φ/ω₀ is called the phase delay and describes the time delay in the signal caused by a phase shift of φ.
The various time and frequency measures are related by

    f₀ = 1/T        ω₀ = 2π/T = 2πf₀        ω₀tp = 2πf₀tp = 2π tp/T        (2.13)
We emphasize that an analog sinusoid or harmonic signal is always periodic and unique for any choice of
period or frequency (quite in contrast to digital sinusoids, which we study later).
For a combination of sinusoids at different frequencies, say y(t) = x₁(t) + x₂(t) + ···, the signal power Py equals the sum of the individual powers, and the rms value equals √Py. The reason is that squaring y(t) produces cross-terms such as 2x₁(t)x₂(t), all of which integrate to zero.
The frequencies (in rad/s) of the individual components are 2π/3, π/2, and π/3, respectively.
The fundamental frequency is ω₀ = GCD(2π/3, π/2, π/3) = π/6 rad/s. Thus T = 2π/ω₀ = 12 seconds.
The signal power (half the sum of the squared amplitudes of the components) is Px = 36 W.
The rms value is xrms = √Px = √36 = 6.
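The GCD of rational frequencies can be computed with exact fractions. A sketch (Python's fractions module plays the role the ADSP gcd1 routine plays in the problems; the frequencies are expressed here as rational multiples of π rad/s, an assumption about the example's components):

```python
import math
from fractions import Fraction
from functools import reduce

def frac_gcd(a, b):
    # gcd of two rationals: put over a common denominator, gcd the numerators
    num = math.gcd(a.numerator * b.denominator, b.numerator * a.denominator)
    return Fraction(num, a.denominator * b.denominator)

# component frequencies as multiples of pi rad/s (assumed from the example)
freqs = [Fraction(2, 3), Fraction(1, 2), Fraction(1, 3)]
w0 = reduce(frac_gcd, freqs)
print(w0)  # 1/6, i.e. a fundamental of pi/6 rad/s, so T = 2*pi/w0 = 12 s
```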
(b) The signal x(t) = sin(t) + sin(πt) is almost periodic because the frequencies ω₁ = 1 rad/s and ω₂ = π rad/s of the two components are non-commensurate.
The signal power is Px = 0.5(1² + 1²) = 1 W.
    rect(t) = 1 for |t| < 0.5, and 0 elsewhere (width = 1)        tri(t) = 1 − |t| for |t| ≤ 1, and 0 elsewhere (width = 2)        (2.16)
Both are even symmetric and possess unit area and unit height. The signal f(t) = rect[(t − β)/α] describes a rectangular pulse of width α, centered at t = β. The signal g(t) = tri[(t − β)/α] describes a triangular pulse of width 2α, centered at t = β. These pulse signals serve as windows to limit and shape arbitrary signals. Thus, h(t) = x(t)rect(t) equals x(t) abruptly truncated past |t| = 0.5, whereas x(t)tri(t) equals x(t) linearly tapered about t = 0 and zero past |t| = 1.
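The windowing behavior can be sketched directly (Python in place of Matlab; the constant signal is a hypothetical example):

```python
def rect(t):
    # unit-height, unit-width pulse centered at t = 0
    return 1.0 if abs(t) < 0.5 else 0.0

def tri(t):
    # unit-height triangle of width 2 centered at t = 0
    return 1.0 - abs(t) if abs(t) <= 1 else 0.0

# windowing a signal: abrupt vs linearly tapered truncation
x = lambda t: 2.0            # a hypothetical constant signal
print(x(0.25) * rect(0.25))  # 2.0 (passed unchanged inside the window)
print(x(0.25) * tri(0.25))   # 1.5 (tapered: 2 * 0.75)
print(x(2.0) * rect(2.0))    # 0.0 (outside the window)
```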
An arbitrary signal may be represented in different forms, each of which has its advantages, depending on the context. For example, we will find signal description by intervals quite useful in convolution, a description by a linear combination of shifted steps and ramps very useful in Laplace transforms, and a description by linear combinations of shifted rect and tri functions extremely useful in Fourier transforms.
(b) Refer to Figure E2.5B. Describe x(t) by a linear combination of rect and/or tri functions, y(t) by a linear combination of steps and/or ramps, and both x(t) and y(t) by intervals.
Figure E2.5B The signals x(t) and y(t) for Example 2.5(b)
The signal x(t) may be described by a linear combination of shifted rect and tri functions as

    x(t) = 3 rect[(t − 3)/6] − 3 tri[(t − 3)/3]

The signal y(t) may be described by a linear combination of shifted steps and ramps as

    y(t) = r(t) − r(t − 3) − 3u(t − 3)

Caution: We could also write y(t) = t rect[(t − 1.5)/3], but this is a product (not a linear combination) and not the preferred form.
The signals x(t) and y(t) may be described by intervals as

    x(t) = 3 − t for 0 < t ≤ 3, t − 3 for 3 ≤ t < 6, and 0 elsewhere        y(t) = t for 0 ≤ t < 3, and 0 elsewhere
    sinc(t) = sin(πt)/(πt)        (2.17)
Since the sine term oscillates while the factor 1/(πt) decreases with time, sinc(t) shows decaying oscillations. At t = 0, the sinc function produces the indeterminate form 0/0. Using the approximation sin(θ) ≈ θ (or l'Hôpital's rule), we establish that sinc(t) = 1 in the limit as t → 0.
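A direct implementation of sinc(t) = sin(πt)/(πt) must special-case t = 0 with the limiting value 1; a sketch (Python instead of Matlab):

```python
import math

def sinc(t):
    # sinc(t) = sin(pi*t)/(pi*t), with the limiting value 1 at t = 0
    if t == 0.0:
        return 1.0
    return math.sin(math.pi * t) / (math.pi * t)

print(sinc(0))               # 1.0 by the limit
print(round(sinc(1), 12))    # 0.0 (zero crossings at all nonzero integers)
print(round(sinc(0.5), 4))   # 0.6366, i.e. 2/pi
```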
It says that δ(t) is of zero duration but possesses finite area. To put the best face on this, we introduce a third, equally bizarre criterion that says δ(t) is unbounded (infinite or undefined) at t = 0 (all of which would make any mathematician wince).
Figure 2.2 The genesis of the impulse function
As we decrease τ, its width shrinks and the height increases proportionately to maintain unit area. As τ → 0, we get a tall, narrow spike with unit area that satisfies all criteria associated with an impulse.
Signals such as the triangular pulse (1/τ)tri(t/τ), the exponentials (1/τ)exp(−t/τ)u(t) and (1/2τ)exp(−|t|/τ), the sinc functions (1/τ)sinc(t/τ) and (1/τ)sinc²(t/τ), the Gaussian (1/τ)exp[−π(t/τ)²], and the Lorentzian (τ/π)/(τ² + t²) all possess unit area, and all are equivalent to the unit impulse δ(t) as τ → 0.
The signal δ(t − t₀) describes an impulse located at t = t₀. Its area may be evaluated using any lower and upper limits (say t₁ and t₂) that enclose its time of occurrence, t₀:

    ∫ from t₁ to t₂ of δ(λ − t₀) dλ = 1 if t₁ < t₀ < t₂, and 0 otherwise        (2.19)
Notation: The area of the impulse Aδ(t) equals A and is also called its strength. The function Aδ(t) is shown as an arrow with its area A labeled next to the tip. For visual appeal, we make its height proportional to A. Remember, however, that its height at t = 0 is infinite or undefined. An impulse with negative area is shown as an arrow directed downward.
This extremely important result is called the sifting property. It is the sifting action of an impulse (what
it does) that purists actually regard as a formal definition of the impulse.
From the product property, f(t) = x(t)δ(t − 1) = x(1)δ(t − 1). This is an impulse function with strength x(1) = 2.
The derivative g(t) = x′(t) includes the ordinary derivative (slopes) of x(t) and an impulse function of strength 4 at t = 3.
By the sifting property, I = ∫ x(t)δ(t − 2) dt = x(2) = 4.
(c) Evaluate I₁ = ∫₀^∞ 4t² δ(t + 1) dt.
The result is I₁ = 0 because δ(t + 1) (an impulse at t = −1) lies outside the limits of integration.
(d) Evaluate I₂ = ∫ from −4 to 2 of cos(2πt) δ(2t + 1) dt.
Using the scaling and sifting properties of the impulse, we get

    I₂ = ∫ from −4 to 2 of cos(2πt)[0.5δ(t + 0.5)] dt = 0.5 cos(2πt) at t = −0.5 = −0.5
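The scaling-plus-sifting step is an instance of ∫ x(t)δ(at + b) dt = x(−b/a)/|a|. This can be checked numerically by modeling the impulse as a narrow unit-area pulse; a sketch with a hypothetical x(t) (Python in place of Matlab):

```python
# Model the impulse as a narrow unit-area pulse d_tau(t) of width tau. As tau
# shrinks, integral of x(t)*d_tau(a*t + b) dt approaches x(-b/a)/|a|.
def delta_tau(t, tau):
    return 1.0 / tau if abs(t) < tau / 2 else 0.0

def integrate(f, lo, hi, n=200001):
    # midpoint rule
    dt = (hi - lo) / n
    return sum(f(lo + (k + 0.5) * dt) for k in range(n)) * dt

x = lambda t: t * t + 1.0   # a hypothetical smooth signal
a, b, tau = 2.0, 1.0, 1e-2
val = integrate(lambda t: x(t) * delta_tau(a * t + b, tau), -4.0, 2.0)
print(round(val, 3))  # close to x(-b/a)/|a| = x(-0.5)/2 = 0.625
```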
Figure 2.3 The ideally sampled signal is a (nonperiodic) impulse train
Note that even though xI (t) is an impulse train, it is not periodic. The strength of each impulse equals
the signal value x(kts ). This form actually provides a link between analog and digital signals.
To approximate a smooth signal x(t) by impulses, we section it into narrow rectangular strips of width ts as shown in Figure 2.4 and replace each strip at the location kts by an impulse ts x(kts)δ(t − kts) whose strength equals the area ts x(kts) of the strip. This yields the impulse approximation

    x(t) ≈ Σ from k = −∞ to ∞ of ts x(kts) δ(t − kts)        (2.24)
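Since each impulse strength ts·x(kts) equals the area of one strip, the total strength approximates the area of x(t). A quick numerical check (Python instead of Matlab), using exp(−t)u(t), whose area is unity:

```python
import math

# Impulse approximation: each strip becomes an impulse of strength ts*x(k*ts),
# so the summed strengths approximate the area of x(t).
ts = 0.01
x = lambda t: math.exp(-t) if t >= 0 else 0.0
total_strength = sum(ts * x(k * ts) for k in range(0, 2000))
print(round(total_strength, 2))  # close to the unit area of exp(-t)u(t)
```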
Figure 2.4 Sectioning the signal into narrow rectangular strips and replacing each strip by an impulse
A signal x(t) may thus be regarded as a weighted, infinite sum of shifted impulses.
then correspond to δ′(t). Now, x′(t) is odd and shows two pulses of height 1/τ² and −1/τ² with zero area. As τ → 0, x′(t) approaches +∞ and −∞ from below and above, respectively. Thus, δ′(t) is an odd function characterized by zero width, zero area, and amplitudes of +∞ and −∞ at t = 0. Formally, we write

    δ′(t) = 0 for t ≠ 0 (undefined at t = 0)        ∫ δ′(t) dt = 0        δ′(−t) = −δ′(t)        (2.26)
The two infinite spikes in δ′(t) are not impulses (their area is not constant), nor do they cancel. In fact, δ′(t) is indeterminate at t = 0. The signal δ′(t) is therefore sketched as a set of two spikes, which leads to the name doublet. Even though its area ∫ δ′(t) dt is zero, its absolute area ∫ |δ′(t)| dt is infinite.
With α = −1, the scaling property gives δ′(−t) = −δ′(t). This implies that δ′(t) is an odd function.
The product property of the doublet has a surprise in store. The derivative of x(t)δ(t − α) may be described in one of two ways. First, using the rule for derivatives of products, we have

    d/dt [x(t)δ(t − α)] = x′(t)δ(t − α) + x(t)δ′(t − α) = x′(α)δ(t − α) + x(t)δ′(t − α)        (2.29)

Second, using the product property of impulses, we also have

    d/dt [x(t)δ(t − α)] = d/dt [x(α)δ(t − α)] = x(α)δ′(t − α)        (2.30)

Comparing the two equations and rearranging, we get the rather unexpected result

    x(t)δ′(t − α) = x(α)δ′(t − α) − x′(α)δ(t − α)        (2.31)

This is the product property. Unlike impulses, x(t)δ′(t − α) does not just equal x(α)δ′(t − α)!
Integrating the two sides that describe the product property, we obtain

    ∫ x(t)δ′(t − α) dt = ∫ x(α)δ′(t − α) dt − ∫ x′(α)δ(t − α) dt = −x′(α)        (2.32)

This describes the sifting property of doublets. The doublet δ′(t − α) sifts out the negative derivative of x(t) at t = α.
Remark: Higher derivatives of δ(t) obey δ⁽ⁿ⁾(−t) = (−1)ⁿ δ⁽ⁿ⁾(t), are alternately odd and even, and possess zero area. All are limiting forms of the same sequences that generate impulses, provided their ordinary derivatives (up to the required order) exist. None are absolutely integrable. The impulse is unique in being the only absolutely integrable function from among all its derivatives and integrals (the step, ramp, etc.).
The first derivative x′(t) results in a rectangular pulse (the ordinary derivative of x(t)) and an impulse (due to the jump) at t = 3.
The second derivative x″(t) yields two impulses at t = 0 and t = 2 (the derivative of the rectangular pulse) and a doublet at t = 3 (the derivative of the impulse).
(c) Evaluate I = ∫ from −2 to 2 of [(t − 3)δ(2t + 2) + 8 cos(πt) δ′(t − 0.5)] dt.
With δ(2t + 2) = 0.5δ(t + 1), the sifting property of impulses and doublets gives

    I = 0.5(t − 3) at t = −1 − 8 d/dt[cos(πt)] at t = 0.5 = 0.5(−1 − 3) + 8π sin(0.5π) = −2 + 8π = 23.1327
2.8 Moments
Moments are general measures of signal size based on area. The nth moment is defined as

    mₙ = ∫ tⁿ x(t) dt        (2.33)
The zeroth moment m₀ = ∫ x(t) dt is just the area of x(t). The normalized first moment mₓ = m₁/m₀ is called the mean. Moments about the mean are called central moments. The nth central moment is denoted μₙ.

    mₓ = ∫ t x(t) dt / ∫ x(t) dt        μₙ = ∫ (t − mₓ)ⁿ x(t) dt        (2.34)
To account for complex-valued signals or sign changes, it is often more useful to define moments in terms of the absolute quantities |x(t)| or |x(t)|². The second central moment μ₂ is called the variance. It is often denoted by σ² and defined by

    σ² = μ₂ = m₂/m₀ − mₓ²        (2.35)
The first few moments find widespread application. In physics, if x(t) represents the mass density, then mₓ equals the centroid, μ₂ equals the moment of inertia, and σ equals the radius of gyration. In probability theory, if x(t) represents the density function of a random variable, then mₓ equals the mean, σ² equals the variance, and σ equals the standard deviation.
For power signals, the normalized second moment, m₂/m₀, equals the total power, and σ² equals the ac power (the difference between the total power and the dc power). The variance σ² may thus be regarded as the power in a signal with its dc offset removed. For an energy signal, mₓ is a measure of the effective signal delay, and σ is a measure of its effective width, or duration.
(b) Find the signal delay and duration for the signal x(t) = e^−t u(t).
The moments of x(t) are m₀ = ∫₀^∞ e^−t dt = 1, m₁ = ∫₀^∞ t e^−t dt = 1, m₂ = ∫₀^∞ t² e^−t dt = 2.
We find that delay = mₓ = m₁/m₀ = 1, and duration = σ = (m₂/m₀ − mₓ²)^(1/2) = (2 − 1)^(1/2) = 1.
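These moment values are easy to confirm by direct numerical integration (Python in place of Matlab):

```python
import math

# Numerical check of the moments of x(t) = exp(-t)u(t): m0 = 1, m1 = 1, m2 = 2,
# so delay = mx = 1 and duration = sqrt(m2/m0 - mx**2) = 1.
def moment(n, hi=30.0, steps=120000):
    dt = hi / steps
    total = 0.0
    for k in range(steps):
        t = (k + 0.5) * dt            # midpoint of each strip
        total += (t ** n) * math.exp(-t)
    return total * dt

m0, m1, m2 = moment(0), moment(1), moment(2)
mx = m1 / m0
sigma = math.sqrt(m2 / m0 - mx ** 2)
print(round(mx, 3), round(sigma, 3))  # both close to 1
```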
CHAPTER 2 PROBLEMS
DRILL AND REINFORCEMENT
2.1 (Operations on Signals) For each signal x(t) of Figure P2.1, sketch the following:
(a) y(t) = x(−t) (b) f(t) = x(t + 3)
(c) g(t) = x(2t − 2) (d) h(t) = x(2 − 2t)
(e) p(t) = x[0.5(t − 2)] (f) s(t) = x(0.5t − 1)
(g) xe (t) (its even part) (h) xo (t) (its odd part)
2.2 (Symmetry) Find the even and odd parts of each signal x(t).
(a) x(t) = e^−t u(t) (b) x(t) = (1 + t)² (c) x(t) = [sin(t) + cos(t)]²
2.3 (Symmetry) Evaluate the following integrals using the concepts of symmetry.
(a) I = ∫ from −3 to 3 of (4 − t²) sin(5t) dt        (b) I = ∫ from −2 to 2 of [4 − t³ cos(0.5t)] dt
2.4 (Classification) For each periodic signal shown in Figure P2.4, evaluate the average value xav , the
energy E in one period, the signal power P , and the rms value xrms .
Figure P2.4 Periodic signals for Problem 2.4
2.5 (Signal Classification) Classify each signal as a power signal, energy signal, or neither and find its
power or energy as appropriate.
(a) te^−t u(t) (b) e^−t [u(t) − u(t − 1)] (c) te^−|t|
(d) e^−t (e) 10e^−t sin(t)u(t) (f) sinc(t)u(t)
2.6 (Periodic Signals) Classify each of the following signals as periodic, nonperiodic, or almost periodic
and find the signal power where appropriate. For each periodic signal, also find the fundamental
frequency and the common period.
(a) x(t) = 4 − 3 sin(12t) + sin(30t) (b) x(t) = cos(10t)cos(20t)
(c) x(t) = cos(10t) − cos(20t) (d) x(t) = cos(10t)cos(10πt)
(e) x(t) = 2 cos(8t) + cos²(6t) (f) x(t) = cos(2t) − 2 cos(2t − π/4)
2.7 (Periodic Signals) Classify each of the following signals as periodic, nonperiodic, or almost periodic
and find the signal power where appropriate. For each periodic signal, also find the fundamental
frequency and the common period.
(a) x(t) = 4 − 3 sin²(12t) (b) x(t) = cos(πt) + cos(20t) (c) x(t) = cos(t) + cos²(t)
2.8 (Signal Description) For each signal x(t) shown in Figure P2.8,
(a) Express x(t) by intervals.
(b) Express x(t) as a linear combination of steps and/or ramps.
(c) Express x(t) as a linear combination of rect and/or tri functions.
(d) Sketch the first derivative x (t).
(e) Find the signal energy in x(t).
2.10 (Impulses and Comb Functions) Sketch the following signals. Note that the comb function is a periodic train of unit impulses with unit spacing, defined as comb(t) = Σ from k = −∞ to ∞ of δ(t − k).
2.12 (Generalized Derivatives) Sketch the signals x(t), x′(t), and x″(t) for the following:
(a) x(t) = 4 tri[(t − 2)/2] (b) x(t) = e^−t u(t) (c) x(t) = 2 rect(0.5t) + tri(t)
(d) x(t) = e^−|t| (e) x(t) = (1 − e^−t)u(t) (f) x(t) = e^−2t rect[(t − 1)/2]
32 Chapter 2 Analog Signals
2.13 (Ideally Sampled Signals) Sketch the ideally sampled signal and the impulse approximation for
each of the following signals, assuming a sampling interval of ts = 0.5 s.
(a) x(t) = rect(t/4) (b) x(t) = tri(t/2) (c) x(t) = sin(t) (d) x(t) = t rect(0.5t)
2.15 (rms Value) Find the signal power and rms value for a periodic pulse train with peak value A and
duty ratio D if the pulse shape is the following:
2.16 (Sketching Signals) Sketch the following signals. Which of these signals (if any) are identical?
(a) x(t) = r(t − 2) (b) x(t) = tu(t) − 2u(t − 2) (c) x(t) = 2u(t) − (t − 2)u(t − 2)
(d) x(t) = tu(t − 2) − 2u(t − 2) (e) x(t) = tu(t − 2) − 2u(t) (f) x(t) = (t − 2)u(t) − u(t − 2)
2.17 (Signals and Derivatives) Sketch each signal x(t) and represent it as a linear combination of step
and/or ramp functions where possible.
(a) x(t) = u(t + 1)u(1 − t) (b) x(t) = sgn(t)rect(t) (c) x(t) = t rect(t)
(d) x(t) = t rect(t − 0.5) (e) x(t) = t rect(t − 2) (f) x(t) = u(t + 1)u(1 − t)tri(t + 1)
2.18 (Areas) Use the signals x(t) = δ(t) and x(t) = sinc(t) as examples to justify the following:
(a) If the area of |x(t)| is finite, the area of x2 (t) need not be finite.
(b) If the area of x2 (t) is finite, the area of |x(t)| need not be finite.
(c) If the area of x2 (t) is finite, the area of x(t) is also finite.
2.19 (Energy) Consider an energy signal x(t), over the range −3 ≤ t ≤ 3, with energy E = 12 J. Find the
range of the following signals and compute their signal energy.
2.20 (Power) Consider a periodic signal x(t) with time period T = 6 and power P = 4 W. Find the time
period of the following signals and compute their signal power.
Use this result to show that for any energy signal, the signal energy equals the sum of the energy in
its odd and even parts.
2.22 (Areas and Energy) The area of the signal e^−t u(t) equals unity. Use this result and the notion of how the area changes upon time scaling to find the following (without formal integration).
(a) The area of x(t) = e^−2t u(t).
(b) The energy of x(t) = e^−2t u(t).
(c) The area of y(t) = 2e^−2t u(t) − 6e^−t u(t).
(d) The energy of y(t) = 2e^−2t u(t) − 6e^−t u(t).
2.23 (Power) Over one period, a periodic signal increases linearly from A to B in T1 seconds, decreases
linearly from B to A in T2 seconds, and equals A for the rest of the period. What is the power of this
periodic signal if its period T is given by T = 2(T1 + T2 )?
2.24 (Power and Energy) Use simple signals such as u(t), u(t − 1), e^−t u(t) (and others) as examples to
argue for or against the following statements.
(a) The sum of energy signals is an energy signal.
(b) The sum of a power and an energy signal is a power signal.
(c) The algebraic sum of two power signals can be an energy signal or a power signal or identically
zero.
(d) The product of two energy signals is zero or an energy signal.
(e) The product of a power and energy signal is an energy signal or identically zero.
(f ) The product of two power signals is a power signal or identically zero.
2.25 (Switched Periodic Signals) Let x(t) be a periodic signal with power Px. Show that the power Py of the switched periodic signal y(t) = x(t)u(t − t₀) is given by Py = 0.5Px. Use this result to compute the signal power for the following:
(a) y(t) = u(t) (b) y(t) = |sin(t)|u(t) (c) y(t) = 2 sin(2t)u(t) + 2 sin(t)
(d) y(t) = 2u(t) (e) y(t) = (1 − e^−t)u(t) (f) y(t) = 2 sin(t)u(t) + 2 sin(t)
2.26 (Power and Energy) Compute the signal energy or signal power as appropriate for each x(t).
(a) x(t) = e^−2t u(t) (b) x(t) = e^(t−1) u(−t) (c) x(t) = e^−(1−t) u(1 − t)
(d) x(t) = e^(1+2t) u(1 − t) (e) x(t) = e^−(1−2t) u(1 − 2t) (f) x(t) = e^−t u(t − 2)
(g) x(t) = e^−|1−t| (h) x(t) = sinc(3t − 1) (i) x(t) = e^(−t²/2)
2.27 (Power and Energy) Classify each signal as a power signal, energy signal, or neither, and compute the signal energy or signal power where appropriate.
(a) x(t) = u(t) (b) x(t) = 1 + u(t) (c) x(t) = 1/(1 + |t|)
(d) x(t) = 1/(1 + t²) (e) x(t) = 1 + cos(πt)u(t) (f) x(t) = 1/t, t ≥ 1
(g) x(t) = 1/√t, t ≥ 1 (h) x(t) = cos(πt)u(t) (i) x(t) = cos(πt)u(t) − cos[π(t − 4)]u(t − 4)
2.28 (Periodicity) The sum of two periodic signals is periodic if their periods T1 and T2 are commensurate.
Under what conditions will their product be periodic? Use sinusoids as examples to prove your point.
2.29 (Periodicity) Use Euler's identity to confirm that the signal x(t) = e^(j2πf₀t) is periodic with period T = 1/f₀ and use this result in the following:
(a) Is the signal y(t) = x(2t) + 3x(0.5t) periodic? If so, what is its period?
(b) Is the signal f(t) = 2e^(j16t) + 3e^(j7t) periodic? If so, what is its period?
(c) Is the signal g(t) = 4e^(j16t) − 5e^(−7t) periodic? If so, what is its period?
(d) Is the signal h(t) = 3e^(j16t) − 2e^(j7) periodic? If so, what is its period?
(e) Is the signal s(t) = Σ from k = −∞ to ∞ of X[k]e^(j2πkf₀t) periodic? If so, what is its period?
2.30 (Periodicity) It is claimed that each of the following signals is periodic. Verify this claim by sketching
each signal and finding its period. Find the signal power for those that are power signals.
2.32 (Periodicity) It is claimed that the sum of an energy signal x(t) and its shifted (by multiples of T )
replicas is a periodic signal with period T . Verify this claim by sketching the following and, for each
case, compare the area of one period of the periodic extension with the total area of x(t).
(a) The sum of x(t) = tri(t/2) and its replicas shifted by T = 6.
(b) The sum of x(t) = tri(t/2) and its replicas shifted by T = 4.
(c) The sum of x(t) = tri(t/2) and its replicas shifted by T = 3.
2.33 (Periodic Extension) The sum of an absolutely integrable signal x(t) and its shifted (by multiples of T) replicas is called the periodic extension of x(t) with period T. Show that the periodic extension of the signal x(t) = e^−t u(t) with period T is y(t) = x(t)/(1 − e^−T). How does the area of one period of y(t) compare with the total area of x(t)? Sketch y(t) and find its signal power.
2.34 (Half-Wave Symmetry) Argue that if a half-wave symmetric signal x(t) with period T is made
up of several sinusoidal components, each component is also half-wave symmetric over one period T .
Which of the following signals show half-wave symmetry?
(a) x(t) = cos(2πt) + cos(6πt) + cos(10πt)
(b) x(t) = 2 + cos(2πt) + sin(6πt) + sin(10πt)
(c) x(t) = cos(2πt) + cos(4πt) + sin(6πt)
2.35 (Derivatives) Each of the following signals is zero outside the interval −1 ≤ t ≤ 1. Sketch the signals x(t), x′(t), and x″(t).
(a) x(t) = cos(0.5πt) (b) x(t) = 1 + cos(πt) (c) x(t) = tri(t) (d) x(t) = 1 − t²
2.36 (Practical Signals) Energy signals that are commonly encountered as the response of analog systems
include the decaying exponential, the exponentially damped ramp, and the exponentially damped sine.
Compute the signal energy for the following:
(a) x(t) = e^(−t/τ) u(t) (b) x(t) = te^(−t/τ) u(t) (c) f(t) = e^−t sin(2πt)u(t)
2.37 (Time Constant) For an exponential signal of the form x(t) = Ae^(−t/τ) u(t), the quantity τ is called the time constant and provides a measure of how rapidly the signal decays. A practical estimate of the time it takes for the signal to decay to less than 1% of its initial value is 5τ. What is the actual time it takes for x(t) to decay to exactly 1% of its initial value? How well does the practical estimate compare with the exact result?
2.38 (Rise Time) The rise time is a measure of how fast a signal reaches a constant final value and is
commonly defined as the time it takes to rise from 10% to 90% of the final value. Compute the rise
time of the following signals.
(a) x(t) = (1 − e^−t)u(t)        (b) y(t) = sin(0.5πt) for 0 ≤ t ≤ 1, and 1 for t ≥ 1
2.39 (Rise Time and Scaling) In practice the rise time tR of the signal x(t) = (1 − e^(−t/τ))u(t) is often approximated as tR ≈ 2.2τ.
(a) What is the actual rise time of x(t), and how does it compare with the practical estimate?
(b) Compute the rise time of the signals f(t) = x(3t) and g(t) = x(t/3). How are these values related to the rise time of x(t)? Generalize this result to find the rise time of h(t) = x(αt).
2.40 (Settling Time) The settling time is another measure for signals that reach a nonzero final value.
The 5% settling time is defined as the time it takes for a signal to settle to within 5% of its final value.
Compute the 5% settling time of the following signals.
(a) x(t) = (1 − e^−t)u(t)        (b) y(t) = sin(0.5πt) for 0 ≤ t ≤ 1, and 1 for t ≥ 1
2.41 (Signal Delay) The delay of an energy signal is a measure of how far the signal has been shifted
from its mean position and is defined in one of two ways:
    D₁ = ∫ t x(t) dt / ∫ x(t) dt        D₂ = ∫ t x²(t) dt / ∫ x²(t) dt
(a) Verify that the delays D1 and D2 of x(t) = rect(t) are both zero.
(b) Find and compare the delays D1 and D2 of the following signals.
(1) x(t) = rect(t − 2) (2) x(t) = e^−t u(t) (3) x(t) = te^−t u(t)
2.42 (Signal Models) Argue that each of the following models can describe the signal of Figure P2.42, and find the parameters A and τ for each model.
(a) x(t) = Ate^(−t/τ) u(t) (b) x(t) = A(e^(−t/τ) − e^(−2t/τ)) (c) x(t) = At/(τ² + t²)
Figure P2.42 Signals for Problem 2.42
2.43 (Instantaneous Frequency) The instantaneous phase of the sinusoid x(t) = cos[θ(t)] is defined as its argument θ(t), and the instantaneous frequency fi(t) is then defined by the derivative of the instantaneous phase as fi(t) = θ′(t)/2π. Consider the signal y(t) = cos(2πf₀t + φ). Show that its instantaneous frequency is constant and equals f₀ Hz.
2.44 (Chirp Signals) Signals whose frequency varies linearly with time are called swept-frequency signals, or chirp signals. Consider the signal x(t) = cos[θ(t)] where the time-varying phase θ(t) is also called the instantaneous phase. The instantaneous frequency ωi(t) = θ′(t) is defined as the derivative of the instantaneous phase (in rad/s).
(a) What is the expression for θ(t) and x(t) if the instantaneous frequency is to be 10 Hz?
(b) What is the expression for θ(t) and x(t) if the instantaneous frequency varies linearly from 0 to 100 Hz in 2 seconds?
(c) What is the expression for θ(t) and x(t) if the instantaneous frequency varies linearly from 50 Hz to 100 Hz in 2 seconds?
(d) Set up a general expression for a chirp signal x(t) whose frequency varies linearly from f0 Hz to
f1 Hz in t0 seconds.
2.45 (Chirp Signals) Chirp signals whose frequency varies linearly with time are often used in signal-processing applications (such as radar). Consider the signal x(t) = cos(αt²). How does the instantaneous frequency of x(t) vary with time? What value of α will result in a signal whose frequency varies from dc to 10 Hz in 4 seconds?
2.46 (Impulses as Limiting Forms) Argue that the following signals describe the impulse x(t) = Aδ(t) as τ → 0. What is the constant A for each signal?
(a) x(t) = (1/τ)e^(−t²/τ²) (b) x(t) = τ/(τ² + t²) (c) x(t) = (1/τ)sinc(t/τ) (d) x(t) = (1/τ)e^(−|t|/τ)
2.47 (Impulses) It is possible to show that the signal δ[f(t)] is a string of impulses at the roots tk of f(t) = 0 whose strengths equal 1/|f′(tk)|. Use this result to sketch the following signals.
2.48 (Periodicity) Use Matlab to plot each signal over the range 0 ≤ t ≤ 3, using a small time step (say, 0.01 s). If periodic, determine the period and compute the signal power (by hand if possible or using Matlab otherwise).
(a) x(t) = sin(2t) (b) y(t) = e^(x(t)) (c) z(t) = e^(jx(t))
(d) f(t) = cos[x(t)] (e) g(t) = cos[x²(t)]
2.49 (Curious Signals) Let s(t) = sin(t). Use Matlab to sketch each of the following signals over the range −2 ≤ t ≤ 6, using a small time step (say, 0.02 s). Confirm that each signal is periodic and find the period and power.
(a) x(t) = u[s(t)] (b) y(t) = sgn[s(t)] (c) f (t) = sgn[s(t)] + sgn[s(t + 0.5)]
(d) g(t) = r[s(t)] (e) h(t) = e^(s(t))
2.50 (Numerical Integration) A crude way to compute the definite integral I = ∫ x(t) dt is to approximate x(t) by N rectangular strips of width ts and find I as the sum of the areas under each strip:

    I ≈ ts[x(t₁) + x(t₂) + ··· + x(t_N)],   tk = kts

This is called the rectangular rule. The sum of the quantity in brackets can be computed using the Matlab routine sum and then multiplied by ts to approximate the integral I as the area.
(a) Use the rectangular rule to approximate the integrals of x(t) = tri(t) and y(t) = sin²(πt), 0 ≤ t ≤ 1, with N = 5 and N = 10, and compare with the exact values. Does increasing the number of strips N lead to more accurate results?
(b) The trapezoidal rule uses trapezoidal strips of width ts to approximate the integral I. Show that this rule leads to the approximation

    I ≈ ts[½x(t₁) + x(t₂) + ··· + ½x(t_N)],   tk = kts

(c) Use the trapezoidal rule to approximate the integral of the signals y(t) = sin²(πt), 0 ≤ t ≤ 1, and x(t) = tri(t) with N = 5 and N = 10 and compare with the exact values. Are the results for a given N more accurate than those found by the rectangular rule?
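The two rules can be sketched as follows (Python in place of Matlab's sum; the trapezoidal version here is the standard variant that halves both endpoint samples over [a, b]):

```python
# Rectangular vs trapezoidal rule over [a, b] with N strips of width ts.
def rect_rule(x, a, b, N):
    ts = (b - a) / N
    return ts * sum(x(a + k * ts) for k in range(1, N + 1))  # samples at k*ts

def trap_rule(x, a, b, N):
    ts = (b - a) / N
    inner = sum(x(a + k * ts) for k in range(1, N))
    return ts * (0.5 * x(a) + inner + 0.5 * x(b))            # halved endpoints

x = lambda t: 1.0 - abs(t) if abs(t) <= 1 else 0.0  # tri(t): exact area 0.5 on [0, 1]
for N in (5, 10):
    print(N, round(rect_rule(x, 0, 1, N), 4), round(trap_rule(x, 0, 1, N), 4))
# prints: 5 0.4 0.5  and  10 0.45 0.5 (trapezoids are exact for a linear signal)
```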
2.51 (Numerical Integration) A crude way to find the running integral y(t) = ∫₀ᵗ x(λ) dλ is to approximate x(t) by rectangular strips and compute

    y(t) ≈ ts Σ from k = 0 to n of x(kts)

The cumulative sum can be computed using the Matlab routine cumsum and then multiplied by ts to obtain y(t). Let x(t) = 10e^−t sin(2πt), 0 ≤ t ≤ T₀. Find an exact closed-form result for y(t).
(a) Let T₀ = 2. Plot the exact expression and the approximate running integral of x(t) using a time step of ts = 0.1 s and ts = 0.01 s. Comment on the differences.
(b) Let T₀ = 5. Plot the approximate running integral of x(t) with ts = 0.1 s and ts = 0.01 s. From the graph, can you predict the area of x(t) as T₀ → ∞? Does the error between this predicted value and the value from the exact result decrease as ts decreases? Should it? Explain.
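The cumulative-sum idea can be sketched in a few lines (Python takes the place of Matlab's cumsum; a hypothetical test signal with a known running integral is used):

```python
# Running integral by cumulative sum: y(n*ts) ~ ts * (x(0) + x(ts) + ... + x(n*ts)).
def running_integral(x, ts, n_samples):
    y, acc = [], 0.0
    for k in range(n_samples):
        acc += x(k * ts)           # cumulative sum, as Matlab's cumsum would give
        y.append(ts * acc)
    return y

x = lambda t: 2.0 * t              # hypothetical signal with exact integral t**2
y = running_integral(x, 0.001, 2001)
print(round(y[-1], 2))             # close to the exact value 2**2 = 4
```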
2.52 (Numerical Derivatives) The derivative x′(t) can be numerically approximated by the slope

    x′(t) at t = nts ≈ (x[n] − x[n − 1])/ts
where ts is a small time step. The Matlab routine diff yields the difference x[n] − x[n − 1] (whose length is 1 less than the length of x[n]). Use Matlab to obtain the approximate derivative of x(t) over 0 ≤ t ≤ 3 with ts = 0.1 s. Compute the exact derivative of x(t) = 10e^−t sin(2πt)u(t) and plot both the exact and approximate results on the same plot. Also plot the error between the exact and approximate results. What happens to the error if ts is halved?
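The backward-difference approximation can be sketched directly (Python instead of Matlab's diff; sin(t) is used as a hypothetical test signal with a known derivative):

```python
import math

# Backward-difference derivative: x'(n*ts) ~ (x[n] - x[n-1]) / ts.
def approx_derivative(samples, ts):
    # like Matlab's diff: the output is one sample shorter than the input
    return [(samples[k] - samples[k - 1]) / ts for k in range(1, len(samples))]

ts = 0.001
t = [k * ts for k in range(1001)]
x = [math.sin(v) for v in t]
dx = approx_derivative(x, ts)
print(round(dx[500], 3))  # close to cos(0.5), about 0.8776
```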
2.53 (Signal Operations) The ADSP routine operate can be used to plot scaled and/or shifted versions
of a signal x(t). Let x(t) = 2u(t + 1) − r(t + 1) + r(t − 1). Use Matlab to plot the signals x(t), y(t) = x(2t − 1), and f(t) = x(1 − 2t).
2.54 (Periodic Signals) The ADSP routines lcm1 and gcd1 allow us to find the LCM or GCD of an array
of rational fractions. Use these routines to find the common period and fundamental frequency of the
signal x(t) = 2 cos(2.4πt) − 3 sin(5.4πt) + cos(14.4πt − 0.2π).
2.55 (Energy and Power) The ADSP routine enerpwr also computes the energy (or power) if x(t) is a
string expression (it does not require you to specify a time step). Let x(t) = 6 sinc(2t), −0.5 ≤ t ≤ 0.5.
Use the routine enerpwr to find the energy in this signal and compute the signal power if x(t) describes
one period of a periodic signal with period T = 1.4 s.
2.56 (Beats) Consider the amplitude modulated signal given by x(t) = cos(2πf₀t)cos[2π(f₀ + Δf)t].
(a) This signal can be expressed in the form x(t) = A cos(2πf₁t) + B cos(2πf₂t). How are A, B, f₁, and f₂ related to the parameters of the original signal? Use Matlab to plot x(t) for 0 ≤ t ≤ 1 s with a time step of 1/8192 s, f₀ = 400 Hz, and A = B = 1 for the following values of Δf.
2.57 (Chirp Signals) Consider the chirp signal x(t) = cos(πt²/6). Plot x(t) over 0 ≤ t ≤ T using a small time step (say, 0.02 s) with T = 2, 6, 10 s. What do the plots reveal as T is increased? Is this signal periodic? Should it be? How does its instantaneous frequency vary with time?
2.58 (Simulating an Impulse) An impulse may be regarded as a tall, narrow spike that arises as a limiting form of many ordinary functions such as the sinc and Gaussian. Consider the Gaussian signal x(t) = (1/t₀)e^(−π(t/t₀)²). As we decrease t₀, its height increases to maintain a constant area.
(a) Plot x(t) over −2 ≤ t ≤ 2 for t₀ = 1, 0.5, 0.1, 0.05, 0.01 using a time step of 0.1t₀. What can you say about the symmetry in x(t)?
(b) Find the area of x(t) for each t0 , using the Matlab command sum. How does the area change
with t0 ?
(c) Does x(t) approach an impulse as t₀ → 0? If x(t) → Aδ(t), what is the value of A?
(d) Plot the (numerical) derivative x′(t) for each t₀, using the Matlab command diff. Find the area of x′(t) for each t₀. How does the area change with t₀? What can you say about the nature of x′(t) as t₀ → 0?
Chapter 3
DISCRETE SIGNALS
A discrete signal x[n] is called right-sided if it is zero for n < N (where N is finite), causal if it is zero for n < 0, left-sided if it is zero for n > N, and anti-causal if it is zero for n ≥ 0.
Signals for which the absolute sum Σ|x[n]| is finite are called absolutely summable. For nonperiodic signals, the signal energy E is a useful measure. It is defined as the sum of the squares of the signal values

    E = Σ from n = −∞ to ∞ of |x[n]|²        (3.3)
The absolute value allows us to extend this relation to complex-valued signals. Measures for periodic signals
are based on averages since their signal energy is infinite. The average value xav and signal power P of a
periodic signal x[n] with period N are defined as the average sum per period and average energy per period,
respectively:
x_av = (1/N) Σ_{n=0}^{N−1} x[n]        P = (1/N) Σ_{n=0}^{N−1} |x[n]|²   (3.4)
Note that the index runs from n = 0 to n = N 1 and includes all N samples in one period. Only for
nonperiodic signals is it useful to use the limiting forms
x_av = lim_{M→∞} (1/(2M+1)) Σ_{n=−M}^{M} x[n]        P = lim_{M→∞} (1/(2M+1)) Σ_{n=−M}^{M} |x[n]|²   (3.5)
Signals with finite energy are called energy signals (or square summable). Signals with finite power are called
power signals. All periodic signals are power signals.
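These definitions translate directly into code. A sketch in Python (the book's own examples use Matlab; the sample values here are made up for illustration):

```python
# Energy of a finite (nonperiodic) signal: E = sum of |x[n]|^2
x = [6, 4, 2, 2]
E = sum(abs(v) ** 2 for v in x)          # 36 + 16 + 4 + 4 = 60

# Average value and power of a periodic signal with period N,
# computed over exactly one period (all N samples):
xp = [6, -6, 0, 0]                       # one period, N = 4
N = len(xp)
xav = sum(xp) / N                        # 0
P = sum(abs(v) ** 2 for v in xp) / N     # (36 + 36)/4 = 18
print(E, xav, P)
```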
x_av = (1/4) Σ_{n=0}^{3} x[n] = 0        P = (1/4) Σ_{n=0}^{3} x²[n] = (1/4)(36 + 36) = 18 W

P = (1/4) Σ_{n=0}^{3} |x[n]|² = (1/4)(36 + 36 + 36 + 36) = 36 W
In either case, a sample of x[n] at the original index n will be plotted at a new index n_N, and this correspondence can serve as a consistency check in sketches.
Figure E3.2 The signals for Example 3.2
3.2.1 Symmetry
If a signal x[n] is identical to its mirror image x[−n], it is called an even symmetric signal. If x[n] differs from its mirror image x[−n] only in sign, it is called an odd symmetric or antisymmetric signal. Mathematically,

x_e[n] = x_e[−n]        x_o[n] = −x_o[−n]   (3.6)

In either case, the signal extends over symmetric limits −N ≤ n ≤ N. For an odd symmetric signal, x_o[0] = 0 and the sum of x_o[n] over symmetric limits equals zero:

Σ_{k=−M}^{M} x_o[k] = 0   (3.7)
To find x_e[n] and x_o[n] from x[n], we fold x[n] and invoke symmetry to get

x_e[n] = 0.5(x[n] + x[−n])        x_o[n] = 0.5(x[n] − x[−n])

Naturally, if x[n] has even symmetry, x_o[n] will equal zero, and if x[n] has odd symmetry, x_e[n] will equal zero.
The various signals are sketched in Figure E3.3A. As a consistency check, you should confirm that x_o[0] = 0, Σ x_o[n] = 0, and that the sum x_e[n] + x_o[n] recovers x[n].
(b) Let x[n] = u[n] u[n 5]. Find and sketch its odd and even parts.
The signal x[n] and the genesis of its odd and even parts are shown in Figure E3.3B. Note the value
of xe [n] at n = 0 in the sketch.
Figure E3.3B The signal x[n] (along with 0.5x[n] and 0.5x[−n]) and its odd and even parts for Example 3.3(b)
3.3.1 Decimation
Suppose x[n] corresponds to an analog signal x(t) sampled at intervals t_s. The signal y[n] = x[2n] then corresponds to the compressed signal x(2t) sampled at t_s and contains only alternate samples of x[n] (corresponding to x[0], x[2], x[4], . . .). We can also obtain y[n] directly from x(t) (not its compressed version) if we sample it at intervals 2t_s (or at a sampling rate S = 1/(2t_s)). This means a twofold reduction in the sampling rate. Decimation by a factor of N is equivalent to sampling x(t) at intervals N t_s and implies an N-fold reduction in the sampling rate. The decimated signal x[Nn] is generated from x[n] by retaining every Nth sample (corresponding to the indices k = Nn) and discarding all others.
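As a sketch (in Python rather than Matlab), decimation by N is simply index selection; the sample values below are hypothetical:

```python
def decimate(x, N):
    """Retain every Nth sample of x (indices 0, N, 2N, ...)."""
    return x[::N]

x = [1, 2, 5, -1, 4, 0, 3, 2]
print(decimate(x, 2))   # [1, 5, 4, 3] -- the alternate samples x[0], x[2], x[4], x[6]
```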
3.3.2 Interpolation
If x[n] corresponds to x(t) sampled at intervals t_s, then y[n] = x[n/2] corresponds to x(t) sampled at t_s/2 and has twice the length of x[n], with one new sample between adjacent samples of x[n]. If an expression for x[n] (or the underlying analog signal) were known, it would be no problem to determine these new sample values. If we are given only the sample values of x[n] (without its analytical form), the best we can do is interpolate between samples. For example, we may choose each new sample value as zero (zero interpolation), as a constant equal to the previous sample value (step interpolation), or as the average of adjacent sample values (linear interpolation). Zero interpolation is referred to as up-sampling and plays an important role in practical interpolation schemes. Interpolation by a factor of N is equivalent to sampling x(t) at intervals t_s/N and implies an N-fold increase in both the sampling rate and the signal length.
Some Caveats
Consider the two sets of operations shown below:

x[n] → interpolate by 2 → x[n/2] → decimate by 2 → x[n]
x[n] → decimate by 2 → x[2n] → interpolate by 2 → x̃[n]
We see that decimation is indeed the inverse of interpolation, but the converse is not necessarily true.
After all, it is highly unlikely for any interpolation scheme to recover or predict the exact value of the
samples that were discarded during decimation. In situations where both interpolation and decimation are
to be performed in succession, it is therefore best to interpolate first. In practice, of course, interpolation or
decimation should preserve the information content of the original signal, and this imposes constraints on
the rate at which the original samples were acquired.
The step-interpolated signal is h[n] = x[n/3] = {1, 1, 1, 2, 2, 2, 5, 5, 5, −1, −1, −1}.
The linearly interpolated signal is s[n] = x[n/3] = {1, 4/3, 5/3, 2, 3, 4, 5, 3, 1, −1, −2/3, −1/3}.
In linear interpolation, note that we interpolated the last two values toward zero.
(b) Let x[n] = {3, 4, 5, 6}. Find g[n] = x[2n − 1] and the step-interpolated signal h[n] = x[0.5n − 1].
In either case, we first find y[n] = x[n − 1] = {3, 4, 5, 6}. Then
g[n] = y[2n] = x[2n − 1] = {4, 6}.
h[n] = y[n/2] = x[0.5n − 1] = {3, 3, 4, 4, 5, 5, 6, 6}.
(c) Let x[n] = {3, 4, 5, 6}. Find y[n] = x[2n/3], assuming step interpolation where needed.
Since we require both interpolation and decimation, we first interpolate and then decimate to get
After interpolation: g[n] = x[n/3] = {3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6}.
After decimation: y[n] = g[2n] = x[2n/3] = {3, 3, 4, 5, 5, 6}.
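The result of part (c) can be double-checked numerically; a sketch in Python:

```python
def step_interpolate(x, N):
    """Repeat each sample N times (step interpolation by N)."""
    return [v for v in x for _ in range(N)]

def decimate(x, N):
    """Retain every Nth sample of x."""
    return x[::N]

x = [3, 4, 5, 6]
g = step_interpolate(x, 3)   # [3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6]
y = decimate(g, 2)           # [3, 3, 4, 5, 5, 6]
print(g, y)
```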
This is just an impulse with strength x[k]. The product property, x[n]δ[n − k] = x[k]δ[n − k], leads directly to

Σ_{n=−∞}^{∞} x[n]δ[n − k] = x[k]   (3.13)

This is the sifting property. The impulse extracts the value x[k] from x[n] at the impulse location n = k.
The product and sifting properties are analogous to their analog counterparts.
For example, the signals u[n] and r[n] may be expressed as a train of shifted impulses:

u[n] = Σ_{k=0}^{∞} δ[n − k]        r[n] = Σ_{k=0}^{∞} k δ[n − k]   (3.15)
The signal u[n] may also be expressed as the cumulative sum of δ[n], and the signal r[n] may be described as the cumulative sum of u[n]:

u[n] = Σ_{k=−∞}^{n} δ[k]        r[n] = Σ_{k=−∞}^{n−1} u[k]   (3.16)
(b) Mathematically describe the signals of Figure E3.6B in at least two different ways.
Figure E3.6B The signals x[n], y[n], and h[n] for Example 3.6(b)
1. The signal x[n] may be described as the sequence x[n] = {4, 2, −1, 3} (starting at n = −1).
It may also be written as x[n] = 4δ[n + 1] + 2δ[n] − δ[n − 1] + 3δ[n − 2].
2. The signal y[n] may be represented variously as
A numeric sequence: y[n] = {0, 0, 2, 4, 6, 6, 6}.
A sum of shifted impulses: y[n] = 2δ[n − 2] + 4δ[n − 3] + 6δ[n − 4] + 6δ[n − 5] + 6δ[n − 6].
A sum of steps and ramps: y[n] = 2r[n − 1] − 2r[n − 4] − 6u[n − 7].
Note carefully that the argument of the step function is [n − 7] (and not [n − 6]).
3. The signal h[n] may be described as h[n] = 6 tri(n/3) or variously as
A numeric sequence: h[n] = {0, 2, 4, 6, 4, 2, 0} (starting at n = −3).
A sum of impulses: h[n] = 2δ[n + 2] + 4δ[n + 1] + 6δ[n] + 4δ[n − 1] + 2δ[n − 2].
A sum of steps and ramps: h[n] = 2r[n + 3] − 4r[n] + 2r[n − 3].
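The equivalence of the numeric sequence and the step/ramp description of y[n] is easy to verify numerically; a sketch in Python:

```python
def u(n):
    """Unit step u[n]."""
    return 1 if n >= 0 else 0

def r(n):
    """Unit ramp r[n] = n*u[n]."""
    return n if n >= 0 else 0

# Evaluate the step/ramp description of y[n] for n = 0..6
y_ramp = [2 * r(n - 1) - 2 * r(n - 4) - 6 * u(n - 7) for n in range(7)]
print(y_ramp)   # [0, 0, 2, 4, 6, 6, 6] -- matches the numeric sequence
```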
3.5 Discrete-Time Harmonics and Sinusoids
This complex-valued signal requires two separate plots (the real and imaginary parts, for example) for a
graphical description. If 0 < r < 1, x[n] describes a signal whose real and imaginary parts are exponentially
decaying cosines and sines. If r = 1, the real and imaginary parts are pure cosines and sines with a peak
value of unity. If r > 1, we obtain exponentially growing sinusoids.
The quantities f and ω = 2πf describe analog frequencies. The normalized frequency F = f/S is called the digital frequency and has units of cycles/sample. The frequency Ω = 2πF is the digital radian frequency, with units of radians/sample. The various analog and digital frequencies are compared in Figure 3.1. Note that the analog frequency f = S (or ω = 2πS) corresponds to the digital frequency F = 1 (or Ω = 2π).
Are all discrete-time sinusoids and harmonics periodic in time? Not always! To understand this idea, suppose x[n] is periodic with period N such that x[n] = x[n + N]. This leads to

cos(2πnF + θ) = cos[2π(n + N)F + θ] = cos(2πnF + θ + 2πNF)   (3.21)

The two sides are equal provided NF equals an integer k. In other words, F must be a rational fraction (ratio of integers) of the form k/N. What we are really saying is that a DT sinusoid is not always periodic, but only if its digital frequency is a ratio of integers or a rational fraction. The period N equals the denominator of k/N, provided common factors have been canceled from its numerator and denominator. The significance of k is that it takes k full periods of the analog sinusoid to yield one full period of the sampled sinusoid. The common period of a combination of periodic DT sinusoids equals the least common multiple (LCM) of their individual periods. If F is not a rational fraction, there is no periodicity, and the DT sinusoid is classified as nonperiodic or almost periodic. Examples of periodic and nonperiodic DT sinusoids appear in Figure 3.2.
Even though a DT sinusoid may not always be periodic, it will always have a periodic envelope.
Figure 3.2 (a) cos(0.125πn) is periodic with period N = 16, and its envelope is periodic. (b) cos(0.5n) is not periodic (check the peaks or zeros), although its envelope is periodic.
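The rational-frequency test can be checked by brute force; a sketch in Python comparing the two sinusoids of Figure 3.2 (the tolerance and search range are arbitrary choices):

```python
import math

def is_periodic(f, N, n_max=100, tol=1e-9):
    """Check the periodicity condition x[n] = x[n + N] for n = 0..n_max-1."""
    return all(abs(f(n) - f(n + N)) < tol for n in range(n_max))

xa = lambda n: math.cos(0.125 * math.pi * n)   # F = 1/16, rational -> periodic
xb = lambda n: math.cos(0.5 * n)               # F = 1/(4*pi), irrational -> not periodic

print(is_periodic(xa, 16))                            # True: period N = 16
print(any(is_periodic(xb, N) for N in range(1, 200))) # False: no integer period
```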
(b) What is the period of the harmonic signal x[n] = e^{j0.2nπ} + e^{j0.3nπ}?
The digital frequencies in x[n] are F1 = 0.1 = 1/10 = k1/N1 and F2 = 0.15 = 3/20 = k2/N2. Their periods are N1 = 10 and N2 = 20, so the common period of x[n] is N = LCM(10, 20) = 20.
(c) The signal x(t) = 2 cos(40πt) + sin(60πt) is sampled at 75 Hz. What is the common period of the sampled signal x[n], and how many full periods of x(t) does it take to obtain one period of x[n]?
The frequencies in x(t) are f1 = 20 Hz and f2 = 30 Hz. The digital frequencies of the individual components are F1 = 20/75 = 4/15 = k1/N1 and F2 = 30/75 = 2/5 = k2/N2. Their periods are N1 = 15 and N2 = 5, so the common period of x[n] is N = LCM(15, 5) = 15. This spans 15/75 = 0.2 s, or two full periods of x(t) (whose fundamental frequency is 10 Hz).
Consider an analog signal x(t) = cos(2πf0t + θ) and its sampled version x[n] = cos(2πnF0 + θ), where F0 = f0/S. If x[n] is to be a unique representation of x(t), we must be able to reconstruct x(t) from x[n]. In practice, reconstruction uses only the copy or image of the periodic spectrum of x[n] in the principal period −0.5 ≤ F ≤ 0.5, which corresponds to the analog frequency range −0.5S ≤ f ≤ 0.5S. We use a lowpass filter to remove all other replicas or images, and the output of the lowpass filter corresponds to the reconstructed analog signal. As a result, the highest frequency fH we can identify in the signal reconstructed from its samples is fH = 0.5S.
Whether the frequency of the reconstructed analog signal matches x(t) or not depends on the sampling rate S. If S > 2f0, the digital frequency F0 = f0/S is always in the principal range −0.5 ≤ F ≤ 0.5, and the reconstructed analog signal is identical to x(t). If S < 2f0, the digital frequency exceeds 0.5. Its image in the principal range appears at the lower digital frequency Fa = F0 − M (corresponding to the lower analog frequency fa = f0 − MS), where M is an integer that places the digital frequency Fa between −0.5 and 0.5 (or the analog frequency fa between −0.5S and 0.5S). The reconstructed analog signal xa(t) = cos(2πfa t + θ) is at a lower frequency fa = SFa than f0 and is no longer a replica of x(t). This phenomenon, where a reconstructed sinusoid appears at a lower frequency than the original, is called aliasing. The real problem is that the original signal x(t) and the aliased signal xa(t) yield identical sampled representations at the sampling frequency S and prevent unique identification of x(t) from its samples!
Thus, five periods of x(t) yield 12 samples (one period) of the sampled signal.
(b) A 100-Hz sinusoid is sampled at rates of 240 Hz, 140 Hz, 90 Hz, and 35 Hz. In each case, has aliasing
occurred, and if so, what is the aliased frequency?
To avoid aliasing, the sampling rate must exceed 200 Hz. If S = 240 Hz, there is no aliasing, and
the reconstructed signal (from its samples) appears at the original frequency of 100 Hz. For all other
choices of S, the sampling rate is too low and leads to aliasing. The aliased signal shows up at a lower
frequency. The aliased frequencies corresponding to each sampling rate S are found by subtracting out
multiples of S from 100 Hz to place the result in the range −0.5S ≤ f ≤ 0.5S. If the original signal has the form x(t) = cos(200πt + θ), we obtain the following aliased frequencies and aliased signals:
1. S = 140 Hz, fa = 100 − 140 = −40 Hz, xa(t) = cos(−80πt + θ) = cos(80πt − θ)
2. S = 90 Hz, fa = 100 − 90 = 10 Hz, xa(t) = cos(20πt + θ)
3. S = 35 Hz, fa = 100 − 3(35) = −5 Hz, xa(t) = cos(−10πt + θ) = cos(10πt − θ)
We thus obtain a 40-Hz sinusoid (with reversed phase), a 10-Hz sinusoid, and a 5-Hz sinusoid (with
reversed phase), respectively. Notice that negative aliased frequencies simply lead to a phase reversal
and do not represent any new information. Finally, had we used a sampling rate exceeding the Nyquist
rate of 200 Hz, we would have recovered the original 100-Hz signal every time. Yes, it pays to play by
the rules of the sampling theorem!
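The bookkeeping of subtracting multiples of S can be wrapped in a small helper; a sketch in Python (the function name is ours, not the book's):

```python
def aliased_frequency(f0, S):
    """Fold the analog frequency f0 into the principal range -0.5*S .. 0.5*S."""
    M = round(f0 / S)      # integer number of multiples of S to subtract
    return f0 - M * S

for S in (240, 140, 90, 35):
    print(S, aliased_frequency(100, S))   # 100, -40, 10, -5
```

The negative results correspond to the phase-reversed sinusoids found above.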
(c) Two analog sinusoids x1 (t) (shown light) and x2 (t) (shown dark) lead to an identical sampled version as
illustrated in Figure E3.8C. Has aliasing occurred? Identify the original and aliased signal. Identify the
digital frequency of the sampled signal corresponding to each sinusoid. What is the analog frequency
of each sinusoid if S = 50 Hz? Can you provide exact expressions for each sinusoid?
Figure E3.8C The sinusoids x1(t), x2(t), and their identical samples over 0 ≤ t ≤ 0.3 s for Example 3.8(c)
Look at the interval (0, 0.1) s. The sampled signal shows five samples per period. This covers three full periods of x1(t), and so F1 = 3/5. This also covers two full periods of x2(t), and so F2 = 2/5. Clearly, x1(t) (with |F1| > 0.5) is the original signal that is aliased to x2(t). The sampling interval is 0.02 s.
So, the sampling rate is S = 50 Hz. The original and aliased frequencies are f1 = SF1 = 30 Hz and
f2 = SF2 = 20 Hz.
From the figure, we can identify exact expressions for x1(t) and x2(t) as follows. Since x1(t) is a delayed cosine with x1(0) = 0.5, we have x1(t) = cos(60πt − π/3). With S = 50 Hz, the frequency f1 = 30 Hz actually aliases to −20 Hz, and thus x2(t) = cos(−40πt − π/3) = cos(40πt + π/3). With F = 30/50 = 0.6 (or Fa = −0.4), the expression for the sampled signal is x[n] = cos(2πnF − π/3).
(d) A 100-Hz sinusoid is sampled, and the reconstructed signal (from its samples) shows up at 10 Hz.
What was the sampling rate S?
If you said 90 Hz (100 − S = 10), you are not wrong. But you could also have said 110 Hz (100 − S = −10). In fact, we can also subtract out integer multiples of S from 100 Hz, and S is then found from the following expressions (as long as we ensure that S > 20 Hz):
1. 100 − MS = 10
2. 100 − MS = −10
Solving the first expression for S, we find, for example, S = 45 Hz (with M = 2) or S = 30 Hz (with M = 3). Similarly, the second expression gives S = 55 Hz (with M = 2). Which of these sampling
rates was actually used? We have no way of knowing!
period. The frequency fr of the reconstructed signal is then fr = SF = 540F = 200 Hz.
2. If S = 70 Hz, the digital frequency of the sampled signal is F = 100/70 = 10/7, which does not lie in the principal period. The frequency in the principal period is F = 10/7 − 1 = 3/7, and the frequency fr of the reconstructed signal is then fr = 70F = SF = 30 Hz. A negative sign simply translates to a phase reversal in the reconstructed signal.
3.7 Random Signals
3.7.1 Probability
Figure 3.3 shows the results of two experiments, each repeated under identical conditions. The first exper-
iment always yields identical results no matter how many times it is run and yields a deterministic signal.
We need to run the experiment only once to predict what the next, or any other run, will yield.
Figure 3.3 (a) Four realizations of a deterministic signal. (b) Four realizations of a random signal.
The second experiment gives a different result or realization x(t) every time the experiment is repeated and describes a stochastic or random system. A random signal or random process X(t) comprises the family or ensemble of all such realizations obtained by repeating the experiment many times. Each
realization x(t), once obtained, ceases to be random and can be subjected to the same operations as we use
for deterministic signals (such as derivatives, integrals, and the like). The randomness of the signal stems
from the fact that one realization provides no clue as to what the next, or any other, realization might yield.
At a given instant t, each realization of a random signal can assume a different value, and the collection of all such values defines a random variable. Some values are more likely to occur, or more probable, than
others. The concept of probability is tied to the idea of repeating an experiment a large number of times
in order to estimate this probability. Thus, if the value 2 V occurs 600 times in 1000 runs, we say that the
probability of occurrence of 2 V is 0.6.
The probability of an event A, denoted Pr(A), is the proportion of successful outcomes to the (very
large) number of times the experiment is run and is a fraction between 0 and 1 since the number of successful
runs cannot exceed the total number of runs. The larger the probability Pr(A), the greater the chance of event A occurring. To fully characterize a random variable, we must answer two questions:
1. What is the range of all possible (nonrandom) values it can acquire? This defines an ensemble space,
which may be finite or infinite.
2. What are the probabilities for all the possible values in this range? This defines the probability
distribution function F (x). Clearly, F (x) must always lie between 0 and 1.
It is common to work with the derivative of the probability distribution function called the probability
density function f (x). The distribution function F (x) is simply the running integral of the density f (x):
f(x) = dF(x)/dx        or        F(x) = ∫_{−∞}^{x} f(λ) dλ   (3.22)

The probability that X lies between x1 and x2 is Pr[x1 < X ≤ x2] = F(x2) − F(x1). The area of f(x) is 1.
The mean, or expectation, is a measure of where the distribution is centered. The variance measures the spread of the distribution about its mean. The less the spread, the smaller is the variance. The variance is also a measure of the ac power in a signal. The quantity σ (the square root of the variance) is known as the standard deviation and provides a measure of the uncertainty in a physical measurement.
In a uniform distribution, every value is equally likely, since the random variable shows no preference for a particular value. The density function f(x) is just a rectangular pulse defined by

f(x) = 1/(β − α),  α ≤ x ≤ β;        f(x) = 0,  otherwise        (uniform distribution)   (3.25)
and the distribution F(x) is a ramp that flattens out. When quantizing signals in uniform steps, the error in representing a signal value is assumed to be uniformly distributed between −0.5Δ and 0.5Δ, where Δ is the quantization step. The density function of the phase of a sinusoid with random phase is also uniformly distributed between −π and π.
The bell-shaped Gaussian probability density is also referred to as normal and is defined by

f(x) = (1/√(2πσ²)) exp[−(x − mx)²/(2σ²)]        (normal distribution)   (3.26)
The mean (or variance) of the sum of Gaussian distributions equals the sum of the individual means (or
variances). The probability distribution of combinations of statistically independent, random phenomena
often tends to a Gaussian. This is the central limit theorem.
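The central limit theorem is easy to observe numerically: the sum of several independent uniform variables already behaves like a Gaussian with the predicted mean and variance. A sketch in Python (the sample counts and seed are arbitrary choices):

```python
import random
import statistics

random.seed(1)
# Each observation is the sum of 12 independent uniform(0,1) variables.
# Mean of the sum = 12*(0.5) = 6; variance of the sum = 12*(1/12) = 1.
sums = [sum(random.random() for _ in range(12)) for _ in range(20000)]
print(statistics.mean(sums), statistics.variance(sums))   # close to 6 and 1
```

A histogram of `sums` shows the characteristic bell shape even though each underlying variable is uniform.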
The idea of distributions also applies to deterministic periodic signals for which they can be found as
exact analytical expressions. Consider the periodic signal x(t) of Figure 3.5. The probability Pr[X < 0] that
x(t) < 0 is zero. The probability Pr[X < 3] that x(t) is less than 3 is 1. Since x(t) is linear over one period
(T = 3), all values in this range are equally likely, and F (x) must vary linearly from 0 to 1 over this range.
This yields the distribution F (x) and density f (x) as shown. Note that the area of f (x) equals unity.
In many situations, we use artificially generated signals (which can never be truly random) with prescribed
statistical features called pseudorandom signals. Such signals are actually periodic (with a very long
period), but over one period their statistical features approximate those of random signals.
Histograms: The estimates fk of a probability distribution are obtained by constructing a histogram from
a large number of observations. A histogram is a bar graph of the number of observations falling within
specified amplitude levels, or bins, as illustrated in Figure 3.6.
Signal-to-Noise Ratio: For a noisy signal x(t) = s(t) + An(t), with a signal component s(t) and a noise component An(t) (with noise amplitude A), the signal-to-noise ratio (SNR) is the ratio of the signal power s̄² and the noise power A²n̄², usually defined in decibels (dB) as

SNR = 10 log( s̄² / (A²n̄²) ) dB   (3.28)
(a) One realization of a noisy sine. (b) The average of 8 realizations. (c) The average of 48 realizations.
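Ensemble averaging, as in the plots above, can be demonstrated numerically: averaging M realizations reduces the noise power by roughly a factor of M. A sketch in Python (the signal, noise level, and M are made-up choices):

```python
import math
import random

random.seed(7)
n_pts, M = 500, 48
t = [10 * i / n_pts for i in range(n_pts)]
clean = [math.sin(2 * math.pi * 0.3 * ti) for ti in t]

# M noisy realizations of the same underlying sine (unit-variance noise)
runs = [[c + random.gauss(0, 1) for c in clean] for _ in range(M)]
avg = [sum(run[i] for run in runs) / M for i in range(n_pts)]

noise_one = sum((runs[0][i] - clean[i]) ** 2 for i in range(n_pts)) / n_pts
noise_avg = sum((avg[i] - clean[i]) ** 2 for i in range(n_pts)) / n_pts
print(noise_one, noise_avg)   # residual noise power drops by roughly M
```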
CHAPTER 3 PROBLEMS
DRILL AND REINFORCEMENT
3.1 (Discrete Signals) Sketch each signal and find its energy or power as appropriate.
(a) x[n] = {6, 4, 2, 2}        (b) x[n] = {3, 2, 1, 0, 1}
(c) x[n] = {0, 2, 4, 6}        (d) x[n] = u[n] − u[n − 4]
(e) x[n] = cos(nπ/2)        (f) x[n] = 8(0.5)^n u[n]
3.2 (Operations) Let x[n] = {6, 4, 2, 2}. Sketch the following signals and find their signal energy.
(a) y[n] = x[n − 2]  (b) f[n] = x[n + 2]  (c) g[n] = x[−n + 2]  (d) h[n] = x[−n − 2]
3.3 (Operations) Let x[n] = 8(0.5)^n (u[n + 1] − u[n − 4]). Sketch the following signals.
(a) y[n] = x[n − 3]  (b) f[n] = x[n + 1]  (c) g[n] = x[−n + 4]  (d) h[n] = x[−n − 2]
3.4 (Decimation and Interpolation) Let x[n] = {4, 0, 2, 1, 3}. Find and sketch the following signals and compare their signal energy with the energy in x[n].
(a) The decimated signal d[n] = x[2n]
(b) The zero-interpolated signal f[n] = x[n/2]
(c) The step-interpolated signal g[n] = x[n/2]
(d) The linearly interpolated signal h[n] = x[n/2]
3.5 (Symmetry) Sketch each signal and its even and odd parts.
(a) x[n] = 8(0.5)^n u[n]  (b) x[n] = u[n]  (c) x[n] = 1 + u[n]
(d) x[n] = u[n] − u[n − 4]  (e) x[n] = tri((n − 3)/3)  (f) x[n] = {6, 4, 2, 2}
3.6 (Sketching Discrete Signals) Sketch each of the following signals:
(a) x[n] = r[n + 2] − r[n − 2] − 4u[n − 6]  (b) x[n] = rect(n/6)
(c) x[n] = rect((n − 2)/4)  (d) x[n] = 6 tri((n − 4)/3)
3.8 (Discrete-Time Harmonics) Check for the periodicity of the following signals, and compute the common period N if periodic.
(a) x[n] = cos(nπ/2)  (b) x[n] = cos(n/2)
(c) x[n] = sin(nπ/4) − 2 cos(nπ/6)  (d) x[n] = 2 cos(nπ/4) + cos²(nπ/4)
3.9 (Digital Frequency) Set up an expression for each signal, using a digital frequency |F | < 0.5, and
another expression using a digital frequency in the range 4 < F < 5.
3.10 (Sampling and Aliasing) Each of the following sinusoids is sampled at S = 100 Hz. Determine if aliasing has occurred and set up an expression for each sampled signal using a digital frequency in the principal range (|F| < 0.5).
(a) x(t) = cos(320πt + π/4)  (b) x(t) = cos(140πt − π/4)  (c) x(t) = sin(60πt)
3.12 (Signal Representation) The two signals shown in Figure P3.12 may be expressed as
(a) x[n] = Aα^n (u[n] − u[n − N])  (b) y[n] = A cos(2πFn + θ)
Find the constants in each expression and then find the signal energy or power as appropriate.
Figure P3.12 Signals for Problem 3.12
3.13 (Energy and Power) Classify the following as energy signals, power signals, or neither and find the energy or power as appropriate.
(a) x[n] = 2^n u[n]  (b) x[n] = 2^{−n} u[n − 1]  (c) x[n] = cos(nπ)
(d) x[n] = cos(nπ/2)  (e) x[n] = (1/n) u[n − 1]  (f) x[n] = (1/√n) u[n − 1]
(g) x[n] = (1/n²) u[n − 1]  (h) x[n] = e^{jnπ}  (i) x[n] = e^{jnπ/2}
(j) x[n] = e^{(j+1)n/4}  (k) x[n] = j^{n/4}  (l) x[n] = j^n + (−j)^n
3.14 (Energy and Power) Sketch each of the following signals, classify as an energy signal or power
signal, and find the energy or power as appropriate.
"
(a) x[n] = y[n kN ], where y[n] = u[n] u[n 3] and N = 6
k=
"
(b) x[n] = (2)n5k (u[n 5k] u[n 5k 4])
k=
3.15 (Sketching Signals) Sketch the following signals and describe how they are related.
(a) x[n] = [n] (b) f [n] = rect(n) (c) g[n] = tri(n) (d) h[n] = sinc(n)
3.16 (Discrete Exponentials) A causal discrete exponential has the form x[n] = α^n u[n].
(a) Assume that α is real and positive. Pick convenient values for α > 1, α = 1, and α < 1; sketch x[n]; and describe the nature of the sketch for each choice of α.
(b) Assume that α is real and negative. Pick convenient values for |α| < 1, |α| = 1, and |α| > 1; sketch x[n]; and describe the nature of the sketch for each choice of α.
(c) Assume that α is complex and of the form α = Ae^{jθ}, where A is a positive constant. Pick convenient values for θ and for A < 1, A = 1, and A > 1; sketch the real part and imaginary part of x[n] for each choice of A; and describe the nature of each sketch.
(d) Assume that α is complex and of the form α = Ae^{jθ}, where A is a positive constant. Pick convenient values for θ and for A < 1, A = 1, and A > 1; sketch the magnitude and phase of x[n] for each choice of A; and describe the nature of each sketch.
3.17 (Interpolation and Decimation) Let x[n] = 4 tri(n/4). Sketch the following signals and describe how they differ.
(a) x[2n/3], using zero interpolation followed by decimation
(b) x[2n/3], using step interpolation followed by decimation
(c) x[2n/3], using decimation followed by zero interpolation
(d) x[2n/3], using decimation followed by step interpolation
3.18 (Fractional Delay) Starting with x[n], we can generate the signal x[n − 2] (using a delay of 2) or x[2n − 3] (using a delay of 3 followed by decimation). However, to generate a fractional delay of the form x[n − M/N] requires a delay, interpolation, and decimation!
(a) Describe the sequence of operations required to generate x[n − 2/3] from x[n].
(b) Let x[n] = {1, 4, 7, 10, 13}. Sketch x[n] and x[n − 2/3]. Use linear interpolation where required.
(c) Generalize the results of part (a) to generate x[n − M/N] from x[n]. Are there any restrictions on M and N?
3.19 (The Roots of Unity) The N roots of the equation z^N = 1 can be found by writing it as z^N = e^{j2πk} to give z = e^{j2πk/N}, k = 0, 1, . . . , N − 1. What is the magnitude of each root? The roots can be displayed as vectors directed from the origin whose tips lie on a circle.
(a) What is the length of each vector and the angular spacing between adjacent vectors? Sketch for N = 5 and N = 6.
(b) Extend this concept to find the roots of z^N = −1 and sketch for N = 5 and N = 6.
3.20 (Digital Sinusoids) Find the period N of each signal if periodic. Express each signal using a digital frequency in the principal range (|F| < 0.5) and in the range 3 ≤ F ≤ 4.
3.21 (Aliasing and Signal Reconstruction) The signal x(t) = cos(320πt + π/4) is sampled at 100 Hz, and the sampled signal x[n] is reconstructed at 200 Hz to recover the analog signal xr(t).
(a) Has aliasing occurred? What is the period N and the digital frequency F of x[n]?
(b) How many full periods of x(t) are required to generate one period of x[n]?
(c) What is the analog frequency of the recovered signal xr (t)?
(d) Write expressions for x[n] (using |F | < 0.5) and for xr (t).
3.22 (Digital Pitch Shifting) One way to accomplish pitch shifting is to play back (or reconstruct) a sampled signal at a different sampling rate. Let the analog signal x(t) = sin(15800πt + 0.25π) be sampled at a sampling rate of 8 kHz.
(a) Find its sampled representation with digital frequency |F | < 0.5.
(b) What frequencies are heard if the signal is reconstructed at a rate of 4 kHz?
(c) What frequencies are heard if the signal is reconstructed at a rate of 8 kHz?
(d) What frequencies are heard if the signal is reconstructed at a rate of 20 kHz?
3.23 (Discrete-Time Chirp Signals) Consider the signal x(t) = cos[θ(t)], where θ(t) = πβt². Show that its instantaneous frequency fi(t) = θ′(t)/(2π) varies linearly with time.
(a) Choose β such that the frequency varies from 0 Hz to 2 Hz in 10 seconds, and generate the sampled signal x[n] from x(t), using a sampling rate of S = 4 Hz.
(b) It is claimed that, unlike x(t), the signal x[n] is periodic. Verify this claim, using the condition for periodicity (x[n] = x[n + N]), and determine the period N of x[n].
(c) The signal y[n] = cos(πF0n²/M), n = 0, 1, . . . , M − 1, describes an M-sample chirp whose digital frequency varies linearly from 0 to F0. What is the period of y[n] if F0 = 0.25 and M = 8?
3.24 (Time Constant) For exponentially decaying discrete signals, the time constant is a measure of
how fast a signal decays. The 60-dB time constant describes the (integer) number of samples it takes
for the signal level to decay by a factor of 1000 (or 20 log 1000 = 60 dB).
(a) Let x[n] = (0.5)n u[n]. Compute its 60-dB time constant and 40-dB time constant.
(b) Compute the time constant in seconds if the discrete-time signal is derived from an analog signal
sampled at 1 kHz.
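Part (a) of the time-constant problem amounts to solving |α|^n ≤ 10^(−dB/20) for the smallest integer n; a sketch in Python:

```python
import math

def time_constant_samples(alpha, dB):
    """Smallest integer n with |alpha|**n <= 10**(-dB/20)."""
    return math.ceil((dB / 20) / -math.log10(abs(alpha)))

print(time_constant_samples(0.5, 60))   # 10 samples: (0.5)**10 = 1/1024 < 1/1000
print(time_constant_samples(0.5, 40))   # 7 samples
```

At a 1-kHz sampling rate (part (b)), each sample spans 1 ms, so the 60-dB time constant is 10 ms.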
3.25 (Signal Delay) The delay D of a discrete-time energy signal x[n] is defined by

D = [ Σ_{k=−∞}^{∞} k x²[k] ] / [ Σ_{k=−∞}^{∞} x²[k] ]
(a) Verify that the delay of the symmetric sequence x[n] = {4, 3, 2, 1, 0, 1, 2, 3, 4} is zero.
(b) Compute the delay of the signals g[n] = x[n 1] and h[n] = x[n 2].
(c) What is the delay of the signal y[n] = 1.5(0.5)^n u[n] − 2δ[n]?
3.26 (Periodicity) It is claimed that the sum of an absolutely summable signal x[n] and its shifted (by
multiples of N ) replicas is a periodic signal xp [n] with period N . Verify this claim by sketching the
following and, for each case, compute the power in the resulting periodic signal xp [n] and compare the
sum and energy of one period of xp [n] with the sum and energy of x[n].
3.27 (Periodic Extension) The sum of an absolutely summable signal x[n] and its shifted (by multiples of N) replicas is called the periodic extension of x[n] with period N. Show that one period of the periodic extension of the signal x[n] = α^n u[n] with period N is y[n] = x[n]/(1 − α^N), 0 ≤ n ≤ N − 1. How does the one-period sum of y[n] compare with the sum of x[n]? What is the signal power in x[n] and y[n]?
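The closed form can be verified by summing the shifted replicas directly; a sketch in Python with the arbitrary choices α = 0.5 and N = 4:

```python
alpha, N = 0.5, 4

def periodic_extension_sample(n, terms=60):
    """One period of the extension: sum the replicas alpha**(n + k*N) directly."""
    return sum(alpha ** (n + k * N) for k in range(terms))

for n in range(N):
    direct = periodic_extension_sample(n)
    closed = alpha ** n / (1 - alpha ** N)   # y[n] = x[n]/(1 - alpha**N)
    assert abs(direct - closed) < 1e-12
    print(n, direct, closed)
```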
3.28 (Signal Norms) Norms provide a measure of the size of a signal. The p-norm, or Hölder norm, ‖x‖p for discrete signals is defined by ‖x‖p = (Σ|x|^p)^{1/p}, where p is a positive integer. For p = ∞, we also define ‖x‖∞ as the peak absolute value |x|max.
(a) Let x[n] = {3, j4, 3 + j4}. Find ‖x‖1, ‖x‖2, and ‖x‖∞.
(b) What is the significance of each of these norms?
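With the values of part (a), the norms work out by hand to ‖x‖1 = 3 + 4 + 5 = 12, ‖x‖2 = √50, and ‖x‖∞ = 5 (the magnitudes are unaffected by the signs of the imaginary parts). A numerical check in Python:

```python
x = [3, 4j, 3 + 4j]

norm1 = sum(abs(v) for v in x)                 # 3 + 4 + 5 = 12
norm2 = sum(abs(v) ** 2 for v in x) ** 0.5     # sqrt(9 + 16 + 25) = sqrt(50)
norm_inf = max(abs(v) for v in x)              # 5
print(norm1, norm2, norm_inf)
```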
3.29 (Discrete Signals) Plot each signal x[n] over −10 ≤ n ≤ 10. Then, using the ADSP routine operate (or otherwise), plot each signal y[n] and compare with the original.
(a) x[n] = u[n + 4] − u[n − 4] + 2δ[n + 6] − δ[n − 3]        y[n] = x[n − 4]
(b) x[n] = r[n + 6] − r[n + 3] − r[n − 3] + r[n − 6]        y[n] = x[−n − 4]
(c) x[n] = rect(n/10) − rect((n − 3)/6)        y[n] = x[n + 4]
(d) x[n] = 6 tri(n/6) − 3 tri(n/3)        y[n] = x[−n + 4]
3.30 (Signal Interpolation) Let h[n] = sin(nπ/3), 0 ≤ n ≤ 10. Using the ADSP routine interpol (or otherwise), plot h[n] and the zero-interpolated, step-interpolated, and linearly interpolated signals, using interpolation by 3.
3.31 (Discrete Exponentials) A causal discrete exponential may be expressed as x[n] = α^n u[n], where the nature of α dictates the form of x[n]. Plot the following over 0 ≤ n ≤ 40 and comment on the nature of each plot.
3.32 (Discrete-Time Sinusoids) Which of the following signals are periodic, and with what period? Plot each signal over −10 ≤ n ≤ 30. Do the plots confirm your expectations?
(a) x[n] = 2 cos(nπ/2) + 5 sin(nπ/5)
(b) x[n] = 2 cos(nπ/2) − sin(nπ/3)
3.33 (Complex-Valued Signals) A complex-valued signal x[n] requires two plots for a complete description, in one of two forms: the magnitude and phase vs. n, or the real part vs. n and the imaginary part vs. n.
(a) Let x[n] = {2, 1 + j, j2, 2 − j2, 4}. Sketch each form for x[n] by hand.
(b) Let x[n] = e^{j0.3nπ}. Use Matlab to plot each form over −30 ≤ n ≤ 30. Is x[n] periodic? If so, can you identify its period from the Matlab plots? From which form, and how?
3.34 (Complex Exponentials) Let x[n] = 5 − 2e^(j(nπ/9 − π/4)). Plot the following signals and, for each case,
derive analytic expressions for the signals plotted and compare with your plots. Is the signal x[n]
periodic? What is the period N? Which plots allow you to determine the period of x[n]?
(a) The real part and imaginary part of x[n] over −20 ≤ n ≤ 20
(b) The magnitude and phase of x[n] over −20 ≤ n ≤ 20
(c) The sum of the real and imaginary parts over −20 ≤ n ≤ 20
(d) The difference of the real and imaginary parts over −20 ≤ n ≤ 20
3.35 (Complex Exponentials) Let x[n] = (j)^n + (−j)^n. Plot the following signals and, for each case,
derive analytic expressions for the sequences plotted and compare with your plots. Is the signal x[n]
periodic? What is the period N? Which plots allow you to determine the period of x[n]?
(a) The real part and imaginary part of x[n] over −20 ≤ n ≤ 20
(b) The magnitude and phase of x[n] over −20 ≤ n ≤ 20
3.36 (Discrete-Time Chirp Signals) An N-sample chirp signal x[n] whose digital frequency varies
linearly from F0 to F1 is described by

x[n] = cos[ 2π( F0 n + (F1 − F0) n²/(2N) ) ],   n = 0, 1, . . . , N − 1
(a) Generate and plot 800 samples of a chirp signal x whose digital frequency varies from F = 0 to
F = 0.5. Observe how the frequency of x varies linearly with time, using the ADSP command
timefreq(x).
(b) Generate and plot 800 samples of a chirp signal x whose digital frequency varies from F = 0 to
F = 1. Is the frequency always increasing? If not, what is the likely explanation?
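Under the formula above, the phase is φ[n] = 2π[F0 n + (F1 − F0)n²/(2N)], so the phase increment (φ[n+1] − φ[n])/2π grows linearly from about F0 toward F1. A quick Python check (my own sketch, in place of the ADSP timefreq routine):

```python
import math

def chirp(F0, F1, N):
    return [math.cos(2 * math.pi * (F0 * n + (F1 - F0) * n * n / (2 * N)))
            for n in range(N)]

F0, F1, N = 0.0, 0.5, 800
x = chirp(F0, F1, N)

# Instantaneous frequency from the phase increment:
# (phi[n+1] - phi[n]) / (2*pi) = F0 + (F1 - F0)*(2n + 1)/(2N), linear in n.
f_inst = [F0 + (F1 - F0) * (2 * n + 1) / (2 * N) for n in range(N - 1)]
```

For part (b), frequencies above F = 0.5 alias back down, which is why the perceived frequency need not keep increasing.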
3.37 (Chirp Signals) It is claimed that the chirp signal x[n] = cos(πn²/6) is periodic (unlike the analog
chirp signal x(t) = cos(πt²/6)). Plot x[n] over 0 ≤ n ≤ 20. Does x[n] appear periodic? If so, can you
identify the period N? Justify your results by trying to find an integer N such that x[n] = x[n + N]
(the basis for periodicity).
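A short verification, assuming the signal is cos(πn²/6): since (n + 6)² − n² = 12n + 36, the phase advances by π(12n + 36)/6 = 2π(n + 3), a whole number of cycles for every n, so N = 6 works. The sketch below confirms this and that no smaller shift does:

```python
import math

x = lambda n: math.cos(math.pi * n * n / 6)

# (n + 6)^2 - n^2 = 12n + 36, so the phase grows by 2*pi*(n + 3): periodic.
N = 6
ok = all(math.isclose(x(n + N), x(n), abs_tol=1e-9) for n in range(100))

# No smaller shift M repeats the signal for all n.
smaller = [M for M in range(1, N)
           if all(math.isclose(x(n + M), x(n), abs_tol=1e-9) for n in range(100))]
```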
3.38 (Signal Averaging) Extraction of signals from noise is an important signal-processing application.
Signal averaging relies on averaging the results of many runs. The noise tends to average out to zero,
and the signal quality or signal-to-noise ratio (SNR) improves.
(a) Generate samples of the sinusoid x(t) = sin(800πt) sampled at S = 8192 Hz for 2 seconds. The
sampling rate is chosen so that you may also listen to the signal if your machine allows.
(b) Create a noisy signal s[n] by adding x[n] to samples of uniformly distributed noise such that s[n]
has an SNR of 10 dB. Compare the noisy signal with the original and compute the actual SNR
of the noisy signal.
(c) Sum the signal s[n] 64 times and average the result to obtain the signal sa [n]. Compare the
averaged signal sa [n], the noisy signal s[n], and the original signal x[n]. Compute the SNR of
the averaged signal xa [n]. Is there an improvement in the SNR? Do you notice any (visual and
audible) improvement? Should you?
(d) Create the averaged result xb[n] of 64 different noisy signals and compare the averaged signal
xb[n] with the original signal x[n]. Compute the SNR of the averaged signal xb[n]. Is there an
improvement in the SNR? Do you notice any (visual and/or audible) improvement? Explain how
the signal xb[n] differs from xa[n].
(e) The improvement in SNR is a function of the noise distribution. Generate averaged signals, using
different noise distributions (such as Gaussian noise) and comment on the results.
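A sketch of part (d) in Python (the book works in Matlab): averaging 64 independent noisy copies should cut the noise power by a factor of 64, improving the SNR by about 10·log10(64) ≈ 18 dB. The tone below assumes x(t) = sin(800πt) is a 400-Hz sinusoid; Gaussian noise, a 1-second record, and the noise level are my own choices for brevity.

```python
import math, random

random.seed(7)
S, f0 = 8192, 400
x = [math.sin(2 * math.pi * f0 * n / S) for n in range(S)]   # 1 s of signal

def snr_db(clean, noisy):
    psig = sum(v * v for v in clean) / len(clean)
    pnoise = sum((a - b) ** 2 for a, b in zip(noisy, clean)) / len(clean)
    return 10 * math.log10(psig / pnoise)

def noisy_copy(sigma=0.5):
    return [v + random.gauss(0, sigma) for v in x]

s1 = noisy_copy()                          # one noisy run
runs = [noisy_copy() for _ in range(64)]   # 64 independent noisy runs
xb = [sum(col) / 64 for col in zip(*runs)] # pointwise average
```

Averaging the *same* noisy record 64 times (part (c)) just reproduces it, which is the contrast the problem is after.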
3.39 (The Central Limit Theorem) The central limit theorem asserts that the sum of independent noise
distributions tends to a Gaussian distribution as the number N of distributions in the sum increases.
In fact, one way to generate a random signal with a Gaussian distribution is to add many (typically 6
to 12) uniformly distributed signals.
(a) Generate the sum of uniformly distributed random signals using N = 2, N = 6, and N = 12 and
plot the histograms of each sum. Does the histogram begin to take on a Gaussian shape as N
increases? Comment on the shape of the histogram for N = 2.
(b) Generate the sum of random signals with different distributions using N = 6 and N = 12. Does
the central limit theorem appear to hold even when the distributions are not identical (as long
as you select a large enough N )? Comment on the physical significance of this result.
3.40 (Music Synthesis I) A musical composition is a combination of notes, or signals, at various frequen-
cies. An octave covers a range of frequencies from f0 to 2f0 . In the western musical scale, there are 12
notes per octave, logarithmically equispaced. The frequencies of the notes from f0 to 2f0 correspond
to
f = 2^(k/12) f0,   k = 0, 1, 2, . . . , 11
The 12 notes are as follows (the ♯ and ♭ stand for sharp and flat, and each pair of notes in parentheses
has the same frequency):
A (A♯ or B♭) B C (C♯ or D♭) D (D♯ or E♭) E F (F♯ or G♭) G (G♯ or A♭)
An Example: Raga Malkauns: In Indian classical music, a raga is a musical composition based on
an ascending and descending scale. The notes and their order form the musical alphabet and grammar
from which the performer constructs musical passages, using only the notes allowed. The performance
of a raga can last from a few minutes to an hour or more! Raga malkauns is a pentatonic raga (with
five notes) and the following scales:
Ascending: D F G B♭ C D   Descending: C B♭ G F D
The final note in each scale is held twice as long as the rest. To synthesize this scale in Matlab, we
start with a frequency f0 corresponding to the first note D and go up in frequency to get the notes in
the ascending scale; when we reach the note D, which is an octave higher, we go down in frequency to
get the notes in the descending scale. Generate sampled sinusoids at these frequencies, using an
appropriate sampling rate (say, 8192 Hz); concatenate them, assuming silent passages between each
note; and play the resulting signal, using the Matlab command sound.
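A rough Python equivalent of the synthesis (the starting pitch f0 ≈ 147 Hz for D, the note duration, and the silence length are my own choices; the semitone steps assume the pentatonic scale D F G B♭ C):

```python
import math

S = 8192                         # sampling rate in Hz
f0 = 147.0                       # assumed pitch of the starting note D

# Semitone offsets from D for the ascending scale D F G B-flat C D';
# the descending scale retraces them. 12 equal steps span one octave.
ascend = [0, 3, 5, 8, 10, 12]
steps = ascend + ascend[-2::-1]  # up, then back down (11 notes in all)

def tone(f, dur):
    return [math.sin(2 * math.pi * f * n / S) for n in range(int(S * dur))]

sig = []
for k in steps:
    sig += tone(f0 * 2 ** (k / 12), 0.4) + [0.0] * 400   # note + short silence
```

Holding the final note of each scale twice as long, as the problem requires, is a one-line change to the duration.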
3.41 (Music Synthesis II) The raw scale of raga malkauns will sound pretty dry! The reason for
this is the manner in which the sound from a musical instrument is generated. Musical instruments
produce sounds by the vibrations of a string (in string instruments) or a column of air (in woodwind
instruments). Each instrument has its characteristic sound. In a guitar, for example, the strings are
plucked, held, and then released to sound the notes. Once plucked, the sound dies out and decays.
Furthermore, the notes are never pure but contain overtones (harmonics). For a realistic sound, we
must include the overtones and the attack, sustain, and release (decay) characteristics. The sound
signal may be considered to have the form x(t) = a(t)cos(2πf0 t + θ), where f0 is the pitch and a(t)
is the envelope that describes the attack-sustain-release characteristics of the instrument played. A
crude representation of some envelopes is shown in Figure P3.41 (the piecewise linear approximations
will work just as well for our purposes). Woodwind instruments have a much longer sustain time and
a much shorter release time than do plucked string and keyboard instruments.
Figure P3.41 Envelopes and their piecewise linear approximations (dark) for Problem 3.41
Experiment with the scale of raga malkauns and try to produce a guitar-like sound, using the appro-
priate envelope form. You should be able to discern an audible improvement.
3.42 (Music Synthesis III) Synthesize the following notes, using a woodwind envelope, and synthesize
the same notes using a plucked string envelope.
F♯(0.3) D(0.4) E(0.4) A(1) A(0.4) E(0.4) F♯(0.3) D(1)
All the notes cover one octave, and the numbers in parentheses give a rough indication of their relative
duration. Can you identify the music? (It is Big Ben.)
3.43 (Music Synthesis IV) Synthesize the first bar of Pictures at an Exhibition by Mussorgsky, which
has the following notes:
A(3) G(3) C(3) D(2) G′(1) E(3) D(2) G′(1) E(3) C(3) D(3) A(3) G(3)
All the notes cover one octave except the note G′, which is an octave above G. The numbers in
parentheses give a rough indication of the relative duration of the notes (for more details, you may
want to listen to an actual recording). Assume that a keyboard instrument (such as a piano) is played.
3.44 (DTMF Tones) In dual-tone multi-frequency (DTMF) or touch-tone telephone dialing, each number
is represented by a dual-frequency tone. The frequencies for each digit are listed in Chapter 18.
(a) Generate DTMF tones corresponding to the telephone number 487-2550, by sampling the sum of
two sinusoids at the required frequencies at S = 8192 Hz for each digit. Concatenate the signals
by putting 50 zeros between each signal (to represent silence) and listen to the signal using the
Matlab command sound.
(b) Write a Matlab program that generates DTMF signals corresponding to a vector input repre-
senting the digits in a phone number. Use a sampling frequency of S = 8192 Hz.
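A sketch of part (b) in Python, using the standard DTMF frequency pairs (the book tabulates them in Chapter 18; the row frequencies are 697, 770, 852, and 941 Hz and the column frequencies 1209, 1336, and 1477 Hz; duration and gap length are my own choices):

```python
import math

S = 8192
ROW = {'1': 697, '2': 697, '3': 697, '4': 770, '5': 770, '6': 770,
       '7': 852, '8': 852, '9': 852, '*': 941, '0': 941, '#': 941}
COL = {'1': 1209, '2': 1336, '3': 1477, '4': 1209, '5': 1336, '6': 1477,
       '7': 1209, '8': 1336, '9': 1477, '*': 1209, '0': 1336, '#': 1477}

def dtmf(digits, dur=0.2, gap=50):
    """Sum of the row and column tones per digit, separated by silence."""
    sig = []
    for d in digits:
        sig += [math.sin(2 * math.pi * ROW[d] * n / S)
                + math.sin(2 * math.pi * COL[d] * n / S)
                for n in range(int(S * dur))]
        sig += [0.0] * gap
    return sig

tones = dtmf("4872550")
```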
Chapter 4
ANALOG SYSTEMS
4.1 Introduction
In its broadest sense, a physical system is an interconnection of devices and elements subject to physical
laws. A system that processes analog signals is referred to as an analog system or continuous-time (CT)
system. The signal to be processed forms the excitation or input to the system. The processed signal is
termed the response or output.
The response of any system is governed by the input and the system details. A system may of course
be excited by more than one input, and this leads to the more general idea of multiple-input systems. We
address only single-input, single-output systems in this text. The study of systems involves the input, the
output, and the system specifications. Conceptually, we can determine any one of these in terms of the other
two. System analysis implies a study of the response subject to known inputs and system formulations.
Known input-output specifications, on the other hand, usually allow us to identify, or synthesize, the system.
System identification or synthesis is much more difficult because many system formulations are possible
for the same input-output relationship.
Most real-world systems are quite complex and almost impossible to analyze quantitatively. Of necessity,
we are forced to use models or abstractions that retain the essential features of the system and simplify
the analysis, while still providing meaningful results. The analysis of systems refers to the analysis of the
models that in fact describe such systems, and it is customary to treat the system and its associated models
synonymously. In the context of signal processing, a system that processes the input signal in some fashion
is also called a filter.
Such variables may represent physical quantities or may have no physical significance whatever. Their choice
is governed primarily by what the analysis requires. For example, capacitor voltages and inductor currents
are often used as state variables since they provide an instant measure of the system energy. Any inputs
applied to the system result in a change in the energy or state of the system. All physical systems are,
by convention, referenced to a zero-energy state (variously called the ground state, the rest state, the
relaxed state, or the zero state) at t = .
The behavior of a system is governed not only by the input but also by the state of the system at the
instant at which the input is applied. The initial values of the state variables define the initial conditions
or initial state. This initial state, which must be known before we can establish the complete system
response, embodies the past history of the system. It allows us to predict the future response due to any
input regardless of how the initial state was arrived at.
4.1.2 Operators
Any equation is based on a set of operations. An operator is a rule or a set of directions (a recipe, if
you will) that shows us how to transform one function to another. For example, the derivative operator
s ≡ d/dt transforms a function x(t) to y(t) = s{x(t)} or dx(t)/dt. If an operator or a rule of operation
is represented by the symbol O, the equation
the symbol O, the equation
O{x(t)} = y(t) (4.1)
implies that if the function x(t) is treated exactly as the operator O requires, we obtain the function y(t).
For example, the operation O{ } = 4(d/dt){ } + 6 says that to get y(t), we must take the derivative
of x(t), multiply by 4, and then add 6 to the result: 4(d/dt){x(t)} + 6 = 4 dx/dt + 6 = y(t).
If an operation on the sum of two functions is equivalent to the sum of operations applied to each
separately, the operator is said to be additive. In other words,
O{x1 (t) + x2 (t)} = O{x1 (t)} + O{x2 (t)} (for an additive operation) (4.2)
If an operation on Kx(t) is equivalent to K times the linear operation on x(t) where K is a scalar, the
operator is said to be homogeneous. In other words,
O{Kx(t)} = KO{x(t)} (for a homogeneous operation) (4.3)
Together, the two describe the principle of superposition. An operator O is termed a linear operator
if it is both additive and homogeneous. In other words,
O{Ax1 (t) + Bx2 (t)} = AO{x1 (t)} + BO{x2 (t)} (for a linear operation) (4.4)
If an operation performed on a linear combination of x1 (t) and x2 (t) produces the same results as a linear
combination of operations on x1 (t) and x2 (t) separately, the operation is linear. If not, it is nonlinear.
Linearity thus implies superposition. An important concept that forms the basis for the study of linear
systems is that the superposition of linear operators is also linear.
Testing an Operator for Linearity: If an operator fails either the additive or the homogeneity test,
it is nonlinear. In all but a few (usually contrived) cases, if an operator passes either the additive or the
homogeneity test, it is linear (meaning that it will also pass the other). In other words, only one test,
additivity or homogeneity, suffices to confirm linearity (or lack thereof) in most cases.
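The test is easy to run numerically: compare O{Ax1 + Bx2} against AO{x1} + BO{x2}. The Python sketch below (my own construction) applies it to the earlier example O{ } = 4(d/dt){ } + 6, using a forward-difference derivative; the constant offset makes the operator fail superposition.

```python
def op(x, dt=1e-3):
    """O{x} = 4 dx/dt + 6, evaluated on samples by forward differences."""
    return [4 * (b - a) / dt + 6 for a, b in zip(x, x[1:])]

dt = 1e-3
t = [k * dt for k in range(1000)]
x1 = [v * v for v in t]          # x1(t) = t^2
x2 = [3 * v for v in t]          # x2(t) = 3t
A, B = 2.0, 5.0

combined = op([A * a + B * b for a, b in zip(x1, x2)])
separate = [A * u + B * v for u, v in zip(op(x1), op(x2))]
# The two differ by the constant 6*(A + B - 1) = 36: the operator is nonlinear.
gap = max(abs(u - v) for u, v in zip(combined, separate))
```

Had the "+ 6" term been absent, the gap would be zero (to rounding), and the operator would pass the test.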
(d) Consider the derivative operator O{ } = d{ }/dt, which transforms x(t) to x′(t).
We find that AO{x(t)} = Ax′(t) and O{Ax(t)} = [Ax(t)]′ = Ax′(t).
The two are equal, and the derivative operator is homogeneous and thus linear.
Of course, to be absolutely certain, we could use the full force of the linearity relation to obtain
O{Ax1(t) + Bx2(t)} = (d/dt)[Ax1(t) + Bx2(t)] and AO{x1(t)} + BO{x2(t)} = Ax1′(t) + Bx2′(t).
The two results are equal, and we thus confirm the linearity of the derivative operator.
y(n)(t) + a1 y(n−1)(t) + · · · + an−1 y(1)(t) + an y(t) = b0 x(m)(t) + b1 x(m−1)(t) + · · · + bm−1 x(1)(t) + bm x(t)    (4.5)
The order n of the dierential equation refers to the order of the highest derivative of the output y(t).
It is customary to normalize the coefficient of the highest derivative of y(t) to 1. The coefficients ak
and bk may be functions of x(t) and/or y(t) and/or t. Using the derivative operator s^k ≡ d^k/dt^k
with s^0 ≡ 1, we may recast this equation in operator notation as
{s^n + a1 s^(n−1) + · · · + an−1 s + an}y(t) = {b0 s^m + b1 s^(m−1) + · · · + bm−1 s + bm}x(t)
Notation: For low-order systems, we will also use the notation y′(t) ≡ dy(t)/dt, y″(t) ≡ d²y(t)/dt², etc.
4.2 System Classification
For a linear system, scaling the input leads to an identical scaling of the output. In particular, this means
zero output for zero input and a linear input-output relation passing through the origin. This is possible only
if every system element obeys a similar relationship at its own terminals. Since independent sources have
terminal characteristics that are constant and do not pass through the origin, a system that includes such
sources is therefore nonlinear. Formally, a linear system must also be relaxed (with zero initial conditions) if
superposition is to hold. We can, however, use superposition even for a system with nonzero initial conditions
(or internal sources) that is otherwise linear. We treat it as a multiple-input system by including the initial
conditions (or internal sources) as additional inputs. The output then equals the superposition of the outputs
due to each input acting alone, and any changes in the input are related linearly to changes in the response.
As a result, the response can be written as the sum of a zero-input response (due to the initial conditions
alone) and the zero-state response (due to the input alone). This is the principle of decomposition,
which allows us to analyze linear systems in the presence of nonzero initial conditions. Both the zero-input
response and the zero-state response obey superposition individually.
(c) y(t) = x(αt) is linear but time varying. With t → αt, we see that AO{x(t)} = A[x(αt)] and
O{Ax(t)} = Ax(αt). The two are equal.
To test for time invariance, we find that O{x(t − t0)} = x(αt − t0) but y(t − t0) = x[α(t − t0)]. The
two are not equal, and the time-scaling operation is time varying. Figure E4.3C illustrates this for
y(t) = x(2t), using a shift of t0 = 2.
Figure E4.3C Illustrating time variance of the system for Example 4.3(c)
(d) y(t) = x(t − 2) is linear and time invariant. The operation t → t − 2 reveals that
AO{x(t)} = A[x(t − 2)] and O{Ax(t)} = Ax(t − 2). The two are equal.
O{x(t − t0)} = x(t − t0 − 2) and y(t − t0) = x(t − t0 − 2). The two are equal.
(b) What can you say about the linearity and time invariance of the four circuits and their governing
differential equations shown in Figure E4.4B?
For (a), 2i′(t) + 3i(t) = v(t). This is LTI because all the element values are constants.
For (b), 2i′(t) + 3i²(t) = v(t). This is nonlinear due to the nonlinear element.
For (c), 2i′(t) + 3i(t) + 4 = v(t). This is nonlinear due to the 4-V internal source.
For (d), 2i′(t) + 3ti(t) = v(t). This is time varying due to the time-varying resistor.
y(n)(t) + a1 y(n−1)(t) + · · · + an−1 y(1)(t) + an y(t) = x(t)    (4.8)
{a0 s^n + a1 s^(n−1) + · · · + an−1 s + an}y(t) = x(t)    (4.9)
Table 4.1 Form of the Natural Response for Analog LTI Systems
Entry   Root of Characteristic Equation        Form of Natural Response
4       Complex, repeated: (α ± jβ)^(p+1)      e^(αt) cos(βt)(A0 + A1 t + A2 t² + · · · + Ap t^p)
                                               + e^(αt) sin(βt)(B0 + B1 t + B2 t² + · · · + Bp t^p)
Table 4.2 Form of the Forced Response for Analog LTI Systems
Note: If the right-hand side (RHS) is e^(αt), where α is also a root of the characteristic
equation repeated r times, the forced response form must be multiplied by t^r.
Entry   Forcing Function (RHS)   Form of Forced Response
5       t                        C0 + C1 t
6       t^p                      C0 + C1 t + C2 t² + · · · + Cp t^p
The forced response arises due to the interaction of the system with the input and thus depends on
both the input and the system details. It satisfies the given differential equation and has the same form
as the input. Table 4.2 summarizes these forms for various types of inputs. The constants in the forced
response can be found uniquely and independently of the natural response or initial conditions simply by
satisfying the given differential equation.
The total response is found by first adding the forced and natural response and then evaluating the
undetermined constants (in the natural component) using the prescribed initial conditions.
Remarks: For stable systems, the natural response is also called the transient response, since it decays to
zero with time. For systems with harmonic or switched harmonic inputs, the forced response is a harmonic
at the input frequency and is termed the steady-state response.
1. Since x(t) = 4e^(−3t), we select the forced response as yF(t) = Ce^(−3t). Then
yF′(t) = −3Ce^(−3t), yF″(t) = 9Ce^(−3t), and yF″(t) + 3yF′(t) + 2yF(t) = (9C − 9C + 2C)e^(−3t) = 4e^(−3t).
Thus, C = 2, yF(t) = 2e^(−3t), and y(t) = yN(t) + yF(t) = K1 e^(−t) + K2 e^(−2t) + 2e^(−3t).
Using initial conditions, we get y(0) = K1 + K2 + 2 = 3 and y′(0) = −K1 − 2K2 − 6 = 4.
This gives K2 = −11, K1 = 12, and y(t) = (12e^(−t) − 11e^(−2t) + 2e^(−3t))u(t).
2. Since x(t) = 4e^(−2t) has the same form as a term of yN(t), we must choose yF(t) = Cte^(−2t).
Then yF′(t) = −2Cte^(−2t) + Ce^(−2t), and yF″(t) = −2C(1 − 2t)e^(−2t) − 2Ce^(−2t). Thus,
yF″(t) + 3yF′(t) + 2yF(t) = (−2C + 4Ct − 2C − 6Ct + 3C + 2Ct)e^(−2t) = 4e^(−2t). This gives C = −4.
Thus, yF(t) = −4te^(−2t), and y(t) = yN(t) + yF(t) = K1 e^(−t) + K2 e^(−2t) − 4te^(−2t).
Using initial conditions, we get y(0) = K1 + K2 = 3 and y′(0) = −K1 − 2K2 − 4 = 4.
Thus, K2 = −11, K1 = 14, and y(t) = (14e^(−t) − 11e^(−2t) − 4te^(−2t))u(t).
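The closed form from step 1, y(t) = 12e^(−t) − 11e^(−2t) + 2e^(−3t), can be verified numerically by substituting it (with hand-computed derivatives) into y″(t) + 3y′(t) + 2y(t) = 4e^(−3t). A Python sketch:

```python
import math

def y(t):   return 12*math.exp(-t) - 11*math.exp(-2*t) + 2*math.exp(-3*t)
def dy(t):  return -12*math.exp(-t) + 22*math.exp(-2*t) - 6*math.exp(-3*t)
def d2y(t): return 12*math.exp(-t) - 44*math.exp(-2*t) + 18*math.exp(-3*t)

# The ODE residual y'' + 3y' + 2y - 4e^{-3t} should vanish for all t >= 0.
resid = max(abs(d2y(t) + 3*dy(t) + 2*y(t) - 4*math.exp(-3*t))
            for t in [0.0, 0.5, 1.0, 2.0, 5.0])
```

The initial conditions check out as well: y(0) = 12 − 11 + 2 = 3 and y′(0) = −12 + 22 − 6 = 4.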
EXAMPLE 4.7 (Zero-Input and Zero-State Response for the Single-Input Case)
Let y″(t) + 3y′(t) + 2y(t) = x(t) with x(t) = 4e^(−3t) and initial conditions y(0) = 3 and y′(0) = 4.
Find its zero-input response and zero-state response.
The characteristic equation is s² + 3s + 2 = 0 with roots s1 = −1 and s2 = −2.
Its natural response is yN(t) = K1 e^(s1 t) + K2 e^(s2 t) = K1 e^(−t) + K2 e^(−2t).
1. The zero-input response is found from yN(t) and the prescribed initial conditions:
yzi(0) = K1 + K2 = 3        yzi′(0) = −K1 − 2K2 = 4
This gives K1 = 10, K2 = −7, and yzi(t) = 10e^(−t) − 7e^(−2t).
2. Similarly, yzs(t) is found from the general form of y(t) but with zero initial conditions.
Since x(t) = 4e^(−3t), we select the forced response as yF(t) = Ce^(−3t).
Then, yF′(t) = −3Ce^(−3t), yF″(t) = 9Ce^(−3t), and yF″(t) + 3yF′(t) + 2yF(t) = (9C − 9C + 2C)e^(−3t) = 4e^(−3t).
Thus, C = 2, yF(t) = 2e^(−3t), and yzs(t) = K1 e^(−t) + K2 e^(−2t) + 2e^(−3t).
With zero initial conditions, we obtain
yzs(0) = K1 + K2 + 2 = 0        yzs′(0) = −K1 − 2K2 − 6 = 0
This gives K1 = 2, K2 = −4, and yzs(t) = 2e^(−t) − 4e^(−2t) + 2e^(−3t).
3. The total response is the sum of yzs(t) and yzi(t):
y(t) = yzs(t) + yzi(t) = (12e^(−t) − 11e^(−2t) + 2e^(−3t))u(t)
y(n)(t) + a1 y(n−1)(t) + · · · + an y(t) = b0 x(m)(t) + b1 x(m−1)(t) + · · · + bm x(t)    (4.12)
3. The ZIR is found from yzi(t) = C1 e^(−t) + C2 e^(−2t), with y(0) = 0 and y′(0) = 1. This yields
yzi(0) = C1 + C2 = 0 and yzi′(0) = −C1 − 2C2 = 1. We find C1 = 1 and C2 = −1. Then,
yzi(t) = e^(−t) − e^(−2t)
4. Finally, the total response is y(t) = yzs(t) + yzi(t) = e^(−t) + 11e^(−2t) − 10e^(−3t), t ≥ 0.
Impulse response h(t): The output of a relaxed LTI system if the input is a unit impulse (t).
Step response s(t): The output of a relaxed LTI system if the input is a unit step u(t).
s(t) = (1/a)(1 − e^(−at)) u(t)    (4.15)
The impulse response h(t) equals the derivative of the step response. Thus,
h(t) = s′(t) = d/dt [ (1 − e^(−at))/a ] u(t) = e^(−at) u(t)    (4.16)
Similarly, it turns out that the impulse response of the second-order system y″(t) + a1 y′(t) + a2 y(t) = x(t)
can be found as the solution to the homogeneous equation y″(t) + a1 y′(t) + a2 y(t) = 0, with initial conditions
y(0) = 0 and y′(0) = 1. These results can be generalized to higher-order systems. For the nth-order,
single-input system given by
y(n)(t) + a1 y(n−1)(t) + · · · + an y(t) = x(t)
the impulse response h(t) is found as the solution to the homogeneous equation
h(n)(t) + a1 h(n−1)(t) + · · · + an h(t) = 0    h(n−1)(0) = 1 (and all other ICs zero)    (4.18)
Note that the highest-order initial condition is h(n−1)(0) = 1 and all other initial conditions are zero.
For the general system of (4.12), we first consider its single-input version
and compute its impulse response h0(t) from the homogeneous equation
h0(n)(t) + a1 h0(n−1)(t) + · · · + an h0(t) = 0,    h0(n−1)(0) = 1    (4.20)
(b) Find the impulse response of the system y′(t) + 2y(t) = x′(t) + 3x(t).
The impulse response h0(t) of the single-input system y′(t) + 2y(t) = x(t) is h0(t) = e^(−2t)u(t).
The impulse response of the given system is thus
h(t) = h0′(t) + 3h0(t) = δ(t) − 2e^(−2t)u(t) + 3e^(−2t)u(t) = δ(t) + e^(−2t)u(t).
(c) Find the impulse response of the system y″(t) + 3y′(t) + 2y(t) = x″(t).
The impulse response h0(t) of the system y″(t) + 3y′(t) + 2y(t) = x(t) is (from Example 4.9)
h0(t) = (e^(−t) − e^(−2t))u(t). The required impulse response is then h(t) = h0″(t). We compute:
h0′(t) = (−e^(−t) + 2e^(−2t))u(t)
h(t) = h0″(t) = d/dt [h0′(t)] = (e^(−t) − 4e^(−2t))u(t) + δ(t)
4.6 System Stability
y(n)(t) + a1 y(n−1)(t) + · · · + an y(t) = b0 x(m)(t) + b1 x(m−1)(t) + · · · + bm x(t),   m ≤ n    (4.22)
the conditions for BIBO stability involve the roots of the characteristic equation. A necessary and sufficient
condition for BIBO stability of an LTI system is that every root of its characteristic equation must have
a negative real part (and the highest derivative of the input must not exceed that of the output). This
criterion is based on the results of Tables 4.1 and 4.2. Roots with negative real parts ensure that the natural
(and zero-input) response always decays with time (see Table 4.1), and the forced (and zero-state) response
always remains bounded for every bounded input. Roots with zero real parts make the system unstable.
Simple (non-repeated) roots with zero real parts produce a constant (or sinusoidal) natural response which
is bounded, but if the input is also a constant (or a sinusoid at the same frequency), the forced response is a
ramp or growing sinusoid (see Table 4.2) and hence unbounded. Repeated roots with zero real parts result
in a natural response that is itself a growing sinusoid or polynomial and thus unbounded.
If the highest derivative of the input exceeds (not just equals) that of the output, the system is unstable.
For example, if y(t) = dx(t)/dt, a step input (which is bounded) produces an impulse output (which is unbounded
at t = 0). In the next chapter, we shall see that the stability condition described here is entirely equivalent
to having an LTI system whose impulse response h(t) is absolutely integrable. The stability of nonlinear or
time-varying systems must usually be checked by other means.
(b) The system y″(t) + 3y′(t) = x(t) is unstable. The roots of its characteristic equation s² + 3s = 0 are
s1 = 0 and s2 = −3, and one of the roots does not have a negative real part. Although its natural
response is bounded (it has the form yN(t) = Au(t) + Be^(−3t)u(t)), the input x(t) = u(t) produces a
forced response of the form Ctu(t), which becomes unbounded.
(c) The system y‴(t) + 3y″(t) = x(t) is unstable. The roots of its characteristic equation s³ + 3s² = 0 are
s1 = s2 = 0 and s3 = −3. They result in the natural response yN(t) = Au(t) + Btu(t) + Ce^(−3t)u(t),
which becomes unbounded.
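For second-order systems, the root test is easy to automate. A hypothetical Python helper (my own, using the quadratic formula) checks that both roots of the characteristic polynomial lie strictly in the left half-plane:

```python
import cmath

def stable(coeffs):
    """BIBO-stable iff every root of a0*s^2 + a1*s + a2 = 0
    has a strictly negative real part."""
    a0, a1, a2 = coeffs
    d = cmath.sqrt(a1 * a1 - 4 * a0 * a2)
    roots = [(-a1 + d) / (2 * a0), (-a1 - d) / (2 * a0)]
    return all(r.real < 0 for r in roots)

s_a = stable((1, 3, 2))   # s^2 + 3s + 2: roots -1, -2  -> stable
s_b = stable((1, 3, 0))   # s^2 + 3s: a root at s = 0   -> unstable
```

The second case mirrors system (b) above: the root at s = 0 is enough to lose BIBO stability even though the natural response stays bounded.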
The impulse response h(t) equals the derivative of the step response. Thus,
h(t) = s′(t) = (1/τ)e^(−t/τ) u(t)   (impulse response)    (4.25)
Performance Measures
The time-domain performance of systems is often measured in terms of their impulse response and/or step
response. For an exponential signal Ae^(−t/τ), the smaller the time constant τ, the faster is the decay. For
first-order systems, the time constant is a useful measure of the speed of the response, as illustrated in
Figure 4.1.
The smaller the time constant, the faster the system responds, and the more the output resembles
(matches) the applied input. An exponential decays to less than 1% of its peak value in about 5τ. As
a result, the step response is also within 1% of its final value in about 5τ. This forms the basis for the
observation that it takes about 5τ to reach steady state. For higher-order systems, the rate of decay and the
time to reach steady state depends on the largest time constant τmax (corresponding to the slowest decay)
associated with the exponential terms in its impulse response. A smaller τmax implies a faster response and
a shorter time to reach steady state. The speed of response is also measured by the rise time, which is often
defined as the time it takes for the step response to rise from 10% to 90% of its final value. Another useful
measure of system performance is the delay time, which is often defined as the time it takes for the step
response to reach 50% of its final value. These measures are also illustrated in Figure 4.1. Another measure
is the settling time, defined as the time it takes for the step response to settle to within a small fraction
(typically, 5%) of its final value.
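For the first-order step response s(t) = 1 − e^(−t/τ), these measures have closed forms: the rise time (10% to 90%) is τ ln 9 ≈ 2.2τ, the delay time (50%) is τ ln 2, and the response is within 1% of its final value at τ ln 100 ≈ 4.6τ, which underlies the "about 5τ" rule. A quick check, inverting s(t) = level for each threshold:

```python
import math

tau = 1.0
# Time at which s(t) = 1 - exp(-t/tau) reaches a given level:
# t = -tau * ln(1 - level)
t10 = -tau * math.log(1 - 0.1)
t50 = -tau * math.log(1 - 0.5)    # delay time
t90 = -tau * math.log(1 - 0.9)
rise = t90 - t10                  # 10%-90% rise time = tau * ln 9
t99 = -tau * math.log(1 - 0.99)   # within 1% of the final value
```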
Second-order Bessel filter Second-order Butterworth filter Third-order Butterworth filter
Figure E4.12. Circuits for Example 4.12.
CHAPTER 4 PROBLEMS
DRILL AND REINFORCEMENT
4.1 (Operators) Which of the following describe linear operators?
(a) O{ } = 4{ }   (b) O{ } = 4{ } + 3   (c) y(t) = ∫^t x(t) dt
(d) O{ } = sin{ }   (e) y(t) = x(4t)   (f) y(t) = 4 dx(t)/dt + 3x(t)
4.2 (System Classification) In each of the following systems, x(t) is the input and y(t) is the output.
Classify each system in terms of linearity, time invariance, memory, and causality.
(a) y″(t) + 3y′(t) = 2x′(t) + x(t)   (b) y″(t) + 3y(t)y′(t) = 2x′(t) + x(t)
(c) y″(t) + 3tx(t)y′(t) = 2x′(t)   (d) y″(t) + 3y′(t) = 2x²(t) + x(t + 2)
(e) y(t) + 3 = x²(t) + 2x(t)   (f) y(t) = 2x(t + 1) + 5
(g) y″(t) + e^(−t)y′(t) = |x′(t − 1)|   (h) y(t) = x²(t) + 2x(t + 1)
(i) y″(t) + cos(2t)y′(t) = x′(t + 1)   (j) y(t) + ∫^t y(t) dt = 2x(t)
(k) y′(t) + ∫_0^t y(t) dt = |x′(t)| − x(t)   (l) y′(t) + t ∫_0^(t+1) y(t) dt = x′(t) + 2
4.3 (Classification) Classify the following systems in terms of their linearity, time invariance, causality,
and memory.
(a) The modulation system y(t) = x(t)cos(2πf0 t).
(b) The modulation system y(t) = [A + x(t)]cos(2πf0 t).
(c) The modulation system y(t) = cos[2πf0 t x(t)].
(d) The modulation system y(t) = cos[2πf0 t + x(t)].
(e) The sampling system y(t) = x(t) Σ (k = −∞ to ∞) δ(t − kts).
4.4 (Forced Response) Evaluate the forced response of the following systems.
(a) y′(t) + 2y(t) = u(t)   (b) y′(t) + 2y(t) = cos(t)u(t)
(c) y′(t) + 2y(t) = e^(−t)u(t)   (d) y′(t) + 2y(t) = e^(−2t)u(t)
(e) y′(t) + 2y(t) = tu(t)   (f) y′(t) + 2y(t) = te^(−2t)u(t)
4.5 (Forced Response) Evaluate the forced response of the following systems.
(a) y″(t) + 5y′(t) + 6y(t) = 3u(t)   (b) y″(t) + 5y′(t) + 6y(t) = 6e^(−t)u(t)
(c) y″(t) + 5y′(t) + 6y(t) = 5 cos(t)u(t)   (d) y″(t) + 5y′(t) + 6y(t) = 2e^(−2t)u(t)
(e) y″(t) + 5y′(t) + 6y(t) = 2tu(t)   (f) y″(t) + 5y′(t) + 6y(t) = (6e^(−t) + 2e^(−2t))u(t)
4.6 (Steady-State Response) The forced response of a system to sinusoidal inputs is termed the steady-
state response. Evaluate the steady-state response of the following systems.
(a) y′(t) + 5y(t) = 2u(t)   (b) y′(t) + y(t) = cos(t)u(t)
(c) y′(t) + 3y(t) = sin(t)u(t)   (d) y′(t) + 4y(t) = cos(t) + sin(2t)
(e) y″(t) + 5y′(t) + 6y(t) = cos(3t)u(t)   (f) y″(t) + 4y′(t) + 4y(t) = cos(2t)u(t)
4.7 (Zero-State Response) Evaluate the zero-state response of the following systems.
(a) y′(t) + 2y(t) = u(t)   (b) y′(t) + y(t) = cos(t)u(t)
(c) y′(t) + y(t) = r(t)   (d) y′(t) + 3y(t) = e^(−t)u(t)
(e) y′(t) + 2y(t) = e^(−2t)u(t)   (f) y′(t) + 2y(t) = e^(−2t)cos(t)u(t)
4.8 (Zero-State Response) Evaluate the zero-state response of the following systems.
(a) y″(t) + 5y′(t) + 6y(t) = 6u(t)   (b) y″(t) + 4y′(t) + 3y(t) = 2e^(−2t)u(t)
(c) y″(t) + 2y′(t) + 2y(t) = 2e^(−t)u(t)   (d) y″(t) + 4y′(t) + 5y(t) = cos(t)u(t)
(e) y″(t) + 4y′(t) + 3y(t) = r(t)   (f) y″(t) + 5y′(t) + 4y(t) = (2e^(−t) + 2e^(−3t))u(t)
4.9 (System Response) Evaluate the natural, forced, zero-state, zero-input, and total response of the
following systems.
(a) y′(t) + 5y(t) = u(t)   y(0) = 2
(b) y′(t) + 3y(t) = 2e^(−2t)u(t)   y(0) = 1
(c) y′(t) + 4y(t) = 8tu(t)   y(0) = 2
(d) y′(t) + 2y(t) = 2 cos(2t)u(t)   y(0) = 4
(e) y′(t) + 2y(t) = 2e^(−2t)u(t)   y(0) = 6
(f) y′(t) + 2y(t) = 2e^(−2t)cos(t)u(t)   y(0) = 8
4.10 (System Response) Evaluate the response y(t) of the following systems.
(a) y′(t) + y(t) = 2x′(t) + x(t)   x(t) = 4e^(−2t)u(t)   y(0) = 2
(b) y′(t) + 3y(t) = 3x′(t)   x(t) = 4e^(−2t)u(t)   y(0) = 0
(c) y′(t) + 4y(t) = x′(t) − x(t)   x(t) = 4u(t)   y(0) = 6
(d) y′(t) + 2y(t) = x(t) + 2x(t − 1)   x(t) = 4u(t)   y(0) = 0
(e) y′(t) + 2y(t) = x′(t) − 2x(t − 1)   x(t) = 2e^(−t)u(t)   y(0) = 0
(f) y′(t) + 2y(t) = x′(t) − 2x′(t − 1) + x(t − 2)   x(t) = 2e^(−t)u(t)   y(0) = 4
4.11 (System Response) For each of the following, evaluate the natural, forced, zero-state, zero-input,
and total response. Assume y′(0) = 1 and all other initial conditions zero.
(a) y″(t) + 5y′(t) + 6y(t) = 6u(t)   y(0) = 0   y′(0) = 1
(b) y″(t) + 5y′(t) + 6y(t) = 2e^(−t)u(t)   y(0) = 0   y′(0) = 1
(c) y″(t) + 4y′(t) + 3y(t) = 36tu(t)   y(0) = 0   y′(0) = 1
(d) y″(t) + 4y′(t) + 4y(t) = 2e^(−2t)u(t)   y(0) = 0   y′(0) = 1
(e) y″(t) + 4y′(t) + 4y(t) = 8 cos(2t)u(t)   y(0) = 0   y′(0) = 1
(f) [(s + 1)²(s + 2)]y(t) = e^(−2t)u(t)   y(0) = 0   y′(0) = 1   y″(0) = 0
4.12 (System Response) Evaluate the response y(t) of the following systems.
(a) y″(t) + 3y′(t) + 2y(t) = 2x′(t) + x(t)   x(t) = 4u(t)   y(0) = 2   y′(0) = 1
(b) y″(t) + 4y′(t) + 3y(t) = 3x′(t)   x(t) = 4e^(−2t)u(t)   y(0) = 0   y′(0) = 0
(c) y″(t) + 4y′(t) + 4y(t) = x′(t) − x(t)   x(t) = 4u(t)   y(0) = 6   y′(0) = 3
(d) y″(t) + 2y′(t) + 2y(t) = x(t) + 2x(t − 1)   x(t) = 4u(t)   y(0) = 0   y′(0) = 0
(e) y″(t) + 5y′(t) + 6y(t) = x′(t) − 2x(t − 1)   x(t) = 2e^(−t)u(t)   y(0) = 0   y′(0) = 0
(f) y″(t) + 5y′(t) + 4y(t) = x′(t) − 2x′(t − 1)   x(t) = 3e^(−t)u(t)   y(0) = 4   y′(0) = 4
4.13 (Impulse Response) Find the impulse response of the following systems.
(a) y′(t) + 3y(t) = x(t)    (b) y′(t) + 4y(t) = 2x(t)
(c) y′(t) + 2y(t) = x′(t) − 2x(t)    (d) y′(t) + y(t) = x′(t) − x(t)
Chapter 4 Problems 91
4.14 (Impulse Response) Find the impulse response of the following systems.
(a) y″(t) + 5y′(t) + 4y(t) = x(t)    (b) y″(t) + 4y′(t) + 4y(t) = 2x(t)
(c) y″(t) + 4y′(t) + 3y(t) = 2x′(t) − x(t)    (d) y″(t) + 2y′(t) + y(t) = x″(t) + x′(t)
4.15 (Stability) Which of the following systems are stable, and why?
(a) y′(t) + 4y(t) = x(t)    (b) y′(t) − 4y(t) = 3x(t)
(c) y′(t) + 4y(t) = x′(t) + 3x(t)    (d) y″(t) + 5y′(t) + 4y(t) = 6x(t)
(e) y′(t) + 4y(t) = 2x′(t) − x(t)    (f) y″(t) + 5y′(t) + 6y(t) = x′(t)
(g) y″(t) − 5y′(t) + 4y(t) = x(t)    (h) y″(t) + 2y′(t) − 3y(t) = 2x′(t)
4.16 (Impulse Response) The voltage input to a series RC circuit with a time constant τ is
x(t) = (1/τ)e^−t/τ u(t).
4.17 (System Response) The step response of an LTI system is given by s(t) = (1 − e^−t)u(t).
(a) Establish its impulse response h(t) and sketch both s(t) and h(t).
(b) Evaluate and sketch the response y(t) to the input x(t) = rect(t − 0.5).
4.19 (System Classification) Investigate the linearity, time invariance, memory, causality, and stability
of the following operations.
(a) y(t) = y(0) + ∫_0^t x(λ) dλ    (b) y(t) = ∫_0^t x(λ) dλ, t > 0
(c) y(t) = ∫_{t−1}^{t+1} x(λ) dλ    (d) y(t) = ∫_0^{t+α} x(λ + 2) dλ
(e) y(t) = ∫_{−t}^{t} x(λ − 2) dλ    (f) y(t) = ∫_{t−1}^{t+α} x(λ + 1) dλ
4.20 (Classification) Check the following for linearity, time invariance, memory, causality, and stability.
(a) The time-scaling system y(t) = x(2t)
(b) The folding system y(t) = x(−t)
(c) The time-scaling system y(t) = x(0.5t)
(d) The sign-inversion system y(t) = sgn[x(t)]
(e) The rectifying system y(t) = |x(t)|
4.21 (Classification) Consider the two systems (1) y(t) = x(αt) and (2) y(t) = x(t + α).
(a) For what values of α is each system linear?
(b) For what values of α is each system causal?
(c) For what values of α is each system time invariant?
(d) For what values of α is each system instantaneous?
92 Chapter 4 Analog Systems
4.22 (System Response) Consider the relaxed system y′(t) + y(t) = x(t).
(a) The input is x(t) = u(t). What is the response?
(b) Use the result of part (a) (and superposition) to find the response of this system to the input
x1 (t) shown in Figure P4.22.
(c) The input is x(t) = tu(t). What is the response?
(d) Use the result of part (c) (and superposition) to find the response of this system to the input
x2 (t) shown in Figure P4.22.
(e) How are the results of parts (a) and (b) related to the results of parts (c) and (d)?
[Sketches of the pulse signals x1(t) and x2(t); both have a peak magnitude of 4.]
Figure P4.22 Input signals for Problem 4.22
4.23 (System Response) Consider the relaxed system y′(t) + (1/τ)y(t) = x(t).
(a) What is the response of this system to the unit step x(t) = u(t)?
(b) What is the response of this system to the unit impulse x(t) = δ(t)?
(c) What is the response of this system to the rectangular pulse x(t) = u(t) − u(t − Δ)? Under what
conditions for τ and Δ will the response resemble (be a good approximation to) the input?
4.24 (System Response) It is known that the response of the system y′(t) + αy(t) = x(t), α ≠ 0, is given
by y(t) = (5 + 3e^−2t)u(t).
(a) Identify the natural and forced response.
(b) Identify the values of α and y(0).
(c) Identify the zero-input and zero-state response.
(d) Identify the input x(t).
4.25 (System Response) It is known that the response of the system y′(t) + αy(t) = x(t) is given by
y(t) = (5e^−t + 3e^−2t)u(t).
(a) Identify the zero-input and zero-state response.
(b) What is the zero-input response of the system y′(t) + αy(t) = x(t) if y(0) = 10?
(c) What is the response of the relaxed system y′(t) + αy(t) = x(t − 2)?
(d) What is the response of the relaxed system y′(t) + αy(t) = x′(t) + 2x(t)?
4.26 (System Response) It is known that the response of the system y′(t) + αy(t) = x(t) is given by
y(t) = (5 + 2t)e^−3t u(t).
(a) Identify the zero-input and zero-state response.
(b) What is the zero-input response of the system y′(t) + αy(t) = x(t) if y(0) = 10?
(c) What is the response of the relaxed system y′(t) + αy(t) = x(t − 2)?
(d) What is the response of the relaxed system y′(t) + αy(t) = 2x(t) + x′(t)?
(e) What is the complete response of the system y′(t) + αy(t) = x′(t) + 2x(t) if y(0) = 4?
4.27 (Impulse Response) Consider the relaxed system y′(t) + (1/τ)y(t) = x(t).
(a) What is the impulse response of this system?
(b) What is the response of this system to the rectangular pulse x(t) = (1/Δ)[u(t) − u(t − Δ)]? Show
that as Δ → 0, we obtain the system impulse response h(t).
(c) What is the response of this system to the exponential input x(t) = (1/Δ)e^−t/Δ u(t)? Show that as
Δ → 0, we obtain the system impulse response h(t).
4.28 (System Response) Find the response of the following systems for t ≥ 0.
(a) y′(t) + 2y(t) = 2e^−(t−1) u(t − 1)    y(0) = 5
(b) y′(t) + 2y(t) = e^−2t u(t) + 2e^−(t−1) u(t − 1)    y(0) = 5
(c) y′(t) + 2y(t) = te^−t + 2e^−(t−1) u(t − 1)    y(0) = 5
(d) y′(t) + 2y(t) = cos(2t) + 2e^−(t−1) u(t − 1)    y(0) = 5
4.29 (Impulse Response) Find the step response and impulse response of each circuit in Figure P4.29.
[Six first-order circuits, labeled Circuit 1 through Circuit 6, built from R, C, and L elements with input x(t) and output y(t).]
Figure P4.29 Circuits for Problem 4.29
4.30 (Impulse Response) The input-output relation for an LTI system is shown in Figure P4.30. What
is the impulse response h(t) of this system?
[Sketches of the input x(t) and the corresponding output y(t) of the system.]
Figure P4.30 Figure for Problem 4.30
4.31 (System Response) Consider two relaxed RC circuits with τ1 = 0.5 s and τ2 = 5 s. The input to
both is the rectangular pulse x(t) = 5 rect(t − 0.5) V. The output is the capacitor voltage.
(a) Find and sketch the outputs y1 (t) and y2 (t) of the two circuits.
(b) At what time t > 0 does the output of both systems attain the same value?
4.32 (Classification and Stability) Argue for or against the following statements, assuming relaxed
systems and constant element values. You may validate your arguments using simple circuits.
(a) A system with only resistors is always instantaneous and stable.
(b) A system with only inductors and/or capacitors is always stable.
(c) An RLC system with at least one resistor is always linear, causal, and stable.
4.33 (Differential Equations from Impulse Response) Though there is an easy way of obtaining a
system differential equation from its impulse response using transform methods, we can also obtain such
a representation by working in the time domain itself. Let a system be described by h(t) = e^−t u(t). If
we compute h′(t) = δ(t) − e^−t u(t), we find that h′(t) + h(t) = δ(t), and the system differential equation
follows as y′(t) + y(t) = x(t). Using this idea, determine the system differential equation corresponding
to each impulse response h(t).
4.34 (Inverse Systems) If the input to a system is x0(t) and its response is y0(t), the inverse system
is defined as a system that recovers x0(t) when its input is y0(t). Inverse systems are often used to
undo the effects of measurement systems such as transducers. The system equation of the inverse of
many LTI systems can be found simply by switching the input and output. For example, if the system
equation is y(t) = x(t − 3), the inverse system is x(t) = y(t − 3) (or y(t) = x(t + 3), by time invariance).
Find the inverse of each system and determine whether the inverse system is stable.
(a) y′(t) + 2y(t) = x(t)    (b) y″(t) + 2y′(t) + y(t) = x′(t) + 2x(t)
4.35 (Inverse Systems) A requirement for a system to have an inverse is that unique inputs must produce
unique outputs. Thus, the system y(t) = |x(t)| does not have an inverse because of the sign ambiguity.
Determine which of the following systems are invertible and, for those that are, find the inverse system.
(a) y(t) = x2 (t) (b) y(t) = ex(t) (c) y(t) = cos[x(t)]
(d) y(t) = e^jx(t)    (e) y(t) = x(t − 2)    (f) y′(t) + y(t) = x(t)
4.36 (System Response in Symbolic Form) The ADSP routine sysresp1 yields a symbolic result for
the system response (see Chapter 21 for examples of its usage). Consider the system y′(t) + 2y(t) =
2x(t). Use sysresp1 to obtain its
(a) Step response.
(b) Impulse response.
(c) Zero-state response to x(t) = 4e^−3t u(t).
(d) Complete response to x(t) = 4e^−3t u(t) with y(0) = 5.
4.37 (System Response) Use the ADSP routine sysresp1 to find the step response and impulse response
of the following filters and plot each result over 0 ≤ t ≤ 4. Compare the features of the step response
of each filter. Compare the features of the impulse response of each filter.
(a) y′(t) + y(t) = x(t) (a first-order lowpass filter)
(b) y″(t) + √2 y′(t) + y(t) = x(t) (a second-order Butterworth lowpass filter)
4.38 (Rise Time and Settling Time) For systems whose step response rises to a nonzero final value,
the rise time is commonly defined as the time it takes to rise from 10% to 90% of the final value. The
settling time is another measure for such signals. The 5% settling time, for example, is defined as the
time it takes for a signal to settle to within 5% of its final value. For each system, use the ADSP
routine sysresp1 to find the impulse response and step response and plot the results over 0 ≤ t ≤ 4.
For those systems whose step response rises toward a nonzero final value, use the ADSP routine trbw
to numerically estimate the rise time and the 5% settling time.
(a) y′(t) + y(t) = x(t) (a first-order lowpass filter)
(b) y″(t) + √2 y′(t) + y(t) = x(t) (a second-order Butterworth lowpass filter)
(c) y″(t) + y′(t) + y(t) = x′(t) (a bandpass filter)
(d) y‴(t) + 2y″(t) + 2y′(t) + y(t) = x(t) (a third-order Butterworth lowpass filter)
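The rise-time and settling-time definitions above are easy to script as a cross-check on the ADSP routine trbw. The sketch below is plain Python rather than the book's MATLAB toolbox, and the helper name rise_and_settling is ours; it applies the 10%–90% and 5% definitions to the sampled step response s(t) = 1 − e^−t of the first-order lowpass filter in part (a), whose exact values are ln 9 ≈ 2.197 s and ln 20 ≈ 2.996 s.

```python
import math

def rise_and_settling(s, t, frac=0.05):
    """Estimate the 10%-90% rise time and the settling time of a sampled
    step response s on the time grid t, relative to its final value."""
    final = s[-1]
    t10 = next(tt for tt, v in zip(t, s) if v >= 0.1 * final)
    t90 = next(tt for tt, v in zip(t, s) if v >= 0.9 * final)
    # settling time: the last instant the response lies OUTSIDE the +/-frac band
    outside = [tt for tt, v in zip(t, s) if abs(v - final) > frac * abs(final)]
    t_settle = outside[-1] if outside else t[0]
    return t90 - t10, t_settle

# Step response of the first-order lowpass filter y'(t) + y(t) = x(t)
t = [0.01 * k for k in range(1000)]
s = [1 - math.exp(-tt) for tt in t]
tr, ts = rise_and_settling(s, t)
```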
4.39 (System Response) Consider the system y″(t) + 4y′(t) + Cy(t) = x(t).
(a) Use sysresp1 to obtain its step response and impulse response for C = 3, 4, 5 and plot each
response over an appropriate time interval.
(b) How does the step response differ for each value of C? For what value of C would you expect
the smallest rise time? For what value of C would you expect the smallest 3% settling time?
(c) Confirm your predictions in the previous part by numerically estimating the rise time and settling
time, using the ADSP routine trbw.
4.40 (Steady-State Response in Symbolic Form) The ADSP routine ssresp yields a symbolic ex-
pression for the steady-state response to sinusoidal inputs (see Chapter 21 for examples of its usage).
Find the steady-state response to the input x(t) = 2cos(3t − π/3) for each of the following systems and
plot the results over 0 ≤ t ≤ 3, using a time step of 0.01 s.
(a) y′(t) + αy(t) = 2x(t), for α = 1, 2
(b) y″(t) + 4y′(t) + Cy(t) = x(t), for C = 3, 4, 5
4.41 (Numerical Simulation of Analog Systems) The ADSP routine ctsim returns estimates of
the system response using numerical integration such as Simpson's rule and Runge-Kutta methods.
Consider the differential equation y′(t) + αy(t) = x(t). In the following, use the second-order Runge-
Kutta method throughout.
(a) Let x(t) = rect(t − 0.5) and α = 1. Evaluate its response y(t) analytically. Use ctsim to evaluate
its response y1(t) over 0 ≤ t ≤ 3, using a time step of 0.1 s. Plot both results on the same graph
and compare.
(b) Let x(t) = sin(t), 0 ≤ t ≤ π. Use ctsim to evaluate its response y1(t) over 0 ≤ t ≤ 6, using
α = 1, 3, 10 and a time step of 0.02 s. Plot each response along with the input x(t) on the same
graph. Does the response begin to resemble the input as α is increased? Should it? Explain.
(c) Let x(t) = sin(t), 0 ≤ t ≤ π. Use ctsim to evaluate its response y1(t) over 0 ≤ t ≤ 6, using
α = 100 and a time step of 0.02 s. Plot the response along with the input x(t) on the same graph.
Now change the time step to 0.03 s. What is the response? To explain what is happening, find
and plot the response for time steps of 0.0201 s, 0.0202 s, and 0.0203 s. Describe what happens
to the computed response and why.
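The step-size behavior probed in part (c) can be previewed without ctsim: for y′(t) + αy(t) = 0, each second-order Runge-Kutta step multiplies y by 1 − hα + (hα)²/2, and this factor exceeds 1 in magnitude once h > 2/α (h = 0.02 s exactly for α = 100). A minimal Python sketch, with rk2_decay as a hypothetical stand-in of our own (not a book routine):

```python
def rk2_decay(alpha, h, n_steps, y0=1.0):
    """Integrate y'(t) = -alpha*y(t) with the midpoint (second-order
    Runge-Kutta) method and return the final value."""
    y = y0
    for _ in range(n_steps):
        k1 = -alpha * y
        k2 = -alpha * (y + 0.5 * h * k1)
        y = y + h * k2          # equals (1 - h*alpha + (h*alpha)**2/2) * y
    return y

# For alpha = 100 the stability boundary is h = 2/alpha = 0.02 s:
stable   = rk2_decay(100.0, 0.019, 300)   # decays toward zero
unstable = rk2_decay(100.0, 0.021, 300)   # grows without bound
```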
Chapter 5
DISCRETE-TIME SYSTEMS
5.2 System Classification 97
Together, the two describe the principle of superposition. An operator O is termed a linear operator
if it is both additive and homogeneous:
O{Ax1 [n] + Bx2 [n]} = AO{x1 [n]} + BO{x2 [n]} (for a linear operation) (5.4)
Otherwise, it is nonlinear. In many instances, it suffices to test only for homogeneity (or additivity) to
confirm the linearity of an operation (even though one does not imply the other). An important concept
that forms the basis for the study of linear systems is that the superposition of linear operators is also linear.
The order N describes the output term with the largest delay. It is customary to normalize the leading
coefficient to unity.
5.2.1 Linearity
A linear system is one for which superposition applies and implies that the system is relaxed (with zero initial
conditions) and the system equation involves only linear operators. However, we can use superposition even
for a system with nonzero initial conditions that is otherwise linear. We treat it as a multiple-input system by
including the initial conditions as additional inputs. The output then equals the superposition of the outputs
due to each input acting alone, and any changes in the input are related linearly to changes in the response.
As a result, its response can be written as the sum of a zero-input response (due to the initial conditions
alone) and the zero-state response (due to the input alone). This is the principle of decomposition,
which allows us to analyze linear systems in the presence of nonzero initial conditions. Both the zero-input
response and the zero-state response obey superposition.
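The decomposition principle is easy to verify numerically. A minimal Python sketch for the first-order system y[n] = 0.5y[n−1] + x[n] (the function name and the sample values are our choices, not the book's):

```python
def respond(x, y_init, a=0.5):
    """First-order recursion y[n] = a*y[n-1] + x[n] with initial condition y[-1] = y_init."""
    y, prev = [], y_init
    for xn in x:
        prev = a * prev + xn
        y.append(prev)
    return y

x = [1.0] * 20                           # a step input
total = respond(x, y_init=3.0)           # nonzero state AND nonzero input
zir   = respond([0.0] * 20, y_init=3.0)  # zero-input response (input removed)
zsr   = respond(x, y_init=0.0)           # zero-state response (relaxed system)
# decomposition: total response = ZIR + ZSR, sample by sample
```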
(c) y[n] = x[2n] is linear but time varying. The operation n → 2n reveals that
AO{x[n]} = A(x[2n]), and O{Ax[n]} = (Ax[2n]). The two are equal.
O{x[n − n0]} = x[2n − n0], but y[n − n0] = x[2(n − n0)]. The two are not equal.
(d) y[n] = x[n − 2] is linear and time invariant. The operation n → n − 2 reveals that
AO{x[n]} = A(x[n − 2]), and O{Ax[n]} = (Ax[n − 2]). The two are equal.
O{x[n − n0]} = x[n − n0 − 2], and y[n − n0] = x[n − n0 − 2]. The two are equal.
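These operational checks can also be run numerically on finite test signals: shift the input by n0 samples, and compare the result against the shifted output. A small Python sketch (function names are ours; zeros are shifted in at the left edge):

```python
def apply_shift(x, n0):
    """x[n - n0] for a causal finite-length signal (zeros shifted in)."""
    return [0] * n0 + x[:len(x) - n0]

def decimate(x):
    """The time-scaling system y[n] = x[2n]."""
    return [x[2 * n] for n in range(len(x) // 2)]

def delay_by_2(x):
    """The delay system y[n] = x[n - 2]."""
    return apply_shift(x, 2)

x = [1, 2, 3, 4, 5, 6, 7, 8]
# For a time-invariant system, O{x[n - n0]} must equal y[n - n0].
dec_shift_in  = decimate(apply_shift(x, 2))   # feed the shifted input
dec_shift_out = apply_shift(decimate(x), 2)   # shift the output instead
del_shift_in  = delay_by_2(apply_shift(x, 2))
del_shift_out = apply_shift(delay_by_2(x), 2)
```

For the decimator the two results differ (time varying); for the delay they agree (time invariant).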
1. Terms containing products of the input and/or output make a system equation nonlinear. A constant
term also makes a system equation nonlinear.
2. Coecients of the input or output that are explicit functions of n make a system equation time varying.
Time-scaled inputs or outputs such as y[2n] also make a system equation time varying.
(c) y[n] + 2y²[n] = 2x[n] − x[n − 1]. This is nonlinear but time invariant.
(d) y[n] − 2y[n − 1] = 2x[n]x[n]. This is nonlinear but time invariant.
This describes an Nth-order recursive filter whose present output depends on its own past values y[n − k]
and on the past and present values of the input. It is also called an infinite impulse response (IIR) filter
because its impulse response h[n] (the response to a unit impulse input) is usually of infinite duration. Now
consider the difference equation described by
Its present response depends only on the input terms and shows no dependence (recursion) on past values of
the response. It is called a nonrecursive filter, or a moving average filter, because its response is just
a weighted sum (moving average) of the input terms. It is also called a finite impulse response (FIR)
filter (because its impulse response is of finite duration).
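The FIR/IIR distinction shows up directly in the impulse response. A Python sketch with one filter of each kind (the function names and coefficient values are our choices):

```python
def fir_moving_average(x, B):
    """Nonrecursive (FIR) filter: y[n] = sum_k B[k]*x[n-k]."""
    return [sum(B[k] * x[n - k] for k in range(len(B)) if n - k >= 0)
            for n in range(len(x))]

def iir_first_order(x, a1):
    """Recursive (IIR) filter: y[n] = a1*y[n-1] + x[n], relaxed (y[-1] = 0)."""
    y, prev = [], 0.0
    for xn in x:
        prev = a1 * prev + xn
        y.append(prev)
    return y

imp = [1.0] + [0.0] * 9
h_fir = fir_moving_average(imp, [0.25] * 4)  # finite: 4 nonzero samples, then zero
h_iir = iir_first_order(imp, 0.5)            # infinite: h[n] = (0.5)^n never dies out
```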
Delay elements in cascade result in an output delayed by the sum of the individual delays. The operational
notation for a delay of k units is z^−k. A nonrecursive filter described by
can be realized using a feed-forward structure with N delay elements, and a recursive filter of the form
requires a feedback structure (because the output depends on its own past values). Each realization is
shown in Figure 5.2 and requires N delay elements. The general form described by
requires both feed-forward and feedback and 2N delay elements, as shown in Figure 5.3. However, since
LTI systems may be cascaded in any order (as we shall learn in the next chapter), we can switch the two
subsystems to obtain a canonical realization with only N delays, as also shown in Figure 5.3.
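The canonical form is compact to code because both halves share a single delay line. Below is a Python sketch of a direct form II realization (the function name is ours), assuming the adder convention of Figure 5.2 in which the Ak coefficients feed back additively, i.e. y[n] = A1 y[n−1] + ··· + B0 x[n] + B1 x[n−1] + ···:

```python
def direct_form_2(x, A, B):
    """Canonical (direct form II) realization with N shared delays.
    Feedback first:   w[n] = x[n] + A[0]*w[n-1] + ... + A[N-1]*w[n-N]
    Feed-forward:     y[n] = B[0]*w[n] + B[1]*w[n-1] + ... + B[N]*w[n-N]"""
    w = [0.0] * len(A)                       # the single shared delay line
    y = []
    for xn in x:
        wn = xn + sum(a * wk for a, wk in zip(A, w))
        y.append(B[0] * wn + sum(b * wk for b, wk in zip(B[1:], w)))
        w = [wn] + w[:-1]                    # shift the delay line
    return y

# Impulse response of y[n] = 0.5*y[n-1] + x[n] + x[n-1] via the canonical structure
h = direct_form_2([1, 0, 0, 0], A=[0.5], B=[1, 1])
```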
Figure 5.2 Realization of a nonrecursive (left) and recursive (right) digital filter
Figure 5.3 Direct (left) and canonical (right) realization of a digital filter
The state variable representation describes an nth-order system by n simultaneous first-order difference
equations called state equations in terms of n state variables. It is useful for complex or nonlinear systems
and those with multiple inputs and outputs. For LTI systems, state equations can be solved using matrix
methods. The state variable form is also readily amenable to numerical solution. We do not pursue this
method in this book.
(b) Consider a system described by y[n] = a1 y[n − 1] + b0 nu[n]. Let the initial condition be y[−1] = 0. We
then successively compute
y[0] = a1 y[−1] = 0
y[1] = a1 y[0] + b0 u[1] = b0
y[2] = a1 y[1] + 2b0 u[2] = a1 b0 + 2b0
y[3] = a1 y[2] + 3b0 u[3] = a1 [a1 b0 + 2b0] + 3b0 = a1²b0 + 2a1 b0 + 3b0
Using the closed form for the sum Σ k x^k from k = 1 to k = N (with x = 1/a1), we get
y[n] = b0 a1^(n−1) [1 − (n + 1)a1^−n + n a1^−(n+1)] / (1 − a1^−1)²
What a chore! More elegant ways of solving difference equations are described later in this chapter.
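The closed form y[n] = b0 a1^(n−1)[1 − (n+1)a1^−n + n a1^−(n+1)]/(1 − a1^−1)² can be checked against brute-force recursion. A Python sketch, with a1 = 0.8 and b0 = 2 as sample values of our own choosing:

```python
def by_recursion(a1, b0, N):
    """y[n] = a1*y[n-1] + b0*n*u[n], y[-1] = 0, computed step by step."""
    y, prev = [], 0.0
    for n in range(N):
        prev = a1 * prev + b0 * n
        y.append(prev)
    return y

def closed_form(a1, b0, n):
    """y[n] = b0*a1^(n-1) * [1 - (n+1)*a1^-n + n*a1^-(n+1)] / (1 - 1/a1)^2."""
    return b0 * a1**(n - 1) * (1 - (n + 1) * a1**(-n) + n * a1**(-(n + 1))) / (1 - 1/a1)**2
```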
(c) Consider the recursive system y[n] = y[n − 1] + x[n] − x[n − 3]. If x[n] equals δ[n] and y[−1] = 0, we
successively obtain
y[0] = y[−1] + δ[0] − δ[−3] = 1    y[3] = y[2] + δ[3] − δ[0] = 1 − 1 = 0
y[1] = y[0] + δ[1] − δ[−2] = 1    y[4] = y[3] + δ[4] − δ[1] = 0
y[2] = y[1] + δ[2] − δ[−1] = 1    y[5] = y[4] + δ[5] − δ[2] = 0
The impulse response of this recursive filter is zero after the first three values and has a finite length.
It is actually a nonrecursive (FIR) filter in disguise!
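The finite-length impulse response of this recursive system is quick to confirm by recursion (a plain-Python sketch; the function name is ours):

```python
def running_filter(x):
    """y[n] = y[n-1] + x[n] - x[n-3], relaxed (y[-1] = 0, x[n] = 0 for n < 0)."""
    y, prev = [], 0
    for n in range(len(x)):
        x3 = x[n - 3] if n >= 3 else 0   # guard against Python's negative indexing
        prev = prev + x[n] - x3
        y.append(prev)
    return y

h = running_filter([1] + [0] * 9)   # impulse response: three 1s, then zero forever
```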
Table 5.1 Form of the Natural Response for Discrete LTI Systems
! "p+1 rn cos(n)(A0 + A1 n + A2 n2 + + Ap np )
4 Complex, repeated: rej
+ rn sin(n)(B0 + B1 n + B2 n2 + + Bp np )
Table 5.2 Form of the Forced Response for Discrete LTI Systems
Note: If the right-hand side (RHS) is α^n, where α is also a root of the characteristic
equation repeated p times, the forced response form must be multiplied by n^p.
Entry    Forcing Function (RHS)    Form of Forced Response
5    n    C0 + C1 n
6    n^p    C0 + C1 n + C2 n² + ··· + Cp n^p
[Realization: a first-order recursive filter with a feedback gain of 0.6 through a one-sample delay z^−1.]
The difference equation describing this system is y[n] − 0.6y[n − 1] = x[n] = (0.4)^n, n ≥ 0.
Its characteristic equation is 1 − 0.6z^−1 = 0 or z − 0.6 = 0.
Its root z = 0.6 gives the form of the natural response yN[n] = K(0.6)^n.
Since x[n] = (0.4)^n, the forced response is yF[n] = C(0.4)^n.
We find C by substituting for yF[n] into the difference equation
yF[n] − 0.6yF[n − 1] = (0.4)^n = C(0.4)^n − 0.6C(0.4)^(n−1).
Cancel out (0.4)^n from both sides and solve for C to get
C − 1.5C = 1 or C = −2.
Thus, yF[n] = −2(0.4)^n. The total response is y[n] = yN[n] + yF[n] = −2(0.4)^n + K(0.6)^n.
We use the initial condition y[−1] = 10 on the total response to find K:
y[−1] = 10 = −5 + K/0.6, and K = 9.
Thus, y[n] = −2(0.4)^n + 9(0.6)^n, n ≥ 0.
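As a sanity check, the closed form y[n] = −2(0.4)^n + 9(0.6)^n can be compared against direct recursion of y[n] = 0.6y[n−1] + (0.4)^n with y[−1] = 10 (plain Python; the helper name is ours):

```python
def recurse(N):
    """y[n] = 0.6*y[n-1] + (0.4)**n, with y[-1] = 10."""
    y, prev = [], 10.0
    for n in range(N):
        prev = 0.6 * prev + 0.4**n
        y.append(prev)
    return y

closed = [-2 * 0.4**n + 9 * 0.6**n for n in range(15)]
# e.g. y[0] = 0.6*10 + 1 = 7, and -2 + 9 = 7: the forms agree sample by sample
```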
(b) Consider the difference equation y[n] − 0.5y[n − 1] = 5cos(0.5nπ), n ≥ 0, with y[−1] = 4.
Its characteristic equation is 1 − 0.5z^−1 = 0 or z − 0.5 = 0.
Its root z = 0.5 gives the form of the natural response yN[n] = K(0.5)^n.
Since x[n] = 5cos(0.5nπ), the forced response is yF[n] = A cos(0.5nπ) + B sin(0.5nπ).
We find yF[n − 1] = A cos[0.5(n − 1)π] + B sin[0.5(n − 1)π] = A sin(0.5nπ) − B cos(0.5nπ). Then
yF[n] − 0.5yF[n − 1] = (A + 0.5B)cos(0.5nπ) − (0.5A − B)sin(0.5nπ) = 5cos(0.5nπ)
Equate the coefficients of the cosine and sine terms to get
(A + 0.5B) = 5, (0.5A − B) = 0 or A = 4, B = 2, and yF[n] = 4cos(0.5nπ) + 2sin(0.5nπ).
The total response is y[n] = K(0.5)^n + 4cos(0.5nπ) + 2sin(0.5nπ). With y[−1] = 4, we find
y[−1] = 4 = 2K − 2 or K = 3, and thus y[n] = 3(0.5)^n + 4cos(0.5nπ) + 2sin(0.5nπ), n ≥ 0.
The steady-state response is 4cos(0.5nπ) + 2sin(0.5nπ), and the transient response is 3(0.5)^n.
5.4 Digital Filters Described by Difference Equations 107
(c) Consider the difference equation y[n] − 0.5y[n − 1] = 3(0.5)^n, n ≥ 0, with y[−1] = 2.
Its characteristic equation is 1 − 0.5z^−1 = 0 or z − 0.5 = 0.
Its root, z = 0.5, gives the form of the natural response yN[n] = K(0.5)^n.
Since x[n] = (0.5)^n has the same form as the natural response, the forced response is yF[n] = Cn(0.5)^n.
We find C by substituting for yF[n] into the difference equation:
yF[n] − 0.5yF[n − 1] = 3(0.5)^n = Cn(0.5)^n − 0.5C(n − 1)(0.5)^(n−1).
Cancel out (0.5)^n from both sides and solve for C to get Cn − C(n − 1) = 3, or C = 3.
Thus, yF[n] = 3n(0.5)^n. The total response is y[n] = yN[n] + yF[n] = K(0.5)^n + 3n(0.5)^n.
We use the initial condition y[−1] = 2 on the total response to find K:
y[−1] = 2 = 2K − 6, and K = 4.
Thus, y[n] = 4(0.5)^n + 3n(0.5)^n = (4 + 3n)(0.5)^n, n ≥ 0.
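The resonant closed form y[n] = (4 + 3n)(0.5)^n checks out numerically against the recursion y[n] = 0.5y[n−1] + 3(0.5)^n with y[−1] = 2 (plain Python; the helper name is ours):

```python
def recurse(N):
    """y[n] = 0.5*y[n-1] + 3*(0.5)**n, with y[-1] = 2."""
    y, prev = [], 2.0
    for n in range(N):
        prev = 0.5 * prev + 3 * 0.5**n
        y.append(prev)
    return y

closed = [(4 + 3 * n) * 0.5**n for n in range(15)]
# e.g. y[0] = 0.5*2 + 3 = 4 = (4 + 0)*(0.5)^0
```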
The system difference equation can also be read off by comparison with the generic realization of Figure 5.2.
EXAMPLE 5.7 (Zero-Input and Zero-State Response for the Single-Input Case)
(a) Consider the difference equation y[n] − 0.6y[n − 1] = (0.4)^n, n ≥ 0, with y[−1] = 10.
The forced response and the form of the natural response were found in Example 5.6(a) as:
yF[n] = −2(0.4)^n    yN[n] = K(0.6)^n
1. Its ZSR is found from the form of the total response, yzs[n] = −2(0.4)^n + K(0.6)^n, with zero
initial conditions:
yzs[−1] = 0 = −5 + K/0.6    K = 3    yzs[n] = −2(0.4)^n + 3(0.6)^n, n ≥ 0
2. Its ZIR is found from the natural response, yzi[n] = K(0.6)^n, with the given initial conditions:
yzi[−1] = 10 = K/0.6    K = 6    yzi[n] = 6(0.6)^n, n ≥ 0
3. The total response is y[n] = yzi[n] + yzs[n] = −2(0.4)^n + 9(0.6)^n, n ≥ 0.
This matches the results of Example 5.6(a).
(b) Let y[n] − (1/6)y[n − 1] − (1/6)y[n − 2] = 4, n ≥ 0, with y[−1] = 0 and y[−2] = 12.
1. The ZIR has the form of the natural response yzi[n] = K1(1/2)^n + K2(−1/3)^n (see Example 5.6(d)).
To find the constants, we use the given initial conditions y[−1] = 0 and y[−2] = 12:
0 = K1(1/2)^−1 + K2(−1/3)^−1 = 2K1 − 3K2    12 = K1(1/2)^−2 + K2(−1/3)^−2 = 4K1 + 9K2
Thus, K1 = 1.2, K2 = 0.8, and
yzi[n] = 1.2(1/2)^n + 0.8(−1/3)^n, n ≥ 0
2. The ZSR has the same form as the total response. Since the forced response (found in Exam-
ple 5.6(d)) is yF[n] = 6, we have
yzs[n] = K1(1/2)^n + K2(−1/3)^n + 6
To find the constants, we assume zero initial conditions, y[−1] = 0 and y[−2] = 0, to get
y[−1] = 0 = 2K1 − 3K2 + 6    y[−2] = 0 = 4K1 + 9K2 + 6
(c) (Linearity of the ZSR and ZIR) An IIR filter is described by y[n] − y[n − 1] − 2y[n − 2] = x[n],
with x[n] = 6u[n] and initial conditions y[−1] = 1, y[−2] = 4.
1. Find the zero-input response, zero-state response, and total response.
2. How does the total response change if y[−1] = 1, y[−2] = 4 as given, but x[n] = 12u[n]?
3. How does the total response change if x[n] = 6u[n] as given, but y[−1] = 2, y[−2] = 8?
For the ZSR, we use the form of the total response and zero initial conditions:
yzs[n] = yF[n] + yN[n] = −3 + A(−1)^n + B(2)^n,    y[−1] = y[−2] = 0
Impulse response h[n]: The output of a relaxed LTI system if the input is a unit impulse δ[n]
Step response s[n]: The output of a relaxed LTI system if the input is a unit step u[n]
the impulse response h[n] (with x[n] = δ[n]) is an (M + 1)-term sequence of the input terms, which may be
written as
h[n] = B0 δ[n] + B1 δ[n − 1] + ··· + BM δ[n − M]    or    h[n] = {B0, B1, . . . , BM}    (5.20)
Since the input δ[n] is zero for n > 0, we must apparently assume a forced response that is zero and thus
solve for the natural response using initial conditions (leading to a trivial result). The trick is to use at
least one nonzero initial condition, which we must find by recursion. By recursion, we find h[0] = 1. Since
δ[n] = 0, n > 0, the impulse response is found as the natural response of the homogeneous equation
subject to the nonzero initial condition h[0] = 1. All the other initial conditions are assumed to be zero
(h[−1] = 0 for a second-order system, h[−1] = h[−2] = 0 for a third-order system, and so on).
The impulse response of the given system is h[n] = 4h0[n] − h0[n − 1]. We find
h[n] = [1.2(1/2)^n + 0.8(−1/3)^n]u[n] − [3.6(1/2)^(n−1) + 2.4(−1/3)^(n−1)]u[n − 1]
Comment: Remember that the impulse response of this recursive system is of finite length.
h[1] = 0.4h[0] = 0.4    h[2] = 0.4h[1] = (0.4)²    h[3] = 0.4h[2] = (0.4)³    etc.
The general form is easily discerned as h[n] = (0.4)^n and is valid for n ≥ 0.
Comment: The causal impulse response of y[n] − αy[n − 1] = x[n] is h[n] = α^n u[n].
(b) Find the anti-causal impulse response of the first-order system y[n] − 0.4y[n − 1] = x[n].
For the anti-causal impulse response, we assume h[n] = 0, n ≥ 0, and solve for h[n], n < 0, by recursion
from h[n − 1] = 2.5(h[n] − δ[n]). With h[−1] = 2.5(h[0] − δ[0]) = −2.5, and δ[n] = 0, n ≠ 0, we find
h[−2] = 2.5h[−1] = −(2.5)²    h[−3] = 2.5h[−2] = −(2.5)³    h[−4] = 2.5h[−3] = −(2.5)⁴    etc.
The general form is easily discerned as h[n] = −(2.5)^−n = −(0.4)^n and is valid for n ≤ −1.
Comment: The anti-causal impulse response of y[n] − αy[n − 1] = x[n] is h[n] = −α^n u[−n − 1].
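Both recursions are mechanical enough to script. A Python sketch (function names ours) that computes the causal response by forward recursion and the anti-causal response by backward recursion from h[n−1] = (h[n] − δ[n])/α:

```python
def causal_h(alpha, N):
    """Causal impulse response of y[n] - alpha*y[n-1] = x[n]: h[n] = alpha^n, n >= 0."""
    h, prev = [], 0.0
    for n in range(N):
        prev = alpha * prev + (1.0 if n == 0 else 0.0)
        h.append(prev)
    return h

def anticausal_h(alpha, N):
    """Anti-causal impulse response: assume h[n] = 0 for n >= 0 and recurse
    backward with h[n-1] = (h[n] - delta[n]) / alpha.
    Returns [h[-1], h[-2], ..., h[-N]]."""
    h, nxt = [], 0.0
    for n in range(0, -N, -1):
        nxt = (nxt - (1.0 if n == 0 else 0.0)) / alpha
        h.append(nxt)          # h[-1] = -1/alpha^-1... i.e. -(alpha)^m at m = n-1
    return h
```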
5.6 Stability of Discrete-Time LTI Systems 115
(b) The system y[n] − y[n − 1] = x[n] is unstable. The root of its characteristic equation z − 1 = 0 is z = 1,
which gives the natural response yN = Ku[n]; this is actually bounded. However, for an input x[n] = u[n],
the forced response will have the form Cnu[n], which becomes unbounded.
(c) The system y[n] − 2y[n − 1] + y[n − 2] = x[n] is unstable. The roots of its characteristic equation
z² − 2z + 1 = 0 are equal and produce the unbounded natural response yN[n] = Au[n] + Bnu[n].
(d) The system y[n] − (1/2)y[n − 1] = nx[n] is linear, time varying, and unstable. The (bounded) step input
x[n] = u[n] results in a response that includes the ramp nu[n], which becomes unbounded.
(e) The system y[n] = x[n] − 2x[n − 1] is stable because it describes an FIR filter.
(b) Let h[n] = 3(0.6)^n u[n]. This suggests a difference equation whose left-hand side is y[n] − 0.6y[n − 1].
We then set up h[n] − 0.6h[n − 1] = 3(0.6)^n u[n] − 1.8(0.6)^(n−1) u[n − 1]. This simplifies to
h[n] − 0.6h[n − 1] = 3(0.6)^n u[n] − 3(0.6)^n u[n − 1] = 3(0.6)^n (u[n] − u[n − 1]) = 3(0.6)^n δ[n] = 3δ[n]
The difference equation corresponding to h[n] − 0.6h[n − 1] = 3δ[n] is y[n] − 0.6y[n − 1] = 3x[n].
(c) Let h[n] = 2(0.5)^n u[n] + (−0.5)^n u[n]. This suggests a characteristic equation (z − 0.5)(z + 0.5) = 0.
The left-hand side of the difference equation is thus y[n] − 0.25y[n − 2]. We now compute
h[n] − 0.25h[n − 2] = 2(0.5)^n u[n] + (−0.5)^n u[n] − 0.25(2(0.5)^(n−2) u[n − 2] + (−0.5)^(n−2) u[n − 2])
This simplifies to
h[n] − 0.25h[n − 2] = [2(0.5)^n + (−0.5)^n](u[n] − u[n − 2]) = [2(0.5)^n + (−0.5)^n](δ[n] + δ[n − 1])
This simplifies further to h[n] − 0.25h[n − 2] = 3δ[n] + 0.5δ[n − 1].
Finally, the difference equation is y[n] − 0.25y[n − 2] = 3x[n] + 0.5x[n − 1].
Not all systems have an inverse. For a system to have an inverse, or be invertible, distinct inputs must
lead to distinct outputs. If a system produces an identical output for two different inputs, it does not have
an inverse. For invertible LTI systems described by difference equations, finding the inverse system is as
easy as switching the input and output variables.
The original system is described by y[n] = x[n] − 0.5x[n − 1]. By switching the input and output, the
inverse system is described by y[n] − 0.5y[n − 1] = x[n]. The realization of each system is shown in
Figure E5.16A(2). Are they related? Yes. If you flip the realization of the echo system end-on-end
and change the sign of the feedback signal, you get the inverse realization.
g[n] = (4δ[n] + 4δ[n − 1]) − (2δ[n − 1] + 2δ[n − 2]) = 4δ[n] + 2δ[n − 1] − 2δ[n − 2]
y0[0] = 0.5y0[−1] + 4δ[0] = 4    y0[1] = 0.5y0[0] + 2δ[0] = 4    y0[2] = 0.5y0[1] − 2δ[0] = 0
All subsequent values of y0[n] are zero since the input terms are zero for n > 2. The output is thus
y0[n] = {4, 4}, the same as the input to the overall system.
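The round trip through the echo system and its inverse is easy to replay in code. A Python sketch of this example (function names ours):

```python
def echo(x, alpha=0.5):
    """The FIR echo system of this example: y[n] = x[n] - alpha*x[n-1]."""
    return [x[n] - alpha * (x[n - 1] if n > 0 else 0) for n in range(len(x))]

def inverse_echo(g, alpha=0.5):
    """IIR inverse obtained by switching input and output:
    y[n] - alpha*y[n-1] = g[n], i.e. y[n] = alpha*y[n-1] + g[n]."""
    y, prev = [], 0.0
    for gn in g:
        prev = alpha * prev + gn
        y.append(prev)
    return y

x = [4, 4, 0, 0, 0]
g = echo(x)            # the distorted signal 4, 2, -2, 0, 0
y0 = inverse_echo(g)   # recovers the original 4, 4, 0, 0, 0
```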
(c) The linear (but time-varying) decimating system y[n] = x[2n] does not have an inverse. Two inputs
that differ in the samples discarded (for example, the signals {1, 2, 4, 5} and {1, 3, 4, 8}) yield the same
output {1, 4}. If we try to recover the original signal by interpolation, we cannot uniquely identify
the original signal.
(d) The linear (but time-varying) interpolating system y[n] = x[n/2] does have an inverse. Its inverse is a
decimating system that discards the very samples inserted during interpolation and thus recovers the
original signal.
(e) The LTI system y[n] = x[n] + 2x[n − 1] also has an inverse. Its inverse is found by switching the input
and output as y[n] + 2y[n − 1] = x[n]. This example also shows that the inverse of an FIR filter results
in an IIR filter.
5.8 Application-Oriented Examples 119
This describes an FIR filter whose output y[n] = x[n] + αx[n − D] equals the input x[n] plus a delayed (by
D samples) and attenuated (by α) replica of x[n] (the echo term). Its realization is sketched in Figure 5.5.
The D-sample delay is implemented by a cascade of D delay elements and represented by the block marked
z^−D. This filter is also called a comb filter (for reasons to be explained in later chapters).
Reverberations are due to multiple echoes (from the walls and other structures in a concert hall, for
example). For simplicity, if we assume that the signal suffers the same delay D and the same attenuation
α in each round-trip to the source, we may describe the action of reverb by
y[n] = x[n] + αx[n − D] + α²x[n − 2D] + α³x[n − 3D] + ···
αy[n − D] = αx[n − D] + α²x[n − 2D] + α³x[n − 3D] + ···
Subtracting the second equation from the first, we obtain a compact form for a reverb filter:
y[n] − αy[n − D] = x[n]
This is an IIR filter whose realization is also sketched in Figure 5.5. Its form is reminiscent of the inverse of
the echo system y[n] + αy[n − D] = x[n], but with α replaced by −α.
In concept, it should be easy to tailor the simple reverb filter to simulate realistic effects by including
more terms with different delays and attenuations. In practice, however, this is no easy task, and the filter
designs used by commercial vendors in their applications are often proprietary.
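The recursion y[n] = αy[n−D] + x[n] produces the expected train of decaying echoes, which a short Python sketch makes visible (function name and parameter values are our choices):

```python
def reverb(x, D, alpha):
    """IIR reverb filter y[n] = alpha*y[n-D] + x[n], relaxed: each round trip
    adds another echo alpha^k * x[n - k*D]."""
    y = []
    for n in range(len(x)):
        fb = alpha * y[n - D] if n >= D else 0.0
        y.append(x[n] + fb)
    return y

# Impulse response: echoes of strength 1, alpha, alpha^2, ... every D samples
h = reverb([1.0] + [0.0] * 11, D=4, alpha=0.5)
```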
A periodic signal may be generated by the recursive filter y[n] = y[n − N] + x1[n],
where x1[n] corresponds to one period (N samples) of the signal x[n]. This form actually describes a reverb
system with no attenuation whose delay equals the period N. Hardware implementation often uses a circular
buffer or wave-table (in which one period of the signal is stored), and cycling over it generates the periodic
signal. The same wave-table can also be used to change the frequency (or period) of the signal (to double
the frequency, for example, we would cycle over alternate samples) or for storing a new signal.
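The wave-table idea in the paragraph above reduces to indexing a stored period modulo its length; cycling over alternate samples doubles the frequency. A Python sketch (function names ours) using one 8-sample period of a sinusoid:

```python
import math

def wavetable_generate(x1, num_samples):
    """Cycle over a circular buffer holding one period x1 (the recursion
    y[n] = y[n-N] + x1[n] with N = len(x1))."""
    N = len(x1)
    return [x1[n % N] for n in range(num_samples)]

def wavetable_double_frequency(x1, num_samples):
    """Cycle over alternate samples of the same table to double the frequency."""
    N = len(x1)
    return [x1[(2 * n) % N] for n in range(num_samples)]

period = [round(math.sin(2 * math.pi * n / 8), 3) for n in range(8)]
y = wavetable_generate(period, 16)              # two full periods of the sinusoid
y2 = wavetable_double_frequency(period, 16)     # same table, twice the frequency
```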
CHAPTER 5 PROBLEMS
DRILL AND REINFORCEMENT
5.2 (System Classification) In each of the systems below, x[n] is the input and y[n] is the output.
Check each system for linearity, shift invariance, memory, and causality.
(a) y[n] − y[n − 1] = x[n]    (b) y[n] + y[n + 1] = nx[n]
(c) y[n] − y[n + 1] = x[n + 2]    (d) y[n + 2] − y[n + 1] = x[n]
(e) y[n + 1] − x[n]y[n] = nx[n + 2]    (f) y[n] + y[n − 3] = x²[n] + x[n + 6]
(g) y[n] − 2^n y[n] = x[n]    (h) y[n] = x[n] + x[n − 1] + x[n − 2]
5.3 (Response by Recursion) Find the response of the following systems by recursion to n = 4 and
try to discern the general form for y[n].
(a) y[n] − ay[n − 1] = δ[n]    y[−1] = 0
(b) y[n] − ay[n − 1] = u[n]    y[−1] = 1
(c) y[n] − ay[n − 1] = nu[n]    y[−1] = 0
(d) y[n] + 4y[n − 1] + 3y[n − 2] = u[n − 2]    y[−1] = 0    y[−2] = 1
5.4 (Forced Response) Find the forced response of the following systems.
(a) y[n] − 0.4y[n − 1] = u[n]    (b) y[n] − 0.4y[n − 1] = (0.5)^n
(c) y[n] + 0.4y[n − 1] = (0.5)^n    (d) y[n] − 0.5y[n − 1] = cos(nπ/2)
5.5 (Forced Response) Find the forced response of the following systems.
(a) y[n] − 1.1y[n − 1] + 0.3y[n − 2] = 2u[n]    (b) y[n] − 0.9y[n − 1] + 0.2y[n − 2] = (0.5)^n
(c) y[n] + 0.7y[n − 1] + 0.1y[n − 2] = (0.5)^n    (d) y[n] − 0.25y[n − 2] = cos(nπ/2)
5.6 (Zero-State Response) Find the zero-state response of the following systems.
(a) y[n] − 0.5y[n − 1] = 2u[n]    (b) y[n] − 0.4y[n − 1] = (0.5)^n
(c) y[n] − 0.4y[n − 1] = (0.4)^n    (d) y[n] − 0.5y[n − 1] = cos(nπ/2)
5.7 (Zero-State Response) Find the zero-state response of the following systems.
(a) y[n] - 1.1y[n-1] + 0.3y[n-2] = 2u[n]     (b) y[n] - 0.9y[n-1] + 0.2y[n-2] = (0.5)^n
(c) y[n] + 0.7y[n-1] + 0.1y[n-2] = (0.5)^n   (d) y[n] - 0.25y[n-2] = cos(nπ/2)
5.8 (System Response) Let y[n] - 0.5y[n-1] = x[n], with y[-1] = 1. Find the response of this system
for the following inputs.
(a) x[n] = 2u[n]          (b) x[n] = (0.25)^n u[n]     (c) x[n] = n(0.25)^n u[n]
(d) x[n] = (0.5)^n u[n]   (e) x[n] = n(0.5)^n          (f) x[n] = (0.5)^n cos(0.5nπ)
122 Chapter 5 Discrete-Time Systems
5.10 (System Response) Sketch a realization for each system, assuming zero initial conditions. Then
evaluate the complete response from the information given. Check your answer by computing the first
few values by recursion.
(a) y[n] - 0.4y[n-1] = x[n]               x[n] = (0.5)^n u[n]    y[-1] = 0
(b) y[n] - 0.4y[n-1] = 2x[n] + x[n-1]     x[n] = (0.5)^n u[n]    y[-1] = 0
(c) y[n] - 0.4y[n-1] = 2x[n] + x[n-1]     x[n] = (0.5)^n u[n]    y[-1] = 5
(d) y[n] + 0.5y[n-1] = x[n] - x[n-1]      x[n] = (0.5)^n u[n]    y[-1] = 2
(e) y[n] + 0.5y[n-1] = x[n] - x[n-1]      x[n] = (0.5)^n u[n]    y[-1] = 0
5.11 (System Response) For each system, evaluate the natural, forced, and total response. Assume that
y[-1] = 0, y[-2] = 1. Check your answer for the total response by computing its first few values by
recursion.
(a) y[n] + 4y[n-1] + 3y[n-2] = u[n]              (b) y[n] + 4y[n-1] + 4y[n-2] = 2^n u[n]
(c) y[n] + 4y[n-1] + 8y[n-2] = cos(nπ)u[n]       (d) {(1 + 2z^{-1})^2}y[n] = n(2)^n u[n]
(e) {1 + (3/4)z^{-1} + (1/8)z^{-2}}y[n] = (1/3)^n u[n]    (f) {1 + 0.5z^{-1} + 0.25z^{-2}}y[n] = cos(0.5nπ)u[n]
(g) {z^2 + 4z + 4}y[n] = 2^n u[n]                (h) {1 - 0.5z^{-1}}y[n] = (0.5)^n cos(0.5nπ)u[n]
5.12 (System Response) For each system, set up a difference equation and compute the zero-state,
zero-input, and total response, assuming x[n] = u[n] and y[-1] = y[-2] = 1.
(a) {1 - z^{-1} - 2z^{-2}}y[n] = x[n]            (b) {z^2 - z - 2}y[n] = x[n]
(c) {1 - (3/4)z^{-1} + (1/8)z^{-2}}y[n] = {z^{-1}}x[n]    (d) {1 - (3/4)z^{-1} + (1/8)z^{-2}}y[n] = {1 + z^{-1}}x[n]
(e) {1 - 0.25z^{-2}}y[n] = x[n]                  (f) {z^2 - 0.25}y[n] = {2z^2 + 1}x[n]
5.13 (Impulse Response by Recursion) Find the impulse response h[n] by recursion up to n = 4 for
each of the following systems.
(a) y[n] - y[n-1] = 2x[n]                        (b) y[n] - 3y[n-1] + 6y[n-2] = x[n-1]
(c) y[n] - 2y[n-3] = x[n-1]                      (d) y[n] - y[n-1] + 6y[n-2] = nx[n-1] + 2x[n-3]
5.14 (Analytical Form for Impulse Response) Classify each filter as recursive or FIR (nonrecursive),
and causal or noncausal, and find an expression for its impulse response h[n].
(a) y[n] = x[n] + x[n-1] + x[n-2]                (b) y[n] = x[n+1] + x[n] + x[n-1]
(c) y[n] + 2y[n-1] = x[n]                        (d) y[n] + 2y[n-1] = x[n-1]
(e) y[n] + 2y[n-1] = 2x[n] + 6x[n-1]             (f) y[n] + 2y[n-1] = x[n+1] + 4x[n] + 6x[n-1]
(g) {1 + 4z^{-1} + 3z^{-2}}y[n] = {z^{-2}}x[n]   (h) {z^2 + 4z + 4}y[n] = {z + 3}x[n]
(i) {z^2 + 4z + 8}y[n] = x[n]                    (j) y[n] + 4y[n-1] + 4y[n-2] = x[n] - x[n+2]
5.15 (Stability) Investigate the causality and stability of the following systems.
(a) y[n] = x[n-1] + x[n] + x[n+1]                (b) y[n] = x[n] + x[n-1] + x[n-2]
(c) y[n] - 2y[n-1] = x[n]                        (d) y[n] - 0.2y[n-1] = x[n] - 2x[n+2]
(e) y[n] + y[n-1] + 0.5y[n-2] = x[n]             (f) y[n] - y[n-1] + y[n-2] = x[n] - x[n+1]
(g) y[n] - 2y[n-1] + y[n-2] = x[n] - x[n-3]      (h) y[n] - 3y[n-1] + 2y[n-2] = 2x[n+3]
5.17 (System Classification) Classify the following systems in terms of their linearity, time invariance,
memory, causality, and stability.
(a) y[n] = x[n/3] (zero interpolation)
(b) y[n] = cos(nπ)x[n] (modulation)
(c) y[n] = [1 + cos(nπ)]x[n] (modulation)
(d) y[n] = cos(nx[n]) (frequency modulation)
(e) y[n] = cos(n + x[n]) (phase modulation)
(f) y[n] = x[n] - x[n-1] (differencing operation)
(g) y[n] = 0.5x[n] + 0.5x[n-1] (averaging operation)
(h) y[n] = (1/N) Σ_{k=0}^{N-1} x[n-k] (moving average)
(i) y[n] - αy[n-1] = x[n], 0 < α < 1 (exponential averaging)
(j) y[n] = 0.4(y[n-1] + 2) + x[n]
5.18 (Classification) Classify each system in terms of its linearity, time invariance, memory, causality,
and stability.
(a) The folding system y[n] = x[-n].
(b) The decimating system y[n] = x[2n].
(c) The zero-interpolating system y[n] = x[n/2].
(d) The sign-inversion system y[n] = sgn{x[n]}.
(e) The rectifying system y[n] = |x[n]|.
5.19 (Classification) Classify each system in terms of its linearity, time invariance, causality, and stability.
(a) y[n] = round{x[n]}          (b) y[n] = median{x[n+1], x[n], x[n-1]}
(c) y[n] = x[n] sgn(n)          (d) y[n] = x[n] sgn{x[n]}
5.20 (Inverse Systems) Are the following systems invertible? If not, explain why; if invertible, find the
inverse system.
(a) y[n] = x[n] - x[n-1] (differencing operation)
(b) y[n] = (1/3)(x[n] + x[n-1] + x[n-2]) (moving average operation)
(c) y[n] = 0.5x[n] + x[n-1] + 0.5x[n-2] (weighted moving average operation)
(d) y[n] - αy[n-1] = (1 - α)x[n], 0 < α < 1 (exponential averaging operation)
(e) y[n] = cos(nπ)x[n] (modulation)
(f) y[n] = cos(x[n])
(g) y[n] = e^{x[n]}
5.21 (An Echo System and Its Inverse) An echo system is described by y[n] = x[n] + 0.5x[n-N].
Assume that the echo arrives after 1 ms and the sampling rate is 2 kHz.
(a) What is the value of N? Sketch a realization of this echo system.
(b) What is the impulse response and step response of this echo system?
(c) Find the difference equation of the inverse system. Then, sketch its realization and find its
impulse response and step response.
5.22 (Reverb) A reverb filter is described by y[n] = x[n] + 0.25y[n-N]. Assume that the echoes arrive
every millisecond and the sampling rate is 2 kHz.
(a) What is the value of N? Sketch a realization of this reverb filter.
(b) What is the impulse response and step response of this reverb filter?
(c) Find the difference equation of the inverse system. Then, sketch its realization and find its
impulse response and step response.
5.23 (System Response) Consider the system y[n] - 0.5y[n-1] = x[n]. Find its zero-state response to
the following inputs.
(a) x[n] = u[n]           (b) x[n] = (0.5)^n u[n]      (c) x[n] = cos(0.5nπ)u[n]
(d) x[n] = (-1)^n u[n]    (e) x[n] = j^n u[n]          (f) x[n] = (j)^n u[n] + (-j)^n u[n]
5.24 (System Response) For the system realization shown in Figure P5.24, find the response to the
following inputs and initial conditions.
(a) x[n] = u[n]           y[-1] = 0    (b) x[n] = u[n]           y[-1] = 4
(c) x[n] = (0.5)^n u[n]   y[-1] = 0    (d) x[n] = (0.5)^n u[n]   y[-1] = 6
(e) x[n] = (0.5)^n u[n]   y[-1] = 0    (f) x[n] = (0.5)^n u[n]   y[-1] = 2
Figure P5.24 System realization for Problem 5.24 (an adder whose output y[n] feeds back through a unit delay z^{-1} and a gain of 0.5)
5.26 (System Response) Find the impulse response of the following filters.
(a) y[n] = x[n] - x[n-1] (differencing operation)
(b) y[n] = 0.5x[n] + 0.5x[n-1] (averaging operation)
(c) y[n] = (1/N) Σ_{k=0}^{N-1} x[n-k], N = 3 (moving average)
(d) y[n] = [2/(N(N+1))] Σ_{k=0}^{N-1} (N-k)x[n-k], N = 3 (weighted moving average)
(e) y[n] - αy[n-1] = (1 - α)x[n], N = 3, α = (N-1)/(N+1) (exponential averaging)
5.27 (System Response) It is known that the response of the system y[n] + αy[n-1] = x[n], α ≠ 0, is
given by y[n] = [5 + 3(0.5)^n]u[n].
(a) Identify the natural response and forced response.
(b) Identify the values of α and y[-1].
(c) Identify the zero-input response and zero-state response.
(d) Identify the input x[n].
5.28 (System Response) It is known that the response of the system y[n] + 0.5y[n-1] = x[n] is described
by y[n] = [5(0.5)^n + 3(-0.5)^n]u[n].
(a) Identify the zero-input response and zero-state response.
(b) What is the zero-input response of the system y[n] + 0.5y[n-1] = x[n] if y[-1] = 10?
(c) What is the response of the relaxed system y[n] + 0.5y[n-1] = x[n-2]?
(d) What is the response of the relaxed system y[n] + 0.5y[n-1] = x[n-1] + 2x[n]?
5.29 (System Response) It is known that the response of the system y[n] + αy[n-1] = x[n] is described
by y[n] = (5 + 2n)(0.5)^n u[n].
(a) Identify the zero-input response and zero-state response.
(b) What is the zero-input response of the system y[n] + αy[n-1] = x[n] if y[-1] = 10?
(c) What is the response of the relaxed system y[n] + αy[n-1] = x[n-1]?
(d) What is the response of the relaxed system y[n] + αy[n-1] = 2x[n-1] + x[n]?
(e) What is the complete response of the system y[n] + αy[n-1] = x[n] + 2x[n-1] if y[-1] = 4?
5.30 (System Interconnections) Two systems are said to be in cascade if the output of the first system
acts as the input to the second. Find the response of the following cascaded systems if the input is a
unit step and the systems are described as follows. In which instances does the response differ when the
order of cascading is reversed? Can you use this result to justify that the order in which the systems
are cascaded does not matter in finding the overall response if both systems are LTI?
(a) System 1: y[n] = x[n] - x[n-1]          System 2: y[n] = 0.5y[n-1] + x[n]
(b) System 1: y[n] = 0.5y[n-1] + x[n]       System 2: y[n] = x[n] - x[n-1]
(c) System 1: y[n] = x^2[n]                 System 2: y[n] = 0.5y[n-1] + x[n]
(d) System 1: y[n] = 0.5y[n-1] + x[n]       System 2: y[n] = x^2[n]
5.31 (Systems in Cascade and Parallel) Consider the realization of Figure P5.31.
Figure P5.31 System realization for Problem 5.31
(a) Find its impulse response if α = β. Is the overall system FIR or IIR?
(b) Find its difference equation and impulse response if α ≠ β. Is the overall system FIR or IIR?
(c) Find its difference equation and impulse response if α = β = 1. What is the function of the
overall system?
5.32 (Difference Equations from Impulse Response) Find the difference equations describing the
following systems.
(a) h[n] = δ[n] + 2δ[n-1]            (b) h[n] = {2, 3, 1}
(c) h[n] = (0.3)^n u[n]              (d) h[n] = (0.5)^n u[n] - (-0.5)^n u[n]
5.33 (Difference Equations from Impulse Response) A system is described by the impulse response
h[n] = (-1)^n u[n]. Find the difference equation of this system. Then find the difference equation of
the inverse system. Does the inverse system describe an FIR filter or IIR filter? What function does
it perform?
5.34 (Difference Equations from Differential Equations) Consider an analog system described by
the differential equation y''(t) + 3y'(t) + 2y(t) = 2u(t).
(a) Confirm that this describes a stable analog system.
(b) Convert this to a difference equation using the backward Euler algorithm and check the stability
of the resulting digital filter.
(c) Convert this to a difference equation using the forward Euler algorithm and check the stability
of the resulting digital filter.
(d) Which algorithm is better in terms of preserving stability? Can the results be generalized to any
arbitrary analog system?
5.35 (Difference Equations) For the filter realization shown in Figure P5.35, find the difference equation
relating y[n] and x[n] if the impulse response of the filter is given by
Figure P5.35 Filter realization for Problem 5.35
5.36 (Periodic Signal Generators) Find the difference equation of a filter whose impulse response is a
periodic sequence with first period x[n] = {1, 2, 3, 4, 6, 7, 8}. Sketch a realization for this filter.
5.37 (Recursive and IIR Filters) The terms recursive and IIR are not always synonymous. A recursive
filter could in fact have a finite impulse response. Use recursion to find the impulse response h[n]
for each of the following recursive filters. Which filters (if any) describe IIR filters?
(a) y[n] - y[n-1] = x[n] - x[n-2]
(b) y[n] - y[n-1] = x[n] - x[n-1] - 2x[n-2] + 2x[n-3]
5.38 (Recursive Forms of FIR Filters) An FIR filter may always be recast in recursive form by the
simple expedient of including identical factors on the left-hand and right-hand side of its difference
equation in operational form. For example, the filter y[n] = (1 - z^{-1})x[n] is FIR, but the identical
filter (1 + z^{-1})y[n] = (1 + z^{-1})(1 - z^{-1})x[n] has the difference equation y[n] + y[n-1] = x[n] - x[n-2]
and can be implemented recursively. Find two different recursive difference equations (with different
orders) for each of the following filters.
(a) y[n] = x[n] - x[n-2]          (b) h[n] = {1, 2, 1}
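The trick described in Problem 5.38 is easy to verify numerically. The sketch below (Python rather than the book's Matlab) computes the impulse response of the FIR filter y[n] = x[n] - x[n-1] and of its recursive twin y[n] + y[n-1] = x[n] - x[n-2] by direct recursion, and confirms that they agree.

```python
def fir(x):
    # direct form: y[n] = x[n] - x[n-1]
    return [x[n] - (x[n - 1] if n > 0 else 0) for n in range(len(x))]

def recursive(x):
    # recursive form: y[n] = -y[n-1] + x[n] - x[n-2]
    y = []
    for n in range(len(x)):
        x2 = x[n - 2] if n >= 2 else 0
        y1 = y[n - 1] if n >= 1 else 0
        y.append(x[n] - x2 - y1)
    return y

impulse = [1] + [0] * 7
assert fir(impulse) == recursive(impulse)   # both give {1, -1, 0, 0, ...}
```

Despite the feedback term, the recursive form still has a finite impulse response, which is exactly the point of the problem.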
5.39 (Nonrecursive Forms of IIR Filters) An FIR filter may always be exactly represented in recursive
form, but we can only approximately represent an IIR filter by an FIR filter by truncating its impulse
response to N terms. The larger the truncation index N, the better the approximation. Consider the
IIR filter described by y[n] - 0.8y[n-1] = x[n]. Find its impulse response h[n] and truncate it to three
terms to obtain h3[n], the impulse response of the approximate FIR equivalent. Would you expect the
greatest mismatch in the response of the two filters to identical inputs to occur for lower or higher
values of n? Compare the step response of the two filters up to n = 6 to justify your expectations.
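As a rough check on Problem 5.39, the sketch below (Python; the truncated impulse response h3[n] = {1, 0.8, 0.64} follows from h[n] = (0.8)^n u[n]) compares the step responses of the IIR filter and its three-term FIR approximation. The mismatch is zero for n < 3 and grows thereafter, as the problem suggests.

```python
def step_iir(N):
    # step response of y[n] - 0.8 y[n-1] = x[n], computed by recursion
    y = []
    for n in range(N):
        y.append(1.0 + (0.8 * y[n - 1] if n > 0 else 0.0))
    return y

def step_fir(N, h3=(1.0, 0.8, 0.64)):
    # step response of the truncated (three-term) FIR approximation
    x = [1.0] * N
    return [sum(h3[k] * x[n - k] for k in range(len(h3)) if n - k >= 0)
            for n in range(N)]

iir, fir3 = step_iir(7), step_fir(7)
err = [a - b for a, b in zip(iir, fir3)]
# err is zero for n = 0, 1, 2 and grows monotonically for larger n
```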
5.40 (Nonlinear Systems) One way to solve nonlinear difference equations is by recursion. Consider the
nonlinear difference equation y[n]y[n-1] - 0.5y^2[n-1] = 0.5Au[n].
(a) What makes this system nonlinear?
(b) Using y[-1] = 2, recursively obtain y[0], y[1], and y[2].
(c) Use A = 2, A = 4, and A = 9 in the results of part (b) to confirm that this system finds the
square root of A.
(d) Repeat parts (b) and (c) with y[-1] = -1 to check whether the choice of the initial condition
affects system operation.
5.41 (LTI Concepts and Stability) Argue that neither of the following describes an LTI system. Then,
explain how you might check for their stability and determine which of the systems are stable.
(a) y[n] + 2y[n-1] = x[n] + x^2[n]          (b) y[n] - 0.5y[n-1] = nx[n] + x^2[n]
5.42 (Response of Causal and Noncausal Systems) A difference equation may describe a causal or
noncausal system depending on how the initial conditions are prescribed. Consider a first-order system
governed by y[n] + αy[n-1] = x[n].
(a) With y[n] = 0, n < 0, this describes a causal system. Assume y[-1] = 0 and find the first few
terms y[0], y[1], . . . of the impulse response and step response, using recursion, and establish the
general form for y[n].
(b) With y[n] = 0, n > 0, we have a noncausal system. Assume y[0] = 0 and rewrite the difference
equation as y[n-1] = {x[n] - y[n]}/α to find the first few terms y[0], y[-1], y[-2], . . . of the
impulse response and step response, using recursion, and establish the general form for y[n].
5.43 (Numerical Integration Algorithms) Numerical integration algorithms approximate the area y[n]
from y[n-1] or y[n-2] (one or more time steps away). Consider the following integration algorithms.
Use each of the rules to approximate the area of x(t) = sinc(t), 0 ≤ t ≤ 3, with ts = 0.1 s and ts = 0.3 s,
and compare with the expected result of 0.53309323761827. How does the choice of the time step ts
affect the results? Which algorithm yields the most accurate results?
5.44 (System Response) Use the Matlab routine filter to obtain and plot the response of the filter
described by y[n] = 0.25(x[n] + x[n-1] + x[n-2] + x[n-3]) to the following inputs and comment on
your results.
(a) x[n] = 1, 0 ≤ n ≤ 60
(b) x[n] = 0.1n, 0 ≤ n ≤ 60
(c) x[n] = sin(0.1nπ), 0 ≤ n ≤ 60
(d) x[n] = 0.1n + sin(0.5nπ), 0 ≤ n ≤ 60
(e) x[n] = Σ_{k=-∞}^{∞} δ[n - 5k], 0 ≤ n ≤ 60
(f) x[n] = Σ_{k=-∞}^{∞} δ[n - 4k], 0 ≤ n ≤ 60
5.45 (System Response) Use the Matlab routine filter to obtain and plot the response of the filter
described by y[n] - y[n-4] = 0.25(x[n] + x[n-1] + x[n-2] + x[n-3]) to the following inputs and
comment on your results.
(a) x[n] = 1, 0 ≤ n ≤ 60
(b) x[n] = 0.1n, 0 ≤ n ≤ 60
(c) x[n] = sin(0.1nπ), 0 ≤ n ≤ 60
(d) x[n] = 0.1n + sin(0.5nπ), 0 ≤ n ≤ 60
(e) x[n] = Σ_{k=-∞}^{∞} δ[n - 5k], 0 ≤ n ≤ 60
(f) x[n] = Σ_{k=-∞}^{∞} δ[n - 4k], 0 ≤ n ≤ 60
5.46 (System Response) Use Matlab to obtain and plot the response of the following systems over the
range 0 ≤ n ≤ 199.
(a) y[n] = x[n/3], x[n] = (0.9)^n u[n] (assume zero interpolation)
(b) y[n] = cos(0.2nπ)x[n], x[n] = cos(0.04nπ) (modulation)
(c) y[n] = [1 + cos(0.2nπ)]x[n], x[n] = cos(0.04nπ) (modulation)
5.47 (System Response) Use Matlab to obtain and plot the response of the following filters, using direct
commands (where possible) and also using the routine filter, and compare your results. Assume that
the input is given by x[n] = 0.1n + sin(0.1nπ), 0 ≤ n ≤ 60. Comment on your results.
(a) y[n] = (1/N) Σ_{k=0}^{N-1} x[n-k], N = 4 (moving average)
(b) y[n] = [2/(N(N+1))] Σ_{k=0}^{N-1} (N-k)x[n-k], N = 4 (weighted moving average)
(c) y[n] - αy[n-1] = (1 - α)x[n], N = 4, α = (N-1)/(N+1) (exponential average)
5.48 (System Response) Use Matlab to obtain and plot the response of the following filters, using
direct commands and using the routine filter, and compare your results. Use an input that consists
of the sum of the signal x[n] = 0.1n + sin(0.1nπ), 0 ≤ n ≤ 60 and uniformly distributed random noise
with a mean of 0. Comment on your results.
(a) y[n] = (1/N) Σ_{k=0}^{N-1} x[n-k], N = 4 (moving average)
(b) y[n] = [2/(N(N+1))] Σ_{k=0}^{N-1} (N-k)x[n-k], N = 4 (weighted moving average)
(c) y[n] - αy[n-1] = (1 - α)x[n], N = 4, α = (N-1)/(N+1) (exponential averaging)
5.49 (System Response) Use the Matlab routine filter to obtain and plot the response of the following
FIR filters. Assume that x[n] = sin(nπ/8), 0 ≤ n ≤ 60. Comment on your results. From the results,
can you describe the function of these filters?
(a) y[n] = x[n] - x[n-1] (first difference)
(b) y[n] = x[n] - 2x[n-1] + x[n-2] (second difference)
(c) y[n] = (1/3)(x[n] + x[n-1] + x[n-2]) (moving average)
(d) y[n] = 0.5x[n] + x[n-1] + 0.5x[n-2] (weighted average)
5.50 (System Response in Symbolic Form) The ADSP routine sysresp1 returns the system response
in symbolic form. See Chapter 21 for examples of its usage. Obtain the response of the following filters
and plot the response for 0 ≤ n ≤ 30.
(a) The step response of y[n] - 0.5y[n-1] = x[n]
(b) The impulse response of y[n] - 0.5y[n-1] = x[n]
(c) The zero-state response of y[n] - 0.5y[n-1] = (0.5)^n u[n]
(d) The complete response of y[n] - 0.5y[n-1] = (0.5)^n u[n], y[-1] = 4
(e) The complete response of y[n] + y[n-1] + 0.5y[n-2] = (0.5)^n u[n], y[-1] = 4, y[-2] = 3
5.51 (Inverse Systems and Echo Cancellation) A signal x(t) is passed through the echo-generating
system y(t) = x(t) + 0.9x(t - τ) + 0.8x(t - 2τ), with τ = 93.75 ms. The resulting echo signal y(t) is
sampled at S = 8192 Hz to obtain the sampled signal y[n].
(a) The difference equation of a digital filter that generates the output y[n] from x[n] may be written
as y[n] = x[n] + 0.9x[n-N] + 0.8x[n-2N]. What is the value of the index N?
(b) What is the difference equation of an echo-canceling filter (inverse filter) that could be used to
recover the input signal x[n]?
(c) The echo signal is supplied as echosig.mat. Load this signal into Matlab (using the command
load echosig). Listen to this signal using the Matlab command sound. Can you hear the
echoes? Can you make out what is being said?
(d) Filter the echo signal using your inverse filter and listen to the filtered signal. Have you removed
the echoes? Can you make out what is being said? Do you agree with what is being said? If so,
please thank Prof. Tim Schulz (http://www.ee.mtu.edu/faculty/schulz) for this problem.
Chapter 6
CONTINUOUS CONVOLUTION
6.1 Introduction
The convolution method for finding the zero-state response y(t) of a system to an input x(t) applies to linear
time-invariant (LTI) systems. The system is assumed to be described by its impulse response h(t). An
informal way to establish a mathematical form for y(t) is illustrated in Figure 6.1.
Figure 6.1 Approximating the input x(t) by shifted impulses and superposing the shifted impulse responses to obtain y(t)
We divide x(t) into narrow rectangular strips of width ts at t = kts, k = 0, ±1, ±2, . . . and replace each strip
by an impulse whose strength ts x(kts) equals the area under each strip:

x(t) ≈ Σ_{k=-∞}^{∞} ts x(kts) δ(t - kts)    (sum of shifted impulses)    (6.1)

Since x(t) is a sum of weighted shifted impulses, the response y(t), by superposition, is a sum of the
weighted shifted impulse responses:

y(t) ≈ Σ_{k=-∞}^{∞} ts x(kts) h(t - kts)    (sum of shifted impulse responses)    (6.2)
In the limit as ts → 0, kts describes a continuous variable λ, and both x(t) and y(t) may be
represented by integral forms to give

x(t) = ∫_{-∞}^{∞} x(λ)δ(t - λ) dλ        y(t) = ∫_{-∞}^{∞} x(λ)h(t - λ) dλ    (6.3)

Note that the result for x(t) is a direct consequence of the sifting property of impulses. The result

y(t) = x(t) ⋆ h(t) = ∫_{-∞}^{∞} x(λ)h(t - λ) dλ    (6.4)

describes the convolution integral for finding the zero-state response of a system. In this book, we use
the shorthand notation x(t) ⋆ h(t) to describe the convolution of the signals x(t) and h(t).

Notation: We use x(t) ⋆ h(t) (or x(t) * h(t) in figures) as a shorthand notation for ∫_{-∞}^{∞} x(λ)h(t - λ) dλ
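The limiting argument above also gives a practical recipe: for small ts, the convolution integral is well approximated by ts times a discrete convolution of the samples. A Python sketch (the signal choice and step size are illustrative) using x(t) = h(t) = e^{-t}u(t), whose exact convolution is te^{-t}u(t):

```python
import math

ts = 0.001                                   # time step (chosen small)
t = [k * ts for k in range(4000)]            # grid covering 0 <= t < 4
x = [math.exp(-tk) for tk in t]              # samples of e^{-t} u(t)

def approx_conv(x, h, ts, n):
    # y(n*ts) ~ ts * sum_k x(k*ts) h((n-k)*ts): the superposition sum above
    return ts * sum(x[k] * h[n - k] for k in range(n + 1))

n = 1000                                     # evaluate at t = 1
exact = t[n] * math.exp(-t[n])               # t e^{-t} at t = 1
approx = approx_conv(x, x, ts, n)
# approx agrees with exact to within roughly ts
```

Shrinking ts improves the agreement, mirroring the passage from the superposition sum (6.2) to the integral (6.4).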
Figure 6.2 Convolution as a process of sliding a folded signal past another
Apart from its physical significance, the convolution integral is just another mathematical operation. It
takes only a change of variable λ → t - λ to show that

x(t) ⋆ h(t) = ∫_{-∞}^{∞} x(λ)h(t - λ) dλ = ∫_{-∞}^{∞} x(t - λ)h(λ) dλ = h(t) ⋆ x(t)    (6.5)

This is the commutative property, one where the order is unimportant. It says that, at least mathematically,
we can switch the roles of the input and the impulse response for any system.
For two causal signals x(t)u(t) and h(t)u(t), the product x(λ)u(λ)h(t - λ)u(t - λ) is nonzero only over
the range 0 ≤ λ ≤ t (because u(λ) is zero for λ < 0 and u(t - λ) is a left-sided step, which is zero for λ > t).
Since both u(λ) and u(t - λ) are unity in this range, the convolution integral simplifies to

y(t) = ∫_0^t x(λ)h(t - λ) dλ,    x(t) and h(t) zero for t < 0    (6.6)
This result generalizes to the fact that the convolution of two right-sided signals is also right-sided and the
convolution of two left-sided signals is also left-sided.
The result x(t) ⋆ δ(t) = x(t) is simply another way of describing h(t) as the impulse response of a system. With h(t) = δ(t), we have
the less obvious result δ(t) ⋆ δ(t) = δ(t). These two results are illustrated in Figure 6.3.
Convolution is a linear operation and obeys superposition. It is also a time-invariant operation and
implies that shifting the input (or the impulse response) by α shifts the output (the convolution) by α.
Figure E6.1 The signals of Example 6.1 and their convolution and product
(b) Let x(t) = e^{-t}u(t + 3) and h(t) = e^{-t}u(t - 1). Then x(λ) = e^{-λ}u(λ + 3) and h(t - λ) =
e^{-(t-λ)}u(t - λ - 1). Since u(λ + 3) = 0, λ < -3, and u(t - λ - 1) = 0, λ > t - 1, we obtain

y(t) = ∫_{-∞}^{∞} e^{-λ}u(λ + 3)e^{-(t-λ)}u(t - λ - 1) dλ = ∫_{-3}^{t-1} e^{-λ}e^{-(t-λ)} dλ

Since e^{-t} is not a function of λ, we can pull it out of the integral to get

y(t) = e^{-t} ∫_{-3}^{t-1} dλ = (t + 2)e^{-t},  t - 1 ≥ -3    or    y(t) = (t + 2)e^{-t}u(t + 2)
(c) Consider the convolution of x(t) = u(t + 1) - u(t - 1) with itself. Changing the arguments to x(λ) and
x(t - λ) results in the convolution

y(t) = ∫_{-∞}^{∞} [u(λ + 1) - u(λ - 1)][u(t - λ + 1) - u(t - λ - 1)] dλ

Since u(t - λ + 1) = 0, λ > t + 1, and u(t - λ - 1) = 0, λ > t - 1, the integration limits for the four
integrals can be simplified and result in

y(t) = ∫_{-1}^{t+1} dλ - ∫_{-1}^{t-1} dλ - ∫_{1}^{t+1} dλ + ∫_{1}^{t-1} dλ
6.3 Some Properties of Convolution
Based on each result and its range, we can express the convolution y(t) as
Properties Based on Linearity  A linear operation on the input to a system results in a similar operation
on the response. Thus, the input x'(t) results in the response y'(t), and we have x'(t) ⋆ h(t) = y'(t). In
fact, the derivative of any one of the convolved signals results in the derivative of the convolution. Repeated
derivatives of either x(t) or h(t) lead to the general result

x^(m)(t) ⋆ h(t) = x(t) ⋆ h^(m)(t) = y^(m)(t)        x^(m)(t) ⋆ h^(n)(t) = y^(m+n)(t)    (6.8)
Integration of the input to a system results in integration of the response. The step response thus equals
the running integral of the impulse response. More generally, the convolution x(t) ⋆ u(t) equals the running
integral of x(t) because

x(t) ⋆ u(t) = ∫_{-∞}^{∞} x(λ)u(t - λ) dλ = ∫_{-∞}^{t} x(λ) dλ    (6.9)
Properties Based on Time Invariance  If the input to a system is shifted by α, so too is the response.
In other words, x(t - α) ⋆ h(t) = y(t - α). In fact, shifting any one of the convolved signals by α shifts the
convolution by α. If both x(t) and h(t) are shifted, we can use this property in succession to obtain

x(t - α) ⋆ h(t - β) = y(t - α - β)
The concepts of linearity and shift invariance lie at the heart of many other properties of convolution.
Time Scaling  If both x(t) and h(t) are scaled by α to x(αt) and h(αt), the duration property suggests
that the convolution y(t) is also scaled by α. In fact, x(αt) ⋆ h(αt) = (1/|α|)y(αt), where the scale factor 1/|α| is
required to satisfy the area property. The time-scaling property is valid only when both functions are scaled
by the same factor.
Symmetry  If both signals are folded (α = -1), so is their convolution. As a consequence of this, the
convolution of an odd symmetric and an even symmetric signal is odd symmetric, whereas the convolution of
two even symmetric (or two odd symmetric) signals is even symmetric. Interestingly, the convolution of x(t)
with its folded version x(-t) is also even symmetric, with a maximum at t = 0. The convolution x(t) ⋆ x(-t)
is called the autocorrelation of x(t) and is discussed later in this chapter.
" " t
1. u(t) u(t) = u()u(t ) d = d = tu(t) = r(t)
0
" " t
2. et u(t) et u(t) = e e(t) u()u(t ) d = et d = tet u(t)
0
6.3 Some Properties of Convolution 137
" " t
3. u(t) et u(t) = u(t )e u() d = e d = (1 et )u(t)
0
4. rect(t) rect(t) = [u(t + 0.5) u(t 0.5)] [u(t + 0.5) u(t 0.5)] = r(t + 1) 2r(t) + r(t 1) = tri(t)
(b) Using linearity, the convolution yr(t) = r(t) ⋆ e^{-t}u(t) = u(t) ⋆ u(t) ⋆ e^{-t}u(t) is the running integral of
the step response s(t) = u(t) ⋆ e^{-t}u(t) and equals

yr(t) = ∫_0^t s(t) dt = ∫_0^t (1 - e^{-t}) dt = r(t) - (1 - e^{-t})u(t)
(c) Using shifting and superposition, the response y1(t) to the input x(t) = u(t) - u(t - 2) equals

y1(t) = [1 - e^{-t}]u(t) - [1 - e^{-(t-2)}]u(t - 2)

(d) Using the area property, the area of y1(t) equals [∫ e^{-t} dt][∫ x(t) dt] = 2.
Comment: Try integrating y1(t) directly at your own risk to arrive at the same answer!
(e) Starting with e^{-t}u(t) ⋆ e^{-t}u(t) = te^{-t}u(t), and using the scaling property with α = -1 and u(-t),

e^{t}u(-t) ⋆ e^{t}u(-t) = (1/|-1|)(-t)e^{t}u(-t) = -te^{t}u(-t)

(f) Starting with u(t) ⋆ e^{-t}u(t) = (1 - e^{-t})u(t), and using the scaling property with α = -1 and u(-t),

u(-t) ⋆ e^{t}u(-t) = (1 - e^{t})u(-t)
(h) Let x(t) = u(t + 3) - u(t - 1) and h(t) = u(t + 1) - u(t - 1).
Using superposition, the convolution y(t) = x(t) ⋆ h(t) may be described as

y(t) = u(t + 3) ⋆ u(t + 1) - u(t + 3) ⋆ u(t - 1) - u(t - 1) ⋆ u(t + 1) + u(t - 1) ⋆ u(t - 1)

Since u(t) ⋆ u(t) = r(t), we invoke time invariance for each term to get

y(t) = r(t + 4) - r(t + 2) - r(t) + r(t - 2)

The signals and their convolution are shown in Figure E6.3H. The convolution y(t) is a trapezoid
extending from t = -4 to t = 2 whose duration is 6 units, whose starting time equals the sum of the
starting times of x(t) and h(t), and whose area equals the product of the areas of x(t) and h(t).
Figure E6.3H The signals for Example 6.3(h) and their convolution
The Recipe for Convolution by Ranges is summarized in the following review panel. To sketch x(λ)
versus λ, simply relabel the axes. To sketch x(t - λ) versus λ, fold x(λ) and delay by t. For example, if the
end points of x(λ) are (-4, 3), the end points of (the folded) x(t - λ) will be (t - 3, t + 4).
We used the indefinite integral ∫ λe^{-λ} dλ = -(λ + 1)e^{-λ} to simplify the results. The convolution results match
at the range end points. The convolution is plotted in Figure E6.5A.
Figure E6.5A The convolution of the signals for Example 6.5
The pairwise sum gives the end points of the convolution ranges as [-3, -1, 1, 3]. For each range, we
superpose x(t - λ) = 2, t - 1 ≤ λ ≤ t + 1, and h(λ) = λ, -2 ≤ λ ≤ 2, to obtain the following results:
The convolution results match at the range end points and are plotted in Figure E6.6A.
Figure E6.6A The convolution of the signals for Example 6.6
As a consistency check, note how the convolution results match at the end points of each range. Note
that one of the convolved signals has even symmetry, the other has odd symmetry, and the convolution result
has odd symmetry.
6.4 Convolution by Ranges (Graphical Convolution)
The convolution is plotted in Figure E6.7A. The convolution results match at the range end points. Since
x(t) is constant while h(t) is piecewise linear, their convolution must yield only linear or quadratic forms.
Our results also confirm this.
Figure E6.7A The convolution of the signals for Example 6.7
Figure E6.8 The signals for Example 6.8 and their convolution
The convolution starts at t = -3. The convolution ranges cover unit intervals up to t = 3. The area of
x(λ)h(t - λ) with t chosen for each end point yields the following results:
Note that h(t) = x(-t). The convolution x(t) ⋆ x(-t) is called the autocorrelation of x(t) and is always even
symmetric, with a maximum at the origin.
The response of the first system is y1(t) = x(t) ⋆ h1(t). The response y(t) of the second system is

y(t) = y1(t) ⋆ h2(t) = [x(t) ⋆ h1(t)] ⋆ h2(t) = x(t) ⋆ [h1(t) ⋆ h2(t)]    (6.11)

If we wish to replace the cascaded system by an equivalent LTI system with impulse response h(t) such that
y(t) = x(t) ⋆ h(t), it follows that h(t) = h1(t) ⋆ h2(t). Generalizing this result, the impulse response h(t) of
N ideally cascaded LTI systems is simply the convolution of the N individual impulse responses:

h(t) = h1(t) ⋆ h2(t) ⋆ ··· ⋆ hN(t)    (for a cascade of N LTI systems)
If the hk (t) are energy signals, the order of cascading is unimportant. The overall impulse response of systems
in parallel equals the sum of the individual impulse responses, as shown in Figure 6.5.
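The cascade and parallel rules are easy to check numerically with short discrete impulse responses (a Python sketch; the two filters chosen here are arbitrary illustrations): the cascade's overall impulse response is the convolution h1 ⋆ h2, and commutativity means the cascading order does not matter.

```python
def conv(a, b):
    # discrete convolution, the sampled analog of h1(t) * h2(t)
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

h1 = [1.0, -1.0]            # a first-difference filter (illustrative choice)
h2 = [0.5, 0.5]             # a two-point averager (illustrative choice)
h_cascade = conv(h1, h2)    # overall impulse response of the cascade
h_parallel = [a + b for a, b in zip(h1, h2)]   # overall response in parallel
```

Swapping h1 and h2 leaves h_cascade unchanged, which is the discrete counterpart of the order-independence claimed for cascaded LTI systems.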
Figure E6.9A The interconnected system for Example 6.9(a)
The time constant of the RC circuit is τ = 1. Its impulse response is thus h1(t) = e^{-t}u(t). The input-
output relation for the second system has the form y0(t) = x0(t) + x0'(t). Its impulse response is thus
h2(t) = δ(t) + δ'(t).
The overall impulse response h(t) is given by their convolution:

h(t) = h1(t) ⋆ h2(t) = e^{-t}u(t) ⋆ [δ(t) + δ'(t)] = e^{-t}u(t) - e^{-t}u(t) + δ(t) = δ(t)

This means that the overall system output equals the applied input and the second system acts as the
inverse of the first.
The output g(t) is thus g(t) = 2e^{-t}u(t).
The output f(t) is given by the convolution f(t) = 2e^{-t}u(t) ⋆ e^{-t}u(t) = 2te^{-t}u(t).
(b) Refer to the cascaded system shown in Figure E6.9B. Will the outputs g(t) and w(t) be equal? Explain.
Figure E6.9B The cascaded systems for Example 6.9(b)
The impulse response of the RC circuit is h(t) = e^{-t}u(t). For the first system, the output f(t) is
f(t) = 4e^{-2t}u(t). Using convolution, the output g(t) is given by

g(t) = 4e^{-2t}u(t) ⋆ e^{-t}u(t) = 4(e^{-t} - e^{-2t})u(t)

For the second system, the outputs v(t) and w(t) are

v(t) = e^{-t}u(t) ⋆ 2e^{-t}u(t) = 2te^{-t}u(t)        w(t) = v^2(t) = 4t^2 e^{-2t}u(t)

Clearly, w(t) and g(t) are not equal. The reason is that the order of cascading is unimportant only for
LTI systems and the squaring block is nonlinear.
If x(t) is bounded such that |x(t)| < M, then its folded, shifted version x(t - λ) is also bounded. Using
the fundamental theorem of calculus (the absolute value of any integral cannot exceed the integral of the
absolute value of its integrand), the convolution integral yields the following inequality:

|y(t)| ≤ ∫_{-∞}^{∞} |h(λ)||x(t - λ)| dλ < M ∫_{-∞}^{∞} |h(λ)| dλ    (6.14)

For BIBO stability, therefore, h(t) must be absolutely integrable. This is both a necessary and sufficient
condition. If satisfied, we are guaranteed a stable LTI system. In particular, if h(t) is an energy signal, we
have a stable system.
Causal systems are also called physically realizable. Causality actually imposes a powerful constraint on
h(t). The even and odd parts of a causal h(t) cannot be independent, and h(t) can in fact be found from its
even symmetric (or odd symmetric) part alone.
where K = 1/(1 + jω0) is a (complex) constant. The response y(t) = Kx(t) is also a harmonic at the input
frequency ω0. More generally, the response of LTI systems to any periodic input is also periodic with the
same period as the input. In the parlance of convolution, the convolution of two signals, one of which is
periodic, is also periodic and has the same period as the input.
The following review panel lists the periodic extensions of two useful signals. The area of one period of the periodic extension xpe(t) equals the total area of x(t). In fact, adding x(t) and its infinitely many shifted versions to obtain the periodic extension is equivalent to wrapping x(t) around in one-period segments and adding them up instead. The wraparound method can thus be used to find the periodic output as the periodic extension of the response to one period of the input.
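The wraparound idea can be verified numerically. This Python/NumPy sketch (our own construction; the period T = 2 and the decay window are assumptions) builds one period of the periodic extension of e^−t u(t) by cutting the signal into period-long segments and summing them, and compares the result with the closed form e^−t/(1 − e^−T):

```python
import numpy as np

dt = 0.001
T = 2.0                              # chosen period of the extension
n = int(T / dt)
t = np.arange(0, 20, dt)             # window long enough for e^-t to decay
x = np.exp(-t)                       # x(t) = e^-t u(t)

# wraparound: cut x into one-period segments and add them up
segments = x.reshape(-1, n)          # 20/2 = 10 whole periods
xpe = segments.sum(axis=0)           # one period of the periodic extension

# closed form on 0 <= t < T
print(np.max(np.abs(xpe - np.exp(-t[:n]) / (1 - np.exp(-T)))))
```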
Figure E6.11A The pulse signal for Example 6.11(a) and its periodic extension
(b) The periodic extension of x(t) = e^−t u(t) with period T may be expressed, using wraparound, as

xpe(t) = e^−t + e^−(t+T) + e^−(t+2T) + ··· = e^−t Σ_{k=0}^{∞} e^−kT = e^−t / (1 − e^−T)        (6.20)
Figure E6.11B The exponential signal for Example 6.11(b) and its periodic extension
(a) (Periodic Extension) One period of the periodic extension of h0(t) is given by h(t) = Ae^−t, where A = 1/(1 − e^−2). We first find the regular convolution of one period of x(t) with h(t). The pairwise sum gives the end points of the convolution ranges as [0, 1, 2, 3]. For each range, we superpose x(t − λ) = 1, t − 1 ≤ λ ≤ t, and h(λ) = Ae^−λ, 0 ≤ λ ≤ 2, to obtain the following results:
We wrap around the last 1-unit range past t = 2 (replacing t by t + 2), and add it to the first term, to get one period of the periodic output yp(t) as

yp(t) = A(1 − e^−t) + A[e^−(t+1) − e^−2] = 1 − e^−(t−1)/(1 + e),   0 ≤ t ≤ 1
yp(t) = A[e^−(t−1) − e^−t] = e^−(t−2)/(1 + e),   1 ≤ t ≤ 2
Figure E6.12A The regular and periodic convolution of the signals for Example 6.12(a)
(b) (The Cyclic Method) The output for one period may be computed using the cyclic approach by first creating x(t − λ) and a one-period segment of h(λ), as shown in Figure E6.12B.
Figure E6.12B The signals x(t − λ) and one period of h(λ) for Example 6.12(b)
We then slide the folded signal x(t − λ) past h(λ) for a one-period duration (2 units), and find the periodic convolution as follows: as x(t − λ) slides right over one period of h(λ), we see portions of two pulses in partial view for 0 ≤ t ≤ 1, and one pulse in full view for 1 ≤ t ≤ 2. As expected, both methods yield identical results.
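Both the wraparound and cyclic methods amount to a circular convolution of one-period segments, which can be sketched numerically (Python/NumPy; the FFT route and the step size are our own choices, applied to the pulse and exponential of this example):

```python
import numpy as np

dt = 0.001
T = 2.0
t = np.arange(0, T, dt)

x = (t <= 1).astype(float)            # one period of x: unit pulse on [0, 1]
A = 1 / (1 - np.exp(-2))
h = A * np.exp(-t)                    # one period of h(t) = A e^-t on [0, 2)

# circular (periodic) convolution of the two one-period segments
yp = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h))) * dt

# piecewise closed form: 1 - e^-(t-1)/(1+e) on [0,1], e^-(t-2)/(1+e) on [1,2]
yp_exact = np.where(t <= 1, 1 - np.exp(-(t - 1)) / (1 + np.e),
                    np.exp(-(t - 2)) / (1 + np.e))
print(np.max(np.abs(yp - yp_exact)))  # small discretization error only
```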
150 Chapter 6 Continuous Convolution
The process is exactly like finding the system response to periodic inputs, except that no periodic extension is required. The periodic convolution of other power signals, or of periodic signals with non-commensurate periods, must be found from a limiting form, by averaging the convolution of one signal with a finite stretch T0 of the other, as T0 → ∞:

yp(t) = x(t) ⊛ h(t) = lim_{T0→∞} (1/T0) ∫_{T0} x_{T0}(λ)h(t − λ) dλ   (for nonperiodic power signals)        (6.22)
What better choice for φk(t) than one that yields a response that is just a scaled version of itself, such that φk(t) → Ak φk(t)? Then

y(t) = Σ_k αk φk(t) → Σ_k αk Ak φk(t)        (6.24)

Finding the output thus reduces to finding just the scale factors Ak, which may be real or complex. Signals φk(t) that are preserved in form by a system except for a scale factor Ak are called eigensignals, eigenfunctions, or characteristic functions because they are intrinsic (in German, eigen) to the system. The factor Ak by which the eigensignal is scaled is called the eigenvalue of the system or the system function.
The response equals the product of the eigensignal e^st and the system function (which is a function only of the variable s). If we denote this system function by H(s), we have

H(s) = ∫_{−∞}^{∞} h(λ)e^−sλ dλ        (6.26)

This is also called the transfer function. It is actually a description of h(t) by a weighted sum of complex exponentials and is, in general, also complex. Now, the signal x(t) also yields a similar description, called the two-sided Laplace transform:

X(s) = ∫_{−∞}^{∞} x(λ)e^−sλ dλ   (two-sided Laplace transform)        (6.27)

For a causal signal of the form x(t)u(t), we obtain the one-sided Laplace transform:

X(s) = ∫_0^∞ x(λ)e^−sλ dλ   (one-sided Laplace transform)        (6.28)
Since y(t) also equals x(t) ∗ h(t), convolution in the time domain is equivalent to multiplication in the transformed domain. This is one of the most fundamental results, and one that we shall use repeatedly in subsequent chapters.
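This equivalence can be sketched numerically using the discrete Fourier transform as a stand-in for the transforms above (a Python/NumPy illustration; the signals and grid are our own choices):

```python
import numpy as np

dt = 0.01
t = np.arange(0, 10, dt)
x = np.exp(-t)                       # x(t) = e^-t u(t)
h = np.exp(-2 * t)                   # h(t) = e^-2t u(t)

# time domain: direct convolution
y_time = np.convolve(x, h) * dt

# transform domain: multiply the (zero-padded) transforms and invert
N = 2 * len(t) - 1
y_freq = np.real(np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h, N))) * dt

print(np.max(np.abs(y_time - y_freq)))   # agreement to machine precision
```

Both routes reproduce the exact convolution e^−t − e^−2t up to the discretization error of the grid.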
With only slight modifications, we can describe several other transformed-domain relations as follows:
1. With s = j2πf, we use e^{j2πft} as the eigensignals and transform h(t) to the frequency domain in terms of its steady-state transfer function H(f) and the signal x(t) to its Fourier transform X(f):

X(f) = ∫_{−∞}^{∞} x(λ)e^{−j2πfλ} dλ   (Fourier transform)        (6.30)
2. For a single harmonic x(t) = e^{j2πf₀t}, the impulse response h(t) transforms to a complex constant H(f₀) = Ke^{jφ}. This produces the response Ke^{j(2πf₀t+φ)} and describes the method of phasor analysis.
3. For a periodic signal xp(t) described by a combination of harmonics e^{jk2πf₀t} at the discrete frequencies kf₀, we use superposition to obtain a frequency-domain description of xp(t) over one period in terms of its Fourier series coefficients X[k].
(a) If x(t) = h(t), the response is y(t) = x(t) ∗ h(t) = te^−t u(t). The moments of y(t) are

m0(y) = ∫_0^∞ y(t) dt = 1        m1(y) = ∫_0^∞ t y(t) dt = 2        Dy = m1(y)/m0(y) = 2
Comment: For a cascade of N identical lowpass filters (with τ = 1), the overall effective delay De is De = N Dh = N, and the overall effective duration Te is Te = √(N Th²) = Th√N.
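These moment relations can be checked numerically. In this Python/NumPy sketch (our own construction; N = 4 and the integration grid are arbitrary choices), the effective delay of a cascade of N filters h(t) = e^−t u(t) comes out as N, and the variance as N, so the effective duration grows as √N:

```python
import numpy as np

dt = 0.001
t = np.arange(0, 60, dt)
h = np.exp(-t)                        # e^-t u(t): delay 1, variance 1, area 1

y = h.copy()
for _ in range(3):                    # cascade of N = 4 identical filters
    y = np.convolve(y, h)[:len(t)] * dt

m0 = np.sum(y) * dt                   # area
m1 = np.sum(t * y) * dt               # first moment
m2 = np.sum(t**2 * y) * dt            # second moment

delay = m1 / m0                       # effective delay: N * 1 = 4
var = m2 / m0 - delay**2              # variance: N * 1 = 4, so Te = Th*sqrt(N)
print(delay, var)
```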
Here, mn is the sum of the individual means (delays), σn² is the sum of the individual variances, and the constant K equals the product of the areas under each of the convolved functions:

K = ∏_{k=1}^{n} ∫_{−∞}^{∞} xk(t) dt        (6.37)
This result is one manifestation of the central limit theorem. It allows us to assert that the response
of a complex system composed of many subsystems is Gaussian, since its response is based on repeated
convolution. The individual responses need not be Gaussian and need not even be known.
The central limit theorem fails if any function has zero area, making K = 0. Sufficient conditions for it to hold require finite values of the average, the variance, and the absolute third moment. All time-limited functions and many others satisfy these rather weak conditions. The system function H(f) of a
large number of cascaded systems is also a Gaussian because convolution in the time domain is equivalent to
multiplication in the frequency domain. In probability theory, the central limit theorem asserts that the sum
of n statistically independent random variables approaches a Gaussian for large n, regardless of the nature
of their distributions.
Figure E6.15 The repeated convolution and its Gaussian approximation for the signals of Example 6.15: (a) repeated convolution of e^−t u(t) for n = 40; (b) repeated convolution of rect(t) for n = 3 (amplitude versus time in seconds)
To find the Gaussian form as n → ∞, we start with the mean mh, variance σ², and area A for e^−t u(t). We find

A = m0 = ∫_0^∞ h(t) dt = 1        mh = m1/m0 = 1        σ² = 1

For n cascaded systems, we have mN = n mh = n, σN² = nσ² = n, and K = Aⁿ = 1. These values lead to the Gaussian approximation for yn(t) as

yn(t) ≈ (1/√(2πn)) exp[−(t − n)²/(2n)]
(b) An even more striking example is provided by the convolution of even symmetric rectangular pulses,
shown in Figure E6.15(b) for n = 3. Notice how the result begins to take on a Gaussian look after
only a few repeated convolutions.
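This convergence is easy to reproduce numerically. The Python/NumPy sketch below (our own construction) convolves a unit pulse on [0, 1) with itself three times and compares the result with a Gaussian of matching area, mean n/2, and variance n/12 (the moments simply add under convolution):

```python
import numpy as np

dt = 0.001
r = np.ones(int(1 / dt))              # unit-height, unit-width pulse on [0, 1)

n = 3
y = r.copy()
for _ in range(n - 1):                # n-fold repeated convolution
    y = np.convolve(y, r) * dt

t = np.arange(len(y)) * dt
mean, var = n / 2, n / 12             # means and variances add
g = np.exp(-(t - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

print(np.max(np.abs(y - g)))          # already small for n = 3
```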
6.10 Correlation
Correlation is an operation similar to convolution. It involves sliding one function past the other and finding
the area under the resulting product. Unlike convolution, however, no folding is performed. The correlation
rxx(t) of two identical functions x(t) is called autocorrelation. For two different functions x(t) and y(t), the correlation rxy(t) or ryx(t) is referred to as cross-correlation.
Using the symbol ⋆⋆ to denote correlation, we define the two operations as

rxx(t) = x(t) ⋆⋆ x(t) = ∫_{−∞}^{∞} x(λ)x(λ − t) dλ        (6.39)

rxy(t) = x(t) ⋆⋆ y(t) = ∫_{−∞}^{∞} x(λ)y(λ − t) dλ        (6.40)

ryx(t) = y(t) ⋆⋆ x(t) = ∫_{−∞}^{∞} y(λ)x(λ − t) dλ        (6.41)
The variable t is often referred to as the lag. The definitions of cross-correlation are not standard, and some
authors prefer to switch the definitions of rxy (t) and ryx (t).
At t = 0, we have

rxy(0) = ∫_{−∞}^{∞} x(λ)y(λ) dλ = ryx(0)        (6.43)

Thus, rxy(0) = ryx(0). The cross-correlation also satisfies the inequality

|rxy(t)| ≤ √(rxx(0) ryy(0)) = √(Ex Ey)        (6.44)
where Ex and Ey represent the signal energy in x(t) and y(t), respectively.
Correlation as Convolution
The absence of folding actually implies that the correlation of x(t) and y(t) is equivalent to the convolution of x(t) with the folded version y(−t), and we have rxy(t) = x(t) ⋆⋆ y(t) = x(t) ∗ y(−t).
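In discrete time this identity can be verified directly (a Python/NumPy sketch with arbitrary sample sequences): correlating x with y gives the same result as convolving x with the folded y.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 0.5])
y = np.array([2.0, -1.0, 0.5, 1.0])

# correlation via convolution with the folded (time-reversed) sequence
rxy = np.convolve(x, y[::-1])

# NumPy's correlate computes the same sliding-product sums directly
rxy_direct = np.correlate(x, y, mode='full')

print(rxy)
print(rxy_direct)
```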
Commutation
The absence of folding means that the correlation depends on which function is shifted and, in general, x(t) ⋆⋆ y(t) ≠ y(t) ⋆⋆ x(t). Since shifting one function to the right is actually equivalent to shifting the other function to the left by an equal amount, the correlation rxy(t) is related to ryx(t) by rxy(t) = ryx(−t).
Periodic Correlation
The correlation of two periodic signals or power signals is defined in the same sense as periodic convolution:

rxy(t) = (1/T) ∫_T x(λ)y(λ − t) dλ        rxy(t) = lim_{T0→∞} (1/T0) ∫_{T0} x(λ)y(λ − t) dλ        (6.45)

The first form defines the correlation of periodic signals with identical periods T, which is also periodic with the same period T. The second form is reserved for nonperiodic power signals or random signals.
6.10.2 Autocorrelation
The autocorrelation operation involves identical functions. It can thus be performed in any order and
represents a commutative operation.
Symmetry
Since rxy(t) = ryx(−t), we have rxx(t) = rxx(−t). This means that the autocorrelation of a real function is even. The autocorrelation of an even function x(t) also equals the convolution of x(t) with itself, because the folding operation leaves an even function unchanged.
Maximum Value
It turns out that the autocorrelation function is symmetric about the origin, where it attains its maximum value. It thus satisfies

rxx(t) ≤ rxx(0)        (6.46)

The value rxx(0) equals the signal energy and is therefore finite and nonnegative.
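Both properties — even symmetry and a maximum equal to the signal energy at zero lag — are easy to confirm numerically (Python/NumPy sketch; the random test signal is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(200)            # an arbitrary finite-energy signal

rxx = np.correlate(x, x, mode='full')   # lags -199 .. +199
mid = len(rxx) // 2                     # index of zero lag

print(np.max(np.abs(rxx - rxx[::-1])))  # even symmetry: ~0 up to roundoff
print(rxx[mid], np.sum(x**2))           # peak value equals the signal energy
```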
Periodic Autocorrelation
For periodic signals, we define periodic autocorrelation in much the same way as periodic convolution. If
we shift a periodic signal with period T past itself, the two line up after every period, and the periodic
autocorrelation also has period T .
Figure E6.16A The signal for Example 6.16(a) and its autocorrelation
(b) Consider the autocorrelation of x(t) = e^−t u(t). As we shift x(λ − t) past x(λ) = e^−λ u(λ), we obtain two ranges (t < 0 and t > 0) over which the autocorrelation results are described as follows:

For t < 0:  rxx(t) = ∫_0^∞ e^−λ e^−(λ−t) dλ = 0.5e^t
For t > 0:  rxx(t) = ∫_t^∞ e^−λ e^−(λ−t) dλ = 0.5e^−t

Together, rxx(t) = 0.5e^−|t|, with the maximum value rxx(0) = 0.5 at the origin.
(c) The cross-correlation of the signals x(t) and h(t), shown in Figure E6.16C, may be found using the
convolution of one signal and the folded version of the other. Observe that rxh(t) = rhx(−t).
Figure: Target at range R; the transmitted signal s(t), the received signal (echo) s(t − t0), the matched filter h(t) = s(−t), and the output y(t) of the matched filter, which peaks at t = t0.
A transmitter sends out an interrogating signal s(t), and the reflected and delayed signal (the echo) s(t − t0) is processed by a correlation receiver, or matched filter, whose impulse response is matched to the signal to obtain the target range. In fact, its impulse response is chosen as h(t) = s(−t), a folded version of the transmitted signal, in order to maximize the signal-to-noise ratio. The response y(t) of the matched filter is the convolution of the received echo with h(t) = s(−t), or the correlation of s(t − t0) (the echo) with s(t) (the signal). This response attains a maximum at t = t0, which represents the time taken to cover the round-trip distance 2R. The target range R is then given by R = 0.5ct0, where c is the velocity of signal propagation.
Why not use the received signal directly to estimate the delay? The reason is that we may not be able
to detect the presence (let alone the exact onset) of the received signal because it is usually much weaker
than the transmitted signal and also contaminated by additive noise. However, if the noise is uncorrelated
with the original signal (as it usually is), their cross-correlation is very small (ideally zero), and the cross-
correlation of the original signal with the noisy echo yields a peak (at t = t0 ) that stands out and is much
easier to detect. Ideally, of course, we would like to transmit narrow pulses (approximating impulses) whose
autocorrelation attains a sharp peak.
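This receiver can be sketched numerically (Python/NumPy; the pulse shape, echo amplitude, delay t0 = 2 s, and noise level are all assumptions of the sketch). The correlation peak lands at the round-trip delay even though the echo is weak and noisy:

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.01
s = np.ones(int(1 / dt))                # transmitted pulse s(t), width 1 s

t0 = 2.0                                # round-trip delay to be estimated
rx = np.zeros(500)                      # received record, 5 s long
rx[int(t0 / dt):int(t0 / dt) + len(s)] = 0.1 * s   # weak echo s(t - t0)
rx += 0.02 * rng.standard_normal(len(rx))          # additive noise

# matched filter h(t) = s(-t): filtering the echo = correlating with s(t)
y = np.correlate(rx, s, mode='full')
lag = (np.argmax(y) - (len(s) - 1)) * dt
print(lag)                              # peak lies at (or very near) t0
```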
CHAPTER 6 PROBLEMS
DRILL AND REINFORCEMENT
6.1 (Convolution Kernel) For each signal x(t), sketch x(λ) vs. λ and x(t − λ) vs. λ, and identify significant points along each axis.
(a) x(t) = r(t)  (b) x(t) = u(t − 2)  (c) x(t) = 2 tri[0.5(t − 1)]  (d) x(t) = e^−|t|
6.2 (Convolution Concepts) Using the defining relation, compute y(t) = x(t) ∗ h(t) at t = 0.
(a) x(t) = u(t − 1)   h(t) = u(t + 2)
(b) x(t) = u(t)   h(t) = t u(t − 1)
(c) x(t) = t u(t + 1)   h(t) = (t + 1)u(t)
(d) x(t) = u(t)   h(t) = cos(0.5πt) rect(0.5t)
6.3 (Analytical Convolution) Evaluate each convolution y(t) = x(t) ∗ h(t) and sketch y(t).
(a) x(t) = e^−t u(t)   h(t) = r(t)
(b) x(t) = te^−t u(t)   h(t) = u(t)
(c) x(t) = e^−t u(t)   h(t) = cos(t)u(t)
(d) x(t) = e^−t u(t)   h(t) = cos(t)
(e) x(t) = 2t[u(t + 2) − u(t − 2)]   h(t) = u(t) − u(t − 4)
(f) x(t) = 2t u(t)   h(t) = rect(t/2)
(g) x(t) = r(t)   h(t) = (1/t) u(t − 1)
6.4 (Convolution with Impulses) Sketch the convolution y(t) = x(t) ∗ h(t) for each pair of signals shown in Figure P6.4.
Figure P6.4 The signals x(t) and h(t) for Problem 6.4
6.5 (Convolution by Ranges) For each pair of signals x(t) and h(t) shown in Figure P6.5, establish the convolution ranges. Then sketch x(λ)h(t − λ) for each range, evaluate the convolution over each range, and sketch the convolution result y(t).
Figure P6.5 The signals x(t) and h(t) for Problem 6.5
6.7 (Properties) The step response of a system is s(t) = e^−t u(t). What is the system impulse response h(t)? Compute the response of this system to the following inputs.
(a) x(t) = r(t)  (b) x(t) = rect(t/2)  (c) x(t) = tri[(t − 2)/2]  (d) x(t) = δ(t + 1) − δ(t − 1)
6.8 (Properties) The step response of each system is s(t) and the input is x(t). Compute the response of each system.
(a) s(t) = r(t) − r(t − 1)   x(t) = sin(2πt)u(t)
(b) s(t) = e^−t u(t)   x(t) = e^−t u(t)
6.10 (Cascaded Systems) Find the response y(t) of the following cascaded systems.
(a) x(t) = u(t) → h1(t) = e^−t u(t) → h2(t) = e^−t u(t) → y(t)
6.11 (Stability) Investigate the stability and causality of the following systems.
(a) h(t) = e^−(t+1) u(t − 1)  (b) h(t) = e^−t u(t + 1)  (c) h(t) = δ(t)
(d) h(t) = (1 − e^−t)u(t)  (e) h(t) = δ(t) − e^−t u(t)  (f) h(t) = sinc(t − 1)
6.12 (Causality) Argue that the impulse response h(t) of a causal system must be zero for t < 0. Based
on this result, if the input to a causal system starts at t = t0 , at what time does the response start?
6.13 (Signal-Averaging Filter) Consider a signal-averaging filter whose impulse response is described by h(t) = (1/T) rect[(t − 0.5T)/T].
(a) What is the response of this filter to the unit step input x(t) = u(t)?
(b) What is the response of this filter to a periodic sawtooth signal x(t) with peak value A, duty
ratio D, and period T ?
6.14 (Periodic Extension) For each signal shown in Figure P6.14, sketch the periodic extension with period T = 6 and T = 4.
Figure P6.14 The signals for Problem 6.14
6.15 (Convolution and Periodic Inputs) The voltage input to a series RC circuit with time constant τ = 1 is a rectangular pulse train starting at t = 0. The pulses are of unit width and unit height and
repeat every 2 seconds. The output is the capacitor voltage.
(a) Use convolution to compute the output at t = 1 s and t = 2 s.
(b) Assume that the input has been applied for a long time. What is the steady-state output?
6.16 (Periodic Convolution) Find and sketch the periodic convolution yp(t) = x(t) ⊛ h(t) of each pair of periodic signals shown in Figure P6.16.
Figure P6.16 The periodic signals for Problem 6.16
6.17 (Inverse Systems) Given a system whose impulse response is h(t) = e^−t u(t), we wish to find the impulse response hI(t) of an inverse system such that h(t) ∗ hI(t) = δ(t). The form that we require for the inverse system hI(t) is hI(t) = K1 δ(t) + K2 δ′(t).
(a) For what values of K1 and K2 will h(t) ∗ hI(t) = δ(t)?
(b) Is the inverse system stable? Is it causal?
(c) What is the impulse response hI(t) of the inverse system if h(t) = 2e^−3t u(t)?
6.18 (Correlation) Let x(t) = rect(t + 0.5) and h(t) = t rect(t − 0.5).
(a) Find the autocorrelation rxx (t).
(b) Find the autocorrelation rhh (t).
(c) Find the cross-correlation rhx (t).
(d) Find the cross-correlation rxh (t).
(e) How are the results of parts (c) and (d) related?
6.20 (Operations on the Impulse) Explain the difference between each of the following operations on the impulse δ(t − 1). Use sketches to plot results if appropriate.
(a) [e^−t u(t)] δ(t − 1)  (b) ∫_{−∞}^{∞} e^−t δ(t − 1) dt  (c) e^−t u(t) ∗ δ(t − 1)
6.21 (Convolution) Compute and sketch the convolution of the following pairs of signals.
(a) x(t) = Σ_{k=−∞}^{∞} δ(t − k)   h(t) = rect(t)
(b) x(t) = Σ_{k=−∞}^{∞} δ(t − 3k)   h(t) = tri(t)
(c) x(t) = Σ_{k=−∞}^{∞} rect(t − 2k)   h(t) = rect(t)
6.22 (Impulse Response and Step Response) Find the step response s(t) of each system whose impulse
response h(t) is given.
(a) h(t) = rect(t − 0.5)  (b) h(t) = sin(2πt)u(t)
(c) h(t) = sin(2πt) rect(t − 0.5)  (d) h(t) = e^−|t|
6.23 (Convolution and System Response) Consider a system described by the differential equation y′(t) + 2y(t) = x(t).
(a) What is the impulse response h(t) of this system?
(b) Find its output if x(t) = e^−2t u(t) by convolution.
(c) Find its output if x(t) = e^−2t u(t) and y(0) = 0 by solving the differential equation.
(d) Find its output if x(t) = e^−2t u(t) and y(0) = 1 by solving the differential equation.
(e) Are any of the outputs identical? Should they be? Explain.
6.24 (System Response) Consider the two inputs and two circuits shown in Figure P6.24.
(a) Find the impulse response of each circuit.
(b) Use convolution to find the response of circuit 1 to input 1. Assume R = 1 Ω, C = 1 F.
(c) Use convolution to find the response of circuit 2 to input 1. Assume R = 1 Ω, C = 1 F.
(d) Use convolution to find the response of circuit 1 to input 2. Assume R = 1 Ω, C = 1 F.
(e) Use convolution to find the response of circuit 2 to input 2. Assume R = 1 Ω, C = 1 F.
(f) Use convolution to find the response of circuit 1 to input 1. Assume R = 2 Ω, C = 1 F.
(g) Use convolution to find the response of circuit 1 to input 2. Assume R = 2 Ω, C = 1 F.
Figure P6.24 The inputs (exponentials e^−t and e^t) and the two RC circuits for Problem 6.24
6.25 (Impulse Response and Step Response) The step response of a system is s(t) = δ(t). The input to the system is a periodic square wave described for one period by x(t) = sgn(t), −1 ≤ t ≤ 1. Sketch the system output.
6.26 (Impulse Response and Step Response) The input to a system is a periodic square wave with period T = 2 s described for one period by xp(t) = sgn(t), −1 ≤ t ≤ 1. The output is a periodic triangular wave described by yp(t) = tri(t) − 0.5, −1 ≤ t ≤ 1. What is the impulse response of the system? What is the response of this system to the single pulse x(t) = rect(t − 0.5)?
6.27 (Convolution) An RC lowpass filter has the impulse response h(t) = (1/τ)e^−t/τ u(t), where τ is the time constant. Find its response to the following inputs for τ = 0.5 and τ = 1.
(a) x(t) = e^−2t u(t)  (b) x(t) = e^{2t} u(−t)  (c) x(t) = e^−2|t|
6.28 (Convolution) Find the convolution of each pair of signals.
(a) x(t) = e^−|t|   h(t) = e^−|t|
(b) x(t) = e^−t u(t) − e^t u(−t)   h(t) = x(t)
(c) x(t) = e^−t u(t) − e^t u(−t)   h(t) = x(−t)
6.29 (Convolution by Ranges) Consider a series RC lowpass filter with τ = 1. Use convolution by ranges to find the capacitor voltage, its maximum value, and the time of maximum for each input x(t).
(a) x(t) = rect(t − 0.5)  (b) x(t) = t rect(t − 0.5)  (c) x(t) = (1 − t) rect(t − 0.5)
6.30 (Cascading) The impulse response of two cascaded systems equals the convolution of their impulse
responses. Does the step response sC (t) of two cascaded systems equal s1 (t) s2 (t), the convolution of
their step responses? If not, how is sC (t) related to s1 (t) and s2 (t)?
6.31 (Cascading) System 1 compresses a signal by a factor of 2, and system 2 is an RC lowpass filter with τ = 1. Find the output of each cascaded combination. Will their outputs be identical? Should they be? Explain.
(a) x(t) = 2e^−t u(t) → system 1 → system 2 → y(t)
(b) x(t) = 2e^−t u(t) → system 2 → system 1 → y(t)
6.32 (Cascading) System 1 is a squaring circuit, and system 2 is an RC lowpass filter with τ = 1. Find the output of each cascaded combination. Will their outputs be identical? Should they be? Explain.
(a) x(t) = 2e^−t u(t) → system 1 → system 2 → y(t)
(b) x(t) = 2e^−t u(t) → system 2 → system 1 → y(t)
6.33 (Cascading) System 1 is a highpass RC circuit with h(t) = δ(t) − e^−t u(t), and system 2 is an RC lowpass filter with τ = 1. Find the output of each cascaded combination. Will their outputs be identical? Should they be? Explain.
(a) x(t) = 2e^−t u(t) → system 1 → system 2 → y(t)
(b) x(t) = 2e^−t u(t) → system 2 → system 1 → y(t)
6.34 (Cascading) System 1 is a highpass RC circuit with h(t) = δ(t) − e^−t u(t), and system 2 is an RC lowpass filter with τ = 1.
(a) Find the impulse response hP (t) of their parallel connection.
(b) Find the impulse response h12 (t) of the cascade of system 1 and system 2.
(c) Find the impulse response h21 (t) of the cascade of system 2 and system 1.
(d) Are h12 (t) and h21 (t) identical? Should they be? Explain.
(e) Find the impulse response hI (t) of a system whose parallel connection with h12 (t) yields hP (t).
6.35 (Cascading) System 1 is described by y(t) = x′(t) + x(t), and system 2 is an RC lowpass filter with τ = 1.
(a) What is the output of the cascaded system to the input x(t) = 2e^−t u(t)?
(b) What is the output of the cascaded system to the input x(t) = δ(t)?
(c) How are system 1 and system 2 related? Should they be? Explain.
Figure P6.36 The circuits for Problem 6.36
6.37 (Stability and Causality) Check for the causality and stability of each of the following systems.
(a) h(t) = e^−(t+1) u(t)  (b) h(t) = e^−(t+1) u(t + 1)
(c) h(t) = δ(t) − e^−t u(t)  (d) h(t) = δ(t) − e^−t u(1 − t)
6.38 (Stability and Causality) Check for the causality and stability of the parallel connection and cascade connection of each pair of systems.
(a) h1(t) = e^−t u(t)   h2(t) = δ(t)
(b) h1(t) = e^−(t−3) u(t − 3)   h2(t) = δ(t + 2)
(c) h1(t) = e^−t u(t)   h2(t) = e^−(t−2) u(t − 1)
(d) h1(t) = e^−t u(t)   h2(t) = e^t u(−t)
(e) h1(t) = e^−|t|   h2(t) = e^−|t−1|
(f) h1(t) = e^−|t|   h2(t) = −e^−|t−1|
(g) h1(t) = e^−|t−1|   h2(t) = −e^−|t−1|
(a) y′(t) = x(t)  (b) y′(t) + y(t) = x(t)  (c) y^(n)(t) = x(t)  (d) y(t) = x^(n)(t)
6.40 (Convolution and System Classification) The impulse response of three systems is
h1(t) = 2δ(t)        h2(t) = δ(t) + δ(t − 3)        h3(t) = e^−t u(t)
(a) Find the response of each to the input x(t) = u(t) − u(t − 1).
(b) For system 1, the input is zero at t = 2 s, and so is the response. Does the statement "zero output if zero input" apply to dynamic or instantaneous systems or both? Explain.
(c) Argue that system 1 is instantaneous. What about the other two?
(d) What must be the form of h(t) for an instantaneous system?
6.41 (Convolution and Smoothing) Convolution is usually a smoothing operation unless one signal is an impulse or its derivative, but exceptions occur even for smooth signals. Evaluate and comment on the duration and smoothing effects of the following convolutions.
(a) y(t) = rect(t) ∗ tri(t)  (b) y(t) = rect(t) ∗ δ(t)
(c) y(t) = rect(t) ∗ δ′(t)  (d) y(t) = sinc(t) ∗ sinc(t)
(e) y(t) = e^−t² ∗ e^−t²  (f) y(t) = sin(2πt) ∗ rect(t)
6.42 (Eigensignals) The input x(t) and response y(t) of two systems are given. Which of the systems are linear, and why?
(a) x(t) = cos(t), y(t) = 0.5 sin(t − 0.25)
(b) x(t) = cos(t), y(t) = cos(2t)
6.43 (Eigensignals) If the input to a system is its eigensignal, the response has the same form as the eigensignal. Justify the following statements by computing the system response by convolution (if the impulse response is given) or by solving the given differential equation. You may pick convenient numerical values for the parameters.
(a) Every signal is an eigensignal of the system described by h(t) = Aδ(t).
(b) The signal x(t) = e^{jαt} is an eigensignal of any LTI system such as that described by the impulse response h(t) = e^−t u(t).
(c) The signal x(t) = cos(αt) is an eigensignal of any LTI system described by a differential equation such as y′(t) + y(t) = x(t).
(d) The signal x(t) = sinc(t) is an eigensignal of ideal filters described by h(t) = sinc(αt), α ≥ 1.
6.44 (Eigensignals) Which of the following can be the eigensignal of an LTI system?
(a) x(t) = e^−2t u(t)  (b) x(t) = e^{j2t}  (c) x(t) = cos(2t)  (d) x(t) = e^{jt} + e^{j2t}
6.45 (Stability) Investigate the causality and stability of the following systems.
(a) h(t) = u(t)  (b) h(t) = e^−2t u(t)  (c) h(t) = δ(t − 1)
(d) h(t) = rect(t)  (e) h(t) = sinc(t)  (f) h(t) = sinc²(t)
6.46 (Invertibility) Determine which of the following systems are invertible and, for those that are, find the impulse response of the inverse system.
(a) h(t) = e^−2t u(t)  (b) h(t) = δ(t − 1)  (c) h(t) = sinc(t)
6.47 (Periodic Extension) The periodic extension xpe(t) with period T has the same form and area as x(t). Use this concept to find the constants in the following assumed form for xpe(t) of each signal.
(a) Signal: x(t) = e^−t/τ u(t)   Periodic extension for 0 ≤ t ≤ T: xpe(t) = Ke^−t/τ
(b) Signal: x(t) = te^−t/τ u(t)   Periodic extension for 0 ≤ t ≤ T: xpe(t) = (A + Bt)e^−t/τ
6.48 (The Duration Property) The convolution duration usually equals the sum of the durations of the convolved signals. But consider the following convolutions.
y1(t) = u(t) ∗ sin(πt)[u(t) − u(t − 2)]        y2(t) = rect(t) ∗ cos(2πt)u(t)
(a) Evaluate each convolution and find its duration. Is the duration infinite? If not, what causes it to be finite?
(b) In the first convolution, replace the sine pulse by an arbitrary signal x(t) of zero area and finite duration Td and argue that the convolution is nonzero only for a duration Td.
(c) In the second convolution, replace the cosine by an arbitrary periodic signal xp(t) with zero average value and period T = 1. Argue that the convolution is nonzero only for 1 unit.
6.49 (Convolutions that Replicate) Signals that replicate under self-convolution include the impulse, sinc, Gaussian, and Lorentzian. For each of the following known results, determine the constant A using the area property of convolution.
(a) δ(t) ∗ δ(t) = Aδ(t)
(b) sinc(t) ∗ sinc(t) = A sinc(t)
(c) e^−t² ∗ e^−t² = Ae^−t²/2
(d) 1/(1 + t²) ∗ 1/(1 + t²) = A/(1 + 0.25t²)
6.50 (Convolution and Moments) For each of the following signal pairs, find the moments m0 , m1 , and
m2 and verify each of the convolution properties based on moments (as discussed in the text).
(a) x(t) = h(t) = rect(t)
(b) x(t) = e^−t u(t)   h(t) = e^−2t u(t)
6.51 (Central Limit Theorem) Show that the n-fold repeated convolution of the signal h(t) = e^−t u(t) with itself has the form

hn(t) = (tⁿ e^−t / n!) u(t)

(a) Show that hn(t) has a maximum at t = n.
(b) Assume a Gaussian approximation hn(t) ≈ gn(t) = K exp[−α(t − n)²]. Equating hn(t) and gn(t) at t = n, show that K = e^−n nⁿ/n!.
(c) Use the Stirling limit to show that K = 1/√(2πn). The Stirling limit is defined by

lim_{n→∞} (nⁿ √n / n!) e^−n = 1/√(2π)

(d) Show that α = 1/(2n) by equating the areas of hn(t) and gn(t).
6.52 (Matched Filters) A folded, shifted version of a signal s(t) defines the impulse response of a matched filter corresponding to s(t).
(a) Find and sketch the impulse response of a matched filter for the signal s(t) = u(t) − u(t − 1).
(b) Find the response y(t) of this matched filter to the signal x(t) = s(t − D), where D = 2 s corresponds to the signal delay.
(c) At what time tm does the response y(t) attain its maximum value, and how is tm related to the signal delay D?
6.53 (Autocorrelation Functions) A signal x(t) can qualify as an autocorrelation function only if it satisfies certain properties. Which of the following qualify as valid autocorrelation functions, and why?
(a) rxx(t) = e^−t u(t)  (b) rxx(t) = e^−|t|  (c) rxx(t) = te^−t u(t)
(d) rxx(t) = |t|e^−|t|  (e) rxx(t) = sinc²(t)  (f) rxx(t) = 1/(1 + t²)
(g) rxx(t) = t/(1 + t²)  (h) rxx(t) = (1 + t²)/(4 + t²)  (i) rxx(t) = (t² − 1)/(t² + 4)
6.54 (Correlation) Find the cross-correlations rxh(t) and rhx(t) for each pair of signals.
(a) x(t) = e^−t u(t)   h(t) = e^−t u(t)
(b) x(t) = e^−t u(t)   h(t) = e^t u(−t)
(c) x(t) = e^−|t|   h(t) = e^−|t|
(d) x(t) = e^−(t−1) u(t − 1)   h(t) = e^−t u(t)
6.55 (Animation of Convolution) Use ctcongui to animate the convolution of each of the following pairs of signals and determine whether it is possible to visually identify the convolution ranges.
(a) x(t) = rect(t)   h(t) = rect(t)
(b) x(t) = rect(t)   h(t) = e^−t [u(t) − u(t − 5)]
(c) x(t) = rect(t)   h(t) = tri(t)
(d) x(t) = e^−t u(t)   h(t) = (1 − t)[u(t) − u(t − 1)]
(e) x(t) = e^−t u(t)   h(t) = t[u(t) − u(t − 1)]
6.56 (Convolution of Sinc Functions) It is claimed that the convolution of the two identical signals x(t) = h(t) = sinc(t) is y(t) = x(t) ∗ h(t) = sinc(t). Let both x(t) and h(t) be described over the symmetric limits −α ≤ t ≤ α. Use ctcongui to animate the convolution of x(t) and h(t) for α = 1, α = 5, and α = 10 (you may want to choose a smaller time step in ctcongui for larger values of α). Does the convolution begin to approach the required result as α increases?
Chapter 7
DISCRETE CONVOLUTION
By superposition, the response to x[n] is the sum of scaled and shifted versions of the impulse response:

y[n] = Σ_{k=−∞}^{∞} x[k]h[n − k] = x[n] ∗ h[n]        (7.2)
This is the defining relation for the convolution operation, which we call linear convolution, and denote by y[n] = x[n] ∗ h[n] (or by x[n] ⋆ h[n] in the figures) in this book. The expression for computing y[n] is called the convolution sum. As with continuous-time convolution, the order in which we perform the operation does not matter, and we can interchange the arguments of x and h without affecting the result. Thus,

y[n] = Σ_{k=−∞}^{∞} x[n − k]h[k] = h[n] ∗ x[n]        (7.3)

Notation: We use x[n] ∗ h[n] to denote Σ_{k=−∞}^{∞} x[k]h[n − k]