
DIGITAL SIGNAL PROCESSING:

A MODERN INTRODUCTION
by
Ashok Ambardar
Michigan Technological University
CONTENTS
PREFACE xiii
1 OVERVIEW 2
1.0 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1 Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Digital Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.1 The z-Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.2 The Frequency Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.3 Filter Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Signal Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3.1 Digital Processing of Analog Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3.2 Filter Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3.3 The Design of IIR Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3.4 The Design of FIR Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 The DFT and FFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.5 Advantages of DSP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.5.1 Applications of DSP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2 DISCRETE SIGNALS 8
2.0 Scope and Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.1 Discrete Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.1.1 Signal Measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2 Operations on Discrete Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2.1 Symmetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2.2 Even and Odd Parts of Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3 Decimation and Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.3.1 Decimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.3.2 Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.3.3 Fractional Delays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.4 Common Discrete Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4.1 Properties of the Discrete Impulse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4.2 Signal Representation by Impulses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4.3 Discrete Pulse Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4.4 The Discrete Sinc Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.4.5 Discrete Exponentials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.5 Discrete-Time Harmonics and Sinusoids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.5.1 Discrete-Time Harmonics Are Not Always Periodic in Time . . . . . . . . . . . . . . . 20
2.5.2 Discrete-Time Harmonics Are Always Periodic in Frequency . . . . . . . . . . . . . . . 21
2.6 The Sampling Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.6.1 Signal Reconstruction and Aliasing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.6.2 Reconstruction at Different Sampling Rates . . . . . . . . . . . . . . . . . . . . . . 25
2.7 An Introduction to Random Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.7.1 Probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.7.2 Measures for Random Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.7.3 The Chebyshev Inequality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.7.4 Probability Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.7.5 The Uniform Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.7.6 The Gaussian or Normal Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.7.7 Discrete Probability Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.7.8 Distributions for Deterministic Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.7.9 Stationary, Ergodic, and Pseudorandom Signals . . . . . . . . . . . . . . . . . . . . . . 34
2.7.10 Statistical Estimates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.7.11 Random Signal Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3 TIME-DOMAIN ANALYSIS 47
3.0 Scope and Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.1 Discrete-Time Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.1.1 Linearity and Superposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.1.2 Time Invariance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.1.3 LTI Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.1.4 Causality and Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.2 Digital Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.2.1 Digital Filter Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.2.2 Digital Filter Realization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.3 Response of Digital Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.3.1 Response of Nonrecursive Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.3.2 Response of Recursive Filters by Recursion . . . . . . . . . . . . . . . . . . . . . . . . 56
3.4 The Natural and Forced Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.4.1 The Single-Input Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.4.2 The Zero-Input Response and Zero-State Response . . . . . . . . . . . . . . . . . . . . 61
3.4.3 Solution of the General Difference Equation . . . . . . . . . . . . . . . . . . . . . 64
3.5 The Impulse Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.5.1 Impulse Response of Nonrecursive Filters . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.5.2 Impulse Response by Recursion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3.5.3 Impulse Response for the Single-Input Case . . . . . . . . . . . . . . . . . . . . . . . . 66
3.5.4 Impulse Response for the General Case . . . . . . . . . . . . . . . . . . . . . . . . . . 67
3.5.5 Recursive Forms for Nonrecursive Digital Filters . . . . . . . . . . . . . . . . . . . . . 68
3.5.6 The Response of Anti-Causal Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.6 System Representation in Various Forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.6.1 Difference Equations from the Impulse Response . . . . . . . . . . . . . . . . . . . 70
3.6.2 Difference Equations from Input-Output Data . . . . . . . . . . . . . . . . . . . . 70
3.7 Application-Oriented Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3.7.1 Moving Average Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3.7.2 Inverse Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
3.7.3 Echo and Reverb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
3.7.4 Periodic Sequences and Wave-Table Synthesis . . . . . . . . . . . . . . . . . . . . . . . 75
3.7.5 How Difference Equations Arise . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.8 Discrete Convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.8.1 Analytical Evaluation of Discrete Convolution . . . . . . . . . . . . . . . . . . . . . . . 77
3.9 Convolution Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
3.10 Convolution of Finite Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
3.10.1 The Sum-by-Column Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
3.10.2 The Fold, Shift, Multiply, and Sum Concept . . . . . . . . . . . . . . . . . . . . . . . . 82
3.10.3 Discrete Convolution, Multiplication, and Zero Insertion . . . . . . . . . . . . . . . . . 83
3.10.4 Impulse Response of LTI Systems in Cascade and Parallel . . . . . . . . . . . . . . . . 84
3.11 Stability and Causality of LTI Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
3.11.1 Stability of FIR Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
3.11.2 Stability of LTI Systems Described by Difference Equations . . . . . . . . . . . . 86
3.11.3 Stability of LTI Systems Described by the Impulse Response . . . . . . . . . . . . . . 86
3.11.4 Causality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
3.12 System Response to Periodic Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
3.13 Periodic Convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
3.13.1 Periodic Convolution By the Cyclic Method . . . . . . . . . . . . . . . . . . . . . . . . 92
3.13.2 Periodic Convolution By the Circulant Matrix . . . . . . . . . . . . . . . . . . . . . . 92
3.13.3 Regular Convolution from Periodic Convolution . . . . . . . . . . . . . . . . . . . . . . 94
3.14 Deconvolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
3.14.1 Deconvolution By Recursion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
3.15 Discrete Correlation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
3.15.1 Autocorrelation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
3.15.2 Periodic Discrete Correlation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
3.15.3 Matched Filtering and Target Ranging . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
3.16 Discrete Convolution and Transform Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
3.16.1 The z-Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
3.16.2 The Discrete-Time Fourier Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
4 z-TRANSFORM ANALYSIS 124
4.0 Scope and Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
4.1 The Two-Sided z-Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
4.1.1 What the z-Transform Reveals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
4.1.2 Some z-Transform Pairs Using the Defining Relation . . . . . . . . . . . . . . . . 125
4.1.3 More on the ROC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
4.2 Properties of the Two-Sided z-Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
4.3 Poles, Zeros, and the z-Plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
4.4 The Transfer Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
4.5 Interconnected Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
4.6 Transfer Function Realization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
4.6.1 Transposed Realization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
4.6.2 Cascaded and Parallel Realization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
4.7 Causality and Stability of LTI Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
4.7.1 Stability and the ROC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
4.7.2 Inverse Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
4.8 The Inverse z-Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
4.8.1 Inverse z-Transform of Finite Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . 145
4.8.2 Inverse z-Transform by Long Division . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
4.8.3 Inverse z-Transform from Partial Fractions . . . . . . . . . . . . . . . . . . . . . . . . 147
4.8.4 The ROC and Inversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
4.9 The One-Sided z-Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
4.9.1 The Right-Shift Property of the One-Sided z-Transform . . . . . . . . . . . . . . . . . 154
4.9.2 The Left-Shift Property of the One-Sided z-Transform . . . . . . . . . . . . . . . . . . 155
4.9.3 The Initial Value Theorem and Final Value Theorem . . . . . . . . . . . . . . . . . . . 156
4.9.4 The z-Transform of Switched Periodic Signals . . . . . . . . . . . . . . . . . . . . . . . 157
4.10 The z-Transform and System Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
4.10.1 Systems Described by Difference Equations . . . . . . . . . . . . . . . . . . . . . 158
4.10.2 Systems Described by the Transfer Function . . . . . . . . . . . . . . . . . . . . . . . . 159
4.10.3 Forced and Steady-State Response from the Transfer Function . . . . . . . . . . . . . 161
5 FREQUENCY DOMAIN ANALYSIS 176
5.0 Scope and Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
5.1 The DTFT from the z-Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
5.1.1 Symmetry of the Spectrum for a Real Signal . . . . . . . . . . . . . . . . . . . . . . . 177
5.1.2 Some DTFT Pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
5.1.3 Relating the z-Transform and DTFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
5.2 Properties of the DTFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
5.2.1 Folding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
5.2.2 Time Shift of x[n] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
5.2.3 Frequency Shift of X(F) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
5.2.4 Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
5.2.5 Convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
5.2.6 The times-n property: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
5.2.7 Parseval's relation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
5.2.8 Central ordinate theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
5.3 The DTFT of Discrete-Time Periodic Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
5.3.1 The DFS and DFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
5.4 The Inverse DTFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
5.5 The Frequency Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
5.6 System Analysis Using the DTFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
5.6.1 The Steady-State Response to Discrete-Time Harmonics . . . . . . . . . . . . . . . . . 195
5.7 Connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
5.8 Ideal Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
5.8.1 Frequency Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
5.8.2 Truncation and Windowing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
5.8.3 The Rectangular Window and its Spectrum . . . . . . . . . . . . . . . . . . . . . . . . 202
5.8.4 The Triangular Window and its Spectrum . . . . . . . . . . . . . . . . . . . . . . . . . 202
5.8.5 The Consequences of Windowing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
6 FILTER CONCEPTS 215
6.0 Scope and Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
6.1 Frequency Response and Filter Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . 215
6.1.1 Phase Delay and Group Delay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
6.1.2 Minimum-Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
6.1.3 Minimum-Phase Filters from the Magnitude Spectrum . . . . . . . . . . . . . . . . . . 216
6.1.4 The Frequency Response: A Graphical View . . . . . . . . . . . . . . . . . . . . . . . 217
6.1.5 The Rubber Sheet Analogy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
6.2 FIR Filters and Linear-Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
6.2.1 Pole-Zero Patterns of Linear-Phase Filters . . . . . . . . . . . . . . . . . . . . . . . . . 220
6.2.2 Types of Linear-Phase Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
6.2.3 Averaging Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
6.2.4 Zeros of Averaging Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
6.2.5 FIR Comb Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
6.3 IIR Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
6.3.1 First-Order Highpass Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
6.3.2 Pole-Zero Placement and Filter Design . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
6.3.3 Second-Order IIR Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
6.3.4 Digital Resonators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
6.3.5 Periodic Notch Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
6.4 Allpass Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
6.4.1 Transfer Function of Allpass Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
6.4.2 Stabilization of Unstable Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
6.4.3 Minimum-Phase Filters Using Allpass Filters . . . . . . . . . . . . . . . . . . . . . . . 238
6.4.4 Concluding Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
7 DIGITAL PROCESSING OF ANALOG SIGNALS 251
7.0 Scope and Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
7.1 Ideal Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
7.1.1 Sampling of Sinusoids and Periodic Signals . . . . . . . . . . . . . . . . . . . . . . . . 254
7.1.2 Application Example: The Sampling Oscilloscope . . . . . . . . . . . . . . . . . . . . . 256
7.1.3 Sampling of Bandpass Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
7.1.4 Natural Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
7.1.5 Zero-Order-Hold Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
7.2 Sampling, Interpolation, and Signal Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
7.2.1 Ideal Recovery and the Sinc Interpolating Function . . . . . . . . . . . . . . . . . . . . 262
7.2.2 Interpolating Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
7.2.3 Interpolation in Practice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
7.3 Sampling Rate Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
7.3.1 Zero Interpolation and Spectrum Compression . . . . . . . . . . . . . . . . . . . . . . 266
7.3.2 Sampling Rate Increase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
7.3.3 Sampling Rate Reduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
7.4 Quantization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
7.4.1 Uniform Quantizers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
7.4.2 Quantization Error and Quantization Noise . . . . . . . . . . . . . . . . . . . . . . . . 271
7.4.3 Quantization and Oversampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
7.5 Digital Processing of Analog Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
7.5.1 Multirate Signal Processing and Oversampling . . . . . . . . . . . . . . . . . . . . . . 276
7.5.2 Practical ADC Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
7.5.3 Anti-Aliasing Filter Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
7.5.4 Anti-Imaging Filter Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
7.6 Compact Disc Digital Audio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
7.6.1 Recording . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
7.6.2 Playback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
7.7 Dynamic Range Processors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
7.7.1 Companders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
7.8 Audio Equalizers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
7.8.1 Shelving Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
7.8.2 Graphic Equalizers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
7.8.3 Parametric Equalizers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
7.9 Digital Audio Effects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
7.9.1 Gated Reverb and Reverse Reverb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
7.9.2 Chorusing, Flanging, and Phasing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
7.9.3 Plucked-String Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
7.10 Digital Oscillators and DTMF Receivers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
7.10.1 DTMF Receivers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
8 DESIGN OF FIR FILTERS 311
8.0 Scope and Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
8.1.1 The Design Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
8.1.2 Techniques of Digital Filter Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
8.2 Symmetric Sequences and Linear Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
8.2.1 Classification of Linear-Phase Sequences . . . . . . . . . . . . . . . . . . . . . . 313
8.2.2 Applications of Linear-Phase Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . 315
8.2.3 FIR Filter Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
8.3 Window-Based Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
8.3.1 Characteristics of Window Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
8.3.2 Some Other Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
8.3.3 What Windowing Means . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
8.3.4 Some Design Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
8.3.5 Characteristics of the Windowed Spectrum . . . . . . . . . . . . . . . . . . . . . . . . 322
8.3.6 Selection of Window and Design Parameters . . . . . . . . . . . . . . . . . . . . . . . 323
8.3.7 Spectral Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
8.4 Half-Band FIR Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
8.5 FIR Filter Design by Frequency Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
8.5.1 Frequency Sampling and Windowing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
8.5.2 Implementing Frequency-Sampling FIR Filters . . . . . . . . . . . . . . . . . . . . . . 337
8.6 Design of Optimal Linear-Phase FIR Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
8.6.1 The Alternation Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
8.6.2 Optimal Half-Band Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
8.7 Application: Multistage Interpolation and Decimation . . . . . . . . . . . . . . . . . . . . . . 342
8.7.1 Multistage Decimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
8.8 Maximally Flat FIR Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
8.9 FIR Differentiators and Hilbert Transformers . . . . . . . . . . . . . . . . . . . . . . . 347
8.9.1 Hilbert Transformers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
8.9.2 Design of FIR Differentiators and Hilbert Transformers . . . . . . . . . . . . . . 348
8.10 Least Squares and Adaptive Signal Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
8.10.1 Adaptive Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
8.10.2 Applications of Adaptive Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
9 DESIGN OF IIR FILTERS 361
9.0 Scope and Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
9.2 IIR Filter Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
9.2.1 Equivalence of Analog and Digital Systems . . . . . . . . . . . . . . . . . . . . . . . . 361
9.2.2 The Effects of Aliasing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
9.2.3 Practical Mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
9.3 Response Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
9.3.1 The Impulse-Invariant Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
9.3.2 Modifications to Impulse-Invariant Design . . . . . . . . . . . . . . . . . . . . . . 368
9.4 The Matched z-Transform for Factored Forms . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
9.4.1 Modifications to Matched z-Transform Design . . . . . . . . . . . . . . . . . . . . 372
9.5 Mappings from Discrete Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
9.5.1 Mappings from Difference Algorithms . . . . . . . . . . . . . . . . . . . . . . . . 373
9.5.2 Stability Properties of the Backward-Difference Algorithm . . . . . . . . . . . . . 374
9.5.3 The Forward-Difference Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 374
9.5.4 Mappings from Integration Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
9.5.5 Stability Properties of Integration-Algorithm Mappings . . . . . . . . . . . . . . . . . 376
9.5.6 Frequency Response of Discrete Algorithms . . . . . . . . . . . . . . . . . . . . . . . . 378
9.5.7 Mappings from Rational Approximations . . . . . . . . . . . . . . . . . . . . . . . . . 381
9.6 The Bilinear Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
9.6.1 Using the Bilinear Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
9.7 Spectral Transformations for IIR Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
9.7.1 Digital-to-Digital Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
9.7.2 Direct (A2D) Transformations for Bilinear Design . . . . . . . . . . . . . . . . . . . . 386
9.7.3 Bilinear Transformation for Peaking and Notch Filters . . . . . . . . . . . . . . . . . . 386
9.8 Design Recipe for IIR Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
9.8.1 Finite-Word-Length Effects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
9.8.2 Effects of Coefficient Quantization . . . . . . . . . . . . . . . . . . . . . . . . . 394
9.8.3 Concluding Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
10 THE DISCRETE FOURIER TRANSFORM AND ITS APPLICATIONS 405
10.0 Scope and Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
10.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
10.1.1 Connections Between Frequency-Domain Transforms . . . . . . . . . . . . . . . . . . . 405
10.2 The DFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
10.3 Properties of the DFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
10.3.1 Symmetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
10.3.2 Central Ordinates and Special DFT Values . . . . . . . . . . . . . . . . . . . . . . . . 409
10.3.3 Circular Shift and Circular Symmetry . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
10.3.4 Convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
10.3.5 The FFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
10.3.6 Signal Replication and Spectrum Zero Interpolation . . . . . . . . . . . . . . . . . . . 413
10.3.7 Some Useful DFT Pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
10.4 Some Practical Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
10.5 Approximating the DTFT by the DFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
10.6 The DFT of Periodic Signals and the DFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
10.6.1 The Inverse DFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
10.6.2 Understanding the DFS Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
10.6.3 The DFS and DFT of Sinusoids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
10.6.4 The DFT and DFS of Sampled Periodic Signals . . . . . . . . . . . . . . . . . . . . . . 420
10.6.5 The Effects of Leakage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
10.7 The DFT of Nonperiodic Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
10.7.1 Spectral Spacing and Zero-Padding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
10.8 Spectral Smoothing by Time Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
10.8.1 Performance Characteristics of Windows . . . . . . . . . . . . . . . . . . . . . . . . . . 426
10.8.2 The Spectrum of Windowed Sinusoids . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
10.8.3 Resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
10.8.4 Detecting Hidden Periodicity Using the DFT . . . . . . . . . . . . . . . . . . . . . . . 432
10.9 Applications in Signal Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
10.9.1 Convolution of Long Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
10.9.2 Deconvolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435
10.9.3 Band-Limited Signal Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
10.9.4 The Discrete Hilbert Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
10.10 Spectrum Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
10.10.1 The Periodogram Estimate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
10.10.2 PSD Estimation by the Welch Method . . . . . . . . . . . . . . . . . . . . . . . 438
10.10.3 PSD Estimation by the Blackman-Tukey Method . . . . . . . . . . . . . . . . . 438
10.10.4 Non-Parametric System Identification . . . . . . . . . . . . . . . . . . . . . . . 439
10.10.5 Time-Frequency Plots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
10.11 The Cepstrum and Homomorphic Filtering . . . . . . . . . . . . . . . . . . . . . . . . 440
10.11.1 Homomorphic Filters and Deconvolution . . . . . . . . . . . . . . . . . . . . . . 441
10.11.2 Echo Detection and Cancellation . . . . . . . . . . . . . . . . . . . . . . . . . . 442
10.12 Optimal Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
10.13 Matrix Formulation of the DFT and IDFT . . . . . . . . . . . . . . . . . . . . . . . . 445
10.13.1 The IDFT from the Matrix Form . . . . . . . . . . . . . . . . . . . . . . . . . . 446
10.13.2 Using the DFT to Find the IDFT . . . . . . . . . . . . . . . . . . . . . . . . . . 447
10.14 The FFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
10.14.1 Some Fundamental Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 448
10.14.2 The Decimation-in-Frequency FFT Algorithm . . . . . . . . . . . . . . . . . . . 449
10.14.3 The Decimation-in-Time FFT Algorithm . . . . . . . . . . . . . . . . . . . . . . 451
10.14.4 Computational Cost . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
10.15 Why Equal Lengths for the DFT and IDFT? . . . . . . . . . . . . . . . . . . . . . . . 454
10.15.1 The Inverse DFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
10.15.2 How Unequal Lengths Affect the DFT Results . . . . . . . . . . . . . . . . . . . 456
A USEFUL CONCEPTS FROM ANALOG THEORY 470
A.0 Scope and Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470
A.1 Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470
A.2 System Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
A.2.1 The Zero-State Response and Zero-Input Response . . . . . . . . . . . . . . . . . . . . 476
A.2.2 Step Response and Impulse Response . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
A.3 Convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
A.3.1 Useful Convolution Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
A.4 The Laplace Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479
A.4.1 The Inverse Laplace Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 480
A.4.2 Interconnected Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
A.4.3 Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
A.4.4 The Laplace Transform and System Analysis . . . . . . . . . . . . . . . . . . . . . . . 482
A.4.5 The Steady-State Response to Harmonic Inputs . . . . . . . . . . . . . . . . . . . . . . 483
A.5 Fourier Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
A.5.1 Some Useful Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
A.6 The Fourier Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
A.6.1 Connections between Laplace and Fourier Transforms . . . . . . . . . . . . . . . . . . 486
A.6.2 Amplitude Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
A.6.3 Fourier Transform of Periodic Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . 491
A.6.4 Spectral Density . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
A.6.5 Ideal Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
A.6.6 Measures for Real Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
A.6.7 A First Order Lowpass Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
A.6.8 A Second-Order Lowpass Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
A.7 Bode Plots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
A.8 Classical Analog Filter Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
PREFACE
This book provides a modern and self-contained introduction to digital signal processing (DSP) and is written
with several audiences in mind. First and foremost, it is intended to serve as a textbook suitable for a
one-semester junior or senior level undergraduate course. To this extent, it includes the relevant topics covered
in a typical undergraduate curriculum and is supplemented by a vast number of worked examples, drill
exercises and problems. It also attempts to provide a broader perspective by introducing useful applications
and additional special topics in each chapter. These form the background for more advanced graduate courses
in this area and also allow the book to be used as a source of basic reference for professionals across various
disciplines interested in DSP.
Scope
The text stresses the fundamental principles and applications of digital signal processing. The relevant
concepts are explained and illustrated by worked examples and applications are introduced where appropriate.
Since many applications of DSP relate to the processing of analog signals, some familiarity with basic analog
theory, at the level taught in a typical undergraduate signals and systems course, is assumed and expected.
In order to make the book self-contained, the key concepts and results from analog theory that are relevant
to a study of DSP are outlined and included in an appendix. The topics covered in this book may be grouped
into the following broad areas:
1. The first chapter starts with a brief overview. An introduction to discrete signals, their representation
and their classication is provided in Chapter 2.
2. Chapter 3 details the analysis of digital filters in the time domain using the solution of difference
equations or the process of convolution that also serves to link the time domain and the frequency
domain.
3. Chapter 4 covers the analysis in the transformed domain using the z-transform that forms a powerful
tool for studying discrete-time signals and systems.
4. Chapter 5 describes the analysis of discrete signals and digital filters in the frequency domain using
the discrete-time Fourier transform (DTFT) that arises as a special case of the z-transform.
5. Chapter 6 introduces the jargon, terminology, and variety of digital filters, and compares the various
methods of studying them.
6. Chapter 7 discusses the digital processing of analog signals based on the concepts of sampling and
quantization and the spectral representation of sampled signals.
7. Chapter 8 and Chapter 9 describe the design of FIR and IIR filters for various applications using
well-established techniques.
8. Chapter 10 provides an introduction to the spectral analysis of both analog and discrete signals based
on numerical computation of the DFT and the FFT and its applications.
One of the concerns often voiced about undergraduate textbooks is the level of mathematical detail. This
book takes the approach that even though mathematical rigor need not be sacrificed, it does not have to get
in the way of understanding and applying useful DSP concepts. To this extent, the book attempts to preserve
a rational approach and include all the necessary mathematical details. However, whenever possible, the
results are also described and then applied to problem solving on the basis of simple heuristic explanations.
In each chapter, a short opening section outlines the objectives and topical coverage. Central concepts
are highlighted in review panels, illustrated by worked examples and followed by drill exercises with answers.
Many figures have been included to help the student grasp and visualize critical concepts. Results are
tabulated and summarized for easy reference and access. End-of-chapter problems include a variety of drills
and exercises. Application oriented problems require the use of computational resources such as Matlab.
Since our primary intent is to present the principles of digital signal processing, not software, we have made
no attempt to integrate Matlab into the text. This approach maintains the continuity and logical ow of
the textual material. However, for those interested, a suite of Matlab-based routines that may be used
to illustrate the principles and concepts presented in the book are available on the author's website. We
hasten to add two disclaimers. First, the choice of Matlab is not to be construed as an endorsement of
this product. Second, the routines are supplied in good faith and the author is not responsible for any
consequences arising from their use! A solutions manual for instructors is available from the publisher.
Acknowledgments
This book has gained immensely from the incisive, sometimes provoking, but always constructive, criticism
of the following reviewers:
Many other individuals have also contributed in various ways to this effort. Special thanks are due, in
particular, to
If you come across any errors in the text or discover any bugs in the software, we would appreciate hearing
from you. Any errata will be posted on the author's website.
Ashok Ambardar Michigan Technological University
Internet: http://www.ee.mtu.edu/faculty/akambard.html
e-mail: [email protected]
Chapter 1
OVERVIEW
1.0 Introduction
Few other technologies have revolutionized the world as profoundly as those based on digital signal processing.
For example, the technology of recorded music was, until recently, completely analog from end to end, and
the most important commercial source of recorded music used to be the LP (long-playing) record. The
advent of the digital compact disc changed all that in the span of just a few short years and made the
long-playing record practically obsolete. With the advent and proliferation of high-speed, low-cost computers
and powerful, user-friendly software packages, digital signal processing (DSP) has truly come of age. This
chapter provides an overview of the terminology of digital signal processing and of the connections between
the various topics and concepts covered in the text.
1.1 Signals
Our world is full of signals, both natural and man-made. Examples are the variation in air pressure when we
speak, the daily highs and lows in temperature, and the periodic electrical signals generated by the heart.
Signals represent information. Often, signals may not convey the required information directly and may
not be free from disturbances. It is in this context that signal processing forms the basis for enhancing,
extracting, storing, or transmitting useful information. Electrical signals perhaps offer the widest scope for
such manipulations. In fact, it is commonplace to convert signals to electrical form for processing.
The signals we encounter in practice are often very difficult to characterize. So, we choose simple
mathematical models to approximate their behavior. Such models also give us the ability to make predictions
about future signal behaviour. Of course, an added advantage of using models is that they are much easier
to generate and manipulate. What is more, we can gradually increase the complexity of our model to obtain
better approximations, if needed. The simplest signal models are a constant variation, an exponential decay
and a sinusoidal or periodic variation. Such signals form the building blocks from which we can develop
representations for more complex forms.
This book starts with a quick overview of discrete signals, how they arise and how they are modeled.
We review some typical measures (such as power and energy) used to characterize discrete signals and the
operations of interpolation and decimation which are often used to change the sampling rate of an already
sampled signal.
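As a rough illustration of these two operations (the book itself uses no particular software; the sample values and factor below are made up for the example), the following Python/NumPy sketch decimates a sequence by keeping every second sample and interpolates by inserting zeros, followed by a simple linear fill.

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # hypothetical sample values

# Decimation by 2: keep every second sample and discard the rest.
x_dec = x[::2]                                  # [1, 3, 5]

# Interpolation by 2 (zero insertion): place a zero between adjacent samples.
x_zero = np.zeros(2 * len(x))
x_zero[::2] = x                                 # [1, 0, 2, 0, 3, 0, ...]

# One simple way to fill in the zeros is linear interpolation between samples.
x_lin = np.interp(np.arange(2 * len(x)) / 2.0, np.arange(len(x), dtype=float), x)

print(x_dec, x_zero, x_lin, sep="\n")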
1.2 Digital Filters
The processing of discrete signals is accomplished by discrete-time systems, also called digital filters. In
the time domain, such systems may be modeled by difference equations in much the same way that analog
systems are modeled by differential equations. We concentrate on models of linear time-invariant (LTI)
systems whose difference equations have constant coefficients. The processing of discrete signals by such
systems can be achieved by resorting to well-known mathematical techniques. For input signals that can
be described as a sum of simpler forms, linearity allows us to find the response as the sum of the responses
to each of the simpler forms. This is superposition. Many systems are actually nonlinear. The study of
nonlinear systems often involves making simplifying assumptions, such as linearity. The system response can
also be obtained using convolution, a method based on superposition: if the response of a system is known
to a unit sample (or impulse) input, then it is also known to any arbitrary input which can be expressed as
a sum of such impulses.
Two important classes of digital filters are FIR (finite impulse response) filters whose impulse response
(response to an impulse input) is a finite sequence (lasts only for finite time) and IIR (infinite impulse
response) filters whose response to an impulse input lasts forever.
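To make these ideas concrete, here is a small Python/NumPy sketch (an illustration only; the two filters are hypothetical examples, not taken from the text). A 3-point moving average is an FIR filter whose impulse response lasts three samples, while the recursive filter y[n] = 0.5 y[n-1] + x[n] is an IIR filter whose impulse response (0.5)^n decays but never quite ends; the response to an arbitrary input then follows by convolution with the impulse response.

import numpy as np

N = 20
delta = np.zeros(N)
delta[0] = 1.0                       # unit sample (impulse) input

# FIR example: 3-point moving average; its impulse response has only 3 terms.
h_fir = np.full(3, 1.0 / 3.0)

# IIR example: y[n] = 0.5*y[n-1] + x[n]; impulse response found by recursion.
h_iir = np.zeros(N)
y_prev = 0.0
for n in range(N):
    y_prev = 0.5 * y_prev + delta[n]
    h_iir[n] = y_prev                # equals (0.5)**n, never exactly zero

# Superposition in action: the response to any input is the discrete
# convolution of that input with the impulse response.
x = np.array([1.0, 2.0, 0.0, -1.0])  # an arbitrary test input
y = np.convolve(x, h_fir)
print(h_iir[:5], y, sep="\n")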
1.2.1 The z-Transform
The z-transform is a powerful method of analysis for discrete signals and systems. It is analogous to the
Laplace transform used to study analog systems. The transfer function of an LTI system is a ratio of
polynomials in the complex variable z. The roots of the numerator polynomial are called zeros, and the roots
of the denominator polynomial are called poles. The pole-zero description of a transfer function is quite useful if
we want a qualitative picture of the frequency response. For example, the frequency response goes to zero if
z equals one of the zero locations and becomes unbounded if z equals one of the pole locations.
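A brief numerical sketch of this qualitative picture, using Python with NumPy and SciPy (the transfer function below is a made-up example, not one from the text): its zeros lie at z = +1 and z = -1, so the response dips to zero at the corresponding frequencies, while the poles at z = +0.9j and z = -0.9j produce a sharp peak near one quarter of the sampling rate.

import numpy as np
from scipy import signal

# Hypothetical H(z) = (1 - z^-2) / (1 + 0.81 z^-2), coefficients in powers of z^-1.
b = [1.0, 0.0, -1.0]
a = [1.0, 0.0, 0.81]

zeros = np.roots(b)                  # z = +1 and z = -1
poles = np.roots(a)                  # z = +0.9j and z = -0.9j
print(zeros, poles)

# Frequency response: H(z) evaluated on the unit circle z = exp(j*w).
w, H = signal.freqz(b, a, worN=[0.0, np.pi / 2, np.pi])
print(np.abs(H))                     # roughly [0, 10.5, 0]: zero at the zeros,
                                     # a large peak near the pole angle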
1.2.2 The Frequency Domain
It turns out that discrete sinusoids and harmonic signals differ from their analog cousins in some striking
ways. A discrete sinusoid is not always periodic in time (it is periodic only if its digital frequency is a rational
fraction). Yet it always has a periodic spectrum. An
important consequence of this result is that if the spectrum is periodic for a sampled sinusoid, it should also
be periodic for a sampled combination of sinusoids. This concept forms the basis for the frequency domain
description of discrete signals called the Discrete-Time Fourier Transform (DTFT). And since analog
signals can be described as a combination of sinusoids (periodic ones by their Fourier series and others by
their Fourier transform), their sampled combinations (and consequently any sampled signal) have a periodic
spectrum in the frequency domain. The central period corresponds to the true spectrum of the analog signal
if the sampling rate exceeds the Nyquist rate.
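This periodicity is easy to check numerically. In the sketch below (Python/NumPy, with a made-up finite-length signal), the DTFT X(F) = sum over n of x[n] exp(-j 2 pi F n) is evaluated at a digital frequency F and at F + 1; the two values coincide because exp(-j 2 pi n) = 1 for every integer n, so the spectrum repeats with period 1.

import numpy as np

x = np.array([1.0, -0.5, 2.0, 0.25])        # hypothetical discrete signal
n = np.arange(len(x))

def dtft(x, n, F):
    # X(F) = sum over n of x[n] * exp(-j*2*pi*F*n)
    return np.sum(x * np.exp(-2j * np.pi * F * n))

F = 0.18                                    # an arbitrary digital frequency
print(dtft(x, n, F))
print(dtft(x, n, F + 1.0))                  # identical: the spectrum has period 1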
1.2.3 Filter Concepts
The term filter is often used to denote systems that process the input in a specified way. In this context,
filtering describes a signal-processing operation that allows signal enhancement, noise reduction, or increased
signal-to-noise ratio. Systems for the processing of discrete-time signals are also called digital filters.
Depending on the requirements and application, the analysis of a digital filter may be carried out in the time
domain, the z-domain or the frequency domain. A common application of digital filters is to modify the
frequency response in some specified way. An ideal lowpass filter passes frequencies up to a specified value
and totally blocks all others. Its spectrum shows an abrupt transition from unity (perfect transmission) in
the passband to zero (perfect suppression) in the stopband. An important consideration is that a symmetric
impulse response sequence possesses linear phase (in its frequency response) which results only in a constant
delay with no amplitude distortion. An ideal lowpass filter possesses linear phase because its impulse response
happens to be a symmetric sequence but unfortunately, it cannot be realized in practice.
One way to approximate an ideal lowpass filter is by symmetric truncation of its impulse response (which
ensures linear phase). Truncation is equivalent to multiplying (windowing) the impulse response by a finite
duration sequence (window) of unit samples. The abrupt truncation imposed by such a window results in
an overshoot and oscillation in the frequency response that persists no matter how large the truncation
index. To eliminate overshoot and reduce the oscillations, we use tapered windows. The impulse response
and frequency response of highpass, bandpass and bandstop filters may be related to those of a lowpass filter
using frequency transformations based on the properties of the DTFT.
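As a sketch of the truncation-and-windowing idea (Python/NumPy; the cutoff frequency and filter length below are arbitrary choices for illustration), the ideal lowpass impulse response is a sinc sequence; truncating it abruptly (a rectangular window) leaves an overshoot of roughly 9% near the band edge no matter how long the filter, while a tapered window such as the Hamming window removes the overshoot at the cost of a wider transition.

import numpy as np

Fc, M = 0.2, 25                              # cutoff (cycles/sample) and half-length
n = np.arange(-M, M + 1)
h_ideal = 2 * Fc * np.sinc(2 * Fc * n)       # symmetric (linear-phase) sinc sequence

h_rect = h_ideal                             # abrupt (rectangular) truncation
h_hamm = h_ideal * np.hamming(len(n))        # tapered (Hamming) window

H_rect = np.abs(np.fft.fft(h_rect, 1024))    # magnitude spectra on a dense grid
H_hamm = np.abs(np.fft.fft(h_hamm, 1024))
print(H_rect.max(), H_hamm.max())            # about 1.09 versus close to 1.0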
Filters that possess constant gain but whose phase varies with frequency are called allpass filters and
may be used to modify the phase characteristics of a system. A filter whose gain is zero at a selected
frequency is called a notch filter and may be used to remove the unwanted frequency from a signal. A filter
whose gain is zero at multiples of a selected frequency is called a comb filter and may be used to remove an
unwanted frequency and its harmonics from a signal.
1.3 Signal Processing
Two conceptual schemes for the processing of signals are illustrated in Figure 1.1. The digital processing
of analog signals requires that we use an analog-to-digital converter (ADC) for sampling the analog signal
prior to processing and a digital-to-analog converter (DAC) to convert the processed digital signal back to
analog form.
[Figure 1.1 Analog and digital signal processing. Top: analog signal processing (analog signal -> analog signal
processor -> analog signal). Bottom: digital signal processing of analog signals (analog signal -> ADC ->
digital signal -> digital signal processor -> digital signal -> DAC -> analog signal).]
1.3.1 Digital Processing of Analog Signals
Many DSP applications involve the processing of digital signals obtained by sampling analog signals and
the subsequent reconstruction of analog signals from their samples. For example, the music you hear from
your compact disc (CD) player is due to changes in the air pressure caused by the vibration of the speaker
diaphragm. It is an analog signal because the pressure variation is a continuous function of time. However,
the information stored on the compact disk is in digital form. It must be processed and converted to analog
form before you can hear the music. A record of the yearly increase in the world population describes time
measured in increments of one (year) while the population increase is measured in increments of one (person).
It is a digital signal with discrete values for both time and population.
For digital signal processing we need digital signals. To process an analog signal by digital means, we
must convert it to a digital signal in two steps. First, we must sample it, typically at uniform intervals t_s
(every 2 ms, for example). The discrete quantity nt_s is related to the integer index n. Next, we must
quantize the sample values (amplitudes) (by rounding to the nearest millivolt, for example). The central
concept in the digital processing of analog signals is that the sampled signal must be a unique representation
of the underlying analog signal. Even though sampling leads to a potential loss of information, all is not lost!
Often, it turns out that if we choose the sampling interval wisely, the processing of an analog signal is entirely
equivalent to the processing of the corresponding digital signal; there is no loss of information! This is one of
the wonders of the sampling theorem that makes digital signal processing such an attractive option. For a
unique correspondence between an analog signal and the version reconstructed from its samples, the sampling
rate S must exceed twice the highest signal frequency f_0. The value S = 2f_0 is called the Nyquist sampling
rate. If the sampling rate is less than the Nyquist rate, a phenomenon known as aliasing manifests itself.
Components of the analog signal at high frequencies appear at (alias to) lower frequencies in the sampled
signal. This results in a sampled signal with a smaller highest frequency. Aliasing eects are impossible to
undo once the samples are acquired. It is thus commonplace to band-limit the signal before sampling (using
lowpass lters).
Numerical processing using digital computers requires finite data with finite precision. We must limit
signal amplitudes to a finite number of levels. This process, called quantization, produces nonlinear effects
that can be described only in statistical terms. Quantization also leads to an irreversible loss of information
and is typically considered only in the final stage of any design.
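As a rough numerical sketch of these two steps, the snippet below samples a sinusoid and rounds its values to a fixed step; the 2-ms interval, signal, and 1-mV step are only assumptions chosen for illustration, not values prescribed by the text.

```python
import numpy as np

# Assumed values for illustration: a 10-Hz cosine sampled every 2 ms (S = 500 Hz)
ts = 2e-3                             # sampling interval (s)
n = np.arange(50)                     # integer sample index
x = np.cos(2 * np.pi * 10 * n * ts)   # sampled signal x[n] = x(n*ts)

# Quantize to the nearest millivolt (step = 0.001), mimicking rounding in an ADC
step = 1e-3
xq = step * np.round(x / step)

print(np.max(np.abs(x - xq)))         # quantization error never exceeds step/2
```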
A typical system for the digital processing of analog signals consists of the following:
An analog lowpass pre-filter or anti-aliasing filter, which limits the highest signal frequency to ensure
freedom from aliasing.
A sampler, which operates above the Nyquist sampling rate.
A quantizer, which quantizes the sampled signal values to a finite number of levels. Currently, 16-bit
quantizers are quite commonplace.
An encoder, which converts the quantized signal values to a string of binary bits or zeros and ones (words)
whose length is determined by the number of quantization levels of the quantizer.
The digital processing system itself (hardware or software), which processes the encoded digital signal (or
bit stream) in a desired fashion.
A decoder, which converts the processed bit stream to a DT signal with quantized signal values.
A reconstruction filter, which reconstructs a staircase approximation of the discrete-time signal.
A lowpass analog anti-imaging filter, which extracts the central period from the periodic spectrum, removes
the unwanted replicas, and results in a smoothed reconstructed signal.
1.3.2 Filter Design
The design of filters is typically based on a set of specifications in the frequency domain corresponding to
the magnitude spectrum or filter gain. The design of IIR filters typically starts with a lowpass prototype
from which other forms may be readily developed using frequency transformations.
1.3.3 The Design of IIR Filters
The design of IIR filters starts with an analog lowpass prototype based on the given specifications. Classical
analog filters include Butterworth (maximally flat passband), Chebyshev I (rippled passband), Chebyshev
II (rippled stopband), and elliptic (rippled passband and stopband). The analog lowpass prototype is then
converted to a lowpass digital filter using an appropriate mapping, and finally to the required form using
an appropriate spectral transformation. Practical mappings are based on response invariance or equivalence
of ideal operations, such as integration, and their numerical counterparts. Not all of these avoid the effects
of aliasing. The most commonly used mapping is based on the trapezoidal rule for numerical integration
and is called the bilinear transformation. It compresses the entire infinite analog frequency range into a
finite range and thus avoids aliasing at the expense of warping (distorting) the analog frequencies. We can
compensate for this warping if we prewarp (stretch) the analog frequency specifications before designing
the analog filter.
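As one possible sketch of this design path (analog Butterworth prototype, prewarping, bilinear mapping), the snippet below uses SciPy's filter-design helpers; the sampling rate, cutoff, and order are assumptions made only for the illustration.

```python
import numpy as np
from scipy import signal

S = 1000.0                 # assumed sampling rate (Hz)
fc = 100.0                 # assumed passband edge (Hz)

# Prewarp the analog edge so the bilinear mapping lands it at the right digital frequency
wc_warped = 2 * S * np.tan(np.pi * fc / S)

# Analog Butterworth (maximally flat) lowpass prototype of order 4
b_a, a_a = signal.butter(4, wc_warped, btype='low', analog=True)

# Bilinear transformation (trapezoidal rule) to a digital filter
b_d, a_d = signal.bilinear(b_a, a_a, fs=S)

w, h = signal.freqz(b_d, a_d, worN=512, fs=S)
print(abs(h[np.argmin(abs(w - fc))]))   # gain near fc is close to 1/sqrt(2)
```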
1.3.4 The Design of FIR Filters
FIR filters are inherently stable and can be designed with linear phase, leading to no phase distortion, but
their realization often involves a large filter length to meet given requirements. Their design is typically
based on selecting a symmetric (linear-phase) impulse response sequence of the smallest length that meets
design specifications and involves iterative techniques. Even though the spectrum of the truncated ideal
filter is, in fact, the best approximation (in the mean square sense) compared to the spectrum of any other
filter of the same length, it shows the undesirable oscillations and overshoot which can be eliminated by
modifying (windowing) the impulse response sequence using tapered windows. The smallest length that
meets specifications depends on the choice of window and is often estimated by empirical means.
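A minimal sketch of window-based FIR design: scipy.signal.firwin windows a truncated ideal (sinc) impulse response in exactly this spirit. The length, cutoff, sampling rate, and window below are assumptions chosen only for the example.

```python
import numpy as np
from scipy import signal

S = 8000.0         # assumed sampling rate (Hz)
fc = 1000.0        # assumed cutoff (Hz)
numtaps = 51       # assumed odd length, giving a symmetric, linear-phase filter

# Truncated ideal lowpass impulse response shaped by a Hamming window
h = signal.firwin(numtaps, cutoff=fc, window='hamming', fs=S)

w, H = signal.freqz(h, worN=1024, fs=S)
print(h[:3], np.max(np.abs(H)))    # symmetric taps; passband gain near 1
```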
1.4 The DFT and FFT
The periodicity of the DTFT is a consequence of the fundamental result that sampling a signal in one
domain leads to periodicity in the other. Just as a periodic signal has a discrete spectrum, a discrete-time
signal has a periodic spectrum. This duality also characterizes several other transforms. If the time signal
is both discrete and periodic, its spectrum is also discrete and periodic and describes the discrete Fourier
transform (DFT). The DFT is essentially the DTFT evaluated at a finite number of frequencies and is
also periodic. The DFT can be used to approximate the spectrum of analog signals from their samples,
provided the relations are understood in their proper context using the notion of implied periodicity. The
Fast Fourier Transform (FFT) is a set of fast practical algorithms for computing the DFT. The DFT and
FFT find extensive applications in fast convolution, signal interpolation, spectrum estimation, and transfer
function estimation.
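A small numerical sketch of this point: the DFT of a sampled sinusoid (computed here with NumPy's FFT, under assumed values for the rate and length) concentrates at the bin corresponding to its digital frequency.

```python
import numpy as np

S, N = 400, 80                      # assumed sampling rate (Hz) and number of samples
n = np.arange(N)
x = np.cos(2 * np.pi * 50 * n / S)  # 50-Hz cosine sampled at S Hz (F = 1/8)

X = np.fft.fft(x)                   # the DFT, computed by a fast (FFT) algorithm
k = np.argmax(np.abs(X[:N // 2]))   # strongest nonnegative-frequency bin
print(k, k * S / N)                 # bin 10, i.e. 50 Hz
```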
1.5 Advantages of DSP
In situations where signals are encountered in digital form, their processing is performed digitally. In other
situations that relate to the processing of analog signals, DSP offers many advantages.
Processing
DSP offers a wide variety of processing techniques that can be implemented easily and efficiently. Some
techniques (such as processing by linear-phase filters) have no counterpart in the analog domain.
Storage
Digital data can be stored and later retrieved with no degradation or loss of information. Data recorded by
analog devices is subject to the noise inherent in the recording media (such as tape) and degradation due to
aging and environmental effects.
Transmission
Digital signals are more robust and offer much better noise immunity during transmission as compared to
analog signals.
Implementation
A circuit for processing analog signals is typically designed for a specific application. It is sensitive to
component tolerances, aging, and environmental effects (such as changes in temperature and humidity)
and not easily reproducible. A digital filter, on the other hand, is extremely easy to implement and highly
reproducible. It may be designed to perform a variety of tasks without replacing or modifying any hardware
but simply by changing the filter coefficients on the fly.
Cost
With the proliferation of low-cost, high-speed digital computers, DSP offers effective alternatives for a wide
variety of applications. High-frequency analog applications may still require analog signal processing, but their
number continues to shrink. As long as the criteria of the sampling theorem are satisfied and quantization
is carried out to the desired precision (using the devices available), the digital processing of analog signals
has become the method of choice unless compelling reasons dictate otherwise.
In the early days of the digital revolution, DSP did suffer from disadvantages such as speed, cost, and
quantization effects, but these continue to pale into insignificance with advances in semiconductor technology
and processing and computing power.
1.5.1 Applications of DSP
Digital signal processing finds applications in almost every conceivable field. Its impact on consumer electronics
is evidenced by the proliferation of digital communication, digital audio, digital (high-definition) television,
and digital imaging (cameras). Its applications to biomedical signal processing include the enhancement and
interpretation of tomographic images and the analysis of ECG and EEG signals. Space applications include
satellite navigation and guidance systems and the analysis of satellite imagery obtained by various means. And,
the list goes on and continues to grow.
Chapter 2
DISCRETE SIGNALS
2.0 Scope and Objectives
This chapter begins with an overview of discrete signals. It starts with various ways of signal classication,
shows how discrete signals can be manipulated by various operations, and quantifies the measures used to
characterize such signals. It introduces the concept of sampling and describes the sampling theorem as the
basis for sampling analog signals without loss of information. It concludes with an introduction to random
signals.
2.1 Discrete Signals
Discrete signals (such as the annual population) may arise naturally or as a consequence of sampling continuous
signals (typically at a uniform sampling interval t_s). A sampled or discrete signal x[n] is just an
ordered sequence of values corresponding to the integer index n that embodies the time history of the signal.
It contains no direct information about the sampling interval t_s, except through the index n of the sample
locations. A discrete signal x[n] is plotted as lines against the index n. When comparing analog signals with
their sampled versions, we shall assume that the origin t = 0 also corresponds to the sampling instant n = 0.
We need information about t_s only in a few situations, such as plotting the signal explicitly against time t
(at t = nt_s) or approximating the area of the underlying analog signal from its samples (as t_s Σ x[n]).
REVIEW PANEL 2.1
Notation for a Numeric Sequence x[n]
A marker (⇓) indicates the origin n = 0. Example: x[n] = {1, 2, ⇓4, 8, . . .}.
Ellipses (. . .) denote infinite extent on either side. Example: x[n] = {. . . , 2, 4, ⇓6, 8, . . .}
A discrete signal x[n] is called right-sided if it is zero for n < N (where N is finite), causal if it is zero
for n < 0, left-sided if it is zero for n > N, and anti-causal if it is zero for n ≥ 0.
REVIEW PANEL 2.2
Discrete Signals Can Be Left-Sided, Right-Sided, Causal, or Anti-Causal
[Sketches: an anti-causal signal (zero for n ≥ 0), a left-sided signal (zero for n > N), a right-sided signal (zero for n < N), and a causal signal (zero for n < 0)]
A discrete periodic signal repeats every N samples and is described by
x[n] = x[n ± kN],  k = 0, 1, 2, 3, . . .    (2.1)
The period N is the smallest number of samples that repeats. Unlike its analog counterpart, the period N of
discrete signals is always an integer. The common period of a linear combination of periodic discrete signals
is given by the least common multiple (LCM) of the individual periods.
REVIEW PANEL 2.3
The Period of a Discrete Periodic Signal Is the Number of Samples per Period
The period N is always an integer. For combinations, N is the LCM of the individual periods.
DRILL PROBLEM 2.1
(a) Let x[n] = {. . . , ⇓1, 2, 0, 0, 4, 1, 2, 0, 0, 4, 1, 2, 0, 0, 4, . . .}. What is its period?
(b) Let x[n] = {. . . , ⇓1, 2, 1, 2, 1, 2, . . .} and y[n] = {. . . , ⇓1, 2, 3, 1, 2, 3, . . .}.
Let g[n] = x[n] + y[n]. What is the period of g[n]? What are the sample values in one period of g[n]?
Answers: (a) 5 (b) N = 6; one period is g[n] = {⇓2, 4, 4, 3, 3, 5}
2.1.1 Signal Measures
Signal measures for discrete signals are often based on summations. Summation is the discrete-time equivalent
of integration. The discrete sum S_D, the absolute sum S_A, and the cumulative sum (running sum)
s_C[n] of a signal x[n] are defined by
S_D = Σ_{n=−∞}^{∞} x[n]    S_A = Σ_{n=−∞}^{∞} |x[n]|    s_C[n] = Σ_{k=−∞}^{n} x[k]    (2.2)
Signals for which the absolute sum Σ |x[n]| is finite are called absolutely summable. For nonperiodic signals,
the signal energy E is a useful measure. It is defined as the sum of the squares of the signal values:
E = Σ_{n=−∞}^{∞} |x[n]|²    (2.3)
The absolute value allows us to extend this relation to complex-valued signals. Measures for periodic signals
are based on averages, since their signal energy is infinite. The average value x_av and signal power P of a
periodic signal x[n] with period N are defined as the average sum per period and average energy per period,
respectively:
x_av = (1/N) Σ_{n=0}^{N−1} x[n]    P = (1/N) Σ_{n=0}^{N−1} |x[n]|²    (2.4)
Note that the index runs from n = 0 to n = N − 1 and includes all N samples in one period. Only for
nonperiodic signals is it useful to use the limiting forms
x_av = lim_{M→∞} (1/(2M + 1)) Σ_{n=−M}^{M} x[n]    P = lim_{M→∞} (1/(2M + 1)) Σ_{n=−M}^{M} |x[n]|²    (2.5)
Signals with finite energy are called energy signals (or square summable). Signals with finite power are called
power signals. All periodic signals are power signals.
REVIEW PANEL 2.4
Energy and Power in Discrete Signals
Energy: E = Σ_{n=−∞}^{∞} |x[n]|²    Power (if periodic with period N): P = (1/N) Σ_{n=0}^{N−1} |x[n]|²
EXAMPLE 2.1 (Signal Energy and Power)
(a) Find the energy in the signal x[n] = 3(0.5)ⁿ, n ≥ 0.
This describes a one-sided decaying exponential. Its signal energy is
E = Σ_{n=−∞}^{∞} x²[n] = Σ_{n=0}^{∞} |3(0.5)ⁿ|² = Σ_{n=0}^{∞} 9(0.25)ⁿ = 9/(1 − 0.25) = 12 J
Note: Σ_{n=0}^{∞} αⁿ = 1/(1 − α), |α| < 1.
(b) Consider the periodic signal x[n] = 6 cos(2πn/4), whose period is N = 4.
One period of this signal is x₁[n] = {⇓6, 0, −6, 0}. The average value and signal power of x[n] are
x_av = (1/4) Σ_{n=0}^{3} x[n] = 0    P = (1/4) Σ_{n=0}^{3} x²[n] = (1/4)(36 + 36) = 18 W
(c) Consider the periodic signal x[n] = 6e^{j2πn/4}, whose period is N = 4.
This signal is complex-valued, with |x[n]| = 6. One period of this signal is x₁[n] = {⇓6, j6, −6, −j6}.
The signal power of x[n] is
P = (1/4) Σ_{n=0}^{3} |x[n]|² = (1/4)(36 + 36 + 36 + 36) = 36 W
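These results are easy to check numerically; the short sketch below truncates the infinite energy sum at a point where the remaining terms are negligible.

```python
import numpy as np

# (a) Energy of x[n] = 3(0.5)^n, n >= 0 (sum truncated at n = 100)
n = np.arange(101)
x = 3 * 0.5 ** n
print(np.sum(np.abs(x) ** 2))        # approximately 12

# (b) Power of the periodic signal x[n] = 6 cos(2*pi*n/4), period N = 4
N = 4
n = np.arange(N)
xp = 6 * np.cos(2 * np.pi * n / N)
print(np.sum(np.abs(xp) ** 2) / N)   # 18
```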
DRILL PROBLEM 2.2
(a) Let x[n] = {⇓1, 2, 0, 0, 4}. What is the energy in x[n]?
(b) Let x[n] = {. . . , ⇓1, 2, 0, 0, 4, 1, 2, 0, 0, 4, 1, 2, 0, 0, 4, . . .}. What is the power in x[n]?
(c) Let x[n] = {. . . , ⇓1, 2, 1, 2, 1, 2, . . .} and y[n] = {. . . , ⇓1, 2, 2, 1, 2, 2, . . .}.
Let g[n] = x[n] + y[n]. What is the power in x[n], y[n], and g[n]? What is the average value of g[n]?
Answers: (a) 21 (b) 4.2 (c) 2.5, 3, 10, 3
2.2 Operations on Discrete Signals
Common operations on discrete signals include element-wise addition and multiplication. Two other useful
operations are shifting and folding (or time reversal).
Time Shift: The signal y[n] = x[n − α] describes a delayed version of x[n] for α > 0. In other words, if x[n]
starts at n = N, then its shifted version y[n] = x[n − α] starts at n = N + α. Thus, the signal y[n] = x[n − 2]
is a delayed (shifted right by 2) version of x[n], and the signal g[n] = x[n + 2] is an advanced (shifted left
by 2) version of x[n]. A useful consistency check for sketching shifted signals is based on the fact that if
y[n] = x[n − α], a sample of x[n] at the original index n gets relocated to the new index n_N based on the
operation n = n_N − α.
Folding: The signal y[n] = x[−n] represents a folded version of x[n], a mirror image of the signal x[n] about
the origin n = 0. The signal y[n] = x[−n − α] may be obtained from x[n] in one of two ways:
(a) x[n] → delay (shift right) by α → x[n − α] → fold → x[−n − α].
(b) x[n] → fold → x[−n] → advance (shift left) by α → x[−n − α].
In either case, a sample of x[n] at the original index n will be plotted at a new index n_N given by
n = −n_N − α, and this can serve as a consistency check in sketches.
REVIEW PANEL 2.5
Time Delay Means x[n] → x[n − M], M > 0, and Folding Means x[n] → x[−n]
You can generate x[−n − M] from x[n] in one of two ways:
1. Shift right M units: x[n] → x[n − M]. Then fold: x[n − M] → x[−n − M].
2. Fold: x[n] → x[−n]. Then shift left M units: x[−n] → x[−(n + M)] = x[−n − M].
Check: Use n_N = −n − M to confirm the new locations n_N for the origin n = 0 and the end points of x[n].
EXAMPLE 2.2 (Operations on Discrete Signals)
Let x[n] = {2, 3, ⇓4, 5, 6, 7}. Find and sketch the following:
y[n] = x[n − 3], f[n] = x[n + 2], g[n] = x[−n], h[n] = x[−n + 1], s[n] = x[−n − 2]
Here is how we obtain the various signals:
y[n] = x[n − 3] = {⇓0, 2, 3, 4, 5, 6, 7} (shift x[n] right 3 units)
f[n] = x[n + 2] = {2, 3, 4, 5, ⇓6, 7} (shift x[n] left 2 units)
g[n] = x[−n] = {7, 6, 5, ⇓4, 3, 2} (fold x[n] about n = 0)
h[n] = x[−n + 1] = {7, 6, ⇓5, 4, 3, 2} (fold x[n], then delay by 1)
s[n] = x[−n − 2] = {7, 6, 5, 4, 3, ⇓2} (fold x[n], then advance by 2)
Refer to Figure E2.2 for the sketches.
[Figure E2.2 The signals for Example 2.2: x[n], x[n − 3], x[n + 2], x[−n], x[−n + 1], and x[−n − 2]]
DRILL PROBLEM 2.3
(a) Let x[n] = {⇓1, 4, 2, 3}. Express g[n] = x[−n + 2] as a sequence and sketch.
(b) Let x[n] = {3, ⇓1, 4}. Express y[n] = x[n + 2] and f[n] = x[−n − 1] as sequences and sketch.
Answers: (a) g[n] = {3, ⇓2, 4, 1} (b) y[n] = {3, 1, 4, ⇓0}  f[n] = {4, 1, ⇓3}
2.2.1 Symmetry
If a signal x[n] is identical to its mirror image x[−n], it is called an even symmetric signal. If x[n]
differs from its mirror image x[−n] only in sign, it is called an odd symmetric or antisymmetric signal.
Mathematically,
x_e[n] = x_e[−n]    x_o[n] = −x_o[−n]    (2.6)
In either case, the signal extends over symmetric limits −N ≤ n ≤ N. For an odd symmetric signal, note
that x_o[0] = 0 and the sum of samples in x_o[n] over symmetric limits (−M, M) equals zero:
Σ_{k=−M}^{M} x_o[k] = 0    (2.7)
REVIEW PANEL 2.6
Characteristics of Symmetric Signals
Even symmetry: x_e[n] = x_e[−n].    Odd symmetry: x_o[n] = −x_o[−n], with x_o[0] = 0.
[Sketches of an even symmetric signal and an odd symmetric signal]
2.2.2 Even and Odd Parts of Signals
Even symmetry and odd symmetry are mutually exclusive. Consequently, if a signal x[n] is formed by
summing an even symmetric signal x_e[n] and an odd symmetric signal x_o[n], it will be devoid of either
symmetry. Turning things around, any signal x[n] may be expressed as the sum of an even symmetric part
x_e[n] and an odd symmetric part x_o[n]:
x[n] = x_e[n] + x_o[n]    (2.8)
To find x_e[n] and x_o[n] from x[n], we fold x[n] and invoke symmetry to get
x[−n] = x_e[−n] + x_o[−n] = x_e[n] − x_o[n]    (2.9)
Adding and subtracting the two preceding equations, we obtain
x_e[n] = 0.5x[n] + 0.5x[−n]    x_o[n] = 0.5x[n] − 0.5x[−n]    (2.10)
This means that the even part x_e[n] equals half the sum of the original and folded versions, and the odd part
x_o[n] equals half the difference between the original and folded versions. Naturally, if a signal x[n] has even
symmetry, its odd part x_o[n] will equal zero, and if x[n] has odd symmetry, its even part x_e[n] will equal
zero.
REVIEW PANEL 2.7
Any Discrete Signal Is the Sum of an Even Symmetric and an Odd Symmetric Part
x[n] = x_e[n] + x_o[n], where x_e[n] = 0.5x[n] + 0.5x[−n] and x_o[n] = 0.5x[n] − 0.5x[−n]
How to implement: Graphically, if possible. How to check: Does x_e[n] + x_o[n] give x[n]?
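A short numerical sketch of this decomposition, using the same zero-padded sequence as the example that follows so the results can be compared directly.

```python
import numpy as np

# x[n] over symmetric limits n = -2..2 (zero-padded where needed)
x = np.array([4.0, 2.0, 4.0, 6.0, 0.0])
xf = x[::-1]                      # folded version x[-n]

xe = 0.5 * x + 0.5 * xf           # even part
xo = 0.5 * x - 0.5 * xf           # odd part

print(xe)                         # [2. 4. 4. 4. 2.]
print(xo)                         # [ 2. -2.  0.  2. -2.]
print(np.allclose(x, xe + xo))    # True: the parts recover x[n]
```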
EXAMPLE 2.3 (Signal Symmetry)
(a) Let x[n] = {4, 2, ⇓4, 6}. Find and sketch its odd and even parts.
We zero-pad the signal to x[n] = {4, 2, ⇓4, 6, 0} so that it covers symmetric limits. Then
0.5x[n] = {2, 1, ⇓2, 3, 0}    0.5x[−n] = {0, 3, ⇓2, 1, 2}
Zero-padding, though not essential, allows us to perform element-wise addition or subtraction with
ease to obtain
x_e[n] = 0.5x[n] + 0.5x[−n] = {2, 4, ⇓4, 4, 2}
x_o[n] = 0.5x[n] − 0.5x[−n] = {2, −2, ⇓0, 2, −2}
The various signals are sketched in Figure E2.3A. As a consistency check, you should confirm that
x_o[0] = 0, Σ x_o[n] = 0, and that the sum x_e[n] + x_o[n] recovers x[n].
[Figure E2.3A The signal x[n] and its odd and even parts for Example 2.3(a)]
(b) Let x[n] = u[n] − u[n − 5]. Find and sketch its odd and even parts.
The signal x[n] and the genesis of its odd and even parts are shown in Figure E2.3B. Note the value
of x_e[n] at n = 0 in the sketch.
[Figure E2.3B The signal x[n] and its odd and even parts for Example 2.3(b)]
DRILL PROBLEM 2.4
(a) Find and sketch the even and odd parts of x[n] = {⇓8, 4, 2}.
(b) Find and sketch the odd part of y[n] = {8, ⇓4, 2}.
Answers: (a) x_e[n] = {1, 2, ⇓8, 2, 1}  x_o[n] = {−1, −2, ⇓0, 2, 1} (b) y_o[n] = {3, ⇓0, −3}
2.3 Decimation and Interpolation
The time scaling of discrete-time signals must be viewed with care. For discrete-time signals, time scaling is
equivalent to decreasing or increasing the signal length. The problems that arise in time scaling are not in
what happens but how it happens.
2.3.1 Decimation
Decimation refers to a process of reducing the signal length by discarding signal samples. Suppose x[n]
corresponds to an analog signal x(t) sampled at intervals t_s. The signal y[n] = x[2n] then corresponds to
the compressed signal x(2t) sampled at t_s and contains only alternate samples of x[n] (corresponding to
x[0], x[2], x[4], . . .). We can also obtain y[n] directly from x(t) (not its compressed version) if we sample it
at intervals 2t_s (or at a sampling rate S = 1/(2t_s)). This means a twofold reduction in the sampling rate.
Decimation by a factor of N is equivalent to sampling x(t) at intervals Nt_s and implies an N-fold reduction
in the sampling rate. The decimated signal x[Nn] is generated from x[n] by retaining every Nth sample
corresponding to the indices k = Nn and discarding all others.
2.3.2 Interpolation
If x[n] corresponds to x(t) sampled at intervals t_s, then y[n] = x[n/2] corresponds to x(t) sampled at t_s/2
and has twice the length of x[n], with one new sample between adjacent samples of x[n]. If an expression for
x[n] (or the underlying analog signal) were known, it would be no problem to determine these new sample
values. If we are only given the sample values of x[n] (without its analytical form), the best we can do is
interpolate between samples. For example, we may choose each new sample value as zero (zero interpolation),
a constant equal to the previous sample value (step interpolation), or the average of adjacent sample values
(linear interpolation). Zero interpolation is referred to as up-sampling and plays an important role in
practical interpolation schemes. Interpolation by a factor of N is equivalent to sampling x(t) at intervals
t_s/N and implies an N-fold increase in both the sampling rate and the signal length.
Some Caveats
It may appear that decimation (discarding signal samples) and interpolation (inserting signal samples) are
inverse operations, but this is not always the case. Consider the two sets of operations shown below:
x[n] → decimate by 2 → x[2n] → interpolate by 2 → x[n]
x[n] → interpolate by 2 → x[n/2] → decimate by 2 → x[n]
On the face of it, both sets of operations start with x[n] and appear to recover x[n], suggesting that interpolation
and decimation are inverse operations. In fact, only the second sequence of operations (interpolation
followed by decimation) recovers x[n] exactly. To see why, let x[n] = {⇓1, 2, 6, 4, 8}. Using step interpolation,
for example, the two sequences of operations result in
{⇓1, 2, 6, 4, 8} → decimate (n → 2n) → {⇓1, 6, 8} → interpolate (n → n/2) → {⇓1, 1, 6, 6, 8, 8}
{⇓1, 2, 6, 4, 8} → interpolate (n → n/2) → {⇓1, 1, 2, 2, 6, 6, 4, 4, 8, 8} → decimate (n → 2n) → {⇓1, 2, 6, 4, 8}
We see that decimation is indeed the inverse of interpolation, but the converse is not necessarily true.
After all, it is highly unlikely for any interpolation scheme to recover or predict the exact value of the
samples that were discarded during decimation. In situations where both interpolation and decimation are
to be performed in succession, it is therefore best to interpolate first. In practice, of course, interpolation or
decimation should preserve the information content of the original signal, and this imposes constraints on
the rate at which the original samples were acquired.
2.3.3 Fractional Delays
Fractional (typically half-sample) delays are sometimes required in practice and can be implemented using
interpolation and decimation. If we require that interpolation be followed by decimation and integer shifts,
the correct result is obtained by using interpolation followed by an integer shift and decimation. To generate
the signal y[n] = x[n − M/N] = x[(Nn − M)/N] from x[n], we use the following sequence of operations:
x[n] → interpolate by N → x[n/N] → delay by M → x[(n − M)/N] → decimate by N → x[(Nn − M)/N] = y[n]
The idea is to ensure that each operation (interpolation, shift, and decimation) involves integers.
REVIEW PANEL 2.8
Decimation by N, Interpolation by N, and Fractional Delay by M/N
Decimation: Keep only every Nth sample (at n = kN). This leads to potential loss of information.
Interpolation: Insert N − 1 new values after each sample. The new sample values may equal
zero (zero interpolation), the previous value (step interpolation), or linearly interpolated values.
Fractional Delay (y[n] = x[n − M/N]): Interpolate x[n] by N, then delay by M, then decimate by N.
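A minimal sketch of these operations on index-0-based sequences (zero and step interpolation only; linear interpolation would need an extra averaging pass). The helper names are ad hoc, written only for this illustration.

```python
import numpy as np

def decimate(x, N):
    return x[::N]                        # keep every Nth sample

def interpolate_step(x, N):
    return np.repeat(x, N)               # each sample held N times

def interpolate_zero(x, N):
    y = np.zeros(len(x) * N, dtype=x.dtype)
    y[::N] = x                           # N-1 zeros after each sample
    return y

x = np.array([1, 2, 6, 4, 8])
print(decimate(x, 2))                            # [1 6 8]
print(decimate(interpolate_step(x, 2), 2))       # [1 2 6 4 8]: x is recovered

# Half-sample delay x[n - 1/2]: interpolate by 2, delay by 1, decimate by 2
g = interpolate_step(x, 2)
h = np.concatenate(([0], g[:-1]))                # integer delay by one sample
print(decimate(h, 2))                            # [0 1 2 6 4]: a step (hold) interpolant lands on the previous sample
```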
EXAMPLE 2.4 (Decimation and Interpolation)
(a) Let x[n] = {1, ⇓2, 5, −1}. Generate x[2n] and various interpolated versions of x[n/3].
To generate y[n] = x[2n], we remove the samples at the odd indices to obtain x[2n] = {⇓2, −1}.
The zero-interpolated signal is g[n] = x[n/3] = {1, 0, 0, ⇓2, 0, 0, 5, 0, 0, −1, 0, 0}.
The step-interpolated signal is h[n] = x[n/3] = {1, 1, 1, ⇓2, 2, 2, 5, 5, 5, −1, −1, −1}.
The linearly interpolated signal is s[n] = x[n/3] = {1, 4/3, 5/3, ⇓2, 3, 4, 5, 3, 1, −1, −2/3, −1/3}.
In linear interpolation, note that we interpolated the last two values toward zero.
(b) Let x[n] = {3, 4, ⇓5, 6}. Find g[n] = x[2n − 1] and the step-interpolated signal h[n] = x[0.5n − 1].
In either case, we first find y[n] = x[n − 1] = {3, ⇓4, 5, 6}. Then
g[n] = y[2n] = x[2n − 1] = {⇓4, 6}
h[n] = y[n/2] = x[0.5n − 1] = {3, 3, ⇓4, 4, 5, 5, 6, 6}
(c) Let x[n] = {3, 4, ⇓5, 6}. Find y[n] = x[2n/3], assuming step interpolation where needed.
Since we require both interpolation and decimation, we first interpolate and then decimate to get
After interpolation: g[n] = x[n/3] = {3, 3, 3, 4, 4, 4, ⇓5, 5, 5, 6, 6, 6}
After decimation: y[n] = g[2n] = x[2n/3] = {3, 3, 4, ⇓5, 5, 6}
(d) Let x[n] = {2, 4, ⇓6, 8}. Find the signal y[n] = x[n − 0.5], assuming linear interpolation where needed.
We first interpolate by 2, then delay by 1, and then decimate by 2 to get
After interpolation: g[n] = x[n/2] = {2, 3, 4, 5, ⇓6, 7, 8, 4} (last sample interpolated toward zero)
After delay: h[n] = g[n − 1] = x[(n − 1)/2] = {2, 3, 4, ⇓5, 6, 7, 8, 4}
After decimation: y[n] = h[2n] = x[(2n − 1)/2] = x[n − 0.5] = {3, ⇓5, 7, 4}
DRILL PROBLEM 2.5
Let x[n] = {⇓8, 4, 2, 6}. Find y[n] = x[2n], g[n] = x[2n + 1], h[n] = x[0.5n], and f[n] = x[n + 0.5].
Assume linear interpolation where required.
Answers: y[n] = {⇓8, 2}  g[n] = {⇓4, 6}  h[n] = {⇓8, 6, 4, 3, 2, 4, 6, 3}  f[n] = {⇓6, 3, 4, 3}
2.4 Common Discrete Signals
The unit impulse (or unit sample) δ[n], the unit step u[n], and the unit ramp r[n] are defined as
δ[n] = { 0, n ≠ 0; 1, n = 0 }    u[n] = { 0, n < 0; 1, n ≥ 0 }    r[n] = n u[n] = { 0, n < 0; n, n ≥ 0 }    (2.11)
The discrete impulse is just a unit sample at n = 0. It is completely free of the kind of ambiguities associated
with the analog impulse δ(t) at t = 0. The discrete unit step u[n] also has a well-defined, unique value of
u[0] = 1 (unlike its analog counterpart u(t)). The signal x[n] = An u[n] = A r[n] describes a discrete ramp
whose slope A is given by x[k] − x[k − 1], the difference between adjacent sample values.
REVIEW PANEL 2.9
The Discrete Impulse, Step, and Ramp Are Well Defined at n = 0
[Sketches of δ[n] (a single unit sample at n = 0), u[n] (unit samples for n ≥ 0), and r[n] (samples 1, 2, 3, 4, 5, . . . for n ≥ 1)]
2.4.1 Properties of the Discrete Impulse
The product of a signal x[n] with the impulse δ[n − k] results in
x[n]δ[n − k] = x[k]δ[n − k]    (2.12)
This is because δ[n − k] is nonzero only at n = k, where the value of x[n] corresponds to x[k]. The result is
an impulse with strength x[k]. The product property leads directly to
Σ_{n=−∞}^{∞} x[n]δ[n − k] = x[k]    (2.13)
This is the sifting property. The impulse extracts (sifts out) the value x[k] from x[n] at the impulse
location n = k.
2.4.2 Signal Representation by Impulses
A discrete signal x[n] may be expressed as a sum of shifted impulses δ[n − k] whose sample values correspond
to x[k], the values of x[n] at n = k. Thus,
x[n] = Σ_{k=−∞}^{∞} x[k]δ[n − k]    (2.14)
For example, the signals u[n] and r[n] may be expressed as a train of shifted impulses:
u[n] = Σ_{k=0}^{∞} δ[n − k]    r[n] = Σ_{k=0}^{∞} k δ[n − k]    (2.15)
The signal u[n] may also be expressed as the cumulative sum of δ[n], and the signal r[n] may be described
as the cumulative sum of u[n]:
u[n] = Σ_{k=−∞}^{n} δ[k]    r[n] = Σ_{k=−∞}^{n} u[k]    (2.16)
2.4.3 Discrete Pulse Signals
The discrete rectangular pulse rect(n/2N) and the discrete triangular pulse tri(n/N) are defined by
rect(n/2N) = { 1, |n| ≤ N; 0, elsewhere }    tri(n/N) = { 1 − |n|/N, |n| ≤ N; 0, elsewhere }    (2.17)
The signal rect(n/2N) has 2N + 1 unit samples over −N ≤ n ≤ N. The factor 2N in rect(n/2N) gets around
the problem of having to deal with half-integer values of n when N is odd. The signal x[n] = tri(n/N) also
has 2N + 1 samples over −N ≤ n ≤ N, with its end samples x[−N] and x[N] being zero. It is sometimes
convenient to express pulse-like signals in terms of these standard forms.
REVIEW PANEL 2.10
The Discrete rect and tri Functions
[Sketches of rect(n/2N) and tri(n/N) for N = 5]
EXAMPLE 2.5 (Describing Sequences and Signals)
(a) Let x[n] = (2)ⁿ and y[n] = δ[n − 3]. Find z[n] = x[n]y[n] and evaluate the sum A = Σ z[n].
The product, z[n] = x[n]y[n] = (2)³δ[n − 3] = 8δ[n − 3], is an impulse.
The sum, A = Σ z[n], is given by Σ (2)ⁿδ[n − 3] = (2)³ = 8.
(b) Mathematically describe the signals of Figure E2.5B in at least two different ways.
[Figure E2.5B The signals for Example 2.5(b): x[n], y[n], and h[n]]
1. The signal x[n] may be described as the sequence x[n] = {4, ⇓2, −1, 3}.
It may also be written as x[n] = 4δ[n + 1] + 2δ[n] − δ[n − 1] + 3δ[n − 2].
2. The signal y[n] may be represented variously as
A numeric sequence: y[n] = {⇓0, 0, 2, 4, 6, 6, 6}.
A sum of shifted impulses: y[n] = 2δ[n − 2] + 4δ[n − 3] + 6δ[n − 4] + 6δ[n − 5] + 6δ[n − 6].
A sum of steps and ramps: y[n] = 2r[n − 1] − 2r[n − 4] − 6u[n − 7].
Note carefully that the argument of the step function is [n − 7] (and not [n − 6]).
3. The signal h[n] may be described as h[n] = 6 tri(n/3) or variously as
A numeric sequence: h[n] = {0, 2, 4, ⇓6, 4, 2, 0}.
A sum of impulses: h[n] = 2δ[n + 2] + 4δ[n + 1] + 6δ[n] + 4δ[n − 1] + 2δ[n − 2].
A sum of steps and ramps: h[n] = 2r[n + 3] − 4r[n] + 2r[n − 3].
DRILL PROBLEM 2.6
(a) Sketch the signals x[n] = δ[n + 2] + 2δ[n − 1] and y[n] = 2u[n + 1] − u[n − 3].
(b) Express the signal h[n] = {3, 3, ⇓3, 5, 5, 5} using step functions.
(c) Express the signal g[n] = {⇓2, 4, 6, 4, 2} using tri functions.
Answers: (b) h[n] = 3u[n + 2] + 2u[n − 1] − 5u[n − 4] (c) g[n] = 6 tri((n − 2)/3)
2.4.4 The Discrete Sinc Function
The discrete sinc function is defined by
sinc(n/N) = sin(nπ/N) / (nπ/N),    sinc(0) = 1    (2.18)
The signal sinc(n/N) equals zero at n = ±kN, k = 1, 2, . . . . At n = 0, sinc(0) = 0/0 and cannot be
evaluated in the limit, since n can take on only integer values. We therefore define sinc(0) = 1. The envelope
of the sinc function shows a mainlobe and gradually decaying sidelobes. The definition of sinc(n/N) also
implies that sinc(n) = δ[n].
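For reference, NumPy's np.sinc uses the same normalized definition, sinc(x) = sin(πx)/(πx) with sinc(0) = 1, so the discrete sinc above can be sketched directly as np.sinc(n / N); the value N = 5 below is just an example.

```python
import numpy as np

N = 5
n = np.arange(-10, 11)
s = np.sinc(n / N)            # sin(pi*n/N) / (pi*n/N), with sinc(0) = 1
print(s[n % N == 0])          # essentially 0 at n = +/-5, +/-10 and exactly 1 at n = 0
```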
2.4.5 Discrete Exponentials
Discrete exponentials are often described using a rational base. For example, the signal x[n] = 2ⁿu[n] shows
exponential growth, while y[n] = (0.5)ⁿu[n] is a decaying exponential. The signal f[n] = (−0.5)ⁿu[n] shows
values that alternate in sign. The exponential x[n] = αⁿu[n], where α = re^{jθ} is complex, may be described
using the various formulations of a complex number as
x[n] = αⁿu[n] = (re^{jθ})ⁿu[n] = rⁿe^{jnθ}u[n] = rⁿ[cos(nθ) + j sin(nθ)]u[n]    (2.19)
This complex-valued signal requires two separate plots (the real and imaginary parts, for example) for a
graphical description. If 0 < r < 1, x[n] describes a signal whose real and imaginary parts are exponentially
decaying cosines and sines. If r = 1, the real and imaginary parts are pure cosines and sines with a peak
value of unity. If r > 1, we obtain exponentially growing sinusoids.
2.5 Discrete-Time Harmonics and Sinusoids
If we sample an analog sinusoid x(t) = cos(2πf₀t + θ) at intervals of t_s corresponding to a sampling rate of
S = 1/t_s samples/s (or S Hz), we obtain the sampled sinusoid
x[n] = cos(2πf₀nt_s + θ) = cos(2πn f₀/S + θ) = cos(2πnF + θ)    (2.20)
The quantities f and ω = 2πf describe analog frequencies. The normalized frequency F = f/S is called the
digital frequency and has units of cycles/sample. The frequency Ω = 2πF is the digital radian frequency
with units of radians/sample. The various analog and digital frequencies are compared in Figure 2.1. Note
that the analog frequency f = S (or ω = 2πS) corresponds to the digital frequency F = 1 (or Ω = 2π).
More generally, we find it useful to deal with complex-valued signals of the form
x[n] = e^{j(2πnF + θ)} = cos(2πnF + θ) + j sin(2πnF + θ)    (2.21)
This allows us to regard the real sinusoid x[n] = cos(2πnF + θ) as the real part of the complex-valued signal
x[n].
[Figure 2.1 Comparison of analog and digital frequencies: the analog frequency f (Hz) over −S to S (and ω = 2πf over −2πS to 2πS) maps to the digital frequency F = f/S over −1 to 1 (and Ω = 2πF over −2π to 2π), with f = 0.5S corresponding to F = 0.5.]
REVIEW PANEL 2.11
The Digital Frequency Is the Analog Frequency Normalized by the Sampling Rate S
F (cycles/sample) = f (cycles/sec) / S (samples/sec)    Ω (radians/sample) = ω (radians/sec) / S (samples/sec) = 2πF
2.5.1 Discrete-Time Harmonics Are Not Always Periodic in Time
An analog sinusoid x(t) = cos(2πf₀t + θ) has two remarkable properties. It is unique for every frequency.
And it is periodic in time for every choice of the frequency f₀. Its sampled version, however, is a beast of a
different kind.
Are all discrete-time sinusoids and harmonics periodic in time? Not always! To understand this idea,
suppose x[n] is periodic with period N such that x[n] = x[n + N]. This leads to
cos(2πnF₀ + θ) = cos[2π(n + N)F₀ + θ] = cos(2πnF₀ + θ + 2πNF₀)    (2.22)
The two sides are equal provided NF₀ equals an integer k. In other words, F₀ must be a rational fraction
(ratio of integers) of the form k/N. What we are really saying is that a DT sinusoid is periodic only if its
digital frequency is a ratio of integers or a rational fraction. The period N equals the denominator
of k/N, provided common factors have been canceled from its numerator and denominator. The significance
of k is that it takes k full periods of the analog sinusoid to yield one full period of the sampled sinusoid.
The common period of a combination of periodic DT sinusoids equals the least common multiple (LCM)
of their individual periods. If F₀ is not a rational fraction, there is no periodicity, and the DT sinusoid is
classified as nonperiodic or almost periodic. Examples of periodic and nonperiodic DT sinusoids appear in
Figure 2.2. Even though a DT sinusoid may not always be periodic, it will always have a periodic envelope.
This discussion also applies to complex-valued harmonics of the type e^{j(2πnF₀ + θ)}.
REVIEW PANEL 2.12
The Discrete Harmonic cos(2πnF₀ + θ) or e^{j(2πnF₀ + θ)} Is Not Always Periodic in Time
It is periodic only if its digital frequency F₀ = k/N can be expressed as a ratio of integers.
Its period equals N if common factors have been canceled in k/N.
One period of the sampled sinusoid is obtained from k full periods of the analog sinusoid.
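This rational-fraction test is easy to sketch with exact fractions; the frequencies below are merely the ones used later for illustration, and an irrational F₀ would simply have no finite period.

```python
from fractions import Fraction
from math import lcm

def period(F):
    """Period N of cos(2*pi*n*F) when F = k/N is a rational fraction (given exactly)."""
    return Fraction(F).denominator      # denominator after canceling common factors

N1 = period(Fraction(1, 10))            # F = 0.10 -> N = 10
N2 = period(Fraction(3, 20))            # F = 0.15 -> N = 20
print(N1, N2, lcm(N1, N2))              # common period of the combination: 20
```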
[Figure 2.2 Discrete-time sinusoids are not always periodic: (a) cos(0.125πn) is periodic with period N = 16; (b) cos(0.5n) is not periodic (check peaks or zeros), although its envelope is periodic.]
EXAMPLE 2.6 (Discrete-Time Harmonics and Periodicity)
(a) Is x[n] = cos(2πFn) periodic if F = 0.32? If F = √3? If periodic, what is its period N?
If F = 0.32, x[n] is periodic because F = 0.32 = 32/100 = 8/25 = k/N. The period is N = 25.
If F = √3, x[n] is not periodic because F is irrational and cannot be expressed as a ratio of integers.
(b) What is the period of the harmonic signal x[n] = e^{j0.2πn} + e^{j0.3πn}?
The digital frequencies in x[n] are F₁ = 0.1 = 1/10 = k₁/N₁ and F₂ = 0.15 = 3/20 = k₂/N₂.
Their periods are N₁ = 10 and N₂ = 20.
The common period is thus N = LCM(N₁, N₂) = LCM(10, 20) = 20.
(c) The signal x(t) = 2 cos(40πt) + sin(60πt) is sampled at 75 Hz. What is the common period of the
sampled signal x[n], and how many full periods of x(t) does it take to obtain one period of x[n]?
The frequencies in x(t) are f₁ = 20 Hz and f₂ = 30 Hz. The digital frequencies of the individual
components are F₁ = 20/75 = 4/15 = k₁/N₁ and F₂ = 30/75 = 2/5 = k₂/N₂. Their periods are N₁ = 15 and N₂ = 5.
The common period is thus N = LCM(N₁, N₂) = LCM(15, 5) = 15.
The fundamental frequency of x(t) is f₀ = GCD(20, 30) = 10 Hz. One period of x(t) is T = 1/f₀ = 0.1 s.
Since N = 15 corresponds to a duration of Nt_s = N/S = 0.2 s, it takes two full periods of x(t) to obtain
one period of x[n]. We also get the same result by computing GCD(k₁, k₂) = GCD(4, 2) = 2.
DRILL PROBLEM 2.7
(a) What is the digital frequency of x[n] = 2e^{j(0.25πn + 30°)}? Is x[n] periodic?
(b) What is the digital frequency of y[n] = cos(0.5n + 30°)? Is y[n] periodic?
(c) What is the common period N of the signal f[n] = cos(0.4πn) + sin(0.5πn + 30°)?
Answers: (a) F₀ = 0.125, yes (b) F₀ = 0.25/π, no (c) N = 20
2.5.2 Discrete-Time Harmonics Are Always Periodic in Frequency
Unlike analog sinusoids, discrete-time sinusoids and harmonics are always periodic in frequency. If we start
with the sinusoid x[n] = cos(2πnF₀ + θ) and add an integer m to F₀, we get
cos[2πn(F₀ + m) + θ] = cos(2πnF₀ + θ + 2πnm) = cos(2πnF₀ + θ) = x[n]
This result says that discrete-time (DT) sinusoids at the frequencies F₀ ± m are identical. Put another way,
a DT sinusoid is periodic in frequency (has a periodic spectrum) with unit period. The range −0.5 ≤ F ≤ 0.5
defines the principal period or principal range. A DT sinusoid can be uniquely identified only if its
frequency falls in the principal range. A DT sinusoid with a frequency F₀ outside this range can always be
expressed as a DT sinusoid with a frequency that falls in the principal period by subtracting out an integer
M from F₀ such that the new frequency F_a = F₀ − M satisfies −0.5 ≤ F_a ≤ 0.5. The frequency F_a is called
the aliased digital frequency, and it is always smaller than the original frequency F₀. This discussion also applies
to complex-valued harmonics of the type e^{j(2πnF₀ + θ)}.
To summarize, a discrete-time sinusoid or harmonic is periodic in time only if its digital frequency F₀ is
a rational fraction, but it is always periodic in frequency (with unit period).
REVIEW PANEL 2.13
The Discrete Harmonic cos(2πnF₀ + θ) or e^{j(2πnF₀ + θ)} Is Always Periodic in Frequency
Its frequency period is unity (harmonics at F₀ and F₀ ± K are identical for integer K).
It is unique only if F₀ lies in the principal period −0.5 < F₀ ≤ 0.5.
If F₀ > 0.5, the unique frequency is F_a = F₀ − M, where the integer M is chosen to ensure −0.5 < F_a ≤ 0.5.
DRILL PROBLEM 2.8
(a) Let x[n] = e^{j1.4πn} = e^{j2πF_u n}, where F_u is in the principal range. What is the value of F_u?
(b) Let y[n] = cos(2.4πn + 30°). Rewrite y[n] in terms of its frequency in the principal range.
(c) Rewrite f[n] = cos(1.4πn + 20°) + cos(2.4πn + 30°) using frequencies in the principal range.
Answers: (a) −0.3 (b) cos(0.4πn + 30°) (c) cos(0.6πn − 20°) + cos(0.4πn + 30°)
2.6 The Sampling Theorem
The central concept in the digital processing of analog signals is that the sampled signal must be a unique
representation of the underlying analog signal. When the sinusoid x(t) = cos(2πf₀t + θ) is sampled at the
sampling rate S, the digital frequency of the sampled signal is F₀ = f₀/S. In order for the sampled sinusoid
to permit a unique correspondence with the underlying analog sinusoid, the digital frequency F₀ must lie in
the principal range, i.e., |F₀| < 0.5. This implies S > 2|f₀| and suggests that we must choose a sampling
rate S that exceeds 2|f₀|. More generally, the sampling theorem says that for a unique correspondence
between an analog signal and the version reconstructed from its samples (using the same sampling rate),
the sampling rate must exceed twice the highest signal frequency f_max. The value S = 2f_max is called the
critical sampling rate or Nyquist rate or Nyquist frequency. The time interval t_s = 1/(2f_max) is called
the Nyquist interval. For the sinusoid x(t) = cos(2πf₀t + θ), the Nyquist rate is S_N = 2f₀ = 2/T, and this
rate is equivalent to taking exactly two samples per period (because the sampling interval is t_s = T/2). In
order to exceed the Nyquist rate, we should obtain more than two signal samples per period.
REVIEW PANEL 2.14
The Sampling Theorem: How to Sample an Analog Signal Without Loss of Information
For an analog signal band-limited to f_max Hz, the sampling rate S must exceed 2f_max.
S = 2f_max defines the Nyquist rate. t_s = 1/(2f_max) defines the Nyquist interval.
For an analog sinusoid: The Nyquist rate corresponds to taking two samples per period.
DRILL PROBLEM 2.9
(a) What is the critical sampling rate in Hz for the following signals:
x(t) = cos(10πt)   y(t) = cos(10πt) + sin(15πt)   f(t) = cos(10πt) sin(15πt)   g(t) = cos²(10πt)
(b) A 50 Hz sinusoid is sampled at twice the Nyquist rate. How many samples are obtained in 3 s?
Answers: (a) 10, 15, 25, 20 (b) 600
2.6.1 Signal Reconstruction and Aliasing
Consider an analog signal x(t) = cos(2πf₀t + θ) and its sampled version x[n] = cos(2πnF₀ + θ), where
F₀ = f₀/S. If x[n] is to be a unique representation of x(t), we must be able to reconstruct x(t) from x[n]. In
practice, reconstruction uses only the central copy or image of the periodic spectrum of x[n] in the principal
period −0.5 ≤ F ≤ 0.5, which corresponds to the analog frequency range −0.5S ≤ f ≤ 0.5S. We use a
lowpass filter to remove all other replicas or images, and the output of the lowpass filter corresponds to the
reconstructed analog signal. As a result, the highest frequency f_H we can identify in the signal reconstructed
from its samples is f_H = 0.5S.
Whether the reconstructed analog signal matches x(t) or not depends on the sampling rate S. If we
exceed the Nyquist rate (i.e., S > 2f₀), the digital frequency F₀ = f₀/S is always in the principal range
−0.5 ≤ F ≤ 0.5, and the reconstructed analog signal is identical to x(t). If the sampling rate is below the
Nyquist rate (i.e., S < 2f₀), the digital frequency exceeds 0.5. Its image in the principal range appears at
the lower digital frequency F_a = F₀ − M (corresponding to the lower analog frequency f_a = f₀ − MS), where
M is an integer that places the aliased digital frequency F_a between −0.5 and 0.5 (or the aliased analog
frequency f_a between −0.5S and 0.5S). The reconstructed aliased signal x_a(t) = cos(2πf_a t + θ) is at a lower
frequency f_a = SF_a than f₀ and is no longer a replica of x(t). This phenomenon, where a reconstructed
sinusoid appears at a lower frequency than the original, is what aliasing is all about. The real problem
is that the original signal x(t) and the aliased signal x_a(t) yield identical sampled representations at the
sampling frequency S and prevent unique identification of the original signal x(t) from its samples!
REVIEW PANEL 2.15
Aliasing Occurs if the Analog Signal cos(2πf₀t + θ) Is Sampled Below the Nyquist Rate
If S < 2f₀, the reconstructed analog signal is aliased to a lower frequency |f_a| < 0.5S. We find f_a as
f_a = f₀ − MS, where M is an integer that places f_a in the principal period (−0.5S < f_a ≤ 0.5S).
Before reconstruction, all frequencies must be brought into the principal period.
The highest frequency of the reconstructed signal cannot exceed half the reconstruction rate.
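A small sketch of this bookkeeping (folding f₀ into the principal period to find the aliased frequency); the 100-Hz sinusoid and the sampling rates mirror the example that follows.

```python
def aliased_frequency(f0, S):
    """Frequency (in Hz) of the signal reconstructed after sampling f0 at rate S."""
    fa = f0 % S                  # remove integer multiples of S
    if fa > 0.5 * S:             # place the result in (-0.5S, 0.5S]
        fa -= S
    return fa

for S in (240, 140, 90, 35):
    print(S, aliased_frequency(100, S))   # 100, -40, 10, -5
```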
EXAMPLE 2.7 (Aliasing and Its Eects)
(a) A 100-Hz sinusoid x(t) is sampled at 240 Hz. Has aliasing occurred? How many full periods of x(t)
are required to obtain one period of the sampled signal?
The sampling rate exceeds 200 Hz, so there is no aliasing. The digital frequency is F = 100/240 = 5/12.
Thus, five periods of x(t) yield 12 samples (one period) of the sampled signal.
(b) A 100-Hz sinusoid is sampled at rates of 240 Hz, 140 Hz, 90 Hz, and 35 Hz. In each case, has aliasing
occurred, and if so, what is the aliased frequency?
To avoid aliasing, the sampling rate must exceed 200 Hz. If S = 240 Hz, there is no aliasing, and
the reconstructed signal (from its samples) appears at the original frequency of 100 Hz. For all other
choices of S, the sampling rate is too low and leads to aliasing. The aliased signal shows up at a lower
frequency. The aliased frequencies corresponding to each sampling rate S are found by subtracting out
multiples of S from 100 Hz to place the result in the range −0.5S ≤ f ≤ 0.5S. If the original signal
has the form x(t) = cos(200πt + θ), we obtain the following aliased frequencies and aliased signals:
1. S = 140 Hz, f_a = 100 − 140 = −40 Hz, x_a(t) = cos(−80πt + θ) = cos(80πt − θ)
2. S = 90 Hz, f_a = 100 − 90 = 10 Hz, x_a(t) = cos(20πt + θ)
3. S = 35 Hz, f_a = 100 − 3(35) = −5 Hz, x_a(t) = cos(−10πt + θ) = cos(10πt − θ)
We thus obtain a 40-Hz sinusoid (with reversed phase), a 10-Hz sinusoid, and a 5-Hz sinusoid (with
reversed phase), respectively. Notice that negative aliased frequencies simply lead to a phase reversal
and do not represent any new information. Finally, had we used a sampling rate exceeding the Nyquist
rate of 200 Hz, we would have recovered the original 100-Hz signal every time. Yes, it pays to play by
the rules of the sampling theorem!
(c) Two analog sinusoids x₁(t) (shown light) and x₂(t) (shown dark) lead to an identical sampled version, as
illustrated in Figure E2.7C. Has aliasing occurred? Identify the original and aliased signals. Identify the
digital frequency of the sampled signal corresponding to each sinusoid. What is the analog frequency
of each sinusoid if S = 50 Hz? Can you provide exact expressions for each sinusoid?
[Figure E2.7C Two analog signals and their sampled version for Example 2.7(c), plotted over 0 to 0.3 s]
Examine the interval (0, 0.1) s. The sampled signal shows five samples per period. This covers three
full periods of x₁(t), and so F₁ = 3/5. It also covers two full periods of x₂(t), and so F₂ = 2/5. Clearly,
x₁(t) (with |F₁| > 0.5) is the original signal that is aliased to x₂(t). The sampling interval is 0.02 s.
So, the sampling rate is S = 50 Hz. The original and aliased frequencies are f₁ = SF₁ = 30 Hz and
f₂ = SF₂ = 20 Hz.
From the figure, we can identify exact expressions for x₁(t) and x₂(t) as follows. Since x₁(t) is a delayed
cosine with x₁(0) = 0.5, we have x₁(t) = cos(60πt − π/3). With S = 50 Hz, the frequency f₁ = 30 Hz
actually aliases to f₂ = −20 Hz, and thus x₂(t) = cos(−40πt − π/3) = cos(40πt + π/3). With F = 30/50 = 0.6
(or F_a = −0.4), the expression for the sampled signal is x[n] = cos(2πnF − π/3).
(d) A 100-Hz sinusoid is sampled, and the reconstructed signal (from its samples) shows up at 10 Hz.
What was the sampling rate S?
One choice is to set 100 − S = 10 and obtain S = 90 Hz. Another possibility is to set 100 − S = −10 to
give S = 110 Hz. In fact, we can also subtract out integer multiples of S from 100 Hz, set 100 − MS = 10,
and compute S for various choices of M. For example, if M = 2, we get S = 45 Hz, and if M = 3,
we get S = 30 Hz. We can also set 100 − NS = −10 and get S = 55 Hz for N = 2. Which of these
sampling rates was actually used? We have no way of knowing!
DRILL PROBLEM 2.10
(a) A 60-Hz sinusoid x(t) is sampled at 200 Hz. What is the period N of the sampled signal? How many
full periods of x(t) are required to obtain these N samples? What is the frequency (in Hz) of the analog
signal reconstructed from the samples?
(b) A 160-Hz sinusoid x(t) is sampled at 200 Hz. What is the period N of the sampled signal? How many
full periods of x(t) are required to obtain these N samples? What is the frequency (in Hz) of the analog
signal reconstructed from the samples?
(c) The signal x(t) = cos(60πt + 30°) is sampled at 50 Hz. What is the expression for the analog signal
y(t) reconstructed from the samples?
(d) A 150-Hz sinusoid is to be sampled. Pick the range of sampling rates (in Hz) closest to 150 Hz that
will cause aliasing but prevent phase reversal of the analog signal reconstructed from the samples.
Answers: (a) 10, 3, 60 (b) 5, 4, 40 (c) y(t) = cos(40πt − 30°) (d) 100 < S < 150
2.6.2 Reconstruction at Different Sampling Rates
There are situations when we sample a signal using one sampling rate S₁ but reconstruct the analog signal
from the samples using a different sampling rate S₂. In such situations, a frequency f₀ in the original signal
will result in a digital frequency F₀ = f₀/S₁. If S₁ > 2f₀, there is no aliasing, F₀ is in the principal range,
and the recovered frequency is just f_r = F₀S₂ = f₀(S₂/S₁). If S₁ < 2f₀, there is aliasing, and the recovered
frequency is f_r = F_aS₂ = f_a(S₂/S₁), where F_a corresponds to the aliased digital frequency in the principal
range. In other words, all frequencies, aliased or recovered, should be identified by their principal period.
REVIEW PANEL 2.16
Aliased or Reconstructed Frequencies Are Always Identified From the Principal Period
Sampling: Unique digital frequencies always lie in the principal period −0.5 < F_a ≤ 0.5.
Reconstruction at S_R: Analog frequencies lie in the principal period −0.5S_R < f_a ≤ 0.5S_R.
The reconstructed frequency is f_R = S_R F_a = S_R (f_a/S).
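A quick sketch of this two-rate bookkeeping, reusing the aliasing idea; the numbers below are chosen only to illustrate the procedure.

```python
def reconstructed_frequency(f0, S1, S2):
    """Frequency recovered when f0 is sampled at S1 and reconstructed at S2."""
    F = (f0 / S1) % 1.0            # digital frequency, folded into [0, 1)
    if F > 0.5:                    # bring it into the principal period (-0.5, 0.5]
        F -= 1.0
    return F * S2

print(reconstructed_frequency(100, 270, 540))   # 200.0
print(reconstructed_frequency(100, 70, 540))    # about 231.4 (F_a = 3/7)
```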
EXAMPLE 2.8 (Signal Reconstruction at Different Sampling Rates)
A 100-Hz sinusoid is sampled at S Hz, and the sampled signal is then reconstructed at 540 Hz. What is the
frequency of the reconstructed signal if S = 270 Hz? If S = 70 Hz?
1. If S = 270 Hz, the digital frequency of the sampled signal is F = 100/270 = 10/27, which lies in the principal
period. The frequency f_r of the reconstructed signal is then f_r = 540F = 200 Hz.
2. If S = 70 Hz, the digital frequency of the sampled signal is F = 100/70 = 10/7, which does not lie in the
principal period. The frequency in the principal period is F_a = 10/7 − 1 = 3/7, corresponding to an aliased
analog frequency of SF_a = 30 Hz. The frequency f_r of the signal reconstructed at 540 Hz is then
f_r = 540F_a ≈ 231.4 Hz. A negative aliased frequency would simply translate to a phase reversal in the
reconstructed signal.
DRILL PROBLEM 2.11
(a) A 60 Hz signal is sampled at 200 Hz. What is the frequency (in Hz) of the signal reconstructed from
the samples if the reconstruction rate is 300 Hz?
(b) A 160 Hz signal is sampled at 200 Hz. If the frequency of the signal reconstructed from the samples
is also 160 Hz, what reconstruction rate (in Hz) was used?
Answers: (a) 90 (b) 800
2.7 An Introduction to Random Signals
The signals we have studied so far are called deterministic or predictable. They are governed by a unique
mathematical representation that, once established, allows us to completely characterize the signal for all
time, past, present, or future. In contrast to this is the class of signals known as random or stochastic
whose precise value cannot be predicted in advance. We stress that only the future values of a random signal
pose a problem since past values of any signal, random or otherwise, are exactly known once they have
occurred. Randomness or uncertainty about future signal values is inherent in many practical situations. In
fact, a degree of uncertainty is essential for communicating information. The longer we observe a random
signal, the more the additional information we gain and the less the uncertainty. To fully understand the
nature of random signals requires the use of probability theory, random variables, and statistics. Even with
such tools, the best we can do is to characterize random signals only on the average based on their past
behavior.
2.7.1 Probability
Figure 2.3 shows the results of two experiments, each repeated under identical conditions. The first experiment
always yields identical results no matter how many times it is run and yields a deterministic signal.
We need to run the experiment only once to predict what the next, or any other, run will yield.
The second experiment gives a different result or realization x(t) every time the experiment is repeated
and describes a stochastic or random system. A random signal or random process X(t) comprises
the family or ensemble of all such realizations obtained by repeating the experiment many times. Each
realization x(t), once obtained, ceases to be random and can be subjected to the same operations as we use
for deterministic signals (such as derivatives, integrals, and the like). The randomness of the signal stems
from the fact that one realization provides no clue as to what the next, or any other, realization might yield.
At a given instant t, each realization of a random signal can assume a different value, and the collection of
all such values defines a random variable. Some values are more likely to occur, or more probable, than
others. The concept of probability is tied to the idea of repeating an experiment a large number of times
in order to estimate this probability. Thus, if the value 2 V occurs 600 times in 1000 runs, we say that the
probability of occurrence of 2 V is 0.6.
[Figure 2.3 Realizations of a deterministic and a stochastic process: (a) four realizations of a deterministic signal; (b) four realizations of a random signal]
The probability of an event A, denoted Pr(A), is the proportion of successful outcomes to the (very
large) number of times the experiment is run and is a fraction between 0 and 1, since the number of successful
runs cannot exceed the total number of runs. The larger the probability Pr(A), the greater the chance of event
A occurring. To fully characterize a random variable, we must answer two questions:
1. What is the range of all possible (nonrandom) values it can acquire? This defines an ensemble space,
which may be finite or infinite.
2. What are the probabilities for all the possible values in this range? This defines the probability
distribution function F(x). Clearly, F(x) must always lie between 0 and 1.
It is common to work with the derivative of the probability distribution function, called the probability
density function f(x). The distribution function F(x) is simply the running integral of the density f(x):
f(x) = dF(x)/dx    or    F(x) = ∫_{−∞}^{x} f(λ) dλ    (2.23)
The probability F(x₁) = Pr[X ≤ x₁] that X is less than x₁ is given by
Pr[X ≤ x₁] = ∫_{−∞}^{x₁} f(x) dx    (2.24)
The probability that X lies between x₁ and x₂ is Pr[x₁ < X ≤ x₂] = F(x₂) − F(x₁). The total area of f(x) is 1.
2.7.2 Measures for Random Variables
Measures or features of a random variable X are based on its distribution. The mean, or expectation, is a
measure of where the distribution is centered and is defined by
E(x) = m_x = ∫_{−∞}^{∞} x f(x) dx    (mean or expectation)    (2.25)
The mean square value is similarly defined by
E(x²) = ∫_{−∞}^{∞} x² f(x) dx    (mean square value)    (2.26)
Many of the features of deterministic or random signals are based on moments. The nth moment m_n is
defined by
m_n = ∫_{−∞}^{∞} xⁿ f(x) dx    (nth moment)    (2.27)
We see that the zeroth moment m₀ gives the area of f(x), the first moment m₁ corresponds to the mean,
and the second moment m₂ defines the mean square value. Moments about the mean are called central
moments and also find widespread use. The nth central moment μ_n is defined by
μ_n = ∫_{−∞}^{∞} (x − m_x)ⁿ f(x) dx    (nth central moment)    (2.28)
A very commonly used feature is the second central moment μ₂. It is also called the variance, denoted σ²_x,
and defined by
σ²_x = E[(x − m_x)²] = μ₂ = ∫_{−∞}^{∞} (x − m_x)² f(x) dx    (variance)    (2.29)
The variance may be expressed in terms of the mean and the mean square values as
σ²_x = E[(x − m_x)²] = E(x²) − m²_x = ∫_{−∞}^{∞} x² f(x) dx − m²_x    (2.30)
The variance measures the spread (or dispersion) of the distribution about its mean. The smaller the spread,
the smaller the variance. The quantity σ is known as the standard deviation and provides a measure
of the uncertainty in a physical measurement. The variance is also a measure of the ac power in a signal.
For a periodic deterministic signal x(t) with period T, the variance can be readily found by evaluating the
signal power and subtracting the power due to the dc component (if present):
σ²_x = (1/T) ∫₀ᵀ x²(t) dt − [(1/T) ∫₀ᵀ x(t) dt]²    (total signal power minus dc power)    (2.31)
This equation can be used to obtain the results listed in the following review panel.
REVIEW PANEL 2.17
The Variance of Some Useful Periodic Signals With Period T
Sinusoid: If x(t) = A cos(2πt/T + θ), then σ² = A²/2.
Triangular Wave: If x(t) = At/T, 0 ≤ t < T, or x(t) = A(1 - |t|/0.5T), |t| ≤ 0.5T, then σ² = A²/12.
Square Wave: If x(t) = A, 0 ≤ t < 0.5T and x(t) = 0, 0.5T ≤ t < T, then σ² = A²/4.
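These review-panel results are easy to confirm numerically by sampling one period of each waveform and applying Eq. (2.31) in discrete form. A minimal sketch, with A = 2 and T = 1 assumed only for illustration:

A = 2; T = 1; N = 1e5; t = (0:N-1)*T/N;
xs = A*cos(2*pi*t/T + 0.3);                 % sinusoid (any phase)
xt = A*t/T;                                 % sawtooth form of the triangular result
xq = A*(t < 0.5*T);                         % square wave (A for half the period)
v  = @(x) mean(x.^2) - mean(x)^2;           % total power minus dc power
[v(xs) A^2/2; v(xt) A^2/12; v(xq) A^2/4]    % each row: estimate vs. formula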
DRILL PROBLEM 2.12
(a) Find the variance of the periodic signal x(t) = A, 0 ≤ t < 0.25T and x(t) = 0, 0.25T ≤ t < T.
(b) Find the variance of the raised cosine signal x(t) = A[1 + cos(2πt/T + θ)].
Answers: (a) (3/16)A²  (b) (5/4)A²
2.7.3 The Chebyshev Inequality
The measurement of the variance or standard deviation gives us some idea of how much the actual values
will deviate from the mean but provides no indication of how often we might encounter large deviations from
the mean. The Chebyshev inequality allows us to estimate the probability for a deviation to be within
certain bounds (given by ε) as

Pr(|x - mₓ| > ε) ≤ (σₓ/ε)²    or    Pr(|x - mₓ| ≤ ε) ≥ 1 - σₓ²/ε²    (Chebyshev inequality)    (2.32)

It assumes that the variance or standard deviation is known. To find the probability for the deviation to be
within k standard deviations, we set ε = kσₓ to give

Pr(|x - mₓ| ≤ kσₓ) ≥ 1 - 1/k²
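The Chebyshev bound is distribution-free and therefore usually loose. A small sketch comparing the bound with the actual k-sigma probability for simulated Gaussian data (any other distribution could be substituted):

x = randn(1, 1e6);                            % zero-mean, unit-variance samples
for k = 1:3
    pk = mean(abs(x - mean(x)) <= k*std(x));  % observed fraction within k sigma
    fprintf('k = %d: observed %.4f, Chebyshev bound %.4f\n', k, pk, 1 - 1/k^2)
end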
The Law of Large Numbers
Chebyshev's inequality, in turn, leads to the so-called law of large numbers which, in essence, states that
while an individual random variable may take on values quite far from its mean (show a large spread), the
arithmetic mean of a large number of random values shows little spread, taking on values very close to its
common mean with a very high probability.
2.7.4 Probability Distributions
Two of the most commonly encountered probability distributions are the uniform distribution and normal
(or Gaussian) distribution. These are illustrated in Figure 2.4.
[Figure 2.4 The uniform and normal probability distributions: the uniform density f(x) and its distribution F(x), and the Gaussian density f(x) (centered at mₓ) and its distribution F(x), which rises from 0 to 1 and passes through 0.5 at x = mₓ]
2.7.5 The Uniform Distribution
In a uniform distribution, every value is equally likely, since the random variable shows no preference for a
particular value. The density function f(x) of a typical uniform distribution is just a rectangular pulse with
unit area defined by

f(x) = 1/(b - a),  a ≤ x ≤ b  (and 0 otherwise)    (uniform density function)    (2.33)

Its mean and variance are given by

mₓ = 0.5(a + b)        σₓ² = (b - a)²/12    (2.34)
The distribution function F(x) is given by

F(x) = 0 for x < a,    F(x) = (x - a)/(b - a) for a ≤ x ≤ b,    F(x) = 1 for x > b    (uniform distribution function)    (2.35)

This is a finite ramp that rises from a value of zero at x = a to a value of unity at x = b and equals unity
for x ≥ b. For a uniform distribution in which values are equally likely to fall between -0.5 and 0.5, the
density function is f(x) = 1, -0.5 ≤ x < 0.5, with a mean of zero and a variance of 1/12.
Uniform distributions occur frequently in practice. When quantizing signals in uniform steps, the error
in representing a signal value is assumed to be uniformly distributed between -0.5Δ and 0.5Δ, where Δ is
the quantization step. The phase of a sinusoid with random phase is also uniformly
distributed between -π and π.
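A short sketch illustrating the quantization-error model: the error is simulated as uniform on [-Δ/2, Δ/2] and its estimated variance is compared with Δ²/12 (the step Δ = 0.1 is an arbitrary choice):

Delta = 0.1;
e = Delta*(rand(1, 1e6) - 0.5);     % uniform on [-Delta/2, Delta/2]
[var(e)  Delta^2/12]                % estimated vs. theoretical variance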
DRILL PROBLEM 2.13
(a) If the quantization error ε is assumed to be uniformly distributed between -0.5Δ and 0.5Δ, sketch
the density function f(ε) and compute the variance.
(b) The phase θ of a sinusoid with random phase is uniformly distributed between -π and π. Compute the
variance.
Answers: (a) σ² = Δ²/12 (the density f(ε) is a rectangular pulse of height 1/Δ between -Δ/2 and Δ/2)  (b) π²/3
2.7.6 The Gaussian or Normal Distribution
The Gaussian or normal probability density is bell shaped and defined by

f(x) = [1/√(2πσₓ²)] exp[-(x - mₓ)²/(2σₓ²)]    (normal distribution)    (2.36)

It has a mean of mₓ and a variance of σₓ². This density function has a single peak at x = mₓ, even symmetry
about the peak (x = mₓ), and inflection points at mₓ ± σₓ.
The Gaussian process exhibits several useful properties:
1. The variable ax + b, where a > 0, is also normally distributed.
2. The sum of normally distributed random signals is also normally distributed. The mean of the sum
equals the sum of the individual means and, if the signals are statistically independent, the variance of the sum equals
the sum of the individual variances.
3. The ratio of the mean deviation E[|x - mₓ|] and the standard deviation σₓ for all normal distributions
equals √(2/π). Thus, for normal distributions, one may equally well work with the mean deviation rather
than the standard deviation.
4. All the higher order moments of a Gaussian random variable may be obtained from a knowledge of the
first two moments alone. In particular, the nth central moments are zero for odd values of n, and the following
relation holds for the even-order (n = 2k) central moments:

E[(x - mₓ)^(2k)] = (1)(3)(5)···(2k - 1)σ^(2k) = (2k)! σ^(2k) / (k! 2^k),  n = 2k    (central moments of Gaussian)    (2.37)
The Gaussian Distribution Function
The distribution function F(x) of a Gaussian random variable cannot be written in closed form and is given by

F(x) = [1/√(2πσ²)] ∫ from -∞ to x of exp[-(λ - mₓ)²/(2σ²)] dλ    (Gaussian distribution function)    (2.38)
The Standard Gaussian Distribution
A Gaussian distribution with a mean of zero and a variance of unity (mₓ = 0, σ² = 1) is called a standard
Gaussian and denoted P(x):

P(x) = [1/√(2π)] ∫ from -∞ to x of exp(-t²/2) dt    (standard Gaussian distribution)    (2.39)
The Error Function
Another function that is used extensively is the error function, defined by

erf(x) = (2/√π) ∫ from 0 to x of exp(-t²) dt    (error function)    (2.40)
The Gaussian distribution F(x) may be expressed in terms of the standard form or the error function as

F(x) = P[(x - mₓ)/σ] = 0.5 + 0.5 erf[(x - mₓ)/(σ√2)]    (2.41)

The probability that x lies between x₁ and x₂ may be expressed in terms of the error function as

Pr[x₁ ≤ x ≤ x₂] = F(x₂) - F(x₁) = 0.5 erf[(x₂ - mₓ)/(σ√2)] - 0.5 erf[(x₁ - mₓ)/(σ√2)]    (2.42)

This is a particularly useful form since tables of error functions are widely available. A note of caution to the
unwary, however: several different, though functionally equivalent, definitions of P(x), erf(x), and related
functions are also prevalent in the literature.
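In MATLAB, Eq. (2.42) can be evaluated directly with the built-in erf function. A minimal sketch, assuming a Gaussian with mean 2 and standard deviation 3 and the interval [1, 5] purely for illustration, with a simulation as a cross-check:

mx = 2; sx = 3; x1 = 1; x2 = 5;
p_erf = 0.5*erf((x2-mx)/(sx*sqrt(2))) - 0.5*erf((x1-mx)/(sx*sqrt(2)));  % Eq. (2.42)
x = mx + sx*randn(1, 1e6);              % simulated Gaussian samples
p_sim = mean(x >= x1 & x <= x2);        % empirical probability
[p_erf  p_sim]                          % the two should agree closely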
The Q-Function
The Q-function describes the area under the tail of a Gaussian distribution. For the standard Gaussian, the
area Q(x) of the tail beyond x is given by

Q(x) = 1 - P(x) = [1/√(2π)] ∫ from x to ∞ of exp(-t²/2) dt

The Q-function may also be expressed in terms of the error function as

Q(x) = 0.5 - 0.5 erf(x/√2)    (2.43)
The probability that x lies between x₁ and x₂ may also be expressed in terms of the Q-function as

Pr[x₁ ≤ x ≤ x₂] = Q[(x₁ - mₓ)/σ] - Q[(x₂ - mₓ)/σ]    (2.44)

The results for the standard distribution may be carried over to a distribution with arbitrary mean mₓ and
arbitrary standard deviation σₓ via the simple change of variable x → (x - mₓ)/σₓ.
The Central Limit Theorem
The central limit theorem asserts that the probability density function of the sum of many random signals
approaches a Gaussian as long as their means are finite and their variance is small compared to the total
variance (but nonzero). The individual processes need not even be Gaussian.
2.7.7 Discrete Probability Distributions
The central limit theorem is even useful for discrete variables. When the variables that make up a given
process s are discrete and the number n of such variables is large, that is,

s = Σ from i=1 to n of xᵢ,  n ≫ 1

we may approximate s by a Gaussian whose mean equals nmₓ and whose variance equals nσₓ². Thus,

fₙ(s) ≈ [1/√(2πnσₓ²)] exp[-(s - nmₓ)²/(2nσₓ²)],  n ≫ 1    (2.45)
Its distribution allows us to compute the probability Pr[s₁ ≤ s ≤ s₂] in terms of the error function or
Q-function as

Pr[s₁ ≤ s ≤ s₂] ≈ 0.5 erf[(s₂ - nmₓ)/(σₓ√(2n))] - 0.5 erf[(s₁ - nmₓ)/(σₓ√(2n))] = Q[(s₁ - nmₓ)/(σₓ√n)] - Q[(s₂ - nmₓ)/(σₓ√n)]    (2.46)
This relation forms the basis for numerical approximations involving discrete probabilities.
The Binomial Distribution
Consider an experiment with two outcomes which result in mutually independent and complementary events.
If the probability of a success is p, and the probability of a failure is q = 1 - p, the probability of exactly k
successes in n trials follows the binomial distribution and is given by

Pr[s = k] = pₙ(k) = C(n, k) p^k (1 - p)^(n-k)    (binomial probability)    (2.47)

Here, C(n, k) represents the binomial coefficient and may be expressed in terms of factorials or gamma
functions as

C(n, k) = n! / [k! (n - k)!] = Γ(n + 1) / [Γ(k + 1) Γ(n - k + 1)]
The probability of at least k successes in n trials is given by

Pr[s ≥ k] = Σ from i=k to n of pₙ(i) = I_p(k, n - k + 1)

where I_x(a, b) is the incomplete beta function defined by

I_x(a, b) = [1/B(a, b)] ∫ from 0 to x of t^(a-1) (1 - t)^(b-1) dt,    B(a, b) = Γ(a)Γ(b)/Γ(a + b),  a > 0, b > 0
The probability of getting between k₁ and k₂ successes in n trials describes a cumulative probability
because we must sum the probabilities of all possible outcomes for the event (the probability of exactly k₁,
then k₁ + 1, then k₁ + 2 successes, and so on to k₂ successes) and is given by

Pr[k₁ ≤ s ≤ k₂] = Σ from i=k₁ to k₂ of pₙ(i)

For large n, its evaluation can become a computational nightmare.
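One way to sidestep the long summation is the incomplete beta identity quoted above, which MATLAB provides as betainc (the regularized incomplete beta function). A sketch comparing the direct sum with the identity, for arbitrary illustrative values of n, p, k₁, and k₂:

n = 40; p = 0.3; k1 = 10; k2 = 16;
p_sum = 0;
for i = k1:k2
    p_sum = p_sum + nchoosek(n, i) * p^i * (1-p)^(n-i);     % direct summation of Eq. (2.47)
end
p_beta = betainc(p, k1, n-k1+1) - betainc(p, k2+1, n-k2);   % I_p(k1,.) - I_p(k2+1,.)
[p_sum  p_beta]                                             % identical within round-off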
Some Useful Approximations
When n is large and neither p → 0 nor q → 0, the probability pₙ(k) of exactly k successes in n trials may
be approximated by the Gaussian

pₙ(k) = C(n, k) p^k (1 - p)^(n-k) ≈ [1/√(2πσ²)] exp[-(k - m)²/(2σ²)],    m = np,  σ² = np(1 - p)

This result is based on the central limit theorem and called the de Moivre-Laplace approximation. It
assumes that σ² ≫ 1 and that |k - m| is of the same order as σ or less. Using this result, the probability of at
least k successes in n trials may be written in terms of the error function or Q-function as

Pr[s ≥ k] ≈ 0.5 - 0.5 erf[(k - m)/(σ√2)] = Q[(k - m)/σ]

Similarly, the cumulative probability of between k₁ and k₂ successes in n trials is

Pr[k₁ ≤ s ≤ k₂] ≈ 0.5 erf[(k₂ - m)/(σ√2)] - 0.5 erf[(k₁ - m)/(σ√2)] = Q[(k₁ - m)/σ] - Q[(k₂ - m)/σ]
The Poisson Probability
If the number of trials n approaches infinity and the probability p of success in each trial approaches zero
but their product m = np remains constant, the binomial probability is well approximated by

Pr[s = k] = C(n, k) p^k (1 - p)^(n-k) ≈ m^k e^(-m) / k! = pₘ(k)    (Poisson probability)    (2.48)

where n ≫ 1, p ≪ 1, and m = np. This is known as the Poisson probability. The mean and variance of this
distribution are both equal to m. In practical situations, the Poisson approximation is often used whenever
n is large and p is small and not just under the stringent limiting conditions imposed in its derivation. Unlike
the binomial distribution, which requires probabilities for both success and failure, the Poisson distribution
requires only the probability of a success p (through the parameter m = np that describes the expected
number of successes) and may thus be used even when the number of unsuccessful outcomes is unknown.
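A quick check of Eq. (2.48): for a binomial with large n and small p, the Poisson form should be close to the exact probability. The values below are illustrative assumptions only:

n = 200; p = 0.025; m = n*p; k = 8;
p_binom   = nchoosek(n, k) * p^k * (1-p)^(n-k);   % exact binomial Pr[s = k]
p_poisson = m^k * exp(-m) / factorial(k);         % Poisson approximation
[p_binom  p_poisson]                              % close, since n >> 1 and p << 1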
The probability that the number of successes will lie between 0 and k inclusive, if the expected number is
m, is given by the summation

Pr[s ≤ k] = Σ from i=0 to k of pₘ(i) = Σ from i=0 to k of m^i e^(-m)/i! = 1 - P(k + 1, m),  k ≥ 1

where P(a, x) is the so-called incomplete gamma function defined by

P(a, x) = [1/Γ(a)] ∫ from 0 to x of t^(a-1) e^(-t) dt,  a > 0
Note that Pr[s = 0] = e^(-m). For large m, the Poisson probability of exactly k successes may also be
approximated using the central limit theorem to give

pₘ(k) ≈ [1/√(2πm)] exp[-(k - m)²/(2m)]

Using this approximation, the probability of between k₁ and k₂ successes is

Pr[k₁ ≤ s ≤ k₂] ≈ 0.5 erf[(k₂ - m)/√(2m)] - 0.5 erf[(k₁ - m)/√(2m)] = Q[(k₁ - m)/√m] - Q[(k₂ - m)/√m]
2.7.8 Distributions for Deterministic Signals
The idea of distributions also applies to deterministic periodic signals for which they can be found as exact
analytical expressions. Consider the periodic signal x(t) of Figure 2.5. The probability Pr[X < 0] that
x(t) < 0 is zero. The probability Pr[X < 3] that x(t) is less than 3 is 1. Since x(t) varies linearly over one
period (T = 3), all values in this range are equally likely. This means that the density function is constant
over the range 0 ≤ x ≤ 3 with an area of unity, and the distribution F(x) rises linearly from zero to unity
over this range. The distribution F(x) and density f(x) are also shown in Figure 2.5. The variance can be
computed either from the signal x(t) itself or from its density function f(x). The results are repeated for
convenience:

σₓ² = (1/T) ∫ from 0 to T of x²(t) dt  -  [ (1/T) ∫ from 0 to T of x(t) dt ]²    or    σₓ² = ∫ from -∞ to ∞ of x² f(x) dx - mₓ²    (2.49)
[Figure 2.5 A periodic signal and its density and distribution functions: x(t) rises linearly from 0 to 3 over each period (T = 3); its density f(x) is constant at 1/3 for 0 ≤ x ≤ 3; its distribution F(x) rises linearly from 0 to 1 over the same range]
DRILL PROBLEM 2.14
(a) Refer to Figure 2.5. Compute the variance from x(t) itself.
(b) Refer to Figure 2.5. Compute the variance from its density function f(x).
Answers: (a) 0.75 (b) 0.75
2.7.9 Stationary, Ergodic, and Pseudorandom Signals
A random signal is called stationary if its statistical features do not change over time. Thus, different (non-
overlapping) segments of a single realization are more or less identical in the statistical sense. Signals that are
non-stationary do not possess this property and may indeed exhibit a trend (a linear trend, for example)
with time. Stationarity suggests a state of statistical equilibrium akin to the steady-state for deterministic
situations. A stationary process is typically characterized by a constant mean and constant variance. The
statistical properties of a stationary random signal may be found as ensemble averages across the process
by averaging over all realizations at one specic time instant or as time averages along the process by
averaging a single realization over time. The two are not always equal. If they are, a stationary process is
said to be ergodic. The biggest advantage of ergodicity is that we can use features from a single realization
to describe the whole process. It is very difficult to establish whether a stationary process is ergodic but,
because of the advantages it offers, ergodicity is often assumed in most practical situations! For an ergodic
signal, the mean equals the time average, and the variance equals the ac power (the power in the signal with
its dc component removed).
2.7.10 Statistical Estimates
Probability theory allows us to fully characterize a random signal from an a priori knowledge of its probability
distribution. This yields features like the mean and variance of the random variable. In practice, we are
faced with exactly the opposite problem of finding such features from a set of discrete data, often in the
absence of a probability distribution. The best we can do is get an estimate of such features and perhaps
even the distribution itself. This is what statistical estimation achieves. The mean and variance are typically
estimated directly from the observations xₖ, k = 0, 1, 2, . . . , N - 1 as

mₓ = (1/N) Σ from k=0 to N-1 of xₖ        σₓ² = [1/(N - 1)] Σ from k=0 to N-1 of (xₖ - mₓ)²    (2.50)
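A minimal sketch of Eq. (2.50) applied to simulated data (Gaussian samples with mean 2 and variance 9 are assumed only to have known targets to compare against):

N = 1000;
x = 2 + 3*randn(1, N);               % simulated observations
mx = sum(x)/N;                       % estimated mean
vx = sum((x - mx).^2)/(N - 1);       % estimated (unbiased) variance
[mx vx; mean(x) var(x)]              % same results from MATLAB's mean and var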
Histograms
The estimates fₖ of a probability distribution are obtained by constructing a histogram from a large number
of observations. A histogram is a bar graph of the number of observations falling within specified amplitude
levels, or bins, as illustrated in Figure 2.6.
[Figure 2.6 Histograms of a uniformly distributed and a normally distributed random signal (number of observations versus bin)]
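Histograms of the kind shown in Figure 2.6 are easy to reproduce; a sketch with arbitrary sample size and bin count (histogram requires a recent MATLAB release; hist can be used on older ones):

xu = rand(1, 1e5);                         % uniformly distributed signal
xg = randn(1, 1e5);                        % Gaussian (normal) signal
subplot(1,2,1); histogram(xu, 30); title('Uniform')
subplot(1,2,2); histogram(xg, 30); title('Gaussian')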
Pseudorandom Signals
In many situations, we use artificially generated signals (which can never be truly random) with prescribed
statistical features called pseudorandom signals. Such signals are actually periodic (with a very long
period), but over one period their statistical features approximate those of random signals.
2.7.11 Random Signal Analysis
If a random signal forms the input to a system, the best we can do is to develop features that describe the
output on the average and estimate the response of a system under the influence of random signals. Such
estimates may be developed either in the time domain or in the frequency domain.
Signal-to-Noise Ratio
For a noisy signal x(t) = s(t) + An(t) with a signal component s(t) and a noise component An(t) (with noise
amplitude A), the signal-to-noise ratio (SNR) is the ratio of the signal power σₛ² and noise power A²σₙ²
and is usually defined in decibels (dB) as

SNR = 10 log[σₛ² / (A²σₙ²)]  dB    (2.51)

Since power is proportional to the square of amplitude, the decibel value of an amplitude ratio α is defined as 20 log α. We can adjust the SNR by varying the noise amplitude A.
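A sketch of this adjustment: Eq. (2.51) is solved for the noise amplitude A that realizes a prescribed SNR, and the result is then measured from the noisy signal (signal, noise model, and target SNR are illustrative assumptions):

t = 0:1/8192:1;
s = sin(2*pi*400*t);                            % signal component
n = randn(size(t));                             % unit-variance noise
SNRdB = 10;                                     % desired SNR in dB
A = sqrt(var(s)/(var(n)*10^(SNRdB/10)));        % from Eq. (2.51), solved for A
x = s + A*n;                                    % noisy signal
SNRmeas = 10*log10(var(s)/var(A*n))             % should be close to 10 dB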
Application: Coherent Signal Averaging
Coherent signal averaging is a method of extracting signals from noise and assumes that the experiment can
be repeated and the noise corrupting the signal is random (and uncorrelated). Averaging the results of many
runs tends to average out the noise to zero, and the signal quality (or signal-to-noise ratio) improves. The
more the number of runs, the smoother and less noisy the averaged signal. We often remove the mean or any
linear trend before averaging. Figure 2.7 shows one realization of a noisy sine wave and the much smoother
results of averaging 8 and 48 such realizations. This method is called coherent because it requires time
coherence (time alignment of the signal for each run). It relies, for its success, on perfect synchronization of
each run and on the statistical independence of the contaminating noise.
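A minimal sketch of coherent averaging in the spirit of Figure 2.7 (the signal, noise level, and number of runs R are assumed values; each run uses fresh, independent noise):

t = linspace(0, 10, 500);
s = sin(2*pi*0.5*t);                 % underlying signal
R = 48;                              % number of runs to average
xavg = zeros(size(t));
for r = 1:R
    xavg = xavg + (s + randn(size(t)));   % each run: signal plus fresh noise
end
xavg = xavg/R;                       % noise variance drops by a factor of R
plot(t, s + randn(size(t)), t, xavg) % one noisy run vs. the averaged result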
[Figure 2.7 Coherent averaging of a noisy sine wave: (a) one realization of the noisy sine, (b) average of 8 realizations, (c) average of 48 realizations (amplitude versus time)]
CHAPTER 2 PROBLEMS
2.1 (Discrete Signals) Sketch each signal and find its energy or power as appropriate.
(a) x[n] = {⇓6, 4, 2, 2}  (b) y[n] = {3, 2, ⇓1, 0, 1}
(c) f[n] = {⇓0, 2, 4, 6}  (d) g[n] = u[n] - u[n - 4]
(e) p[n] = cos(nπ/2)  (f) q[n] = 8(0.5)ⁿ u[n]
[Hints and Suggestions: Only p[n] is a power signal. The rest have finite energy.]
2.2 (Signal Duration) Use examples to argue that the product of a right-sided and a left-sided discrete-
time signal is always time-limited or identically zero.
[Hints and Suggestions: Select simple signals that either overlap or do not overlap.]
2.3 (Operations) Let x[n] = {⇓6, 4, 2, 2}. Sketch the following signals and find their signal energy.
(a) y[n] = x[n - 2]  (b) f[n] = x[n + 2]  (c) g[n] = x[-n + 2]  (d) h[n] = x[-n - 2]
[Hints and Suggestions: Note that g[n] is a folded version of f[n].]
2.4 (Operations) Let x[n] = 8(0.5)ⁿ(u[n + 1] - u[n - 3]). Sketch the following signals.
(a) y[n] = x[n - 3]  (b) f[n] = x[n + 1]  (c) g[n] = x[-n + 4]  (d) h[n] = x[-n - 2]
[Hints and Suggestions: Note that x[n] contains 5 samples (from n = -1 to n = 3). To display the
marker for y[n] (which starts at n = 2), we include two zeros at n = 0 (the marker) and n = 1.]
2.5 (Energy and Power) Classify the following as energy signals, power signals, or neither and find the
energy or power as appropriate.
(a) x[n] = 2ⁿ u[-n]  (b) y[n] = 2ⁿ u[-n - 1]  (c) f[n] = cos(nπ)
(d) g[n] = cos(nπ/2)  (e) p[n] = (1/n) u[n - 1]  (f) q[n] = (1/√n) u[n - 1]
(g) r[n] = (1/n²) u[n - 1]  (h) s[n] = e^(jnπ)  (i) d[n] = e^(jnπ/2)
(j) t[n] = e^((j+1)nπ/4)  (k) v[n] = j^(n/4)  (l) w[n] = (√j)ⁿ + (√(-j))ⁿ
[Hints and Suggestions: For x[n] and y[n], 2^(2n) = 4ⁿ = (0.25)^(-n). Sum this from n = -∞ to n = 0
(or n = -1) using a change of variable (n → -n) in the summation. For p[n], sum 1/n² over n = 1
to n = ∞ using tables. For q[n], the sum of 1/n from n = 1 to n = ∞ does not converge! For t[n],
separate the exponentials. To compute the power for s[n] and d[n], note that |s[n]| = |d[n]| = 1. For
v[n], use j = e^(jπ/2). For w[n], set √j = e^(jπ/4) and use Euler's relation to convert to a sinusoid.]
2.6 (Energy and Power) Sketch each of the following signals, classify as an energy signal or power
signal, and find the energy or power as appropriate.
(a) x[n] = Σ over all k of y[n - kN], where y[n] = u[n] - u[n - 3] and N = 6
(b) f[n] = Σ over all k of (2)^(n-5k) (u[n - 5k] - u[n - 5k - 4])
[Hints and Suggestions: The period of x[n] is N = 6. With y[n] = u[n] - u[n - 3], one period of
x[n] (starting at n = 0) is {1, 1, 1, 0, 0, 0}. The period of f[n] is N = 5. Its one period (starting at
n = 0) contains four samples from 2ⁿ(u[n] - u[n - 4]) and one trailing zero.]
2.7 (Decimation and Interpolation) Let x[n] = {4, 0, ⇓2, 1, 3}. Find and sketch the following
signals and compare their signal energy with the energy in x[n].
(a) The decimated signal d[n] = x[2n]
(b) The zero-interpolated signal f[n] = x[n/2]
(c) The step-interpolated signal g[n] = x[n/2]
(d) The linearly interpolated signal h[n] = x[n/2]
[Hints and Suggestions: To get d[n], retain the samples of x[n] at n = 0, ±2, ±4, . . . . Assuming that
the interpolated signals will be twice the length of x[n], the last sample will be 0 for f[n], 3 for g[n],
and 1.5 (the linearly interpolated value with x[n] = 0, n > 2) for h[n].]
2.8 (Interpolation and Decimation) Let x[n] = 4 tri(n/4). Sketch the following signals and describe
how they differ.
(a) x[2n/3], using zero interpolation followed by decimation
(b) x[2n/3], using step interpolation followed by decimation
(c) x[2n/3], using decimation followed by zero interpolation
(d) x[2n/3], using decimation followed by step interpolation
2.9 (Fractional Delay) Starting with x[n], we can generate the signal x[n - 2] (using a delay of 2) or
x[2n - 3] (using a delay of 3 followed by decimation). However, to generate a fractional delay of the
form x[n - M/N] requires a delay, interpolation, and decimation!
(a) Describe the sequence of operations required to generate x[n - 2/3] from x[n].
(b) Let x[n] = {⇓1, 4, 7, 10, 13}. Sketch x[n] and x[n - 2/3]. Use linear interpolation where required.
(c) Generalize the results of part (a) to generate x[n - M/N] from x[n]. Are there any restrictions on M and N?
[Hints and Suggestions: In part (a), the sequence of operations requires interpolation, delay (by 2),
and decimation. The interpolation and decimation factors are identical.]
2.10 (Symmetry) Sketch each signal and its even and odd parts.
(a) x[n] = 8(0.5)ⁿ u[n]  (b) y[n] = u[n]  (c) f[n] = 1 + u[n]
(d) g[n] = u[n] - u[n - 4]  (e) p[n] = tri((n - 3)/3)  (f) q[n] = {6, ⇓4, 2, 2}
[Hints and Suggestions: Confirm the appropriate symmetry for each even part and each odd part.
For each even part, the sample at n = 0 must equal the original sample value. For each odd part, the
sample at n = 0 must equal zero.]
2.11 (Sketching Discrete Signals) Sketch each of the following signals:
(a) x[n] = r[n + 2] - r[n - 2] - 4u[n - 6]  (b) y[n] = rect(n/6)
(c) f[n] = rect((n - 2)/4)  (d) g[n] = 6 tri((n - 4)/3)
[Hints and Suggestions: Note that f[n] is a rectangular pulse centered at n = 2 with 5 samples. Also,
g[n] is a triangular pulse centered at n = 4 with 7 samples (including the zero-valued end samples).]
2.12 (Sketching Signals) Sketch the following signals and describe how they are related.
(a) x[n] = δ[n]  (b) f[n] = rect(n)  (c) g[n] = tri(n)  (d) h[n] = sinc(n)
2.13 (Signal Description) For each signal shown in Figure P2.13,
(a) Write out the numeric sequence, and mark the index n = 0 by an arrow.
(b) Write an expression for each signal using impulse functions.
(c) Write an expression for each signal using steps and/or ramps.
(d) Find the signal energy.
(e) Find the signal power, assuming that the sequence shown repeats itself.
[Figure P2.13 Signals for Problem 2.13 (four discrete signals plotted against n)]
[Hints and Suggestions: In part (c), all signals must be turned off (by step functions) and any
ramps must first be flattened out (by other ramps). For example, signal 3 = r[n] - r[n - 5] - 5u[n - 6].
The second term flattens out the first ramp and the last term turns the signal off after n = 5.]
2.14 (Discrete Exponentials) A causal discrete exponential has the form x[n] = αⁿ u[n].
(a) Assume that α is real and positive. Pick convenient values for α > 1, α = 1, and α < 1; sketch
x[n]; and describe the nature of the sketch for each choice of α.
(b) Assume that α is real and negative. Pick convenient values for α < -1, α = -1, and α > -1;
sketch x[n]; and describe the nature of the sketch for each choice of α.
(c) Assume that α is complex and of the form α = Ae^(jθ), where A is a positive constant. Pick
convenient values for θ and for A < 1, A = 1, and A > 1; sketch the real part and imaginary
part of x[n] for each choice of A; and describe the nature of each sketch.
(d) Assume that α is complex and of the form α = Ae^(jθ), where A is a positive constant. Pick
convenient values for θ and for A < 1, A = 1, and A > 1; sketch the magnitude and
phase of x[n] for each choice of A; and describe the nature of each sketch.
2.15 (Signal Representation) The two signals shown in Figure P2.15 may be expressed as
(a) x[n] = Aαⁿ(u[n] - u[n - N])  (b) y[n] = A cos(2πFn + θ)
Find the constants in each expression and then find the signal energy or power as appropriate.
[Figure P2.15 Signals for Problem 2.15]
[Hints and Suggestions: For y[n], first find the period to compute F. Then, evaluate y[n] at two
values of n to get two equations for, say, y[0] and y[1]. These will yield (from their ratio) θ and A.]
2.16 (Discrete-Time Harmonics) Check for the periodicity of the following signals, and compute the
common period N if periodic.
(a) x[n] = cos(nπ/2)  (b) y[n] = cos(n/2)
(c) f[n] = sin(nπ/4) - 2 cos(nπ/6)  (d) g[n] = 2 cos(nπ/4) + cos²(nπ/4)
(e) p[n] = 4 - 3 sin(7nπ/4)  (f) q[n] = cos(5nπ/12) + cos(4nπ/9)
(g) r[n] = cos(8πn/3) + cos(8n/3)  (h) s[n] = cos(8nπ/3) cos(nπ/2)
(i) d[n] = e^(j0.3nπ)  (j) e[n] = 2e^(j0.3nπ) + 3e^(j0.4nπ)  (k) v[n] = e^(j0.3n)  (l) w[n] = (j)^(n/2)
[Hints and Suggestions: There is no periodicity if F is not a rational fraction for any component.
Otherwise, work with the periods and find their LCM. For w[n], note that j = e^(jπ/2).]
2.17 (The Roots of Unity) The N roots of the equation z^N = 1 can be found by writing it as z^N = e^(j2kπ)
to give z = e^(j2kπ/N), k = 0, 1, . . . , N - 1. What is the magnitude and angle of each root? The roots
can be displayed as vectors directed from the origin whose tips lie on a circle.
(a) What is the length of each vector and the angular spacing between adjacent vectors? Sketch for
N = 5 and N = 6.
(b) Extend this concept to find the roots of z^N = -1 and sketch for N = 5 and N = 6.
[Hints and Suggestions: In part (b), note that z^N = -1 = e^(jπ)e^(j2kπ) = e^(j(2k+1)π).]
2.18 (Digital Frequency) Set up an expression for each signal, using a digital frequency |F| < 0.5, and
another expression using a digital frequency in the range 4 < F < 5.
(a) x[n] = cos(4nπ/3)  (b) x[n] = sin(4nπ/3) + 3 sin(8nπ/3)
[Hints and Suggestions: First find the digital frequency of each component in the principal range
(-0.5 < F ≤ 0.5). Then, add 4 or 5 as appropriate to bring each frequency into the required range.]
2.19 (Digital Sinusoids) Find the period N of each signal if periodic. Express each signal using a digital
frequency in the principal range (|F| < 0.5) and in the range 3 ≤ F ≤ 4.
(a) x[n] = cos(7nπ/3)  (b) x[n] = cos(7nπ/3) + sin(0.5nπ)  (c) x[n] = cos(nπ)
2.20 (Sampling and Aliasing) Each of the following sinusoids is sampled at S = 100 Hz. Determine if
aliasing has occurred and set up an expression for each sampled signal using a digital frequency in the
principal range (|F| < 0.5).
(a) x(t) = cos(320πt + π/4)  (b) x(t) = cos(140πt - π/4)  (c) x(t) = sin(60πt)
[Hints and Suggestions: Find the frequency f₀. If S > 2f₀ there is no aliasing and F < 0.5.
Otherwise, bring F into the principal range to write the expression for the sampled signal.]
2.21 (Aliasing and Signal Reconstruction) The signal x(t) = cos(320πt + π/4) is sampled at 100 Hz,
and the sampled signal x[n] is reconstructed at 200 Hz to recover the analog signal x_r(t).
(a) Has aliasing occurred? What is the period N and the digital frequency F of x[n]?
(b) How many full periods of x(t) are required to generate one period of x[n]?
(c) What is the analog frequency of the recovered signal x_r(t)?
(d) Write expressions for x[n] (using |F| < 0.5) and for x_r(t).
[Hints and Suggestions: For part (b), if the digital frequency is expressed as F = k/N where N
is the period and k is an integer, it takes k full cycles of the analog sinusoid to get N samples of
the sampled signal. In part (c), the frequency of the reconstructed signal is found from the aliased
frequency in the principal range.]
2.22 (Digital Pitch Shifting) One way to accomplish pitch shifting is to play back (or reconstruct) a
sampled signal at a different sampling rate. Let the analog signal x(t) = sin(15800πt + 0.25π) be
sampled at a sampling rate of 8 kHz.
(a) Find its sampled representation with digital frequency |F| < 0.5.
(b) What frequencies are heard if the signal is reconstructed at a rate of 4 kHz?
(c) What frequencies are heard if the signal is reconstructed at a rate of 8 kHz?
(d) What frequencies are heard if the signal is reconstructed at a rate of 20 kHz?
[Hints and Suggestions: The frequency of the reconstructed signal is found from the aliased digital
frequency in the principal range and the appropriate reconstruction rate.]
2.23 (Discrete-Time Chirp Signals) Consider the signal x(t) = cos[φ(t)], where φ(t) = αt². Show that
its instantaneous frequency f_i(t) = (1/2π)φ'(t) varies linearly with time.
(a) Choose α such that the frequency varies from 0 Hz to 2 Hz in 10 seconds, and generate the
sampled signal x[n] from x(t), using a sampling rate of S = 4 Hz.
(b) It is claimed that, unlike x(t), the signal x[n] is periodic. Verify this claim, using the condition
for periodicity (x[n] = x[n + N]), and determine the period N of x[n].
(c) The signal y[n] = cos(πF₀n²/M), n = 0, 1, . . . , M - 1, describes an M-sample chirp whose digital
frequency varies linearly from 0 to F₀. What is the period of y[n] if F₀ = 0.25 and M = 8?
[Hints and Suggestions: In part (b), if x[n] = cos(βn²), periodicity requires x[n] = x[n + N], or
cos(βn²) = cos[β(n² + 2nN + N²)]. Thus, 2βnN = 2πm and βN² = 2πk, where m and k are integers.
Satisfy these conditions for the smallest integer N.]
2.24 (Time Constant) For exponentially decaying discrete signals, the time constant is a measure of
how fast a signal decays. The 60-dB time constant describes the (integer) number of samples it takes
for the signal level to decay by a factor of 1000 (or 20 log 1000 = 60 dB).
(a) Let x[n] = (0.5)
n
u[n]. Compute its 60-dB time constant and 40-dB time constant.
(b) Compute the time constant in seconds if the discrete-time signal is derived from an analog signal
sampled at 1 kHz.
2.25 (Signal Delay) The delay D of a discrete-time energy signal x[n] is defined by

D = [Σ over all k of k x²[k]] / [Σ over all k of x²[k]]

(a) Verify that the delay of the symmetric sequence x[n] = {4, 3, 2, 1, ⇓0, 1, 2, 3, 4} is zero.
(b) Compute the delay of the signals g[n] = x[n - 1] and h[n] = x[n - 2].
(c) What is the delay of the signal y[n] = 1.5(0.5)ⁿ u[n] - 2δ[n]?
[Hints and Suggestions: For part (c), compute the summations required in the expression for the
delay by using tables and the fact that y[n] = -0.5 for n = 0 and y[n] = 1.5(0.5)ⁿ for n ≥ 1.]
2.26 (Periodicity) It is claimed that the sum of an absolutely summable signal x[n] and its shifted (by
multiples of N) replicas is a periodic signal x_p[n] with period N. Verify this claim by sketching the
following and, for each case, compute the power in the resulting periodic signal x_p[n] and compare the
sum and energy of one period of x_p[n] with the sum and energy of x[n].
(a) The sum of x[n] = tri(n/3) and its replicas shifted by N = 7
(b) The sum of x[n] = tri(n/3) and its replicas shifted by N = 6
(c) The sum of x[n] = tri(n/3) and its replicas shifted by N = 5
(d) The sum of x[n] = tri(n/3) and its replicas shifted by N = 4
(e) The sum of x[n] = tri(n/3) and its replicas shifted by N = 3
2.27 (Periodic Extension) The sum of an absolutely summable signal x[n] and its shifted (by multiples
of N) replicas is called the periodic extension of x[n] with period N.
(a) Show that one period of the periodic extension of the signal x[n] = αⁿ u[n] with period N is
y[n] = αⁿ / (1 - α^N),  0 ≤ n ≤ N - 1
(b) How does the sum of one period of the periodic extension y[n] compare with the sum of x[n]?
(c) With α = 0.5 and N = 3, compute the signal energy in x[n] and the signal power in y[n].
[Hints and Suggestions: For one period (n = 0 to n = N - 1), only x[n] and the tails of the replicas
to its left contribute. So, find the sum of x[n + kN] = α^(n+kN) only from k = 0 to k = ∞.]
2.28 (Signal Norms) Norms provide a measure of the size of a signal. The p-norm, or Holder norm,
‖x‖_p for discrete signals is defined by ‖x‖_p = (Σ|x|^p)^(1/p), where p is a positive integer. For
p = ∞, we also define ‖x‖_∞ as the peak absolute value |x|_max.
(a) Let x[n] = {3, j4, 3 + j4}. Find ‖x‖_1, ‖x‖_2, and ‖x‖_∞.
(b) What is the significance of each of these norms?
COMPUTATION AND DESIGN
2.29 (Discrete Signals) For each part, plot the signals x[n] and y[n] over -10 ≤ n ≤ 10 and compare.
(a) x[n] = u[n + 4] - u[n - 4] + 2δ[n + 6] - δ[n - 3];  y[n] = x[n - 4]
(b) x[n] = r[n + 6] - r[n + 3] - r[n - 3] + r[n - 6];  y[n] = x[n - 4]
(c) x[n] = rect(n/10) - rect((n - 3)/6);  y[n] = x[n + 4]
(d) x[n] = 6 tri(n/6) - 3 tri(n/3);  y[n] = x[n + 4]
2.30 (Signal Interpolation) Let h[n] = sin(nπ/3), 0 ≤ n ≤ 10. Plot the signal h[n]. Use this to
generate and plot the zero-interpolated, step-interpolated, and linearly interpolated signals assuming
interpolation by 3.
2.31 (Discrete Exponentials) A causal discrete exponential may be expressed as x[n] = αⁿ u[n], where
the nature of α dictates the form of x[n]. Plot the following over 0 ≤ n ≤ 40 and comment on the
nature of each plot.
(a) The signal x[n] for α = 1.2, α = 1, and α = 0.8.
(b) The signal x[n] for α = -1.2, α = -1, and α = -0.8.
(c) The real part and imaginary part of x[n] for α = Ae^(jπ/4), with A = 1.2, A = 1, and A = 0.8.
(d) The magnitude and phase of x[n] for α = Ae^(jπ/4), with A = 1.2, A = 1, and A = 0.8.
2.32 (Discrete-Time Sinusoids) Which of the following signals are periodic and with what period? Plot
each signal over -10 ≤ n ≤ 30. Do the plots confirm your expectations?
(a) x[n] = 2 cos(nπ/2) + 5 sin(nπ/5)  (b) x[n] = 2 cos(nπ/2) sin(nπ/3)
(c) x[n] = cos(0.5n)  (d) x[n] = 5 sin(nπ/8 + π/4) - 5 cos(nπ/8 - π/4)
2.33 (Complex-Valued Signals) A complex-valued signal x[n] requires two plots for a complete description
in one of two forms: the magnitude and phase vs. n, or the real part vs. n and imaginary part vs. n.
(a) Let x[n] = {⇓2, 1 + j, j2, 2 - j2, 4}. Sketch each form for x[n] by hand.
(b) Let x[n] = e^(j0.3nπ). Use Matlab to plot each form over -30 ≤ n ≤ 30. Is x[n] periodic? If so,
can you identify its period from the Matlab plots? From which form, and how?
2.34 (Complex Exponentials) Let x[n] = 5√2 e^(j(nπ/9 - π/4)). Plot the following signals and, for each case,
derive analytic expressions for the signals plotted and compare with your plots. Is the signal x[n]
periodic? What is the period N? Which plots allow you to determine the period of x[n]?
(a) The real part and imaginary part of x[n] over -20 ≤ n ≤ 20
(b) The magnitude and phase of x[n] over -20 ≤ n ≤ 20
(c) The sum of the real and imaginary parts over -20 ≤ n ≤ 20
(d) The difference of the real and imaginary parts over -20 ≤ n ≤ 20
2.35 (Complex Exponentials) Let x[n] = (√j)ⁿ + (√(-j))ⁿ. Plot the following signals and, for each case,
derive analytic expressions for the sequences plotted and compare with your plots. Is the signal x[n]
periodic? What is the period N? Which plots allow you to determine the period of x[n]?
(a) The real part and imaginary part of x[n] over -20 ≤ n ≤ 20
(b) The magnitude and phase of x[n] over -20 ≤ n ≤ 20
2.36 (Discrete-Time Chirp Signals) An N-sample chirp signal x[n] whose digital frequency varies
linearly from F₀ to F₁ is described by

x[n] = cos[ 2π( F₀n + ((F₁ - F₀)/(2N)) n² ) ],  n = 0, 1, . . . , N - 1

(a) Generate and plot 800 samples of a chirp signal x whose digital frequency varies from F = 0 to
F = 0.5. Using the Matlab-based routine timefreq (from the author's website), observe how
the frequency of x varies linearly with time.
(b) Generate and plot 800 samples of a chirp signal x whose digital frequency varies from F = 0 to
F = 1. Is the frequency always increasing? If not, what is the likely explanation?
2.37 (Chirp Signals) It is claimed that the chirp signal x[n] = cos(πn²/6) is periodic (unlike the analog
chirp signal x(t) = cos(πt²/6)). Plot x[n] over 0 ≤ n ≤ 20. Does x[n] appear periodic? If so, can you
identify the period N? Justify your results by trying to find an integer N such that x[n] = x[n + N]
(the basis for periodicity).
2.38 (Signal Averaging) Extraction of signals from noise is an important signal-processing application.
Signal averaging relies on averaging the results of many runs. The noise tends to average out to zero,
and the signal quality or signal-to-noise ratio (SNR) improves.
(a) Generate samples of the sinusoid x(t) = sin(800πt) sampled at S = 8192 Hz for 2 seconds. The
sampling rate is chosen so that you may also listen to the signal if your machine allows.
(b) Create a noisy signal s[n] by adding x[n] to samples of uniformly distributed noise such that s[n]
has an SNR of 10 dB. Compare the noisy signal with the original and compute the actual SNR
of the noisy signal.
(c) Sum the signal s[n] 64 times and average the result to obtain the signal s_a[n]. Compare the
averaged signal s_a[n], the noisy signal s[n], and the original signal x[n]. Compute the SNR of
the averaged signal s_a[n]. Is there an improvement in the SNR? Do you notice any (visual and
audible) improvement? Should you?
(d) Create the averaged result x_b[n] of 64 different noisy signals and compare the averaged signal
x_b[n] with the original signal x[n]. Compute the SNR of the averaged signal x_b[n]. Is there an
improvement in the SNR? Do you notice any (visual and/or audible) improvement? Explain how
the signal x_b[n] differs from s_a[n].
(e) The reduction in SNR is a function of the noise distribution. Generate averaged signals, using
different noise distributions (such as Gaussian noise) and comment on the results.
2.39 (The Central Limit Theorem) The central limit theorem asserts that the sum of independent noise
distributions tends to a Gaussian distribution as the number N of distributions in the sum increases.
In fact, one way to generate a random signal with a Gaussian distribution is to add many (typically 6
to 12) uniformly distributed signals.
(a) Generate the sum of uniformly distributed random signals using N = 2, N = 6, and N = 12 and
plot the histograms of each sum. Does the histogram begin to take on a Gaussian shape as N
increases? Comment on the shape of the histogram for N = 2.
(b) Generate the sum of random signals with different distributions using N = 6 and N = 12. Does
the central limit theorem appear to hold even when the distributions are not identical (as long
as you select a large enough N)? Comment on the physical significance of this result.
2.40 (Music Synthesis I) A musical composition is a combination of notes, or signals, at various frequen-
cies. An octave covers a range of frequencies from f₀ to 2f₀. In the western musical scale, there are 12
notes per octave, logarithmically equispaced. The frequencies of the notes from f₀ to 2f₀ correspond
to
f = 2^(k/12) f₀,  k = 0, 1, 2, . . . , 11
The 12 notes are as follows (the ♯ and ♭ stand for sharp and flat, and each pair of notes in parentheses
has the same frequency):
A (A♯ or B♭) B C (C♯ or D♭) D (D♯ or E♭) E F (F♯ or G♭) G (G♯ or A♭)
An Example: Raga Malkauns: In Indian classical music, a raga is a musical composition based on
an ascending and descending scale. The notes and their order form the musical alphabet and grammar
from which the performer constructs musical passages, using only the notes allowed. The performance
of a raga can last from a few minutes to an hour or more! Raga malkauns is a pentatonic raga (with
five notes) and the following scales:
Ascending: D F G B♭ C D    Descending: C B♭ G F D
The final note in each scale is held twice as long as the rest. To synthesize this scale in Matlab, we
start with a frequency f₀ corresponding to the first note D and go up in frequency to get the notes in
the ascending scale; when we reach the note D, which is an octave higher, we go down in frequency to
get the notes in the descending scale. Here is a Matlab code fragment.
f0=340; d=f0; % Pick a frequency and the note D
f=f0*(2^(3/12)); g=f0*(2^(5/12)); % The notes F and G
bf=f0*(2^(8/12)); c=f0*(2^(10/12)); % The notes B(flat) and C
d2=2*d; % The note D (an octave higher)
Generate sampled sinusoids at these frequencies, using an appropriate sampling rate (say, 8192 Hz);
concatenate them, assuming silent passages between each note; and play the resulting signal, using the
Matlab command sound. Use the following Matlab code fragment as a guide:
ts=1/8192; % Sampling interval
t=0:ts:0.4; % Time for each note (0.4 s)
s1=0*(0:ts:0.1); % Silent period (0.1 s)
s2=0*(0:ts:0.05); % Shorter silent period (0.05 s)
tl=0:ts:1; % Time for last note of each scale
d1=sin(2*pi*d*t); % Start generating the notes
f1=sin(2*pi*f*t); g1=sin(2*pi*g*t);
bf1=sin(2*pi*bf*t); c1=sin(2*pi*c*t);
dl1=sin(2*pi*d2*tl); dl2=sin(2*pi*d*tl);
asc=[d1 s1 f1 s1 g1 s1 bf1 s1 c1 s2 dl1]; % Create ascending scale
dsc=[c1 s1 bf1 s1 g1 s1 f1 s1 dl2]; % Create descending scale
y=[asc s1 dsc s1]; sound(y) % Malkauns scale (y)
2.41 (Music Synthesis II) The raw scale of raga malkauns will sound pretty dry! The reason for
this is the manner in which the sound from a musical instrument is generated. Musical instruments
produce sounds by the vibrations of a string (in string instruments) or a column of air (in woodwind
instruments). Each instrument has its characteristic sound. In a guitar, for example, the strings are
plucked, held, and then released to sound the notes. Once plucked, the sound dies out and decays.
Furthermore, the notes are never pure but contain overtones (harmonics). For a realistic sound, we
must include the overtones and the attack, sustain, and release (decay) characteristics. The sound
signal may be considered to have the form x(t) = α(t)cos(2πf₀t + θ), where f₀ is the pitch and α(t)
is the envelope that describes the attack-sustain-release characteristics of the instrument played. A
crude representation of some envelopes is shown in Figure P2.41 (the piecewise linear approximations
will work just as well for our purposes). Woodwind instruments have a much longer sustain time and
a much shorter release time than do plucked string and keyboard instruments.
[Figure P2.41 Envelopes α(t) and their piecewise linear approximations for woodwind instruments and for string and keyboard instruments]
Experiment with the scale of raga malkauns and try to produce a guitar-like sound, using the appro-
priate envelope form. You should be able to discern an audible improvement.
2.42 (Music Synthesis III) Synthesize the following notes, using a woodwind envelope, and synthesize
the same notes using a plucked string envelope.
F♯(0.3) D(0.4) E(0.4) A(1) A(0.4) E(0.4) F♯(0.3) D(1)
All the notes cover one octave, and the numbers in parentheses give a rough indication of their relative
duration. Can you identify the music? (It is Big Ben.)
2.43 (Music Synthesis IV) Synthesize the first bar of Pictures at an Exhibition by Mussorgsky, which
has the following notes:
A(3) G(3) C(3) D(2) G'(1) E(3) D(2) G'(1) E(3) C(3) D(3) A(3) G(3)
All the notes cover one octave except the note G', which is an octave above G. The numbers in
parentheses give a rough indication of the relative duration of the notes (for more details, you may
want to listen to an actual recording). Assume that a keyboard instrument (such as a piano) is played.
2.44 (DTMF Tones) In dual-tone multi-frequency (DTMF) or touch-tone telephone dialing, each number
is represented by a dual-frequency tone. The frequencies for each digit are listed in Chapter 18.
(a) Generate DTMF tones corresponding to the telephone number 487-2550, by sampling the sum of
two sinusoids at the required frequencies at S = 8192 Hz for each digit. Concatenate the signals
by putting 50 zeros between each signal (to represent silence) and listen to the signal using the
Matlab command sound.
(b) Write a Matlab program that generates DTMF signals corresponding to a vector input repre-
senting the digits in a phone number. Use a sampling frequency of S = 8192 Hz.
Chapter 3
TIME-DOMAIN ANALYSIS
3.0 Scope and Objectives
Systems that process discrete-time signals are called discrete-time systems or digital filters. Their math-
ematical description relies heavily on how they respond to arbitrary or specific signals. This chapter starts
with the classification of discrete systems and introduces the important concepts of linearity and time invari-
ance. It presents the analysis of discrete-time systems described by difference equations. It concludes with
the all-important concept of the impulse response and the process of convolution that forms an important
method for finding the response of linear, time-invariant systems.
3.1 Discrete-Time Systems
In the time domain, many discrete-time systems can be modeled by difference equations relating the
input and output. Difference equations typically involve input and output signals and their shifted versions.
For example, the system described by y[n] = αy[n - 1] + βx[n] produces the present output y[n] as the sum
of the previous output y[n - 1] (scaled by α) and the present input x[n] (scaled by β). The quantities α and β are called coefficients.
Linear, time-invariant (LTI) systems are characterized by constant coefficients. Such systems have been
studied extensively and their response can be obtained by well established mathematical methods.
3.1.1 Linearity and Superposition
An operator allows us to transform one function to another. If an operator is represented by the symbol O,
the equation

O{x[n]} = y[n]    (3.1)

implies that if the function x[n] is treated exactly as the operator O tells us, we obtain the function y[n]. The
forward shift operator z^k describes an advance of k units and transforms x[n] to x[n + k]. We express this
transformation in operator notation as z^k{x[n]} = x[n + k]. The backward shift operator z^(-k) describes
a delay of k units and transforms x[n] to x[n - k]. In operator notation, we have z^(-k){x[n]} = x[n - k].
The unit delay operator is represented by z^(-1) and transforms x[n] to x[n - 1]. In operator notation,
z^(-1){x[n]} = x[n - 1]. An operation may describe several steps. For example, the operation O{ } = 4z^(-3){ } + 6
says that to get y[n], we must delay x[n] by 3 units, multiply the result by 4, and then add 6 to finally obtain
4z^(-3){x[n]} + 6 = 4x[n - 3] + 6 = y[n].
If an operation on the sum of two functions is equivalent to the sum of operations applied to each
separately, the operator is said to be additive. In other words,

O{x₁[n] + x₂[n]} = O{x₁[n]} + O{x₂[n]}    (for an additive operation)    (3.2)
If an operation on Kx[n] is equivalent to K times the operation on x[n], where K is a scalar, the
operator is said to be homogeneous. In other words,

O{Kx[n]} = K O{x[n]}    (for a homogeneous operation)    (3.3)

Together, the two results describe the principle of superposition. An operator O is termed a linear
operator if it obeys superposition and is therefore both additive and homogeneous:

O{Ax₁[n] + Bx₂[n]} = A O{x₁[n]} + B O{x₂[n]}    (for a linear operation)    (3.4)

If a system fails the test for either additivity or homogeneity, it is termed nonlinear. It must pass both
tests in order to be termed linear. However, in many instances, it suffices to test only for homogeneity or
additivity to confirm the linearity of an operation (even though one does not imply the other). An important
concept that forms the basis for the study of linear systems is that the superposition of linear operators is
also linear.
REVIEW PANEL 3.1
A Linear Operator Obeys Superposition: a O{x₁[n]} + b O{x₂[n]} = O{ax₁[n] + bx₂[n]}
Superposition implies both homogeneity and additivity.
Homogeneity: O{ax[n]} = a O{x[n]}    Additivity: O{x₁[n]} + O{x₂[n]} = O{x₁[n] + x₂[n]}
EXAMPLE 3.1 (Testing for Linear Operators)
(a) Consider the operator O defined by O{x[n]} = Cx[n] + D.
By the homogeneity test, O{Ax[n]} = ACx[n] + D, but A O{x[n]} = A(Cx[n] + D) = ACx[n] + AD.
The two differ, so the operation is nonlinear (it is linear only if D = 0).
(b) Consider the squaring operator O{ } = { }², which transforms x[n] to x²[n].
By the homogeneity test, A O{x[n]} = Ax²[n], but O{Ax[n]} = (Ax[n])² = A²x²[n].
The two are not equal, and the squaring operator is nonlinear.
DRILL PROBLEM 3.1
Which of the following operations are linear?
(a) O{ } = cos{ }  (b) O{ } = log{ }  (c) O{x[n]} = αⁿ x[n]
Answers: Only (c) is linear.
3.1.2 Time Invariance
Time invariance implies that the shape of the response y[n] depends only on the shape of the input x[n]
and not on the time when it is applied. If the input is shifted to x[n - n₀], the response equals y[n - n₀]
and is shifted by the same amount. In other words, the system does not change with time. Formally, if the
operator O transforms the input x[n] to the output y[n] such that O{x[n]} = y[n], time invariance means

O{x[n - n₀]} = y[n - n₀]    (for time invariance)    (3.5)

In other words, if the input is delayed by n₀ units, the output is also delayed by the same amount and
is simply a shifted replica of the original output.
REVIEW PANEL 3.2
Time Invariance from the Operational Relation
If O{x[n]} = y[n], then O{x[n - n₀]} = y[n - n₀] (shift the input by n₀ ⇒ shift the output by n₀).
EXAMPLE 3.2 (Linearity and Time Invariance of Operators)
(a) y[n] = x[n]x[n - 1] is nonlinear but time invariant.
The operation is O{ } = ({ })(z^(-1){ }). We find that
A O{x[n]} = A(x[n]x[n - 1]), but O{Ax[n]} = (Ax[n])(Ax[n - 1]). The two are not equal.
O{x[n - n₀]} = x[n - n₀]x[n - n₀ - 1], and y[n - n₀] = x[n - n₀]x[n - n₀ - 1]. The two are equal.
(b) y[n] = nx[n] is linear but time varying.
The operation is O{ } = n{ }. We find that
A O{x[n]} = A(nx[n]), and O{Ax[n]} = n(Ax[n]). The two are equal.
O{x[n - n₀]} = n(x[n - n₀]), but y[n - n₀] = (n - n₀)x[n - n₀]. The two are not equal.
(c) y[n] = x[2n] is linear but time varying. The operation n → 2n reveals that
A O{x[n]} = A(x[2n]), and O{Ax[n]} = (Ax[2n]). The two are equal.
O{x[n - n₀]} = x[2n - n₀], but y[n - n₀] = x[2(n - n₀)]. The two are not equal.
(d) y[n] = x[n - 2] is linear and time invariant. The operation n → n - 2 reveals that
A O{x[n]} = A(x[n - 2]), and O{Ax[n]} = (Ax[n - 2]). The two are equal.
O{x[n - n₀]} = x[n - n₀ - 2], and y[n - n₀] = x[n - n₀ - 2]. The two are equal.
(e) y[n] = 2^(x[n]) x[n] is nonlinear but time invariant.
The operation is O{ } = (2)^{ } { } and reveals that
A O{x[n]} = A(2)^(x[n]) x[n], but O{Ax[n]} = (2)^(Ax[n]) (Ax[n]). The two are not equal.
O{x[n - n₀]} = (2)^(x[n-n₀]) x[n - n₀], and y[n - n₀] = (2)^(x[n-n₀]) x[n - n₀]. The two are equal.
DRILL PROBLEM 3.2
Which of the following operations are time-invariant?
(a) O{x[n]} = cos(x[n])  (b) O{x[n]} = Cx[n]  (c) O{x[n]} = Cⁿ x[n]
Answers: (a) and (b) are time-invariant.
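These tests can also be spot-checked numerically. A sketch (not from the text) for the system y[n] = nx[n] of Example 3.2(b), which should pass the linearity check but fail the time-invariance check; the inputs and shift are arbitrary choices:

n  = 0:20;
x1 = randn(size(n)); x2 = randn(size(n));
T  = @(x, n) n.*x;                       % the system operator y[n] = n x[n]
a = 2; b = -3;
err_lin = max(abs(T(a*x1 + b*x2, n) - (a*T(x1, n) + b*T(x2, n))))   % ~ 0: linear
n0 = 3;
xs = [zeros(1, n0) x1(1:end-n0)];        % x1 delayed by n0 samples
y1 = T(x1, n);
err_ti = max(abs(T(xs, n) - [zeros(1, n0) y1(1:end-n0)]))           % nonzero: time varying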
3.1.3 LTI Systems
Systems that are both linear and time-invariant are termed LTI (linear, time invariant). We may check for
linearity or time-invariance by applying formal tests to the system equation as a whole or by looking at
its individual operations. A difference equation is LTI if all its coefficients are constant (and no constant
terms are present). Any nonlinear or time-varying behavior is recognized (by generalizing the results of the
previous example) as follows:
1. If a constant term is present or a term contains products of the input and/or output, the system
equation is nonlinear.
2. If a coefficient is an explicit function of n or a scaled input or output (such as y[2n]) is present, the
system equation is time varying.
REVIEW PANEL 3.3
What Makes a Difference Equation LTI or Nonlinear or Time Varying?
It is LTI if all coefficients are constant and there are no constant terms.
It is nonlinear if a term is constant or a nonlinear function of x[n] or y[n].
It is time varying if a coefficient is an explicit function of n or an input or output is scaled (e.g., y[2n]).
EXAMPLE 3.3 (Linearity and Time Invariance of Systems)
We check the following systems for linearity and time invariance.
(a) y[n] - 2y[n - 1] = 4x[n]. This is LTI.
(b) y[n] - 2ny[n - 1] = x[n]. This is linear but time varying.
(c) y[n] + 2y²[n] = 2x[n] - x[n - 1]. This is nonlinear but time invariant.
(d) y[n] - 2y[n - 1] = (2)^(x[n]) x[n]. This is nonlinear but time invariant.
(e) y[n] - 4y[n]y[2n] = x[n]. This is nonlinear and time varying.
DRILL PROBLEM 3.3
What can you say about the linearity (L) and time-invariance (TI) of the following?
(a) y[n] + 2y[n - 1] = x[n]  (b) y[n] + 2ⁿ y[n - 1] = x[n]  (c) y²[n] + 2ⁿ y[n - 1] = x[n]
Answers: (a) L and TI  (b) L but not TI  (c) Not L, not TI
3.1.4 Causality and Memory
In many practical situations, we deal with systems whose inputs and outputs are right-sided signals. In a
causal system, the present response y[n] cannot depend on future values of the input, such as x[n + 2].
Systems whose present response requires knowledge of future values of the input are termed noncausal.
Consider an LTI system whose input and output are assumed to be right-sided signals. If such a system is
described by the difference equation

y[n] + A₁y[n - 1] + ··· + A_N y[n - N] = B₀x[n + K]

it is causal as long as K ≤ 0. On the other hand, if the system is described by

y[n + L] + A₁y[n + L - 1] + ··· + A_L y[n] = B₀x[n + K]

it is causal as long as K ≤ L. The reason is that by time invariance, this system may also be described by

y[n] + A₁y[n - 1] + ··· + A_L y[n - L] = B₀x[n + K - L]

For causality, we require K - L ≤ 0 or K ≤ L. It is often easier to check for causality by examining the
operational transfer function H(z) derived from the difference equation. The general form of such a
transfer function may be expressed as a ratio of polynomials in z:

H(z) = [B₀z^P + B₁z^(P-1) + ··· + B_(P-1)z + B_P] / [A₀z^Q + A₁z^(Q-1) + ··· + A_(Q-1)z + A_Q]    (3.6)

Assuming a right-sided input and output, this system is causal if P ≤ Q.
DRILL PROBLEM 3.4
(a) Which of the two systems are causal?
y[n] = 2^(n+1) x[n]    y[n] = 2ⁿ x[n + 1]
(b) Find the operational transfer function of each system and determine if it is causal.
y[n] - 2y[n - 1] = x[n + 1]    y[n + 1] - 2y[n - 2] = x[n + 1]
Answers: (a) The first is causal. (b) The second is causal.
Instantaneous and Dynamic Systems
If the response of a system at n = n₀ depends only on the input at n = n₀ and not at any other times
(past or future), the system is called instantaneous or static. The system equation of an instantaneous
system has the form y[n + α] = Kx[n + α]. Note that the arguments of the input and output are identical.
The response of a dynamic system depends on past (and/or future) inputs. Dynamic systems are usually
described by (but not limited to) difference equations. The system y[n] + 0.5y[n - 1] = x[n] (a difference
equation) is dynamic. The system y[n] = 3x[n - 2] is also dynamic (because the arguments of the input and
output are different) but the system y[n - 2] = 3x[n - 2] is instantaneous.
REVIEW PANEL 3.4
What Makes a System Noncausal or Dynamic?
Noncausal if the numerator degree of the operational transfer function exceeds the denominator degree.
Dynamic if the system equation has a form different from y[n + α] = Kx[n + α].
EXAMPLE 3.4 (Causal and Dynamic Systems)
(a) y[n] = x[n + 2] is noncausal (to find y[0], we need x[2]) and dynamic (y[n₀] does not depend on x[n₀]
but on x[n₀ + 2]).
(b) y[n + 4] + y[n + 3] = x[n + 2]. By time invariance, y[n] + y[n - 1] = x[n - 2]. So it is causal and
dynamic.
(c) y[n] = 2x[αn] is causal and instantaneous for α = 1, causal and dynamic for α < 1, and noncausal and
dynamic for α > 1. It is also time varying if α ≠ 1.
(d) y[n] = 2(n + 1)x[n] is causal, instantaneous, and time varying.
DRILL PROBLEM 3.5
Which of the following systems are instantaneous?
(a) y[n] = 2^(n+1) x[n]  (b) y[n] = 2ⁿ x[n + 1]  (c) y[n] - 2y[n - 1] = x[n + 1]
Answers: Only (a) is instantaneous.
3.2 Digital Filters
A discrete-time system is also referred to as a digital filter and we shall use this terminology extensively. An
important formulation for digital filters is based on difference equations. The general form of an Nth-order
difference equation may be written as

y[n] + A₁y[n - 1] + ··· + A_N y[n - N] = B₀x[n] + B₁x[n - 1] + ··· + B_M x[n - M]    (3.7)

The order N describes the output term with the largest delay. It is customary to normalize the leading
coefficient to unity. The coefficients A_k and B_k are constant for an LTI digital filter. The response depends
on the applied input and initial conditions that describe its state just before the input is applied. Systems
with zero initial conditions are said to be relaxed. In general, if an input x[n] to a relaxed LTI system
undergoes a linear operation, the output y[n] undergoes the same linear operation. Often, an arbitrary
function may be decomposed into its simpler constituents, the response due to each analyzed separately
and more effectively, and the total response found using superposition. This approach is the key to several
methods of filter analysis described in subsequent chapters. An arbitrary signal x[n] can be expressed as a
weighted sum of shifted impulses. If the input to a discrete-time system is the unit impulse δ[n], the resulting
output is called the impulse response and denoted h[n]. The impulse response is the basis for finding the
filter response by a method called convolution.
3.2.1 Digital Filter Terminology
An important filter classification is based on the length of its impulse response. Consider a digital filter
described by the equation

y[n] = B_0 x[n] + B_1 x[n-1] + ... + B_M x[n-M]    (Nonrecursive or FIR filter)    (3.8)

Its present response depends only on the input terms and shows no dependence (recursion) on past values of
the response. It is called a nonrecursive filter, or a moving average filter, because its response is just a
weighted sum (moving average) of the input terms. It is also called a finite impulse response (FIR) filter
because its impulse response is of finite duration (length). In fact, the impulse response sequence contains
sample values that correspond directly to the so-called filter coefficients B_k.

Now consider a digital filter described by the difference equation

y[n] + A_1 y[n-1] + ... + A_N y[n-N] = B_0 x[n]    (Recursive, IIR, or AR filter)    (3.9)

This describes a recursive filter of order N whose present output depends on its own past values y[n-k]
and on the present value of the input. It is also called an infinite impulse response (IIR) filter because its
impulse response h[n] (the response to a unit impulse input) is usually of infinite duration. It is also called
an AR (autoregressive) filter because its output depends (regresses) on its own previous values.

Finally, consider the most general formulation described by the difference equation

y[n] + A_1 y[n-1] + ... + A_N y[n-N] = B_0 x[n] + B_1 x[n-1] + ... + B_M x[n-M]    (Recursive or ARMA filter)    (3.10)

This is also a recursive filter. It is called an ARMA (autoregressive, moving average) filter because its
present output depends not only on its own past values y[n-k] but also on the past and present values of
the input. In general, the impulse response of such a filter is also of infinite duration.
REVIEW PANEL 3.5
The Terminology of Digital Filters
Nonrecursive or FIR:    y[n] = B_0 x[n] + B_1 x[n-1] + ... + B_M x[n-M]
Recursive, IIR, or AR:  y[n] + A_1 y[n-1] + ... + A_N y[n-N] = B_0 x[n]
Recursive or ARMA:      y[n] + A_1 y[n-1] + ... + A_N y[n-N] = B_0 x[n] + B_1 x[n-1] + ... + B_M x[n-M]
3.2.2 Digital Filter Realization
Digital filters described by linear difference equations with constant coefficients may be realized by using
elements corresponding to the operations of scaling (or scalar multiplication), shift (or delay), and summing
(or addition) that naturally occur in such equations. These elements describe the gain (scalar multiplier),
delay, and summer (or adder), represented symbolically in Figure 3.1.

Figure 3.1 The building blocks for digital filter realization: the delay (x[n] in, x[n-1] out, marked z^{-1}),
the multiplier (x[n] in, Ax[n] out), and the summer (two inputs, their sum as the output).
Delay elements in cascade result in an output delayed by the sum of the individual delays. The operational
notation for a delay of k units is z^{-k}. A nonrecursive filter described by

y[n] = B_0 x[n] + B_1 x[n-1] + ... + B_N x[n-N]    (3.11)

can be realized using a feed-forward structure with N delay elements, and a recursive filter of the form

y[n] = -A_1 y[n-1] - ... - A_N y[n-N] + x[n]    (3.12)

requires a feedback structure (because the output depends on its own past values). Each realization is
shown in Figure 3.2 and requires N delay elements.
DRILL PROBLEM 3.6
Sketch the realizations of the following systems.
(a) y[n] - 0.6y[n-1] = x[n]    (b) y[n] - (1/6)y[n-1] - (1/6)y[n-2] = 4x[n]
Answer: See the following figures (for (a), a single delay with feedback gain 0.6; for (b), two delays with
feedback gains 1/6 and 1/6 and an input gain of 4).
Figure 3.2 Realization of a nonrecursive (left) and recursive (right) digital filter
The general form described by

y[n] = -A_1 y[n-1] - ... - A_N y[n-N] + B_0 x[n] + B_1 x[n-1] + ... + B_N x[n-N]    (3.13)

requires both feed-forward and feedback and may be realized using 2N delay elements, as shown in Figure 3.3.
This describes a direct form I realization.
However, since LTI systems may be cascaded in any order (as we shall learn soon), we can switch the
feedback and feed-forward sections to obtain a canonical realization with only N delays, as also shown in
Figure 3.3. It is also called a direct form II realization. Other forms that also use only N elements are
possible. We discuss various aspects of digital filter realization in more detail in subsequent chapters.
Figure 3.3 Direct (left) and canonical (right) realization of a digital filter
DRILL PROBLEM 3.7
What is the difference equation of the digital filter whose realization is shown? (The realization is a direct
form with two delay elements; its gains correspond to the answer below.)
Answer: y[n] - y[n-1] - 2y[n-2] = 2x[n] - x[n-1]
3.3 Response of Digital Filters
A digital filter processes discrete signals and yields a discrete output in response to a discrete input. Its
response depends not only on the applied input but also on the initial conditions that describe its state just
prior to the application of the input. Systems with zero initial conditions are said to be relaxed. Digital
filters may be analyzed in the time domain using any of the following models:
The difference equation representation applies to linear, nonlinear, and time-varying systems. For LTI
systems, it allows computation of the response using superposition even if initial conditions are present.
The impulse response representation describes a relaxed LTI system by its impulse response h[n]. The
output y[n] appears explicitly in the governing relation called the convolution sum. It also allows us to relate
time-domain and transformed-domain methods of system analysis.
The state variable representation describes an nth-order system by n simultaneous first-order difference
equations called state equations in terms of n state variables. It is useful for complex or nonlinear systems
and those with multiple inputs and outputs. For LTI systems, state equations can be solved using matrix
methods. The state variable form is also readily amenable to numerical solution. We do not pursue this
method in this book.
3.3.1 Response of Nonrecursive Filters
The system equation of a nonrecursive filter is

y[n] = B_0 x[n] + B_1 x[n-1] + ... + B_M x[n-M]    (FIR filter)

Since the output y[n] depends only on the input x[n] and its shifted versions, the response is simply a
weighted sum of the input terms, exactly as described by its system equation.
EXAMPLE 3.5 (Response of Nonrecursive Filters)
Consider an FIR filter described by y[n] = 2x[n] - 3x[n-2]. Find its output y[n] if the input to the system
is x[n] = (0.5)^n u[n], and compute y[n] for n = 1 and n = 2.
We find that y[n] = 2x[n] - 3x[n-2] = 2(0.5)^n u[n] - 3(0.5)^{n-2} u[n-2].
We get y[1] = 2(0.5)^1 = 1 and y[2] = 2(0.5)^2 - 3(0.5)^0 = 0.5 - 3 = -2.5.
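The computation above is easy to check numerically. The short Python sketch below is not part of the
original text; the function name fir_response is our own, chosen only for illustration.

# Sketch: evaluate the FIR filter y[n] = 2x[n] - 3x[n-2] for x[n] = (0.5)^n u[n].
def x(n):
    return 0.5 ** n if n >= 0 else 0.0      # causal input (0.5)^n u[n]

def fir_response(n):
    # Weighted sum of present and past inputs, exactly as the system equation states.
    return 2 * x(n) - 3 * x(n - 2)

print(fir_response(1))   # 1.0
print(fir_response(2))   # -2.5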
3.3.2 Response of Recursive Filters by Recursion
The difference equation of a recursive digital filter is

y[n] + A_1 y[n-1] + ... + A_N y[n-N] = B_0 x[n] + B_1 x[n-1] + ... + B_M x[n-M]

The response shows dependence on its own past values as well as values of the input. The output y[n] requires
the prior values y[n-1], y[n-2], . . . , y[n-N]. Once known, we can use y[n] and the other previously
known values to compute y[n+1] and continue to use recursion to successively compute the values of the
output as far as desired. Consider the second-order difference equation

y[n] + A_1 y[n-1] + A_2 y[n-2] = B_0 x[n] + B_1 x[n-1]

To find y[n], we rewrite this equation as follows:

y[n] = -A_1 y[n-1] - A_2 y[n-2] + B_0 x[n] + B_1 x[n-1]

We see that y[n] can be found from its past values y[n-1] and y[n-2]. To start the recursion at n = 0, we
must be given the values of y[-1] and y[-2], and once known, the values of y[n], n ≥ 0 may be computed
successively as far as desired. We see that this method requires initial conditions to get the recursion started.
In general, the response y[n], n ≥ 0 of the Nth-order difference equation requires the N consecutive initial
conditions y[-1], y[-2], . . . , y[-N]. The recursive approach is effective and can be used even for nonlinear
or time-varying systems. Its main disadvantage is that a general closed-form solution for the output is not always
easy to discern.
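The recursion itself is easy to automate. The sketch below is our own helper (not part of the original text);
it steps a general difference equation forward from given initial conditions.

def recurse(A, B, x, ic, npts):
    """Solve y[n] + A[1]y[n-1] + ... + A[N]y[n-N] = B[0]x[n] + ... + B[M]x[n-M] by recursion.
    A = [1, A1, ..., AN], B = [B0, ..., BM], x = input samples for n >= 0,
    ic = [y[-1], y[-2], ..., y[-N]], npts = number of output samples."""
    N, M = len(A) - 1, len(B) - 1
    y = []
    for n in range(npts):
        acc = sum(B[k] * (x[n - k] if 0 <= n - k < len(x) else 0.0) for k in range(M + 1))
        for k in range(1, N + 1):
            past = y[n - k] if n - k >= 0 else ic[k - n - 1]   # reach into the initial conditions
            acc -= A[k] * past
        y.append(acc)
    return y

# Example: y[n] - 0.5y[n-1] = x[n] with x[n] = u[n] and a relaxed system (y[-1] = 0)
print(recurse([1, -0.5], [1], [1] * 5, [0], 5))   # [1.0, 1.5, 1.75, 1.875, 1.9375]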
EXAMPLE 3.6 (System Response Using Recursion)
(a) Consider a system described by y[n] = a_1 y[n-1] + b_0 u[n]. Let the initial condition be y[-1] = 0. We
then successively compute
y[0] = a_1 y[-1] + b_0 u[0] = b_0
y[1] = a_1 y[0] + b_0 u[1] = a_1 b_0 + b_0 = b_0[1 + a_1]
y[2] = a_1 y[1] + b_0 u[2] = a_1[a_1 b_0 + b_0] + b_0 = b_0[1 + a_1 + a_1^2]
The form of y[n] may be discerned as
y[n] = b_0[1 + a_1 + a_1^2 + ... + a_1^{n-1} + a_1^n]
Using the closed form for the geometric sequence results in
y[n] = b_0 (1 - a_1^{n+1}) / (1 - a_1)
If the coefficients appear as numerical values, the general form may not be easy to discern.
(b) Consider a system described by y[n] = a_1 y[n-1] + b_0 n u[n]. Let the initial condition be y[-1] = 0. We
then successively compute
y[0] = a_1 y[-1] = 0
y[1] = a_1 y[0] + b_0 u[1] = b_0
y[2] = a_1 y[1] + 2b_0 u[2] = a_1 b_0 + 2b_0
y[3] = a_1 y[2] + 3b_0 u[3] = a_1[a_1 b_0 + 2b_0] + 3b_0 = a_1^2 b_0 + 2a_1 b_0 + 3b_0
The general form is thus y[n] = b_0 a_1^{n-1} + 2b_0 a_1^{n-2} + 3b_0 a_1^{n-3} + ... + (n-1)b_0 a_1 + n b_0.
We can find a more compact form for this, but not without some effort. Factoring out b_0 a_1^n, we obtain
y[n] = b_0 a_1^n [a_1^{-1} + 2a_1^{-2} + 3a_1^{-3} + ... + n a_1^{-n}]
Using the closed form for the sum Σ k x^k from k = 1 to k = N (with x = 1/a_1), we get
y[n] = b_0 a_1^n (1/a_1)[1 - (n+1)a_1^{-n} + n a_1^{-(n+1)}] / (1 - 1/a_1)^2
What a chore! More elegant ways of solving difference equations are described later in this chapter.
(c) Consider the recursive system y[n] = y[n-1] + x[n] - x[n-3]. If x[n] equals δ[n] and y[-1] = 0, we
successively obtain
y[0] = y[-1] + δ[0] - δ[-3] = 1        y[3] = y[2] + δ[3] - δ[0] = 1 - 1 = 0
y[1] = y[0] + δ[1] - δ[-2] = 1         y[4] = y[3] + δ[4] - δ[1] = 0
y[2] = y[1] + δ[2] - δ[-1] = 1         y[5] = y[4] + δ[5] - δ[2] = 0
The impulse response of this recursive filter is zero after the first three values and has a finite length.
It is actually a nonrecursive (FIR) filter in disguise!
DRILL PROBLEM 3.8
(a) Let y[n] - y[n-1] - 2y[n-2] = u[n]. Use recursion to compute y[3] if y[-1] = 2, y[-2] = 0.
(b) Let y[n] - 0.8y[n-1] = x[n]. Use recursion to find the general form of y[n] if x[n] = δ[n] and y[-1] = 0.
Answers: (a) 32    (b) (0.8)^n, n ≥ 0, or (0.8)^n u[n]
3.4 The Natural and Forced Response
For an LTI system governed by a linear constant-coefficient difference equation, a formal way of computing
the output is by the method of undetermined coefficients. This method yields the total response y[n] as
the sum of the forced response y_F[n] and the natural response y_N[n]. The form of the natural response
depends only on the system details and is independent of the nature of the input. The forced response arises
due to the interaction of the system with the input and thus depends on both the input and the system
details.
3.4.1 The Single-Input Case
Consider the Nth-order difference equation with the single unscaled input x[n]:

y[n] + A_1 y[n-1] + A_2 y[n-2] + ... + A_N y[n-N] = x[n]    (3.14)

with initial conditions y[-1], y[-2], y[-3], . . . , y[-N].
Its forced response arises due to the interaction of the system with the input and thus depends on
both the input and the system details. It satisfies the given difference equation and has the same form as the
input. Table 3.1 summarizes these forms for various types of inputs. The constants in the forced response
can be found uniquely and independently of the natural response or initial conditions simply by satisfying
the given difference equation.
The characteristic equation is defined by the polynomial equation

1 + A_1 z^{-1} + A_2 z^{-2} + ... + A_N z^{-N} = 0,  or equivalently,  z^N + A_1 z^{N-1} + ... + A_N = 0    (3.15)

This equation has N roots, z_1, z_2, . . . , z_N. The natural response is a linear combination of N discrete-time
exponentials of the form

y_N[n] = K_1 z_1^n + K_2 z_2^n + ... + K_N z_N^n    (3.16)

This form must be modified for multiple roots. Since complex roots occur in conjugate pairs, their associated
constants also form conjugate pairs to ensure that y_N[n] is real. Algebraic details lead to the preferred form
with two real constants. Table 3.2 summarizes the preferred forms for multiple or complex roots.
The total response is found by first adding the forced and natural response and then evaluating the
undetermined constants (in the natural component) using the prescribed initial conditions. For stable systems,
the natural response is also called the transient response, since it decays to zero with time. For systems
with harmonic or switched harmonic inputs, the forced response is a harmonic at the input frequency and
is termed the steady-state response.
Table 3.1 Form of the Forced Response for Discrete LTI Systems
Note: If the right-hand side (RHS) is α^n, where α is also a root of the characteristic
equation repeated p times, the forced response form must be multiplied by n^p.

Entry   Forcing Function (RHS)                  Form of Forced Response
1       C_0 (constant)                          C_1 (another constant)
2       α^n (see note above)                    C α^n
3       cos(nΩ + β)                             C_1 cos(nΩ) + C_2 sin(nΩ) or C cos(nΩ + φ)
4       α^n cos(nΩ + β) (see note above)        α^n [C_1 cos(nΩ) + C_2 sin(nΩ)]
5       n                                       C_0 + C_1 n
6       n^p                                     C_0 + C_1 n + C_2 n^2 + ... + C_p n^p
7       n α^n (see note above)                  α^n (C_0 + C_1 n)
8       n^p α^n (see note above)                α^n (C_0 + C_1 n + C_2 n^2 + ... + C_p n^p)
9       n cos(nΩ + β)                           (C_1 + C_2 n)cos(nΩ) + (C_3 + C_4 n)sin(nΩ)
Table 3.2 Form of the Natural Response for Discrete LTI Systems
Entry   Root of Characteristic Equation           Form of Natural Response
1       Real and distinct: r                      K r^n
2       Complex conjugate: r e^{±jΩ}              r^n [K_1 cos(nΩ) + K_2 sin(nΩ)]
3       Real, repeated: r^{p+1}                   r^n (K_0 + K_1 n + K_2 n^2 + ... + K_p n^p)
4       Complex, repeated: (r e^{±jΩ})^{p+1}      r^n cos(nΩ)(A_0 + A_1 n + A_2 n^2 + ... + A_p n^p)
                                                  + r^n sin(nΩ)(B_0 + B_1 n + B_2 n^2 + ... + B_p n^p)
REVIEW PANEL 3.6
Response of LTI Systems Described by Difference Equations
Total Response = Natural Response + Forced Response
The roots of the characteristic equation determine only the form of the natural response.
The input terms (RHS) of the difference equation completely determine the forced response.
Initial conditions satisfy the total response to yield the constants in the natural response.
EXAMPLE 3.7 (Forced and Natural Response)
(a) Consider the system shown in Figure E3.7A.
Find its response if x[n] = (0.4)^n, n ≥ 0 and the initial condition is y[-1] = 10.

Figure E3.7A The system for Example 3.7(a): a single delay in a feedback loop with gain 0.6, so that
y[n] = 0.6y[n-1] + x[n].

The difference equation describing this system is y[n] - 0.6y[n-1] = x[n] = (0.4)^n, n ≥ 0.
Its characteristic equation is 1 - 0.6z^{-1} = 0 or z - 0.6 = 0.
Its root z = 0.6 gives the form of the natural response y_N[n] = K(0.6)^n.
Since x[n] = (0.4)^n, the forced response is y_F[n] = C(0.4)^n.
We find C by substituting for y_F[n] into the difference equation:
y_F[n] - 0.6y_F[n-1] = (0.4)^n = C(0.4)^n - 0.6C(0.4)^{n-1}
Cancel out (0.4)^n from both sides and solve for C to get C - 1.5C = 1 or C = -2.
Thus, y_F[n] = -2(0.4)^n. The total response is y[n] = y_N[n] + y_F[n] = -2(0.4)^n + K(0.6)^n.
We use the initial condition y[-1] = 10 on the total response to find K:
y[-1] = 10 = -5 + K/0.6, and K = 9.
Thus, y[n] = -2(0.4)^n + 9(0.6)^n, n ≥ 0.
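A quick numerical cross-check of this result (our own sketch, not part of the original text): recursing
y[n] = 0.6y[n-1] + (0.4)^n from y[-1] = 10 should reproduce the closed form -2(0.4)^n + 9(0.6)^n.

# Verify Example 3.7(a): y[n] - 0.6y[n-1] = (0.4)^n, y[-1] = 10.
y_prev = 10.0                                   # initial condition y[-1]
for n in range(6):
    y = 0.6 * y_prev + 0.4 ** n                 # direct recursion
    closed = -2 * 0.4 ** n + 9 * 0.6 ** n       # closed-form total response
    print(n, y, closed)                         # the two columns agree
    y_prev = y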
(b) Consider the difference equation y[n] - 0.5y[n-1] = 5 cos(0.5nπ), n ≥ 0 with y[-1] = 4.
Its characteristic equation is 1 - 0.5z^{-1} = 0 or z - 0.5 = 0.
Its root z = 0.5 gives the form of the natural response y_N[n] = K(0.5)^n.
Since x[n] = 5 cos(0.5nπ), the forced response is y_F[n] = A cos(0.5nπ) + B sin(0.5nπ).
We find y_F[n-1] = A cos[0.5(n-1)π] + B sin[0.5(n-1)π] = A sin(0.5nπ) - B cos(0.5nπ). Then
y_F[n] - 0.5y_F[n-1] = (A + 0.5B)cos(0.5nπ) - (0.5A - B)sin(0.5nπ) = 5 cos(0.5nπ)
Equate the coefficients of the cosine and sine terms to get
(A + 0.5B) = 5, (0.5A - B) = 0 or A = 4, B = 2, and y_F[n] = 4 cos(0.5nπ) + 2 sin(0.5nπ).
The total response is y[n] = K(0.5)^n + 4 cos(0.5nπ) + 2 sin(0.5nπ). With y[-1] = 4, we find
y[-1] = 4 = 2K - 2 or K = 3, and thus y[n] = 3(0.5)^n + 4 cos(0.5nπ) + 2 sin(0.5nπ), n ≥ 0.
The steady-state response is 4 cos(0.5nπ) + 2 sin(0.5nπ), and the transient response is 3(0.5)^n.
(c) Consider the difference equation y[n] - 0.5y[n-1] = 3(0.5)^n, n ≥ 0 with y[-1] = 2.
Its characteristic equation is 1 - 0.5z^{-1} = 0 or z - 0.5 = 0.
Its root, z = 0.5, gives the form of the natural response y_N[n] = K(0.5)^n.
Since x[n] = (0.5)^n has the same form as the natural response, the forced response is y_F[n] = Cn(0.5)^n.
We find C by substituting for y_F[n] into the difference equation:
y_F[n] - 0.5y_F[n-1] = 3(0.5)^n = Cn(0.5)^n - 0.5C(n-1)(0.5)^{n-1}
Cancel out (0.5)^n from both sides and solve for C to get Cn - C(n-1) = 3, or C = 3.
Thus, y_F[n] = 3n(0.5)^n. The total response is y[n] = y_N[n] + y_F[n] = K(0.5)^n + 3n(0.5)^n.
We use the initial condition y[-1] = 2 on the total response to find K:
y[-1] = 2 = 2K - 6, and K = 4.
Thus, y[n] = 4(0.5)^n + 3n(0.5)^n = (4 + 3n)(0.5)^n, n ≥ 0.
(d) (A Second-Order System) Consider the system shown in Figure E3.7D.
Find the forced and natural response of this system if x[n] = u[n] and y[-1] = 0, y[-2] = 12.

Figure E3.7D The system for Example 3.7(d): two delays in a feedback loop with gains 1/6 and 1/6 and an
input gain of 4.

Comparison with the generic realization reveals that the system difference equation is
y[n] - (1/6)y[n-1] - (1/6)y[n-2] = 4x[n] = 4u[n]
Its characteristic equation is 1 - (1/6)z^{-1} - (1/6)z^{-2} = 0 or z^2 - (1/6)z - 1/6 = 0.
Its roots are z_1 = 1/2 and z_2 = -1/3.
The natural response is thus y_N[n] = K_1(z_1)^n + K_2(z_2)^n = K_1(1/2)^n + K_2(-1/3)^n.
Since the forcing function is 4u[n] (a constant for n ≥ 0), the forced response y_F[n] is constant.
Let y_F[n] = C. Then y_F[n-1] = C, y_F[n-2] = C, and
y_F[n] - (1/6)y_F[n-1] - (1/6)y_F[n-2] = C - (1/6)C - (1/6)C = 4. This yields C = 6.
Thus, y_F[n] = 6. The total response y[n] is y[n] = y_N[n] + y_F[n] = K_1(1/2)^n + K_2(-1/3)^n + 6.
To find the constants K_1 and K_2, we use the initial conditions on the total response to obtain
y[-1] = 0 = 2K_1 - 3K_2 + 6, and y[-2] = 12 = 4K_1 + 9K_2 + 6. We find K_1 = -1.2 and K_2 = 1.2.
Thus, y[n] = -1.2(1/2)^n + 1.2(-1/3)^n + 6, n ≥ 0.
Its transient response is -1.2(1/2)^n + 1.2(-1/3)^n. Its steady-state response is a constant that equals 6.
DRILL PROBLEM 3.9
(a) Let y[n] - 0.8y[n-1] = 2 with y[-1] = 5. Solve for y[n], n ≥ 0.
(b) Let y[n] + 0.8y[n-1] = 2(0.8)^n with y[-1] = 10. Solve for y[n], n ≥ 0.
(c) Let y[n] - 0.8y[n-1] = 2(0.8)^n with y[-1] = 5. Solve for y[n], n ≥ 0.
(d) Let y[n] - 0.8y[n-1] = 2(0.8)^n + 2(0.4)^n with y[-1] = 5. Solve for y[n], n ≥ 0.
Answers: (a) 10 - 4(0.8)^n    (b) (0.8)^n - 7(-0.8)^n    (c) (2n + 6)(0.8)^n    (d) 2n(0.8)^n + 10(0.8)^n - 2(0.4)^n
3.4.2 The Zero-Input Response and Zero-State Response
A linear system is one for which superposition applies and implies that the system is relaxed (with zero initial
conditions) and that the system equation involves only linear operators. However, we can use superposition
even for a system with nonzero initial conditions that is otherwise linear. We treat it as a multiple-input
system by including the initial conditions as additional inputs. The output then equals the superposition
of the outputs due to each input acting alone, and any changes in the input are related linearly to changes
in the response. As a result, its response can be written as the sum of a zero-input response (due to the
initial conditions alone) and the zero-state response (due to the input alone). The zero-input response obeys
superposition, as does the zero-state response.
It is often more convenient to describe the response y[n] of an LTI system as the sum of its zero-state
response (ZSR) y_zs[n] (assuming zero initial conditions) and zero-input response (ZIR) y_zi[n] (assuming zero
input). Each component is found using the method of undetermined coefficients. Note that the natural and
forced components y_N[n] and y_F[n] do not, in general, correspond to the zero-input and zero-state response,
respectively, even though each pair adds up to the total response.
REVIEW PANEL 3.7
The ZSR and ZIR for y[n] + A_1 y[n-1] + ... + A_N y[n-N] = x[n]
1. Find the ZSR from y_zs[n] + A_1 y_zs[n-1] + ... + A_N y_zs[n-N] = x[n], assuming zero initial conditions.
2. Find the ZIR from y_zi[n] + A_1 y_zi[n-1] + ... + A_N y_zi[n-N] = 0, using the given initial conditions.
3. Find the complete response as y[n] = y_zs[n] + y_zi[n].
The ZSR obeys superposition. The ZIR obeys superposition.
EXAMPLE 3.8 (Zero-Input and Zero-State Response for the Single-Input Case)
(a) (A First-Order System) Consider the difference equation y[n] - 0.6y[n-1] = (0.4)^n, n ≥ 0, with
y[-1] = 10.
Its characteristic equation is 1 - 0.6z^{-1} = 0 or z - 0.6 = 0.
Its root z = 0.6 gives the form of the natural response y_N[n] = K(0.6)^n.
Since x[n] = (0.4)^n, the forced response is y_F[n] = C(0.4)^n.
We find C by substituting for y_F[n] into the difference equation:
y_F[n] - 0.6y_F[n-1] = (0.4)^n = C(0.4)^n - 0.6C(0.4)^{n-1}
Cancel out (0.4)^n from both sides and solve for C to get C - 1.5C = 1 or C = -2.
Thus, y_F[n] = -2(0.4)^n.
The total response (subject to initial conditions) is y[n] = y_F[n] + y_N[n] = -2(0.4)^n + K(0.6)^n.
1. Its ZSR is found from the form of the total response, y_zs[n] = -2(0.4)^n + K(0.6)^n, assuming
zero initial conditions:
y_zs[-1] = 0 = -5 + K/0.6        K = 3        y_zs[n] = -2(0.4)^n + 3(0.6)^n, n ≥ 0
2. Its ZIR is found from the natural response y_zi[n] = K(0.6)^n, with the given initial conditions:
y_zi[-1] = 10 = K/0.6        K = 6        y_zi[n] = 6(0.6)^n, n ≥ 0
3. The total response is y[n] = y_zi[n] + y_zs[n] = -2(0.4)^n + 9(0.6)^n, n ≥ 0.
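The decomposition is easy to confirm numerically. The sketch below (our own code, not from the text)
recurses the same first-order system three times -- with both the input and the initial condition, with the
input alone, and with the initial condition alone -- and checks that the last two add up to the first.

def first_order(ic, use_input, npts=6):
    # y[n] = 0.6*y[n-1] + x[n], with x[n] = (0.4)^n u[n] if use_input else 0
    y, y_prev = [], ic
    for n in range(npts):
        y_prev = 0.6 * y_prev + (0.4 ** n if use_input else 0.0)
        y.append(y_prev)
    return y

total = first_order(ic=10.0, use_input=True)
zsr   = first_order(ic=0.0,  use_input=True)    # zero-state response
zir   = first_order(ic=10.0, use_input=False)   # zero-input response
print(all(abs(t - (s + i)) < 1e-12 for t, s, i in zip(total, zsr, zir)))   # True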
(b) (A Second-Order System)
Let y[n] - (1/6)y[n-1] - (1/6)y[n-2] = 4, n ≥ 0, with y[-1] = 0 and y[-2] = 12.
Its characteristic equation is 1 - (1/6)z^{-1} - (1/6)z^{-2} = 0 or z^2 - (1/6)z - 1/6 = 0.
Its roots are z_1 = 1/2 and z_2 = -1/3.
Since the forcing function is a constant for n ≥ 0, the forced response y_F[n] is constant.
Let y_F[n] = C. Then y_F[n-1] = C, y_F[n-2] = C, and y_F[n] - (1/6)y_F[n-1] - (1/6)y_F[n-2] = C - (1/6)C - (1/6)C = 4.
This yields C = 6 to give the forced response y_F[n] = 6.
1. The ZIR has the form of the natural response y_zi[n] = K_1(1/2)^n + K_2(-1/3)^n.
To find the constants, we use the given initial conditions y[-1] = 0 and y[-2] = 12:
0 = K_1(1/2)^{-1} + K_2(-1/3)^{-1} = 2K_1 - 3K_2
12 = K_1(1/2)^{-2} + K_2(-1/3)^{-2} = 4K_1 + 9K_2
Thus, K_1 = 1.2, K_2 = 0.8, and
y_zi[n] = 1.2(1/2)^n + 0.8(-1/3)^n, n ≥ 0
2. The ZSR has the same form as the total response. Since the forced response is y_F[n] = 6, we have
y_zs[n] = K_1(1/2)^n + K_2(-1/3)^n + 6
To find the constants, we assume zero initial conditions, y[-1] = 0 and y[-2] = 0, to get
y[-1] = 0 = 2K_1 - 3K_2 + 6        y[-2] = 0 = 4K_1 + 9K_2 + 6
We find K_1 = -2.4 and K_2 = 0.4, and thus
y_zs[n] = -2.4(1/2)^n + 0.4(-1/3)^n + 6, n ≥ 0
3. The total response is y[n] = y_zi[n] + y_zs[n] = -1.2(1/2)^n + 1.2(-1/3)^n + 6, n ≥ 0.
(c) (Linearity and Superposition of the ZSR and ZIR) An IIR filter is described by
y[n] - y[n-1] - 2y[n-2] = x[n], with x[n] = 6u[n] and initial conditions y[-1] = -1, y[-2] = 4.
1. Find the zero-input response, zero-state response, and total response.
2. How does the total response change if y[-1] = -1, y[-2] = 4 as given, but x[n] = 12u[n]?
3. How does the total response change if x[n] = 6u[n] as given, but y[-1] = -2, y[-2] = 8?
1. We find the characteristic equation as (1 - z^{-1} - 2z^{-2}) = 0 or (z^2 - z - 2) = 0.
The roots of the characteristic equation are z_1 = -1 and z_2 = 2.
The form of the natural response is y_N[n] = A(-1)^n + B(2)^n.
Since the input x[n] is constant for n ≥ 0, the form of the forced response is also constant.
So, choose y_F[n] = C in the system equation and evaluate C:
y_F[n] - y_F[n-1] - 2y_F[n-2] = C - C - 2C = 6        C = -3        y_F[n] = -3
For the ZSR, we use the form of the total response and zero initial conditions:
y_zs[n] = y_F[n] + y_N[n] = -3 + A(-1)^n + B(2)^n,    y[-1] = y[-2] = 0
We obtain y_zs[-1] = 0 = -3 - A + 0.5B and y_zs[-2] = 0 = -3 + A + 0.25B.
Thus, A = 1, B = 8, and y_zs[n] = -3 + (-1)^n + 8(2)^n, n ≥ 0.
For the ZIR, we use the form of the natural response and the given initial conditions:
y_zi[n] = y_N[n] = A(-1)^n + B(2)^n        y[-1] = -1        y[-2] = 4
This gives y_zi[-1] = -1 = -A + 0.5B, and y_zi[-2] = 4 = A + 0.25B.
Thus, A = 3, B = 4, and y_zi[n] = 3(-1)^n + 4(2)^n, n ≥ 0.
The total response is the sum of the zero-input and zero-state response:
y[n] = y_zi[n] + y_zs[n] = -3 + 4(-1)^n + 12(2)^n, n ≥ 0
2. If x[n] = 12u[n], the zero-state response doubles to y_zs[n] = -6 + 2(-1)^n + 16(2)^n.
3. If y[-1] = -2 and y[-2] = 8, the zero-input response doubles to y_zi[n] = 6(-1)^n + 8(2)^n.
DRILL PROBLEM 3.10
(a) Let y[n] - 0.8y[n-1] = 2. Find its zero-state response.
(b) Let y[n] + 0.8y[n-1] = x[n] with y[-1] = 5. Find its zero-input response.
(c) Let y[n] - 0.4y[n-1] = (0.8)^n with y[-1] = 10. Find its zero-state and zero-input response.
Answers: (a) 10 - 8(0.8)^n    (b) -4(-0.8)^n    (c) 2(0.8)^n - (0.4)^n, 4(0.4)^n
3.4.3 Solution of the General Difference Equation
The solution of the general difference equation described by

y[n] + A_1 y[n-1] + ... + A_N y[n-N] = B_0 x[n] + B_1 x[n-1] + ... + B_M x[n-M]    (3.17)

is found by invoking linearity and superposition as follows:
1. Compute the zero-state response y_0[n] of the single-input system

y_0[n] + A_1 y_0[n-1] + A_2 y_0[n-2] + ... + A_N y_0[n-N] = x[n]    (3.18)

2. Use linearity and superposition to find y_zs[n] as

y_zs[n] = B_0 y_0[n] + B_1 y_0[n-1] + ... + B_M y_0[n-M]    (3.19)

3. Find the zero-input response y_zi[n] using the initial conditions.
4. Find the total response as y[n] = y_zs[n] + y_zi[n].
Note that the zero-input response is computed and included just once.
REVIEW PANEL 3.8
Solving y[n] + A_1 y[n-1] + ... + A_N y[n-N] = B_0 x[n] + B_1 x[n-1] + ... + B_M x[n-M]
1. Find y_0[n] from y_0[n] + A_1 y_0[n-1] + ... + A_N y_0[n-N] = x[n], assuming zero initial conditions.
2. Find the ZSR (using superposition) as y_zs[n] = B_0 y_0[n] + B_1 y_0[n-1] + ... + B_M y_0[n-M].
3. Find the ZIR from y_zi[n] + A_1 y_zi[n-1] + ... + A_N y_zi[n-N] = 0, using the given initial conditions.
4. Find the complete response as y[n] = y_zs[n] + y_zi[n].
EXAMPLE 3.9 (Response of a General System)
Consider the recursive digital filter whose realization is shown in Figure E3.9.
What is the response of this system if x[n] = 6u[n] and y[-1] = -1, y[-2] = 4?

Figure E3.9 The digital filter for Example 3.9: a direct form realization with two delay elements whose
difference equation is derived below.

Comparison with the generic realization reveals that the system difference equation is
y[n] - y[n-1] - 2y[n-2] = 2x[n] - x[n-1]
From the previous example, the ZSR of y[n] - y[n-1] - 2y[n-2] = x[n] is y_0[n] = [-3 + (-1)^n + 8(2)^n]u[n].
The ZSR for the input 2x[n] - x[n-1] is thus
y_zs[n] = 2y_0[n] - y_0[n-1] = [-6 + 2(-1)^n + 16(2)^n]u[n] - [-3 + (-1)^{n-1} + 8(2)^{n-1}]u[n-1]
From the previous example, the ZIR of y[n] - y[n-1] - 2y[n-2] = x[n] is y_zi[n] = [3(-1)^n + 4(2)^n]u[n].
The total response is y[n] = y_zi[n] + y_zs[n]:
y[n] = [3(-1)^n + 4(2)^n]u[n] + [-6 + 2(-1)^n + 16(2)^n]u[n] - [-3 + (-1)^{n-1} + 8(2)^{n-1}]u[n-1]
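This superposition-based construction can be checked numerically. The sketch below (our own code, not
part of the text) recurses the full system y[n] = y[n-1] + 2y[n-2] + 2x[n] - x[n-1] directly from the given
initial conditions and compares it with the closed-form total response above.

def closed_form(n):
    # total response of Example 3.9 for n >= 0
    y = 3 * (-1) ** n + 4 * 2 ** n - 6 + 2 * (-1) ** n + 16 * 2 ** n
    if n >= 1:
        y -= -3 + (-1) ** (n - 1) + 8 * 2 ** (n - 1)
    return y

x = lambda n: 6 if n >= 0 else 0                  # x[n] = 6u[n]
y = {-2: 4, -1: -1}                               # given initial conditions
for n in range(6):
    y[n] = y[n - 1] + 2 * y[n - 2] + 2 * x(n) - x(n - 1)   # direct recursion
    print(n, y[n], closed_form(n))                # the two columns agree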
DRILL PROBLEM 3.11
(a) Let y[n] - 0.8y[n-1] = x[n] with x[n] = 2(0.4)^n and y[-1] = -10. Find its ZSR and ZIR.
(b) Let y[n] - 0.8y[n-1] = 2x[n] with x[n] = 2(0.4)^n and y[-1] = -10. Find y[n].
(c) Let y[n] - 0.8y[n-1] = 2x[n] - x[n-1] with x[n] = 2(0.4)^n and y[-1] = -10. Find y[n].
Answers: (a) 4(0.8)^n - 2(0.4)^n, -8(0.8)^n    (b) -4(0.4)^n    (c) (0.4)^n - 5(0.8)^n (simplified)
3.5 The Impulse Response
The impulse response h[n] of a relaxed LTI system is simply the response to a unit impulse input δ[n]. The
impulse response provides us with a powerful method for finding the zero-state response of LTI systems to
arbitrary inputs using superposition (as described in the next chapter). The impulse response and the step
response are often used to assess the time-domain performance of digital filters.
REVIEW PANEL 3.9
The Impulse Response and Step Response Are Defined Only for Relaxed LTI Systems
Impulse input δ[n] --> Relaxed LTI system --> Impulse response h[n]
Impulse response h[n]: The output of a relaxed LTI system if the input is a unit impulse δ[n]
Step response s[n]: The output of a relaxed LTI system if the input is a unit step u[n]
3.5.1 Impulse Response of Nonrecursive Filters
For a nonrecursive (FIR) filter of length M + 1 described by

y[n] = B_0 x[n] + B_1 x[n-1] + ... + B_M x[n-M]    (3.20)

the impulse response h[n] (with x[n] = δ[n]) is an (M + 1)-term sequence of the input terms, which may be
written as

h[n] = B_0 δ[n] + B_1 δ[n-1] + ... + B_M δ[n-M]    or    h[n] = {⇓B_0, B_1, . . . , B_M}    (3.21)

The sequence h[n] represents the FIR filter coefficients (the marker ⇓ indicates the sample at n = 0).
DRILL PROBLEM 3.12
(a) Let y[n] = 2x[n+1] + 3x[n] - x[n-2]. Write its impulse response as a sequence.
(b) Let y[n] = x[n] - 2x[n-1] + 4x[n-3]. Write its impulse response as a sum of impulses.
Answers: (a) h[n] = {2, ⇓3, 0, -1}    (b) h[n] = δ[n] - 2δ[n-1] + 4δ[n-3]
3.5.2 Impulse Response by Recursion
Recursion provides a simple means of obtaining as many terms of the impulse response h[n] of a relaxed
recursive filter as we please, though we may not always be able to discern a closed form from the results.
Remember that we must use zero initial conditions.
EXAMPLE 3.10 (Impulse Response by Recursion)
Find the impulse response of the system described by y[n] - αy[n-1] = x[n].
We find h[n] as the solution to h[n] = αh[n-1] + δ[n] subject to the initial condition y[-1] = 0. By
recursion, we obtain
h[0] = αh[-1] + δ[0] = 1        h[2] = αh[1] = α^2
h[1] = αh[0] = α                h[3] = αh[2] = α^3
The general form of h[n] is easy to discern as h[n] = α^n u[n].
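The same idea is easy to automate. The sketch below is our own helper (not from the text); α is given a
numerical value, 0.6, purely for illustration.

def impulse_response(alpha, npts=6):
    # h[n] = alpha*h[n-1] + delta[n], with h[-1] = 0 (relaxed system)
    h, h_prev = [], 0.0
    for n in range(npts):
        h_prev = alpha * h_prev + (1.0 if n == 0 else 0.0)
        h.append(h_prev)
    return h

print(impulse_response(0.6))   # [1.0, 0.6, 0.36, ...], that is, h[n] = (0.6)^n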
DRILL PROBLEM 3.13
(a) Let y[n] + 0.8y[n-1] = x[n]. Find its impulse response by recursion.
(b) Let y[n] + 0.8y[n-1] = 3x[n]. Find its impulse response by recursion.
(c) Let y[n] - αy[n-1] = Cx[n]. Find its impulse response by recursion.
Answers: (a) (-0.8)^n    (b) 3(-0.8)^n    (c) Cα^n
3.5.3 Impulse Response for the Single-Input Case
Consider the Nth-order difference equation with a single input:

y[n] + A_1 y[n-1] + A_2 y[n-2] + ... + A_N y[n-N] = x[n]    (3.22)

To find its impulse response, we solve the difference equation

h[n] + A_1 h[n-1] + A_2 h[n-2] + ... + A_N h[n-N] = δ[n]    (zero initial conditions)    (3.23)

Since the input δ[n] is zero for n > 0, we must apparently assume a forced response that is zero and thus
solve for the natural response using initial conditions (leading to a trivial result). The trick is to use at
least one nonzero initial condition, which we must find by recursion. By recursion, we find h[0] = 1. Since
δ[n] = 0, n > 0, the impulse response is found as the natural response of the homogeneous equation

h[n] + A_1 h[n-1] + A_2 h[n-2] + ... + A_N h[n-N] = 0,    h[0] = 1    (3.24)

subject to the nonzero initial condition h[0] = 1. All the other initial conditions are assumed to be zero
(h[-1] = 0 for a second-order system, h[-1] = h[-2] = 0 for a third-order system, and so on).
REVIEW PANEL 3.10
The Impulse Response of y[n] + A_1 y[n-1] + ... + A_N y[n-N] = x[n]
Find h[n] from h[n] + A_1 h[n-1] + ... + A_N h[n-N] = 0, with just h[0] = 1 (all others zero).
EXAMPLE 3.11 (Impulse Response Computation for the Single-Input Case)
(a) (A First-Order System) Consider the difference equation y[n] - 0.6y[n-1] = x[n].
Its impulse response is found by solving h[n] - 0.6h[n-1] = 0, h[0] = 1.
Its natural response is h[n] = K(0.6)^n.
With h[0] = 1, we find K = 1, and thus h[n] = (0.6)^n u[n].
(b) (A Second-Order System) Let y[n] - (1/6)y[n-1] - (1/6)y[n-2] = x[n].
Its impulse response is found by solving h[n] - (1/6)h[n-1] - (1/6)h[n-2] = 0, h[0] = 1, h[-1] = 0.
Its characteristic equation is 1 - (1/6)z^{-1} - (1/6)z^{-2} = 0 or z^2 - (1/6)z - 1/6 = 0.
Its roots, z_1 = 1/2 and z_2 = -1/3, give the natural response h[n] = K_1(1/2)^n + K_2(-1/3)^n.
With h[0] = 1 and h[-1] = 0, we find 1 = K_1 + K_2 and 0 = 2K_1 - 3K_2.
Solving for the constants, we obtain K_1 = 0.6 and K_2 = 0.4.
Thus, h[n] = [0.6(1/2)^n + 0.4(-1/3)^n]u[n].
DRILL PROBLEM 3.14
(a) Let y[n] - 0.9y[n-1] = x[n]. Find its impulse response h[n].
(b) Let y[n] - 1.2y[n-1] + 0.32y[n-2] = x[n]. Find its impulse response h[n].
Answers: (a) (0.9)^n    (b) 2(0.8)^n - (0.4)^n
3.5.4 Impulse Response for the General Case
To find the impulse response of the general system described by

y[n] + A_1 y[n-1] + A_2 y[n-2] + ... + A_N y[n-N] = B_0 x[n] + B_1 x[n-1] + ... + B_M x[n-M]    (3.25)

we use linearity and superposition as follows:
1. Find the impulse response h_0[n] of the single-input system

y_0[n] + A_1 y_0[n-1] + A_2 y_0[n-2] + ... + A_N y_0[n-N] = x[n]    (3.26)

by solving the homogeneous equation

h_0[n] + A_1 h_0[n-1] + ... + A_N h_0[n-N] = 0,    h_0[0] = 1 (all other conditions zero)    (3.27)

2. Then, invoke superposition to find the actual impulse response h[n] as

h[n] = B_0 h_0[n] + B_1 h_0[n-1] + ... + B_M h_0[n-M]    (3.28)

REVIEW PANEL 3.11
Impulse Response of y[n] + A_1 y[n-1] + ... + A_N y[n-N] = B_0 x[n] + B_1 x[n-1] + ... + B_M x[n-M]
1. Find h_0[n] from h_0[n] + A_1 h_0[n-1] + ... + A_N h_0[n-N] = 0 with just h_0[0] = 1 (all others zero).
2. Find h[n] (using superposition) as h[n] = B_0 h_0[n] + B_1 h_0[n-1] + ... + B_M h_0[n-M].
EXAMPLE 3.12 (Impulse Response for the General Case)
(a) Find the impulse response of y[n] - 0.6y[n-1] = 4x[n] and of y[n] - 0.6y[n-1] = 3x[n+1] - x[n].
We start with the single-input system y_0[n] - 0.6y_0[n-1] = x[n].
Its impulse response h_0[n] was found in the previous example as h_0[n] = (0.6)^n u[n].
Then, for the first system, h[n] = 4h_0[n] = 4(0.6)^n u[n].
For the second system, h[n] = 3h_0[n+1] - h_0[n] = 3(0.6)^{n+1} u[n+1] - (0.6)^n u[n].
This may also be expressed as h[n] = 3δ[n+1] + 0.8(0.6)^n u[n].
Comment: The general approach can be used for causal or noncausal systems.
(b) Let y[n] - (1/6)y[n-1] - (1/6)y[n-2] = 2x[n] - 6x[n-1].
To find h[n], start with the single-input system y[n] - (1/6)y[n-1] - (1/6)y[n-2] = x[n].
Its impulse response h_0[n] was found in the previous example as
h_0[n] = [0.6(1/2)^n + 0.4(-1/3)^n]u[n]
The impulse response of the given system is h[n] = 2h_0[n] - 6h_0[n-1]. This gives
h[n] = [1.2(1/2)^n + 0.8(-1/3)^n]u[n] - [3.6(1/2)^{n-1} + 2.4(-1/3)^{n-1}]u[n-1]
Comment: This may be simplified to h[n] = [-6(1/2)^n + 8(-1/3)^n]u[n].
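As a numerical check of the simplification in part (b) (our own sketch, not part of the text), the script
below builds h[n] = 2h_0[n] - 6h_0[n-1] from the single-input impulse response and compares it with the
simplified closed form.

h0 = lambda n: (0.6 * 0.5 ** n + 0.4 * (-1 / 3) ** n) if n >= 0 else 0.0
for n in range(6):
    h = 2 * h0(n) - 6 * h0(n - 1)                       # superposition of scaled, shifted h0[n]
    simplified = -6 * 0.5 ** n + 8 * (-1 / 3) ** n      # claimed closed form
    print(n, round(h, 6), round(simplified, 6))         # the two columns agree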
DRILL PROBLEM 3.15
(a) Let y[n] - 0.5y[n-1] = 2x[n] + x[n-1]. Find its impulse response h[n].
(b) Let y[n] - 1.2y[n-1] + 0.32y[n-2] = x[n] + 2x[n-1]. Find its impulse response h[n].
Answers: (a) 2(0.5)^n u[n] + (0.5)^{n-1} u[n-1] = 4(0.5)^n u[n] - 2δ[n]    (b) 7(0.8)^n - 6(0.4)^n (simplified)
3.5.5 Recursive Forms for Nonrecursive Digital Filters
The terms FIR and nonrecursive are synonymous. A nonrecursive filter always has a finite impulse response.
The terms IIR and recursive are often, but not always, synonymous. Not all recursive filters have an infinite
impulse response. In fact, nonrecursive filters can always be implemented in recursive form if desired. A
recursive filter may also be approximated by a nonrecursive filter of the form y[n] = B_0 x[n] + B_1 x[n-1] +
... + B_M x[n-M] if we know all the past inputs. In general, this implies M --> ∞.
EXAMPLE 3.13 (Recursive Forms for Nonrecursive Filters)
Consider the nonrecursive filter y[n] = x[n] + x[n-1] + x[n-2].
Its impulse response is h[n] = δ[n] + δ[n-1] + δ[n-2].
To cast this filter in recursive form, we compute y[n-1] = x[n-1] + x[n-2] + x[n-3].
Upon subtraction from the original equation, we obtain the recursive form
y[n] - y[n-1] = x[n] - x[n-3]
This describes a recursive formulation for the given nonrecursive, FIR filter.
DRILL PROBLEM 3.16
Consider the nonrecursive filter y[n] = x[n] + x[n-1] + x[n-2]. What recursive filter do you obtain by
computing y[n] - y[n-2]? Does the impulse response of the recursive filter match the impulse response of
the nonrecursive filter?
Answers: y[n] - y[n-2] = x[n] + x[n-1] - x[n-3] - x[n-4]; the impulse responses match.
3.5.6 The Response of Anti-Causal Systems
So far, we have focused on the response of systems described by difference equations to causal inputs.
However, by specifying an anti-causal input and appropriate initial conditions, the same difference equation
can be solved backward in time for n < 0 to generate an anti-causal response. For example, to find the
causal impulse response h[n], we assume that h[n] = 0, n < 0; but to find the anti-causal impulse response
h_A[n] of the same system, we would assume that h[n] = 0, n > 0. This means that the same system can be
described by two different impulse response functions. How we distinguish between them is easily handled
using the z-transform (described in the next chapter).
EXAMPLE 3.14 (Causal and Anti-Causal Impulse Response)
(a) Find the causal impulse response of the first-order system y[n] - 0.4y[n-1] = x[n].
For the causal impulse response, we assume h[n] = 0, n < 0, and solve for h[n], n > 0, by recursion
from h[n] = 0.4h[n-1] + δ[n]. With h[0] = 0.4h[-1] + δ[0] = 1 and δ[n] = 0, n ≠ 0, we find
h[1] = 0.4h[0] = 0.4        h[2] = 0.4h[1] = (0.4)^2        h[3] = 0.4h[2] = (0.4)^3        etc.
The general form is easily discerned as h[n] = (0.4)^n and is valid for n ≥ 0.
Comment: The causal impulse response of y[n] - αy[n-1] = x[n] is h[n] = α^n u[n].
(b) Find the anti-causal impulse response of the first-order system y[n] - 0.4y[n-1] = x[n].
For the anti-causal impulse response, we assume h[n] = 0, n ≥ 0, and solve for h[n], n < 0, by recursion
from h[n-1] = 2.5(h[n] - δ[n]). With h[-1] = 2.5(h[0] - δ[0]) = -2.5, and δ[n] = 0, n ≠ 0, we find
h[-2] = 2.5h[-1] = -(2.5)^2        h[-3] = 2.5h[-2] = -(2.5)^3        h[-4] = 2.5h[-3] = -(2.5)^4        etc.
The general form is easily discerned as h[n] = -(2.5)^{-n} = -(0.4)^n and is valid for n ≤ -1.
Comment: The anti-causal impulse response of y[n] - αy[n-1] = x[n] is h[n] = -α^n u[-n-1].
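The backward recursion in part (b) takes only a few lines of Python. The sketch below is our own code, not
from the text: it rewrites the difference equation as h[n-1] = 2.5(h[n] - δ[n]) and steps n downward from 0.

# Anti-causal impulse response of y[n] - 0.4y[n-1] = x[n], assuming h[n] = 0 for n >= 0.
h = {0: 0.0}
for n in range(0, -5, -1):                       # step backward: n = 0, -1, -2, ...
    delta = 1.0 if n == 0 else 0.0
    h[n - 1] = 2.5 * (h[n] - delta)              # h[n-1] = (h[n] - delta[n]) / 0.4
for n in range(-1, -6, -1):
    print(n, h[n], -(0.4) ** n)                  # matches -(0.4)^n for n <= -1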
DRILL PROBLEM 3.17
Let y[n] - 0.5y[n-1] = 2x[n]. Find its causal impulse response and its anti-causal impulse response.
Answers: h_c[n] = 2(0.5)^n u[n],    h_ac[n] = -2(0.5)^n u[-n-1]
3.6 System Representation in Various Forms
An LTI system may be described by a difference equation, impulse response, or input-output data. All three
are related and, given one form, we should be able to access the others. We have already studied how to
obtain the impulse response from a difference equation. Here we shall describe how to obtain the system
difference equation from its impulse response or from input-output data.
3.6.1 Difference Equations from the Impulse Response
In the time domain, the process of finding the difference equation from its impulse response is tedious. It is
much more easily implemented by other methods (such as the z-transform). The central idea is that the terms in
the impulse response are an indication of the natural response (and the roots of the characteristic equation)
from which the difference equation may be reconstructed if we can describe the combination of the impulse
response and its delayed versions by a sum of impulses. The process is best illustrated by some examples.
EXAMPLE 3.15 (Difference Equations from the Impulse Response)
(a) Let h[n] = u[n]. Then h[n-1] = u[n-1], and h[n] - h[n-1] = u[n] - u[n-1] = δ[n].
The difference equation corresponding to h[n] - h[n-1] = δ[n] is simply y[n] - y[n-1] = x[n].
(b) Let h[n] = 3(0.6)^n u[n]. This suggests a difference equation whose left-hand side is y[n] - 0.6y[n-1].
We then set up h[n] - 0.6h[n-1] = 3(0.6)^n u[n] - 1.8(0.6)^{n-1} u[n-1]. This simplifies to
h[n] - 0.6h[n-1] = 3(0.6)^n u[n] - 3(0.6)^n u[n-1] = 3(0.6)^n (u[n] - u[n-1]) = 3(0.6)^n δ[n] = 3δ[n]
The difference equation corresponding to h[n] - 0.6h[n-1] = 3δ[n] is y[n] - 0.6y[n-1] = 3x[n].
(c) Let h[n] = 2(-0.5)^n u[n] + (0.5)^n u[n]. This suggests a characteristic equation (z - 0.5)(z + 0.5).
The left-hand side of the difference equation is thus y[n] - 0.25y[n-2]. We now compute
h[n] - 0.25h[n-2] = 2(-0.5)^n u[n] + (0.5)^n u[n] - 0.25(2(-0.5)^{n-2} u[n-2] + (0.5)^{n-2} u[n-2])
This simplifies to
h[n] - 0.25h[n-2] = [2(-0.5)^n + (0.5)^n](u[n] - u[n-2])
Since u[n] - u[n-2] has just two samples (at n = 0 and n = 1), it equals δ[n] + δ[n-1], and we get
h[n] - 0.25h[n-2] = [2(-0.5)^n + (0.5)^n](δ[n] + δ[n-1])
This simplifies further to h[n] - 0.25h[n-2] = 3δ[n] - 0.5δ[n-1].
From this result, the difference equation is y[n] - 0.25y[n-2] = 3x[n] - 0.5x[n-1].
DRILL PROBLEM 3.18
(a) Set up the difference equation corresponding to the impulse response h[n] = 2(-0.5)^n u[n].
(b) Set up the difference equation corresponding to the impulse response h[n] = (0.5)^n u[n] + δ[n].
Answers: (a) y[n] + 0.5y[n-1] = 2x[n]    (b) y[n] - 0.5y[n-1] = 2x[n] - 0.5x[n-1]
3.6.2 Difference Equations from Input-Output Data
The difference equation of LTI systems may also be obtained from input-output data. The response of the
system described by y[n] = 3x[n] + 2x[n-1] to x[n] = δ[n] is y[n] = 3δ[n] + 2δ[n-1]. Turning things around,
the input δ[n] and output 3δ[n] + 2δ[n-1] then correspond to the difference equation y[n] = 3x[n] + 2x[n-1].
Note how the coefficients of the input match the output data (and vice versa).
REVIEW PANEL 3.12
Difference Equation of an LTI System from Input-Output Information
Example: If the input is x[n] = {⇓1, 2, 3} and the output is y[n] = {⇓3, 3}, then the system equation is
y[n] + 2y[n-1] + 3y[n-2] = 3x[n] + 3x[n-1].
DRILL PROBLEM 3.19
(a) The impulse response of an LTI system is h[n] = {⇓1, 2, -1}. What is the system equation?
(b) The input {⇓2, 4} to an LTI system gives the output {⇓3, -1}. What is the system equation?
Answers: (a) y[n] = x[n] + 2x[n-1] - x[n-2]    (b) 2y[n] + 4y[n-1] = 3x[n] - x[n-1]
3.7 Application-Oriented Examples
In this section we explore various practical applications of digital filters, such as signal smoothing using
averaging filters, echo cancellation using inverse filters, special audio effects using echo and reverb, and
wave-table synthesis of musical tones for synthesizers.
3.7.1 Moving Average Filters
A moving average or running average filter is an FIR filter that replaces a signal value by an average of
its neighboring values. The averaging process blurs the sharp details of the signal and results in an output
that is a smoother version of the input. Moving average filtering is a simple way of smoothing a noisy signal
and improving its signal-to-noise ratio. A causal L-point moving average (or averaging) filter replaces a
signal value by the average of its past L samples and is defined by

y[n] = (1/L)[x[n] + x[n-1] + x[n-2] + ... + x[n-(L-1)]] = (1/L) Σ_{k=0}^{L-1} x[n-k]    (3.29)

This is an L-point FIR filter whose impulse response is

h[n] = (1/L){1, 1, 1, . . . , 1, 1}    (L samples)    (3.30)

Figure 3.4 shows a noisy sinusoid and the output when it is passed through two moving average filters of
different lengths. We see that the output of the 10-point filter is indeed a smoother version of the noisy
input. Note that the output is a delayed version of the input and shows a start-up transient for about 10
samples before the filter output settles down. This is typical of all filters. A filter of longer length will result
in a longer transient. It should also produce better smoothing. While that may be generally true, we see
that the 50-point averaging filter produces an output that is essentially zero after the initial transient. The
reason is that the sinusoid also has a period of 50 samples and its 50-point running average is thus zero!
This suggests that while averaging filters of longer lengths usually do a better job of signal smoothing, the
peculiarities of a given signal may not always result in a useful output.
Figure 3.4 The response of two averaging filters to a noisy sinusoid: (a) the periodic signal, (b) the noisy
signal, (c) the output of the 10-point averaging filter, and (d) the output of the 50-point averaging filter.
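A minimal Python sketch of this experiment (our own code; the signal parameters are chosen only to mimic
the figure, a 50-sample-period sinusoid plus noise) applies an L-point moving average by direct use of Eq. (3.29).

import math, random

random.seed(0)
N = 150
clean = [math.sin(2 * math.pi * n / 50) for n in range(N)]     # period of 50 samples
noisy = [s + random.gauss(0, 0.3) for s in clean]              # add noise

def moving_average(x, L):
    # y[n] = (1/L) * (sum of the L most recent samples, taking samples before n = 0 as zero)
    return [sum(x[max(0, n - L + 1): n + 1]) / L for n in range(len(x))]

smooth10 = moving_average(noisy, 10)    # smoother, delayed version of the sinusoid
smooth50 = moving_average(noisy, 50)    # essentially zero after the start-up transient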
3.7.2 Inverse Systems
Inverse systems are quite important in practical applications. For example, a measurement system (such as
a transducer) invariably affects (distorts) the signal being measured. To undo the effects of the distortion
requires a system that acts as the inverse of the measurement system. If an input x[n] to a system results
in an output y[n], then its inverse is a system that recovers the signal x[n] in response to the input y[n], as
illustrated in Figure 3.5. For invertible LTI systems described by difference equations, finding the inverse
system is as easy as switching the input and output variables, as illustrated in the following example.

Figure 3.5 A system and its inverse: x[n] --> System --> y[n] --> Inverse system --> x[n]

REVIEW PANEL 3.13
Finding the Inverse of an LTI System? Try Switching the Input and Output
System: y[n] + 2y[n-1] = 3x[n] + 4x[n-1]    Inverse system: 3y[n] + 4y[n-1] = x[n] + 2x[n-1]
How to find the inverse from the impulse response h[n]? Find the difference equation first.
EXAMPLE 3.16 (Inverse Systems)
(a) Refer to the interconnected system shown in Figure E3.16A(1). Find the difference equation of the
inverse system, sketch a realization of each system, and find the output of each system.

Figure E3.16A(1) The interconnected system for Example 3.16(a): the input {⇓4, 4} drives the system
y[n] = x[n] - 0.5x[n-1], whose output feeds an inverse system.

The original system is described by y[n] = x[n] - 0.5x[n-1]. By switching the input and output, the
inverse system is described by y[n] - 0.5y[n-1] = x[n]. The realization of each system is shown in
Figure E3.16A(2). Are they related? Yes. If you flip the realization of the echo system end-on-end
and change the sign of the feedback signal, you get the inverse realization.

Figure E3.16A(2) Realization of the system (a feed-forward delay with gain 0.5 into a subtractor) and its
inverse (a feedback delay with gain 0.5) for Example 3.16(a)

The response g[n] of the first system is simply
g[n] = (4δ[n] + 4δ[n-1]) - (2δ[n-1] + 2δ[n-2]) = 4δ[n] + 2δ[n-1] - 2δ[n-2]
If we let the output of the second system be y_0[n], we have
y_0[n] = 0.5y_0[n-1] + 4δ[n] + 2δ[n-1] - 2δ[n-2]
Recursive solution gives
y_0[0] = 0.5y_0[-1] + 4δ[0] = 4        y_0[1] = 0.5y_0[0] + 2δ[0] = 4        y_0[2] = 0.5y_0[1] - 2δ[0] = 0
All subsequent values of y_0[n] are zero since the input terms are zero for n > 2. The output is thus
y_0[n] = {⇓4, 4}, the same as the input to the overall system.
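The cascade in part (a) is easy to reproduce numerically. The sketch below (our own code, not from the
text) pushes the input {4, 4} through the system y[n] = x[n] - 0.5x[n-1] and then through its inverse
y[n] = 0.5y[n-1] + x[n], recovering the original samples.

def system(x):
    # y[n] = x[n] - 0.5x[n-1]
    return [x[n] - 0.5 * (x[n - 1] if n > 0 else 0.0) for n in range(len(x))]

def inverse(g):
    # y[n] = 0.5y[n-1] + x[n]
    y = []
    for n in range(len(g)):
        y.append(0.5 * (y[n - 1] if n > 0 else 0.0) + g[n])
    return y

x = [4.0, 4.0, 0.0, 0.0]
g = system(x)             # [4.0, 2.0, -2.0, 0.0]
print(inverse(g))         # [4.0, 4.0, 0.0, 0.0] -- the input is recovered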
(b) Consider the FIR filter y[n] = x[n] + 2x[n-1]. Its inverse is found by switching the input and output,
as y[n] + 2y[n-1] = x[n]. The inverse of an FIR filter always results in an IIR filter (while the converse
is not always true).
DRILL PROBLEM 3.20
(a) Consider the IIR filter y[n] + 2y[n-1] = x[n]. What is the inverse system? Is it FIR or IIR?
(b) Consider the IIR filter y[n] + 2y[n-1] = x[n] + x[n-1]. Is the inverse system FIR or IIR?
(c) Consider the FIR filter whose impulse response is h[n] = {⇓1, 0.4}. What is the impulse response of
the inverse system? Is it FIR or IIR?
Answers: (a) y[n] = x[n] + 2x[n-1], FIR    (b) y[n] + y[n-1] = x[n] + 2x[n-1], IIR    (c) (-0.4)^n u[n], IIR
Invertible Systems
Not all systems have an inverse. For a system to have an inverse, or be invertible, distinct inputs must
lead to distinct outputs. If a system produces an identical output for two different inputs, it does not have
an inverse. For example, the system described by y[n] = cos(x[n]) is not invertible because different inputs
(such as x[n] + 2πk, k = 0, ±1, ±2, . . .) yield an identical output.
EXAMPLE 3.17 (Invertibility)
(a) The nonlinear system y[n] = x^2[n] does not have an inverse. Two inputs, differing in sign, yield the
same output. If we try to recover x[n] as ±√(y[n]), we run into a sign ambiguity.
(b) The linear (but time-varying) decimating system y[n] = x[2n] does not have an inverse. Two inputs
that differ in the samples discarded (for example, the signals {1, 2, 4, 5} and {1, 3, 4, 8}) yield the same
output {1, 4}. If we try to recover the original signal by interpolation, we cannot uniquely identify
the original signal.
(c) The linear (but time-varying) interpolating system y[n] = x[n/2] does have an inverse. Its inverse is a
decimating system that discards the very samples inserted during interpolation and thus recovers the
original signal exactly.
DRILL PROBLEM 3.21
(a) Is the system y[n] = e^{x[n]} invertible? If so, what is the inverse system?
(b) Is the system described by the impulse response h[n] = (0.5)^n u[n] invertible? If so, what is the impulse
response of the inverse system?
Answers: (a) Yes, y[n] = ln(x[n])    (b) Yes, h[n] = δ[n] - 0.5δ[n-1]
3.7.3 Echo and Reverb
The digital processing of audio signals often involves digital filters to create various special effects such as
echo and reverb, which are typical of modern-day studio sounds. An echo filter has the form

y[n] = x[n] + αx[n-D]    (an echo filter)    (3.31)

This describes an FIR filter whose output y[n] equals the input x[n] plus a delayed (by D samples) and
attenuated (by α) replica of x[n] (the echo term). Its realization is sketched in Figure 3.6. The D-sample
delay is implemented by a cascade of D delay elements and represented by the block marked z^{-D}. This
filter is also called a comb filter (for reasons to be explained in later chapters).

Figure 3.6 Echo and reverb filters: the echo filter feeds the input forward through a z^{-D} block with gain α;
the reverb filter feeds the output back through a z^{-D} block with gain α.

Reverberations are due to multiple echoes (from the walls and other structures in a concert hall, for
example). For simplicity, if we assume that the signal suffers the same delay D and the same attenuation α
in each round trip to the source, we may describe the action of reverb by

y[n] = x[n] + αx[n-D] + α^2 x[n-2D] + α^3 x[n-3D] + ...    (3.32)

If we delay both sides by D units and scale by α, we get

αy[n-D] = αx[n-D] + α^2 x[n-2D] + α^3 x[n-3D] + α^4 x[n-4D] + ...    (3.33)

Subtracting the second equation from the first, we obtain a compact form for a reverb filter:

y[n] - αy[n-D] = x[n]    (a reverb filter)    (3.34)

This is an IIR filter whose realization is also sketched in Figure 3.6. Its form is reminiscent of the inverse of
the echo system, y[n] + αy[n-D] = x[n], but with α replaced by -α.
In concept, it should be easy to tailor the simple reverb filter to simulate realistic effects by including
more terms with different delays and attenuations. In practice, however, this is no easy task, and the filter
designs used by commercial vendors in their applications are often proprietary.
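A minimal sketch of both filters (our own code; the delay D = 3, the attenuation α = 0.5, and the impulse
input are arbitrary illustrations) implements Eq. (3.31) and Eq. (3.34) directly.

def echo(x, D, alpha):
    # y[n] = x[n] + alpha * x[n-D]   (FIR comb filter)
    return [x[n] + alpha * (x[n - D] if n >= D else 0.0) for n in range(len(x))]

def reverb(x, D, alpha):
    # y[n] = alpha * y[n-D] + x[n]   (IIR filter, repeating echoes)
    y = []
    for n in range(len(x)):
        y.append((alpha * y[n - D] if n >= D else 0.0) + x[n])
    return y

x = [1.0] + [0.0] * 9                  # a unit impulse
print(echo(x, D=3, alpha=0.5))         # a single echo at n = 3
print(reverb(x, D=3, alpha=0.5))       # echoes at n = 3, 6, 9, ... with decaying strength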
3.7.4 Periodic Sequences and Wave-Table Synthesis
Electronic music synthesizers typically possess tone banks of stored tone sequences that are replicated and
used in their original or delayed forms, or combined, to synthesize new sounds and generate various musical
effects. The periodic versions of such sequences can be generated by filters that have a recursive form
developed from a nonrecursive filter whose impulse response corresponds to one period of the periodic
signal. The difference equation of such a recursive filter is given by

y[n] - y[n-N] = x_1[n]    (N-sample periodic signal generator)    (3.35)

where x_1[n] corresponds to one period (N samples) of the signal x[n]. This form actually describes a reverb
system with no attenuation whose delay equals the period N. Hardware implementation often uses a circular
buffer or wave-table (in which one period of the signal is stored), and cycling over it generates the periodic
signal. The same wave-table can also be used to change the frequency (or period) of the signal (to double
the frequency, for example, we would cycle over alternate samples) or for storing a new signal.
EXAMPLE 3.18 (Generating Periodic Signals Using Recursive Filters)
Suppose we wish to generate the periodic signal x[n] described by
x[n] = {2, 3, 1, 6, 2, 3, 1, 6, 2, 3, 1, 6, 2, 3, 1, 6, . . .}
The impulse response sequence of a nonrecursive filter that generates this signal is simply
h[n] = {2, 3, 1, 6, 2, 3, 1, 6, 2, 3, 1, 6, 2, 3, 1, 6, . . .}
If we delay this sequence by one period (four samples), we obtain
h[n-4] = {0, 0, 0, 0, 2, 3, 1, 6, 2, 3, 1, 6, 2, 3, 1, 6, 2, 3, 1, 6, . . .}
Subtracting the delayed sequence from the original gives us h[n] - h[n-4] = {2, 3, 1, 6}.
Its recursive form is y[n] - y[n-4] = x_1[n], where x_1[n] = {2, 3, 1, 6} describes one period of x[n].
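The wave-table recursion of Eq. (3.35) takes only a few lines; the sketch below (our own code, not from the
text) uses one period x_1[n] = {2, 3, 1, 6} as the input to y[n] = y[n-4] + x_1[n] and reproduces the periodic
signal above.

def periodic_generator(one_period, npts):
    # y[n] - y[n-N] = x1[n], where x1[n] is one period (N samples) applied as the input
    N = len(one_period)
    y = []
    for n in range(npts):
        x1 = one_period[n] if n < N else 0.0
        y.append((y[n - N] if n >= N else 0.0) + x1)
    return y

print(periodic_generator([2, 3, 1, 6], 12))   # [2, 3, 1, 6, 2, 3, 1, 6, 2, 3, 1, 6]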
DRILL PROBLEM 3.22
(a) What is the system equation of a filter whose impulse response is periodic with first period {⇓1, 2}?
(b) Find the response of the system y[n] - y[n-3] = 2δ[n] + 3δ[n-2].
Answers: (a) y[n] - y[n-2] = x[n] + 2x[n-1]    (b) Periodic (N = 3) with period y_1[n] = {⇓2, 0, 3}
3.7.5 How Dierence Equations Arise
We conclude with some examples of difference equations, which arise in many ways in various fields ranging
from mathematics and engineering to economics and biology.
1. y[n] = y[n-1] + n, y[-1] = 1
This difference equation describes the number of regions y[n] into which n lines divide a plane if no
two lines are parallel and no three lines intersect.
2. y[n+1] = (n+1)(y[n] + 1), y[0] = 0
This difference equation describes the number of multiplications y[n] required to compute the deter-
minant of an n × n matrix using cofactors.
3. y[n+2] = y[n+1] + y[n], y[0] = 0, y[1] = 1
This difference equation generates the Fibonacci sequence y[n] = 0, 1, 1, 2, 3, 5, . . ., where each
number is the sum of the previous two.
4. y[n+2] - 2xy[n+1] + y[n] = 0, y[0] = 1, y[1] = x
This difference equation generates the Chebyshev polynomials T_n(x) = y[n] of the first kind. We
find that T_2(x) = y[2] = 2x^2 - 1, T_3(x) = y[3] = 4x^3 - 3x, etc. Similar difference equations, called
recurrence relations, form the basis for generating other polynomial sets.
5. y[n+1] = αy[n](1 - y[n])
This difference equation, called a logistic equation in biology, is used to model the growth of populations
that reproduce at discrete intervals.
6. y[n+1] = (1 + α)y[n] + d[n]
This difference equation describes the bank balance y[n] at the beginning of the nth compounding
period (day, month, etc.) if the percent interest rate is α per compounding period and d[n] is the
amount deposited in that period.
3.8 Discrete Convolution
Discrete-time convolution is a method of finding the zero-state response of relaxed linear time-invariant
(LTI) systems. It is based on the concepts of linearity and time invariance and assumes that the system
information is known in terms of its impulse response h[n]. In other words, if the input is δ[n], a unit sample
at the origin n = 0, the system response is h[n]. Now, if the input is x[0]δ[n], a scaled impulse at the origin,
the response is x[0]h[n] (by linearity). Similarly, if the input is the shifted impulse x[1]δ[n-1] at n = 1, the
response is x[1]h[n-1] (by time invariance). The response to the shifted impulse x[k]δ[n-k] at n = k is
x[k]h[n-k] (by linearity and time invariance). Since an arbitrary input x[n] is simply a sequence of samples,
it can be described by a sum of scaled and shifted impulses:

x[n] = Σ_{k=-∞}^{∞} x[k]δ[n-k]    (3.36)

By superposition, the response to x[n] is the sum of scaled and shifted versions of the impulse response:

y[n] = Σ_{k=-∞}^{∞} x[k]h[n-k] = x[n] ⋆ h[n]    (3.37)

This defines the convolution operation and is also called linear convolution or the convolution sum, and is
denoted by y[n] = x[n] ⋆ h[n] (or by x[n] * h[n] in the figures) in this book. The order in which we perform the
operation does not matter, and we can interchange the arguments of x and h without affecting the result:

Σ_{k=-∞}^{∞} x[k]h[n-k] = Σ_{k=-∞}^{∞} x[n-k]h[k]    or    x[n] ⋆ h[n] = h[n] ⋆ x[n]    (3.38)

REVIEW PANEL 3.14
Convolution Yields the Zero-State Response of an LTI System
Input x[n] --> System with impulse response h[n] --> Output y[n] = x[n] ⋆ h[n] = h[n] ⋆ x[n]
Output = convolution of the input x[n] with the impulse response h[n]
Notation: We use x[n] ⋆ h[n] to denote Σ_{k=-∞}^{∞} x[k]h[n-k]
3.8.1 Analytical Evaluation of Discrete Convolution
If x[n] and h[n] are described by simple enough analytical expressions, the convolution sum can be imple-
mented quite readily to obtain closed-form results. While evaluating the convolution sum, it is useful to
c Ashok Ambardar, September 1, 2003
3.8 Discrete Convolution 77
keep in mind that x[k] and h[n k] are functions of the summation variable k. For causal signals of the
form x[n]u[n] and h[n]u[n], the summation involves step functions of the form u[k] and u[n k]. Since
u[k] = 0, k < 0 and u[n k] = 0, k > n, these can be used to simplify the lower and upper summation
limits to k = 0 and k = n, respectively.
EXAMPLE 3.19 (Analytical Evaluation of Discrete Convolution)
(a) Let x[n] = h[n] = u[n]. Then x[k] = u[k] and h[nk] = u[nk]. The lower limit on the convolution sum
simplies to k = 0 (because u[k] = 0, k < 0), the upper limit to k = n (because u[n k] = 0, k > n),
and we get
y[n] =

k=
u[k]u[n k] =
n

k=0
1 = (n + 1)u[n] = r[n + 1]
Note that (n + 1)u[n] also equals r[n + 1], and thus u[n] u[n] = r[n + 1].
(b) Let x[n] = h[n] = a
n
u[n]. Then x[k] = a
k
u[k] and h[n k] = a
nk
u[n k]. The lower limit on the
convolution sum simplies to k = 0 (because u[k] = 0, k < 0), the upper limit to k = n (because
u[n k] = 0, k > n), and we get
y[n] =

k=
a
k
a
nk
u[k]u[n k] =
n

k=0
a
k
a
nk
= a
n
n

k=0
1 = (n + 1)a
n
u[n]
The argument of the step function u[n] is based on the fact that the upper limit on the summation
must exceed or equal the lower limit (i.e. n 0).
(c) Let x[n] = u[n 1] and h[n] =
n
u[n 1]. Then
u[n 1]
n
u[n 1] =

k=

k
u[k 1]u[n 1 k] =
n1

k=1

k
=
(
n
)
1
u[n 2]
Here, we used the closed form result for the nite summation. The argument of the step function
u[n 2] is dictated by the fact that the upper limit on the summation must exceed or equal the lower
limit (i.e. n 1 1 or n 2).
(d) Let x[n] = (0.8)
n
u[n] and h[n] = (0.4)
n
u[n]. Then
y[n] =

k=
(0.8)
k
u[k](0.4)
nk
u[n k] =
n

k=0
(0.8)
k
(0.4)
nk
= (0.4)
n
n

k=0
2
k
Using the closed-form result for the sum, we get y[n] = (0.4)
n
1 2
n+1
1 2
= (0.4)
n
(2
n+1
1)u[n].
This may also be expressed as y[n] = [2(0.8)
n
(0.4)
n
]u[n].
(e) Let x[n] = nu[n] and h[n] = a
n
u[n1], a < 1. With h[nk] = a
(nk)
u[n1k] and x[k] = ku[k],
the lower and upper limits on the convolution sum become k = 0 and k = n 1. Then
y[n] =
n1

k=0
ka
(nk)
= a
n
n1

k=0
ka
k
=
a
n+1
(1 a)
2
[1 na
n1
+ (n 1)a
n
]u[n 1]
c Ashok Ambardar, September 1, 2003
78 Chapter 3 Time-Domain Analysis
Here, we used known results for the nite summation to generate the closed-form solution.
DRILL PROBLEM 3.23
(a) Let x[n] = (0.8)
n
u[n] and h[n] = (0.4)
n
u[n 1]. Find their convolution.
(b) Let x[n] = (0.8)
n
u[n 1] and h[n] = (0.4)
n
u[n]. Find their convolution.
(c) Let x[n] = (0.8)
n
u[n 1] and h[n] = (0.4)
n
u[n 1]. Find their convolution.
Answers: (a) [(0.8)
n
(0.4)
n
]u[n 1] (b) 2[(0.8)
n
(0.4)
n
]u[n 1] (c) [(0.8)
n
2(0.4)
n
]u[n 2]
3.9 Convolution Properties
Many of the properties of discrete convolution are based on linearity and time invariance. For example, if
x[n] (or h[n]) is shifted by n
0
, so is y[n]. Thus, if y[n] = x[n] h[n], then
x[n n
0
] h[n] = x[n] h[n n
0
] = y[n n
0
] (3.39)
The sum of the samples in x[n], h[n], and y[n] are related by

n=
y[n] =

n=
x[n]

n=
h[n]

(3.40)
For causal systems (h[n] = 0, n < 0) and causal signals (x[n] = 0, n < 0), y[n] is also causal. Thus,
y[n] = x[n] h[n] = h[n] x[n] =
n

k=0
x[k]h[n k] =
n

k=0
h[k]x[n k] (3.41)
An extension of this result is that the convolution of two left-sided signals is also left-sided and the convolution
of two right-sided signals is also right-sided.
EXAMPLE 3.20 (Properties of Convolution)
(a) Here are two useful convolution results that are readily found from the dening relation:
[n] x[n] = x[n] [n] [n] = [n]
(b) We nd y[n] = u[n] x[n]. Since the step response is the running sum of the impulse response, the
convolution of a signal x[n] with a unit step is the running sum of the signal x[n]:
x[n] u[n] =
n

k=
x[k]
(c) We nd y[n] = rect(n/2N) rect(n/2N) where rect(n/2N) = u[n +N] u[n N 1].
The convolution contains four terms:
y[n] = u[n+N] u[n+N] u[n+N] u[nN1] u[nN1] u[n+N] +u[nN1] u[nN1]
c Ashok Ambardar, September 1, 2003
3.10 Convolution of Finite Sequences 79
Using the result u[n] u[n] = r[n + 1] and the shifting property, we obtain
y[n] = r[n + 2N + 1] 2r[n] +r[n 2N 1] = (2N + 1)tri

n
2N + 1

The convolution of two rect functions (with identical arguments) is thus a tri function.
DRILL PROBLEM 3.24
(a) Let x[n] = (0.8)
n+1
u[n + 1] and h[n] = (0.4)
n1
u[n 2]. Find their convolution.
(b) Let x[n] = (0.8)
n
u[n] and h[n] = (0.4)
n1
u[n 2]. Find their convolution.
Answers: (a) [(0.8)
n
(0.4)
n
]u[n 1] (b) [(0.8)
n1
(0.4)
n1
]u[n 2]
3.10 Convolution of Finite Sequences
In practice, we often deal with sequences of nite length, and their convolution may be found by several
methods. The convolution y[n] of two nite-length sequences x[n] and h[n] is also of nite length and is
subject to the following rules, which serve as useful consistency checks:
1. The starting index of y[n] equals the sum of the starting indices of x[n] and h[n].
2. The ending index of y[n] equals the sum of the ending indices of x[n] and h[n].
3. The length L
y
of y[n] is related to the lengths L
x
and L
h
of x[n] and h[n] by L
y
= L
x
+L
h
1.
3.10.1 The Sum-by-Column Method
This method is based on the idea that the convolution y[n] equals the sum of the (shifted) impulse responses
due to each of the impulses that make up the input x[n]. To nd the convolution, we set up a row of index
values beginning with the starting index of the convolution and h[n] and x[n] below it. We regard x[n] as a
sequence of weighted shifted impulses. Each element (impulse) of x[n] generates a shifted impulse response
(product with h[n]) starting at its index (to indicate the shift). Summing the response (by columns) gives
the discrete convolution. Note that none of the sequences is folded. It is better (if only to save paper) to let
x[n] be the shorter sequence. The starting index (and the marker location corresponding to n = 0) for the
convolution y[n] is found from the starting indices of x[n] and h[n].
REVIEW PANEL 3.15
Discrete Convolution Using the Sum-by-Column Method
1. Line up the sequence x[n] below the sequence h[n].
2. Line up with each sample of x[n], the product of the entire array h[n] with that sample of x[n].
3. Sum the columns of the (successively shifted) arrays to generate the convolution sequence.
EXAMPLE 3.21 (Convolution of Finite-Length Signals)
(a) An FIR (nite impulse response) lter has an impulse response given by h[n] =

1, 2, 2, 3. Find its
response y[n] to the input x[n] =

2, 1, 3. Assume that both x[n] and h[n] start at n = 0.


c Ashok Ambardar, September 1, 2003
80 Chapter 3 Time-Domain Analysis
The paper-and-pencil method expresses the input as x[n] = 2[n] [n 1] + 3[n 2] and tabulates
the response to each impulse and the total response as follows:
h[n] = 1 2 2 3
x[n] = 2 1 3
Input Response
2[n] 2h[n] = 2 4 4 6
[n 1] h[n 1] = 1 2 2 3
3[n 2] 3h[n 2] = 3 6 6 9
Sum = x[n] Sum = y[n] = 2 3 5 10 3 9
So, y[n] =

2, 3, 5, 10, 3, 9 = 2[n] + 3[n 1] + 5[n 2] + 10[n 3] + 3[n 4] + 9[n 5].


The convolution process is illustrated graphically in Figure E3.21A.
[n] x
1 2 3
1 2 3 4
1 2 3 5 4
1 2 3 4 5
[n] y
1 2 3
[n] h
n
1
2 2
3
1 2 3
[n] h
n
1
2 2
3
1 2 3
[n] h
n
1
2 2
3
1 2 3
[n] h
n
1
2 2
3
Input Output
1
1
2
n
2
3
n
2
1
3
1
2
n
4 4
6
2
n
1
2
2
3
n
3
6 6
n
3
5
3
2
10
9
n
n
Superposition Superposition
Figure E3.21A The discrete convolution for Example 3.21(a)
(b) Let h[n] = 2, 5,

0, 4 and x[n] = 4,

1, 3.
We note that the convolution starts at n = 3 and use this to set up the index array and generate the
convolution as follows:
c Ashok Ambardar, September 1, 2003
3.10 Convolution of Finite Sequences 81
n 3 2 1 0 1 2
h[n] 2 5 0 4
x[n] 4 1 3
8 20 0 16
2 5 0 4
6 15 0 12
y[n] 8 22 11

31 4 12
The marker is placed by noting that the convolution starts at n = 3, and we get
y[n] = 8, 22, 11,

31, 4, 12
(c) (Response of a Moving-Average Filter) Let x[n] =

2, 4, 6, 8, 10, 12, . . ..
What system will result in the response y[n] =

1, 3, 5, 7, 9, 11, . . .?
At each instant, the response is the average of the input and its previous value. This system describes
an averaging or moving average lter. Its dierence equation is simply y[n] = 0.5(x[n] +x[n 1]).
Its impulse response is thus h[n] = 0.5[n] +[n 1], or h[n] =

0.5, 0.5.
Using discrete convolution, we nd the response as follows:
x: 2 4 6 8 10 12 . . .
h:
1
2
1
2
1 2 3 4 5 6 . . .
1 2 3 4 5 6 . . .
y: 1 3 5 7 9 11 . . .
This result is indeed the averaging operation we expected.
DRILL PROBLEM 3.25
(a) Let x[n] =

1, 4, 0, 2 and h[n] =

1, 2, 1. Find their convolution.


(b) Let x[n] = 1, 4,

1, 3 and h[n] = 2,

1, 1. Find their convolution.


Answers: (a)

1, 6, 9, 6, 4, 2 (b) 2, 9, 5,

3, 2, 3
3.10.2 The Fold, Shift, Multiply, and Sum Concept
The convolution sum may also be interpreted as follows. We fold x[n] and shift x[n] to line up its last
element with the rst element of h[n]. We then successively shift x[n] (to the right) past h[n], one index
at a time, and nd the convolution at each index as the sum of the pointwise products of the overlapping
samples. One method of computing y[n] is to list the values of the folded function on a strip of paper and
slide it along the stationary function, to better visualize the process. This technique has prompted the name
sliding strip method. We simulate this method by showing the successive positions of the stationary and
folded sequence along with the resulting products, the convolution sum, and the actual convolution.
c Ashok Ambardar, September 1, 2003
82 Chapter 3 Time-Domain Analysis
EXAMPLE 3.22 (Convolution by the Sliding Strip Method)
Find the convolution of h[n] =

2, 5, 0, 4 and x[n] =

4, 1, 3.
Since both sequences start at n = 0, the folded sequence is x[k] = 3, 1,

4.
We line up the folded sequence below h[n] to begin overlap and shift it successively, summing the product
sequence as we go, to obtain the discrete convolution. The results are computed in Figure E3.22.
The discrete convolution is y[n] =

8, 22, 11, 31, 4, 12.


0 4
2 5 0 4
[4] y
15 0 16
[3] y = sum of products = 31
2 5 0 4
1 4 3 1 4 3
[5] y = sum of products = 12
2 5 0 4
1 4 3
= sum of products = 4
Slide Slide
12
2 5 0 4
2 20
[1] y = sum of products = 22 [0] y = sum of products = 8
2 5 0 4
1 4 3 1 4 3
2 5 0 4
6 5 0
[2] y = sum of products = 11
1 4 3
Slide Slide
8
Figure E3.22 The discrete signals for Example 3.22 and their convolution
3.10.3 Discrete Convolution, Multiplication, and Zero Insertion
The discrete convolution of two nite-length sequences x[n] and h[n] is entirely equivalent to multiplication
of two polynomials whose coecients are described by the arrays x[n] and h[n] (in ascending or descending
order). The convolution sequence corresponds to the coecients of the product polynomial. Based on this
result, if we insert N zeros between each pair of adjacent samples of each sequence to be convolved, their
convolution corresponds to the original convolution sequence with N zeros inserted between each pair of its
adjacent samples.
If we append zeros to one of the convolved sequences, the convolution result will also show as many appended
zeros at the corresponding location. For example, leading zeros appended to a sequence will appear as leading
zeros in the convolution result. Similarly, trailing zeros appended to a sequence will show up as trailing zeros
in the convolution.
REVIEW PANEL 3.16
Convolution of Finite-Length Signals Corresponds to Polynomial Multiplication
Example:

1, 1, 3

1, 0, 2 =

1, 1, 5, 2, 6 (x
2
+x + 3)(x
2
+ 2) = x
4
+x
3
+ 5x
2
+ 2x + 6
Zero Insertion of Both Sequences Leads to Zero Insertion of the Convolution
Example: If

1, 2

3, 1, 4 =

3, 7, 6, 8
Then,

1, 0, 0, 2

3, 0, 0, 1, 0, 0, 4 =

3, 0, 0, 7, 0, 0, 6, 0, 0, 8
Zero-Padding of One Sequence Leads to Zero-Padding of the Convolution
Example: If x[n] h[n] = y[n] then 0, 0, x[n], 0, 0 h[n], 0 = 0, 0, y[n], 0, 0, 0.
c Ashok Ambardar, September 1, 2003
3.10 Convolution of Finite Sequences 83
EXAMPLE 3.23 (Polynomial Multiplication, Zero Insertion, Zero-Padding)
(a) Let h[n] =

2, 5, 0, 4 and x[n] =

4, 1, 3. To nd their convolution, we set up the polynomials


h(z) = 2z
3
+ 5z
2
+ 0z + 4 x(z) = 4z
2
+ 1z + 3
Their product is y(z) = 8z
5
+ 22z
4
+ 11z
3
+ 31z
2
+ 4z + 12.
The convolution is thus y[n] =

8, 22, 11, 31, 4, 12.


(b) (Zero-Insertion) Zero insertion of each convolved sequence gives
h
1
[n] =

2, 0, 5, 0, 0, 0, 4 x
1
[n] =

4, 0, 1, 0, 3
To nd their convolution, we set up the polynomials
h
1
(z) = 2z
6
+ 5z
4
+ 0z
2
+ 4 x
1
(z) = 4z
4
+ 1z
2
+ 3
Their product is y
1
(z) = 8z
10
+ 22z
8
+ 11z
6
+ 31z
4
+ 4z
2
+ 12.
The convolution is then y
1
[n] =

8, 0, 22, 0, 11, 0, 31, 0, 4, 0, 12.


This result is just y[n] with zeros inserted between adjacent samples.
(c) (Zero-Padding) If we pad the rst sequence by two zeros and the second by one zero, we get
h
2
[n] =

2, 5, 0, 4, 0, 0 x
2
[n] =

4, 1, 3, 0
To nd their convolution, we set up the polynomials
h
2
(z) = 2z
5
+ 5z
4
+ 4z
2
x
2
(z) = 4z
3
+ 1z
2
+ 3z
Their product is y
2
(z) = 8z
8
+ 22z
7
+ 11z
6
+ 31z
5
+ 4z
4
+ 12z
3
.
The convolution is then y
1
[n] =

8, 22, 11, 31, 4, 12, 0, 0, 0.


This result is just y[n] with three zeros appended at the end.
DRILL PROBLEM 3.26
Use the result

1, 0, 2

1, 3 =

1, 3, 2, 6 to answer the following.


(a) Let x[n] =

1, 0, 0, 0, 2 and h[n] =

1, 0, 3. Find their convolution.


(b) Let x[n] =

1, 0, 2, 0 and h[n] =

1, 3, 0, 0. Find their convolution.


(c) Let x[n] = 0, 0,

1, 0, 2 and h[n] =

1, 3, 0. Find their convolution.


Answers: (a)

1, 0, 3, 0, 2, 0, 6 (b)

1, 3, 2, 6, 0, 0, 0 (c) 0, 0,

1, 3, 2, 6, 0
c Ashok Ambardar, September 1, 2003
84 Chapter 3 Time-Domain Analysis
3.10.4 Impulse Response of LTI Systems in Cascade and Parallel
Consider the ideal cascade of two LTI systems shown in Figure 3.7. The response of the rst system is
y
1
[n] = x[n] h
1
[n]. The response y[n] of the second system is
y[n] = y
1
[n] h
2
[n] = (x[n] h
1
[n]) h
2
[n] = x[n] (h
1
[n] h
2
[n]) (3.42)
If we wish to replace the cascaded system by an equivalent LTI system with impulse response h[n] such that
y[n] = x[n] h[n], it follows that h[n] = h
1
[n] h
2
[n]. Generalizing this result, the impulse response h[n] of
N ideally cascaded LTI systems is simply the convolution of the N individual impulse responses
h[n] = h
1
[n] h
2
[n] h
N
[n] (for a cascade combination) (3.43)
If the h
k
[n] are energy signals, the order of cascading is unimportant.
[n] h
1
[n] h
2
[n] h
1
[n] h
2 *
[n] x [n] y [n] y [n] x
[n] h
1
[n] h
2
[n] h
1
[n] h
2
+ [n] y
[n] x
[n] x
[n] y
Two LTI systems in cascade Equivalent LTI system
Two LTI systems in parallel
Equivalent LTI system

+
+
Figure 3.7 Cascaded and parallel systems and their equivalents
The overall impulse response of LTI systems in parallel equals the sum of the individual impulse responses,
as shown in Figure 3.7:
h
P
[n] = h
1
[n] +h
2
[n] + +h
N
[n] (for a parallel combination) (3.44)
REVIEW PANEL 3.17
Impulse Response of N Interconnected Discrete LTI Systems
In cascade: Convolve the impulse responses: h
C
[n] = h
1
[n] h
2
[n] h
N
[n]
In parallel: Add the impulse responses: h
P
[n] = h
1
[n] +h
2
[n] + +h
N
[n]
EXAMPLE 3.24 (Interconnected Systems)
Consider the interconnected system of Figure E3.24. Find its overall impulse response and the output.
Comment on the results.
[n] y [n] x [n1] x = 0.5 [n] h (0.5)
n
[n] u =
1
4 4
n
Input
Output
Figure E3.24 The interconnected system of Example 3.24
c Ashok Ambardar, September 1, 2003
3.11 Stability and Causality of LTI Systems 85
The impulse response of the rst system is h
1
[n] = [n] 0.5[n1]. The overall impulse response h
C
[n]
is given by the convolution
h
C
[n] = ([n] 0.5[n 1]) (0.5)
n
u[n] = (0.5)
n
u[n] 0.5(0.5)
n1
u[n 1]
This simplies to
h
C
[n] = (0.5)
n
(u[n] u[n 1]) = (0.5)
n
[n] = [n]
What this means is that the overall system output equals the applied input. The second system thus acts
as the inverse of the rst.
DRILL PROBLEM 3.27
(a) Find the impulse response of the cascade of two identical lters, each with h[n] =

1, 1, 3.
(b) The impulse response of two lters is h
1
[n] =

1, 0, 2 and h
2
[n] = 4,

1, 3. Find the impulse


response of their parallel combination.
(c) Two lters are described by y[n] 0.4y[n 1] = x[n] and h
2
[n] = 2(0.4)
n
u[n]. Find the impulse
response of their parallel combination and cascaded combination.
Answers: (a)

1, 2, 7, 6, 9 (b) 4,

2, 3, 2 (c) h
P
[n] = 3(0.4)
n
u[n], h
C
[n] = 2(n+1)(0.4)
n
u[n]
3.11 Stability and Causality of LTI Systems
System stability is an important practical constraint in lter design and is dened in various ways. Here we
introduce the concept of Bounded-input, bounded-output (BIBO) stability that requires every bounded
input to produce a bounded output.
3.11.1 Stability of FIR Filters
The system equation of an FIR lter describes the output as a weighted sum of shifted inputs. If the input
remains bounded, the weighted sum of the inputs is also bounded. In other words, FIR lters are always
stable. This can be a huge design advantage.
3.11.2 Stability of LTI Systems Described by Dierence Equations
For an LTI system described by the dierence equation
y[n] +A
1
y[n 1] + +A
N
y[n N] = B
0
x[n] +B
1
x[n 1] + +B
M
x[n M] (3.45)
the conditions for BIBO stability involve the roots of the characteristic equation. A necessary and sucient
condition for BIBO stability of such an LTI system is that every root of its characteristic equation must have
a magnitude less than unity. This criterion is based on the results of Tables 3.1 and 3.2. Root magnitudes
less than unity ensure that the natural (and zero-input) response always decays with time (see Table 3.2),
and the forced (and zero-state) response always remains bounded for every bounded input. Roots with
magnitudes that equal unity make the system unstable. Simple (non-repeated) roots with unit magnitude
produce a constant (or sinusoidal) natural response that is bounded; but if the input is also a constant
(or sinusoid at the same frequency), the forced response is a ramp or growing sinusoid (see Table 3.1) and
hence unbounded. Repeated roots with unit magnitude result in a natural response that is itself a growing
sinusoid or polynomial and thus unbounded. In the next chapter, we shall see that the stability condition
is equivalent to having an LTI system whose impulse response h[n] is absolutely summable. The stability of
nonlinear or time-varying systems usually must be checked by other means.
c Ashok Ambardar, September 1, 2003
86 Chapter 3 Time-Domain Analysis
3.11.3 Stability of LTI Systems Described by the Impulse Response
For systems described by their impulse response, it turns out that BIBO stability requires that the impulse
response h[n] be absolutely summable. Here is why. If x[n] is bounded such that [x[n][ < M, so too is its
shifted version x[n k]. The convolution sum then yields the following inequality:
[y[n][ <

k=
[h[k][[x[n k][ < M

k=
[h[k][ (3.46)
If the output is to remain bounded ([y[n][ < ), then

k=
[h[k][ < (for a stable LTI system) (3.47)
In other words, h[n] must be absolutely summable. This is both a necessary and sucient condition. The
stability of nonlinear systems must be investigated by other means.
3.11.4 Causality
In analogy with analog systems, causality of discrete-time systems implies a non-anticipating system with
an impulse response h[n] = 0, n < 0. This ensures that an input x[n]u[n n
0
] starting at n = n
0
results in
a response y[n] also starting at n = n
0
(and not earlier). This follows from the convolution sum:
y[n] =

k
x[k]u(k n
0
]h[n k]u[n k] =
n

n0
x[k]h[n k] (3.48)
REVIEW PANEL 3.18
Stability and Causality of Discrete LTI Systems
Stability: Every root r of the characteristic equation must have magnitude [r[ less than unity.
The impulse response h[n] must be absolutely summable with

k=
[h[k][ < .
Note: FIR lters are always stable.
Causality: The impulse response h[n] must be zero for negative indices (h[n] = 0, n < 0).
EXAMPLE 3.25 (Concepts Based on Stability and Causality)
(a) The system y[n]
1
6
y[n 1]
1
6
y[n 2] = x[n] is stable since the roots of its characteristic equation
z
2

1
6
z
1
6
= 0 are z
1
=
1
2
and z
2
=
1
3
and their magnitudes are less than 1.
(b) The system y[n] y[n1] = x[n] is unstable. The root of its characteristic equation z 1 = 0 is z = 1
gives the natural response y
N
= Ku[n], which is actually bounded. However, for an input x[n] = u[n],
the forced response will have the form Cnu[n], which becomes unbounded.
c Ashok Ambardar, September 1, 2003
3.12 System Response to Periodic Inputs 87
(c) The system y[n] 2y[n 1] + y[n 2] = x[n] is unstable. The roots of its characteristic equation
z
2
2z + 1 = 0 are equal and produce the unbounded natural response y
N
[n] = Au[n] +Bnu[n].
(d) The system y[n]
1
2
y[n 1] = nx[n] is linear, time varying and unstable. The (bounded) step input
x[n] = u[n] results in a response that includes the ramp nu[n], which becomes unbounded.
(e) The system y[n] = x[n] 2x[n 1] is stable because it describes an FIR lter.
(f ) The FIR lter described by y[n] = x[n + 1] x[n] has the impulse response h[n] = 1,

1. It is a
stable system, since

[h[n][ = [1[ + [ 1[ = 2. It is also noncausal because h[n] = [n + 1] [n] is
not zero for n < 0. We emphasize that FIR lters are always stable because

[h[n][ is the absolute
sum of a nite sequence and is thus always nite.
(g) A lter described by h[n] = (0.5)
n
u[n] is causal. It describes a system with the dierence equation
y[n] = x[n] +ay[n 1]. It is also stable because

[h[n][ is nite. In fact, we nd that

n=
[h[n][ =

n=0
(0.5)
n
=
1
1 0.5
= 2
(h) A lter described by the dierence equation y[n] 0.5y[n 1] = nx[n] is causal but time varying. It
is also unstable. If we apply a step input u[n] (bounded input), then y[n] = nu[n] + 0.5y[n 1]. The
term nu[n] grows without bound and makes this system unstable. We caution you that this approach
is not a formal way of checking for the stability of time-varying systems.
DRILL PROBLEM 3.28
(a) Is the lter described by h[n] = 2,

1, 1, 3 causal? Is it stable?
(b) Is the lter described by h[n] = 2
n
u[n + 2] causal? Is it stable?
(c) Is the lter described by y[n] + 0.5y[n 1] = 4u[n] causal? Is it stable?
(d) Is the lter described by y[n] + 1.5y[n 1] + 0.5y[n 2] = u[n] causal? Is it stable?
Answers: (a) Noncausal, stable (b) Noncausal, unstable (c) Causal, stable (d) Causal, unstable
3.12 System Response to Periodic Inputs
In analogy with analog systems, the response of a discrete-time system to a periodic input with period N is
also periodic with the same period N. A simple example demonstrates this concept.
c Ashok Ambardar, September 1, 2003
88 Chapter 3 Time-Domain Analysis
EXAMPLE 3.26 (Response to Periodic Inputs)
(a) Let x[n] =

1, 2, 3, 1, 2, 3, 1, 2, 3, . . . and h[n] =

1, 1.
The convolution y[n] = x[n] h[n], using the sum-by-column method, is
Index n 0 1 2 3 4 5 6 7 8 9 10
x[n] 1 2 3 1 2 3 1 2 3 1 . . .
h[n] 1 1
1 2 3 1 2 3 1 2 3 1 . . .
1 2 3 1 2 3 1 2 3 . . .
y[n] 1 3 1 2 3 1 2 3 1 2 . . .
The convolution y[n] is periodic with period N = 3, except for start-up eects (which last for one
period). One period of the convolution is y[n] =

2, 3, 1.
(b) Let x[n] =

1, 2, 3, 1, 2, 3, 1, 2, 3, . . . and h[n] =

1, 1, 1.
The convolution y[n] = x[n] h[n], using the sum-by-column method, is found as follows:
Index n 0 1 2 3 4 5 6 7 8 9 10
x[n] 1 2 3 1 2 3 1 2 3 1 . . .
h[n] 1 1 1
1 2 3 1 2 3 1 2 3 1 . . .
1 2 3 1 2 3 1 2 3 . . .
1 2 3 1 2 3 1 2 . . .
y[n] 1 3 0 0 0 0 0 0 0 0 . . .
Except for start-up eects, the convolution is zero. The system h[n] =

1, 1, 1 is a moving average
lter. It extracts the 3-point running sum, which is always zero for the given periodic signal x[n].
REVIEW PANEL 3.19
The Response of LTI Systems to Periodic Signals Is Also Periodic with Identical Period
Period = Period = N N
Relaxed
LTI system
Periodic input Periodic output
One way to nd the system response to periodic inputs is to nd the response to one period of the input and
then use superposition. In analogy with analog signals, if we add an absolutely summable signal (or energy
signal) x[n] and its innitely many replicas shifted by multiples of N, we obtain a periodic signal with period
N, which is called the periodic extension of x[n]:
x
pe
[n] =

k=
x[n +kN] (3.49)
c Ashok Ambardar, September 1, 2003
3.12 System Response to Periodic Inputs 89
An equivalent way of nding one period of the periodic extension is to wrap around N-sample sections of
x[n] and add them all up. If x[n] is shorter than N, we obtain one period of its periodic extension simply
by padding x[n] with zeros (to increase its length to N).
EXAMPLE 3.27 (Periodic Extension)
(a) The periodic extension of x[n] =

1, 5, 2, 0, 4, 3, 6, 7 with period N = 3 is found by wrapping


around blocks of 3 samples and nding the sum to give

1, 5, 2, 0, 4, 3, 6, 7 = wrap around =

1 5 2
0 4 3
6 7

= sum =

7, 16, 5
In other words, if we add x[n] to its shifted versions x[n + kN] where N = 3 and k = 1, 2, 3, . . .,
we get a periodic signal whose rst period is

7, 16, 5.
The periodic extension of the signal x[n] =
n
u[n] with period N is given by
x
pe
[n] =

k=
x[n +kN] =

k=0

n+kN
=
n

k=0
(
N
)k =

n
1
N
, 0 n N 1
The methods for nding the response of a discrete-time system to periodic inputs rely on the concepts of
periodic extension and wraparound. One approach is to nd the output for one period of the input (using
regular convolution) and nd one period of the periodic output by superposition (using periodic extension).
Another approach is to rst nd one period of the periodic extension of the impulse response, then nd its
regular convolution with one period of the input and nally, wrap around the regular convolution to generate
one period of the periodic output.
EXAMPLE 3.28 (System Response to Periodic Inputs)
(a) Let x[n] =

1, 2, 3 describe one period of a periodic input with period N = 3 to a system whose


impulse response is h[n] =

1, 1. The response y[n] is also periodic with period N = 3. To nd y[n]


for one period, we nd the regular convolution y
1
[n] of h[n] and one period of x[n] to give
y
1
[n] =

1, 1

1, 2, 3 =

1, 3, 1, 3
We then wrap around y
1
[n] past three samples to obtain one period of y[n] as

1, 3, 1, 3 = wrap around =


1 3 1
3

= sum =

2, 3, 1
This is identical to the result obtained in the previous example.
(b) We nd the response y
p
[n] of a moving average lter described by h[n] =

2, 1, 1, 3, 1 to a periodic
signal whose one period is x
p
[n] =

2, 1, 3, with N = 3, using two methods.


c Ashok Ambardar, September 1, 2003
90 Chapter 3 Time-Domain Analysis
1. (Method 1) We nd the regular convolution y[n] = x
p
[n] h[n] to obtain
y[n] =

2, 1, 3

2, 1, 1, 3, 1 =

4, 4, 9, 10, 8, 10, 3
To nd y
p
[n], values past N = 3 are wrapped around and summed to give

4, 4, 9, 10, 8, 10, 3 = wrap around =

4 4 9
10 8 10
3

= sum =

17, 12, 19
2. (Method 2) We rst create the periodic extension of h[n], with N = 3, (by wraparound) to get
h
p
[n] =

5, 2, 1. The regular convolution of h


p
[n] and one period of x[n] gives
y[n] =

2, 1 , 3

5, 2 , 1 =

10, 9, 19, 7, 3
This result is wrapped around past N = 3 to give y
p
[n] =

17, 12, 19, as before.


(c) We nd the response of the system y[n] 0.5y[n 1] = x[n] to x[n] = . . . ,

7, 0, 0, 7, 0, 0, . . ..
The impulse response of the system is h[n] = (0.5)
n
u[n]. The input is periodic with period N = 3
and rst period x
1
=

7, 0, 0. The response due to this one period is just y


1
[n] = 7(0.5)
n
u[n]. The
complete periodic output is the periodic extension of y
1
[n] with period N = 3. Using the result that
the periodic extension of
n
u[n] with period N is given by

n
1
N
, 0 n N 1, we nd one period
of the periodic output as
y
p
[n] =
(0.5)
n
1 (0.5)
3
, 0 n 2 =

8, 4, 2
DRILL PROBLEM 3.29
(a) A lter is described by h[n] =

2, 1, 1, 3, 2. Find one period of its periodic output if the input is


periodic with rst period x
1
[n] =

1, 0, 0, 2.
(b) A lter is described by y[n] 0.5y[n 1] = 7x[n]. Find one period of its periodic output if the input
is periodic with rst period x
1
[n] =

1, 0, 0.
(c) A lter is described by y[n] 0.5y[n 1] = 7x[n]? Find one period of its periodic output if the input
is periodic with rst period x
1
[n] =

1, 0, 1.
Answers: (a)

6, 3, 7, 11 (b)

8, 4, 2 (c)

12, 6, 10
c Ashok Ambardar, September 1, 2003
3.13 Periodic Convolution 91
3.13 Periodic Convolution
The regular convolution of two signals, both of which are periodic, does not exist. For this reason, we resort
to periodic convolution by using averages. If both x
p
[n] and h
p
[n] are periodic with identical period N,
their periodic convolution generates a convolution result y
p
[n] that is also periodic with the same period N.
The periodic convolution or circular convolution y
p
[n] of x
p
[n] and h
p
[n] is denoted y
p
[n] = x
p
[n] (h
p
[n]
and, over one period (n = 0, 1, . . . , N 1), it is dened by
y
p
[n] = x
p
[n] (h
p
[n] = h
p
[n] (x
p
[n] =
N1

k=0
x
p
[k]h
p
[n k] =
N1

k=0
h
p
[k]x
p
[n k] (3.50)
An averaging factor of 1/N is sometimes included with the summation. Periodic convolution can be imple-
mented using wraparound. We nd the linear convolution of one period of x
p
[n] and h
p
[n], which will have
(2N 1) samples. We then extend its length to 2N (by appending a zero), slice it in two halves (of length
N each), line up the second half with the rst, and add the two halves to get the periodic convolution.
REVIEW PANEL 3.20
Periodic Convolution of Periodic Discrete-Time Signals with Identical Period N
1. Find the regular convolution of their one-period segments (this will have length 2N 1).
2. Append a trailing zero. Wrap around the last N samples and add to the rst N samples.
EXAMPLE 3.29 (Periodic Convolution)
(a) Find the periodic convolution of x
p
[n] =

1, 0, 1, 1 and h
p
[n] =

1, 2, 3, 1.
The period is N = 4. First, we nd the linear convolution y[n].
Index n 0 1 2 3 4 5 6
h
p
[n] 1 2 3 1
x
p
[n] 1 0 1 1
1 2 3 1
0 0 0 0
1 2 3 1
1 2 3 1
y[n] 1 2 4 4 5 4 1
Then, we append a zero, wrap around the last four samples, and add.
Index n 0 1 2 3
First half of y[n] 1 2 4 4
Wrapped around half of y[n] 5 4 1 0
Periodic convolution y
p
[n] 6 6 5 4
(b) Find the periodic convolution of x
p
[n] =

1, 2, 3 and h
p
[n] =

1, 0, 2, with period N = 3.
The regular convolution is easily found to be y
R
[n] =

1, 2, 5, 4, 6.
Appending a zero and wrapping around the last three samples gives y
p
[n] =

5, 8, 5.
c Ashok Ambardar, September 1, 2003
92 Chapter 3 Time-Domain Analysis
DRILL PROBLEM 3.30
Find the periodic convolution of two identical signals whose rst period is given by x
1
[n] =

1, 2, 0, 2.
Answer:

9, 4, 8, 4
3.13.1 Periodic Convolution By the Cyclic Method
To nd the periodic convolution, we shift the folded signal x
p
[n] past h
p
[n], one index at a time, and
nd the convolution at each index as the sum of the pointwise product of their samples but only over a
one-period window (0, N 1). Values of x
p
[n] and h
p
[n] outside the range (0, N 1) are generated by
periodic extension. One way to visualize the process is to line up x[k] clockwise around a circle and h[k]
counterclockwise (folded), on a concentric circle positioned to start the convolution, as shown in Figure 3.8.
[0]=(1)(1)+(2)(2)+(0)(3) = 5 y
(folded h)
1
0
2
1
2 3
Rotate outer
sequence
clockwise
[1]=(0)(1)+(1)(2)+(2)(3) = 8 y
(folded h)
2
0
1
2 3
Rotate outer
sequence
clockwise
1
[2]=(2)(1)+(0)(2)+(1)(3) = 5 y
1
2
0
1
2 3
Figure 3.8 The cyclic method of circular (periodic) convolution
Shifting the folded sequence turns it clockwise. At each turn, the convolution equals the sum of the
pairwise products. This approach clearly brings out the cyclic nature of periodic convolution.
3.13.2 Periodic Convolution By the Circulant Matrix
Periodic convolution may also be expressed as a matrix multiplication. We set up an N N matrix whose
columns equal x[n] and its cyclically shifted versions (or whose rows equal successively shifted versions of
the rst period of the folded signal x[n]). This is called the circulant matrix or convolution matrix.
An N N circulant matrix C
x
for x[n] has the general form
C
x
=

x[0] x[N 1] . . . x[2] x[1]


x[1] x[0] . . . x[2]
x[2] x[1] . . . x[3]
.
.
.
.
.
.
.
.
.
x[N 2] . . . x[0] x[N 1]
x[N 1] x[N 2] . . . x[1] x[0]

(3.51)
Note that each diagonal of the circulant matrix has equal values. Such a constant diagonal matrix is also
called a Toeplitz matrix. Its matrix product with an N 1 column matrix h describing h[n] yields the
periodic convolution y = Ch as an N 1 column matrix.
c Ashok Ambardar, September 1, 2003
3.13 Periodic Convolution 93
EXAMPLE 3.30 (Periodic Convolution By the Circulant Matrix)
Consider x[n] =

1, 0, 2 and h[n] =

1, 2, 3, described over one period (N = 3).


(a) The circulant matrix C
x
and periodic convolution y
1
[n] are given by
C
x
=

1 2 0
0 1 2
2 0 1

h =

1
2
3

y
1
[n] =

1 2 0
0 1 2
2 0 1

1
2
3

5
8
5

Comment: Though not required, normalization by N = 3 gives y


p1
[n] =
y
1
[n]
3
=

5
3
,
8
3
,
5
3
.
(b) The periodic convolution y
2
[n] of x[n] and h[n] over a two-period window yields
C
2
=

1 2 0 1 2 0
0 1 2 0 1 2
2 0 1 2 0 1
1 2 0 1 2 0
0 1 2 0 1 2
2 0 1 2 0 1

h
2
=

1
2
3
1
2
3

y
2
[n] =

10
16
10
10
16
10

We see that y
2
[n] has double the length (and values) of y
1
[n], but it is still periodic with N = 3.
Comment: Normalization by a two-period window width (6 samples) gives
y
p2
[n] =
y
2
[n]
6
=

5
3
,
8
3
,
5
3
,
5
3
,
8
3
,
5
3

Note that one period (N = 3) of y


p2
[n] is identical to the normalized result y
p1
[n] of part (a).
3.13.3 Regular Convolution from Periodic Convolution
The linear convolution of x[n] (with length N
x
) and h[n] (with length N
h
) may also be found using the
periodic convolution of two zero-padded signals x
z
[n] and h
z
[n] (each of length N
y
= N
x
+ N
h
1). The
regular convolution of the original, unpadded sequences equals the periodic convolution of the zero-padded
sequences.
REVIEW PANEL 3.21
Regular Convolution from Periodic Convolution by Zero-Padding
x[n] h[n] = Periodic convolution of their zero-padded (each to length N
x
+N
h
1) versions.
EXAMPLE 3.31 (Regular Convolution by the Circulant Matrix)
Let x[n] =

2, 5, 0, 4 and h[n] =

4, 1, 3. Their regular convolution has S = M + N 1 = 6 samples.


Using trailing zeros, we create the padded sequences
x
zp
[n] =

2, 5, 0, 4, 0, 0 h
zp
[n] =

4, 1, 3, 0, 0, 0
c Ashok Ambardar, September 1, 2003
94 Chapter 3 Time-Domain Analysis
The periodic convolution x
zp
[n] (h
zp
[n], using the circulant matrix, equals
C
xzp
=

2 0 0 4 0 5
5 2 0 0 4 0
0 5 2 0 0 4
4 0 5 2 0 0
0 4 0 5 2 0
0 0 4 0 5 2

h
zp
=

4
1
3
0
0
0

y
p
[n] =

8
22
11
31
4
12

This is identical to the regular convolution y[n] = x[n] h[n] obtained previously by several other methods
in previous examples.
DRILL PROBLEM 3.31
Let x[n] =

1, 2, 0, 2, 2 and h[n] =

3, 2.
(a) How many zeros must be appended to x[n] and h[n] in order to generate their regular convolution
from the periodic convolution of the zero-padded sequences.
(b) What is the regular convolution of the zero-padded sequences?
(c) What is the regular convolution of the original sequences?
Answers: (a) 1, 4 (b)

3, 8, 4, 6, 10, 4, 0, 0, 0, 0, 0 (c)

3, 8, 4, 6, 10, 4
3.14 Deconvolution
Given the system impulse response h[n], the response y[n] of the system to an input x[n] is simply the
convolution of x[n] and h[n]. Given x[n] and y[n] instead, how do we nd h[n]? This situation arises very
often in practice and is referred to as deconvolution or system identication.
For discrete-time systems, we have a partial solution to this problem. Since discrete convolution may
be thought of as polynomial multiplication, discrete deconvolution may be regarded as polynomial division.
One approach to discrete deconvolution is to use the idea of long division, a familiar process, illustrated in
the following example.
3.14.1 Deconvolution By Recursion
Deconvolution may also be recast as a recursive algorithm. The convolution
y[n] = x[n] h[n] =
n

k=0
h[k]x[n k] (3.52)
when evaluated at n = 0, provides the seed value h[0] as
y[0] = x[0]h[0] h[0] =
y[0]
x[0]
(3.53)
We now separate the term containing h[n] in the convolution relation
y[n] =
n

k=0
h[k]x[n k] = h[n]x[0] +
n1

k=0
h[k]x[n k] (3.54)
c Ashok Ambardar, September 1, 2003
3.14 Deconvolution 95
and evaluate h[n] for successive values of n > 0 from
h[n] =
1
x[0]

y[n]
n1

k=0
h[k]x[n k]

, n > 0 (3.55)
If all goes well, we need to evaluate h[n] only at M N + 1 points, where M and N are the lengths of y[n]
and x[n], respectively.
Naturally, problems arise if a remainder is involved. This may well happen in the presence of noise,
which could modify the values in the output sequence even slightly. In other words, the approach is quite
susceptible to noise or roundo error and not very practical.
REVIEW PANEL 3.22
Deconvolution May Be Regarded as Polynomial Division or Matrix Inversion
EXAMPLE 3.32 (Deconvolution)
(a) (Deconvolution by Polynomial Division)
Consider x[n] =

2, 5, 0, 4 and y[n] =

8, 22, 11, 31, 4, 12. We regard these as being the


coecients, in descending order, of the polynomials
x(w) = 2w
3
+ 5w
2
+ 0w + 4 y(w) = 8w
5
+ 22w
4
+ 11w
3
+ 31w
2
+ 4w + 12
The polynomial h(w) may be deconvolved out of x(w) and y(w) by performing the division y(w)/x(w):
2w
3
+ 5w
2
+ 0w + 4

4w
2
+w + 3
8w
5
+ 22w
4
+ 11w
3
+ 31w
2
+ 4w + 12
8w
5
+ 20w
4
+ 0w
3
+ 16w
2
2w
4
+ 11w
3
+ 15w
2
+4w + 12
2w
4
+ 5w
3
+ 0w
2
+ 4w
6w
3
+ 15w
2
+ 0w + 12
6w
3
+ 15w
2
+ 0w + 12
0
The coecients of the quotient polynomial describe the sequence h[n] =

4, 1, 3.
(b) (Deconvolution by Recursion)
Let x[n] =

2, 5, 0, 4 and y[n] =

8, 22, 11, 31, 4, 12. We note that x[n] is of length N = 4 and


y[n] is of length M = 4. So, we need only M N + 1 = 6 4 + 1 = 3 recursive evaluations to obtain
h[n]. We compute
h[0] =
y[0]
x[0]
= 4
c Ashok Ambardar, September 1, 2003
96 Chapter 3 Time-Domain Analysis
h[1] =
1
x[0]

y[1]
0

k=0
h[k]x[1 k]

=
y[1] h[0]x[1]
x[0]
= 1
h[2] =
1
x[0]

y[2]
1

k=0
h[k]x[2 k]

=
y[2] h[0]x[2] h[1]x[1]
x[0]
= 3
As before, h[n] = 4, 1, 3.
DRILL PROBLEM 3.32
The input x[n] =

1, 2 to an LTI system produces the output y[n] =

2, 3, 1, 6. Use deconvolution to
nd the impulse response h[n].
Answer:

2, 1, 3
3.15 Discrete Correlation
Correlation is a measure of similarity between two signals and is found using a process similar to convolution.
The discrete cross-correlation (denoted ) of x[n] and h[n] is dened by
r
xh
[n] = x[n] h[n] =

k=
x[k]h[k n] =

k=
x[k +n]h[k] (3.56)
r
hx
[n] = h[n] x[n] =

k=
h[k]x[k n] =

k=
h[k +n]x[k] (3.57)
Some authors prefer to switch the denitions of r
xh
[n] and r
hx
[n].
To nd r
xh
[n], we line up the last element of h[n] with the rst element of x[n] and start shifting h[n]
past x[n], one index at a time. We sum the pointwise product of the overlapping values to generate the
correlation at each index. This is equivalent to performing the convolution of x[n] and the folded signal
h[n]. The starting index of the correlation equals the sum of the starting indices of x[n] and h[n].
Similarly, r
hx
[n] equals the convolution of x[n] and h[n], and its starting index equals the sum of the
starting indices of x[n] and h[n]. However, r
xh
[n] does not equal r
hx
[n]. The two are folded versions of
each other and related by r
xh
[n] = r
hx
[n].
REVIEW PANEL 3.23
Correlation Is the Convolution of One Signal with a Folded Version of the Other
r
xh
[n] = x[n] h[n] = x[n] h[n] r
hx
[n] = h[n] x[n] = h[n] x[n]
Correlation length: N
x
+N
h
1 Correlation sum:

r[n] = (

x[n])(

h[n])
c Ashok Ambardar, September 1, 2003
3.15 Discrete Correlation 97
EXAMPLE 3.33 (Discrete Autocorrelation and Cross-Correlation)
(a) Let x[n] = 2,

5, 0, 4 and h[n] =

3, 1, 4.
To nd r
xh
[n], we compute the convolution of x[n] and h[n] = 4, 1,

3. The starting index of r


xh
[n]
is n = 3. We use this to set up the index array and generate the result (using the sum-by-column
method for convolution) as follows:
n 3 2 1 0 1 2
x[n] 2 5 0 4
h[n] 4 1 3
8 20 0 16
2 5 0 4
6 15 0 12
r
xh
[n] 8 22 11

31 4 12
So, r
xh
[n] = 8, 22, 11,

31, 4, 12
(b) Let x[n] = 2,

5, 0, 4 and h[n] =

3, 1, 4.
To nd r
hx
[n], we compute the convolution of x[n] = 4, 0,

5, 2 and h[n]. The starting index of


r
hx
[n] is n = 2. We use this to set up the index array and generate the result (using the sum-by-
column method for convolution) as follows:
n 2 1 0 1 2 3
x[n] 4 0 5 2
h[n] 3 1 4
12 0 15 6
4 0 5 2
16 0 20 8
r
hx
[n] 12 4

31 11 22 8
So, r
xh
[n] = 12, 4,

31, 11, 22, 8.


Note that r
xh
[n] and r
xh
[n] are folded versions of each other with r
xh
[n] = r
xh
[n].
(c) Let x[n] =

3, 1, 4.
To nd r
xx
[n], we compute the convolution of x[n] and x[n] = 4, 1,

3. The starting index of r


xx
[n]
is n = 2. We use this to set up the index array and generate the result (using the sum-by-column
method for convolution) as follows:
c Ashok Ambardar, September 1, 2003
98 Chapter 3 Time-Domain Analysis
n 2 1 0 1 2
x[n] 3 1 4
x[n] 4 1 3
12 4 16
3 1 4
9 3 12
r
xx
[n] 12 1

26 1 12
So, r
xx
[n] = 12, 1,

26, 1, 12. Note that r


xx
[n] is even symmetric about the origin n = 0.
DRILL PROBLEM 3.33
Let x[n] =

2, 1, 3 and h[n] = 3,

2, 1. Find r
xh
[n], r
hx
[n], r
xx
[n] and r
hh
[n]
Answers: 2,

3, 7, 3, 9 9, 3, 7,

3, 2 6, 5,

14, 5, 6 3, 8,

14, 8, 3
3.15.1 Autocorrelation
The correlation r
xx
[n] of a signal x[n] with itself is called the autocorrelation. It is an even symmetric
function (r
xx
[n] = r
xx
[n]) with a maximum at n = 0 and satises the inequality [r
xx
[n][ r
xx
[0].
Correlation is an eective method of detecting signals buried in noise. Noise is essentially uncorrelated
with the signal. This means that if we correlate a noisy signal with itself, the correlation will be due only to
the signal (if present) and will exhibit a sharp peak at n = 0.
REVIEW PANEL 3.24
The Autocorrelation Is Always Even Symmetric with a Maximum at the Origin
r
xx
[n] = x[n] x[n] = x[n] x[n] r
xx
[n] = r
xx
[n] r
xx
[n] r
xx
[0]
EXAMPLE 3.34 (Discrete Autocorrelation and Cross-Correlation)
(a) Let x[n] = (0.5)
n
u[n] and h[n] = (0.4)
n
u[n]. We compute the cross-correlation r
xh
[n] as follows:
r
xh
[n] =

k=
x[k]h[k n] =

k=
(0.5)
k
(0.4)
kn
u[k]u[k n]
This summation requires evaluation over two ranges of n. If n < 0, the shifted step u[k n] is nonzero
for some k < 0. But since u[k] = 0, k < 0, the lower limit on the summation reduces to k = 0 and we
get
(n < 0) r
xh
[n] =

k=0
(0.5)
k
(0.4)
kn
= (0.4)
n

k=0
(0.2)
k
=
(0.4)
n
1 0.2
= 1.25(0.4)
n
u[n 1]
If n 0, the shifted step u[k n] is zero for k < n, the lower limit on the summation reduces to k = n
and we obtain
(n 0) r
xh
[n] =

k=n
(0.5)
k
(0.4)
kn
c Ashok Ambardar, September 1, 2003
3.15 Discrete Correlation 99
With the change of variable m = k n, we get
(n 0) r
xh
[n] =

m=0
(0.5)
m+n
(0.4)
m
= (0.5)
n

m=0
(0.2)
m
=
(0.5)
n
1 0.2
= 1.25(0.5)
n
u[n]
So, r
xh
[n] = 1.25(0.4)
n
u[n 1] + 1.25(0.5)
n
u[n].
(b) Let x[n] = a
n
u[n], [a[ < 1. To compute r
xx
[n] which is even symmetric, we need compute the result
only for n 0 and create its even extension. Following the previous part, we have.
(n 0) r
xx
[n] =

k=
x[k]x[k n] =

k=n
a
k
a
kn
=

m=0
a
m+n
a
m
= a
n

m=0
a
2m
=
a
n
1 a
2
u[n]
The even extension of this result gives r
xx
[n] =
a
|n|
1 a
2
.
(c) Let x[n] = a
n
u[n], [a[ < 1, and y[n] = rect(n/2N). To nd r
xy
[n], we shift y[k] and sum the products
over dierent ranges. Since y[k n] shifts the pulse to the right over the limits (N +n, N +n), the
correlation r
xy
[n] equals zero until n = N. We then obtain
N n N 1 (partial overlap): r
xy
[n] =

k=
x[k]y[k n] =
N+1

k=0
a
k
=
1 a
N+n+1
1 a
n N (total overlap): r
xy
[n] =
N+1

k=N+1
a
k
=
2N

m=0
a
mN+1
= a
N+1
1 a
2N+1
1 a
DRILL PROBLEM 3.34
(a) Let x[n] = (0.5)
n
u[n] and h[n] = u[n]. Find their correlation.
(b) Let x[n] = (0.5)
n
u[n] and h[n] = (0.5)
n
u[n]. Find their correlation.
Answers: (a) 2u[n 1] + 2(0.5)
n
u[n] (b) (n + 1)(0.5)
n
]u[n]
3.15.2 Periodic Discrete Correlation
For periodic sequences with identical period N, the periodic discrete correlation is dened as
r
xhp
[n] = x[n] ( (h[n] =
N1

k=0
x[k]h[k n] r
hxp
[n] = h[n] ( (x[n] =
N1

k=0
h[k]x[k n] (3.58)
As with discrete periodic convolution, an averaging factor of 1/N is sometimes included in the summation. We
can nd one period of the periodic correlation r
xhp
[n] by rst computing the linear correlation of one period
segments and then wrapping around the result. We nd that r
hxp
[n] is a circularly folded version of r
xhp
[n]
with r
hxp
[n] = r
xhp
[n]. We also nd that the periodic autocorrelation r
xxp
[n] or r
hhp
[n] always displays
circular even symmetry. This means that the periodic extension of r
xxp
[n] or r
hhp
[n] is even symmetric about
the origin n = 0. The periodic autocorrelation function also attains a maximum at n = 0.
c Ashok Ambardar, September 1, 2003
100 Chapter 3 Time-Domain Analysis
EXAMPLE 3.35 (Discrete Periodic Autocorrelation and Cross-Correlation)
Consider two periodic signals whose rst period is given by x
1
[n] =

2, 5, 0, 4 and h
1
[n] =

3, 1, 1, 2.
(a) To nd the periodic cross-correlation r
xhp
[n], we rst evaluate the linear cross-correlation
r
xh
[n] = x
1
[n] h
1
[n] = 4, 8, 3,

19, 11, 4, 12
Wraparound gives the periodic cross-correlation as r
xhp
= 15, 12, 9,

19.
We invoke periodicity and describe the result in terms of its rst period as r
xhp
=

19, 15, 12, 9.


(b) To nd the periodic cross-correlation r
hxp
[n], we rst evaluate the linear cross-correlation
r
hx
[n] = r
xh
[n] = 12, 4, 11,

19, 3, 8, 4
Wraparound gives the periodic cross-correlation as r
hxp
= 9, 12, 15,

19.
We rewrite the result in terms of its rst period as r
hxp
=

19, 9, 12, 15.


Note that r
hxp
[n] is a circularly folded version of r
xhp
[n] with r
hxp
[n] = r
xhp
[n]
(c) To nd the periodic autocorrelation r
xxp
[n], we rst evaluate the linear autocorrelation
r
xx
[n] = x
1
[n] x
1
[n] = 8, 20, 10,

45, 10, 20, 8


Wraparound gives the periodic autocorrelation as r
xxp
= 18, 40, 18,

45.
We rewrite the result in terms of its rst period as r
hxp
=

45, 18, 40, 18.


This displays circular even symmetry (its periodic extension is even symmetric about the origin n = 0).
(d) To nd the periodic autocorrelation r
hhp
[n], we rst evaluate the linear autocorrelation
r
hh
[n] = h
1
[n] h
1
[n] = 6, 1, 0,

15, 0, 1, 6
Wraparound gives the periodic autocorrelation as r
xxp
= 6, 2, 6,

15.
We rewrite the result in terms of its rst period as r
hxp
=

15, 6, 2, 6.
This displays circular even symmetry (its periodic extension is even symmetric about n = 0).
c Ashok Ambardar, September 1, 2003
3.15 Discrete Correlation 101
DRILL PROBLEM 3.35
Let x
1
[n] =

2, 1, 0, 3 and h
1
[n] = 1,

0, 3, 2 describe one-period segments of two periodic signals.


Find the rst period of r
xxp
[n], r
hhp
[n], r
xhp
[n] and r
hxp
[n]
Answers:

14, 4, 6, 4

14, 8, 6, 8

0, 8, 12, 4

0, 4, 12, 8
3.15.3 Matched Filtering and Target Ranging
Correlation nds widespread use in applications such as target ranging and estimation of periodic signals
buried in noise. For target ranging, a sampled interrogating signal x[n] is transmitted toward the target.
The signal reected from the target is s[n] = x[n D] + p[n], a delayed (by D) and attenuated (by )
version of x[n], contaminated by noise p[n]. The reected signal s[n] is correlated with the interrogating
signal x[n]. If the noise is uncorrelated with the signal x[n], its correlation with x[n] is essentially zero. The
correlation of x[n] and its delayed version x[n D] yield a result that attains a peak at n = D. It is thus
quite easy to identify the index D from the correlation peak (rather than from the reected signal directly),
even in the presence of noise. The (round-trip) delay index D may then be related to the target distance d
by d = 0.5vD/S, where v is the propagation velocity and S is the rate at which the signal is sampled. The
device that performs the correlation of the received signal s[n] and x[n] is called a correlation receiver.
The correlation of s[n] with x[n] is equivalent to the convolution of s[n] with x[n], a folded version of the
interrogating signal. This means that the impulse response of the correlation receiver is h[n] = x[n] and is
matched to the transmitted signal. For this reason, such a receiver is also called a matched lter.
Figure 3.9 illustrates the concept of matched ltering. The transmitted signal is a rectangular pulse.
The impulse response of the matched lter is its folded version and is noncausal. In an ideal situation, the
received signal is simply a delayed version of the transmitted signal and the output of the matched lter
yields a peak whose location gives the delay. This can also be identied from the ideal received signal itself.
In practice, the received signal is contaminated by noise and it is dicult to identify where the delayed pulse
is located. The output of the matched lter, however, provides a clear indication of the delay even for low
signal-to-noise ratios.
Correlation also nds application in pattern recognition. For example, if we need to establish whether
an unknown pattern belongs to one of several known patterns or templates, it can be compared (correlated)
with each template in turn. A match occurs if the autocorrelation of the template matches (or resembles)
the cross-correlation of the template and the unknown pattern.
Identifying Periodic Signals in Noise
Correlation methods may also be used to identify the presence of a periodic signal x[n] buried in the noisy
signal s[n] = x[n] +p[n], where p[n] is the noise component (presumably uncorrelated with the signal), and to
extract the signal itself. The idea is to rst identify the period of the signal from the periodic autocorrelation
of the noisy signal. If the noisy signal contains a periodic component, the autocorrelation will show peaks
at multiples of the period N. Once the period N is established, we can recover x[n] as the periodic cross-
correlation of an impulse train i[n] = (n kN), with period N, and the noisy signal s[n]. Since i[n] is
uncorrelated with the noise, the periodic cross-correlation of i[n] and s[n] yields (an amplitude scaled version
of) the periodic signal x[n].
Figure 3.10 illustrates the concept of identifying a periodic signal hidden in noise. A periodic sawtooth
signal is contaminated by noise to yield a noisy signal. The peaks in the periodic autocorrelation of the
noisy signal allows us to identify the period as N = 20. The periodic cross-correlation of the noisy signal
with the impulse train [n20k] extracts the signal from noise. Note that longer lengths of the noisy signal
c Ashok Ambardar, September 1, 2003
102 Chapter 3 Time-Domain Analysis
20 0 20 50 100 150 200
0
0.5
1
(a) Transmitted signal
20 0 50 100 150 200
0
0.5
1
(b) Matched filter
0 50 100 150 200
0
0.5
1
(c) Ideal received signal
0 50 100 150 200
0
5
10
15
20
(d) Ideal matched filter output
0 50 100 150 200
2
1
0
1
(e) Noisy received signal
0 50 100 150 200
5
0
5
10
15
(f) Actual matched filter output. SNR = 40 dB
Figure 3.9 The concept of matched ltering and target ranging
(compared to the period N) will improve the match between the recovered and buried periodic signal.
REVIEW PANEL 3.25
How to Identify a Periodic Signal x[n] Buried in a Noisy Signal s[n]
Find the period N of x[n] from the periodic autocorrelation of the noisy signal s[n].
Find the signal x[n] as the periodic cross-correlation of s[n] and an impulse train with period N.
3.16 Discrete Convolution and Transform Methods
Discrete-time convolution provides a connection between the time-domain and frequency-domain methods
of system analysis for discrete-time signals. It forms the basis for every transform method described in this
text, and its role in linking the time domain and the transformed domain is intimately tied to the concept
of discrete eigensignals and eigenvalues. The everlasting exponential z
n
is an eigensignal of discrete-time
linear systems. In this complex exponential z
n
, the quantity z has the general form z = re
j2F
. If the input
to an LTI system is z
n
the output has the same form and is given by Cz
n
where C is a (possibly complex)
constant. Similarly, the everlasting discrete-time harmonic z = e
j2nF
(a special case with r = 1) is also an
c Ashok Ambardar, September 1, 2003
3.16 Discrete Convolution and Transform Methods 103
0 20 40 60
1
0
1
2
3
(a) A periodic signal
0 20 40 60
1
0
1
2
3
(b) Periodic signal with added noise
0 20 40 60
300
400
500
600
700
(c) Correlation of noisy signal gives period
0 20 40 60
1
0
1
2
3
(d) Extracted signal
Figure 3.10 Extraction of a periodic signal buried in noise
eigensignal of discrete-time systems.
3.16.1 The z-Transform
For an input x[n] = r
n
e
j2nF
=

re
j2F

n
= z
n
, where z is complex, with magnitude [z[ = r, the response
may be written as
y[n] = x[n] h[n] =

k=
z
nk
h[k] = z
n

k=
h[k]z
k
= x[n]H(z) (3.59)
The response equals the input (eigensignal) modied by the system function H(z), where
H(z) =

k=
h[k]z
k
(two-sided z-transform) (3.60)
The complex quantity H(z) describes the z-transform of h[n] and is not, in general, periodic in z. Denoting
the z-transform of x[n] and y[n] by X(z) and Y (z), we write
Y (z) =

k=
y[k]z
k
=

k=
x[k]H(z)z
k
= H(z)X(z) (3.61)
Convolution in the time domain thus corresponds to multiplication in the z-domain.
c Ashok Ambardar, September 1, 2003
104 Chapter 3 Time-Domain Analysis
3.16.2 The Discrete-Time Fourier Transform
For the harmonic input x[n] = e
j2nF
, the response y[n] equals
y[n] =

k=
e
j2(nk)F
h[k] = e
j2nF

k=
h[k]e
j2kF
= x[n]H(F) (3.62)
This is just the input modied by the system function H(F), where
H(F) =

k=
h[k]e
j2kF
(3.63)
The quantity H(F) describes the discrete-time Fourier transform(DTFT) or discrete-time frequency
response or spectrumof h[n]. Any signal x[n] may similarly be described by its DTFT X(F). The response
y[n] = x[n]H[F] may then be transformed to its DTFT Y [n] to give
Y (F) =

k=
y[k]e
j2Fk
=

k=
x[k]H(F)e
j2Fk
= H(F)X(F) (3.64)
Once again, convolution in the time domain corresponds to multiplication in the frequency domain. Note
that we obtain the DTFT of h[n] from its z-transform H(z) by letting z = e
j2F
or [z[ = 1 to give
H(F) = H(z)[
z=exp(j2F)
= H(z)[
|z|=1
(3.65)
The DTFT is thus the z-transform evaluated on the unit circle [z[ = 1. The system function H(F) is also
periodic in F with a period of unity because e
j2kF
= e
j2k(F+1)
. This periodicity is a direct consequence
of the discrete nature of h[n].
c Ashok Ambardar, September 1, 2003
Chapter 3 Problems 105
CHAPTER 3 PROBLEMS
3.1 (Operators) Which of the following describe linear operators?
(a) O = 4 (b) O = 4 + 3 (c) O =
{ }
3.2 (System Classification) In each of the systems below, x[n] is the input and y[n] is the output. Check each system for linearity, shift invariance, memory, and causality.
(a) y[n] − y[n−1] = x[n]                  (b) y[n] + y[n+1] = nx[n]
(c) y[n] − y[n+1] = x[n+2]                (d) y[n+2] − y[n+1] = x[n]
(e) y[n+1] − x[n]y[n] = nx[n+2]           (f) y[n] + y[n−3] = x^2[n] + x[n+6]
(g) y[n] − 2^n y[n] = x[n]                (h) y[n] = x[n] + x[n−1] + x[n−2]

3.3 (System Classification) Classify the following systems in terms of their linearity, time invariance, memory, causality, and stability.
(a) y[n] = 3^n x[n]                       (b) y[n] = e^{jnπ} x[n]
(c) y[n] = cos(0.5nπ)x[n]                 (d) y[n] = [1 + cos(0.5nπ)]x[n]
(e) y[n] = e^{x[n]}                       (f) y[n] = x[n] + cos[0.5π(n+1)]

3.4 (System Classification) Classify the following systems in terms of their linearity, time invariance, memory, causality, and stability.
(a) y[n] = x[n/3] (zero interpolation)
(b) y[n] = cos(nπ)x[n] (modulation)
(c) y[n] = [1 + cos(nπ)]x[n] (modulation)
(d) y[n] = cos(nπx[n]) (frequency modulation)
(e) y[n] = cos(nπ + x[n]) (phase modulation)
(f) y[n] = x[n] − x[n−1] (differencing operation)
(g) y[n] = 0.5x[n] + 0.5x[n−1] (averaging operation)
(h) y[n] = (1/N) Σ_{k=0}^{N−1} x[n−k] (moving average)
(i) y[n] − αy[n−1] = x[n], 0 < α < 1 (exponential averaging)
(j) y[n] = 0.4(y[n−1] + 2) + x[n]

3.5 (Classification) Classify each system in terms of its linearity, time invariance, memory, causality, and stability.
(a) The folding system y[n] = x[−n].
(b) The decimating system y[n] = x[2n].
(c) The zero-interpolating system y[n] = x[n/2].
(d) The sign-inversion system y[n] = sgn{x[n]}.
(e) The rectifying system y[n] = |x[n]|.

3.6 (Classification) Classify each system in terms of its linearity, time invariance, causality, and stability.
(a) y[n] = round{x[n]}                    (b) y[n] = median{x[n+1], x[n], x[n−1]}
(c) y[n] = x[n] sgn(n)                    (d) y[n] = x[n] sgn{x[n]}
3.7 (Realization) Find the difference equation for each system realization shown in Figure P3.7.
[Figure P3.7: Filter realizations for Problem 3.7 (System 1: a first-order realization; System 2: a second-order realization). Diagrams not reproduced.]
[Hints and Suggestions: Compare with the generic first-order and second-order realizations to get the difference equations.]
3.8 (Response by Recursion) Use recursion to find the response y[n] of the following systems for the first few values of n and discern the general form for y[n].
(a) y[n] − ay[n−1] = δ[n], y[−1] = 0
(b) y[n] − ay[n−1] = u[n], y[−1] = 1
(c) y[n] − ay[n−1] = u[n], y[−1] = 1
(d) y[n] − ay[n−1] = nu[n], y[−1] = 0
[Hints and Suggestions: For parts (b)–(d), you may need to use a table of summations to simplify the results for the general form.]

3.9 (Response by Recursion) Let y[n] + 4y[n−1] + 3y[n−2] = u[n−2], with y[−1] = 0 and y[−2] = 1. Use recursion to compute y[n] up to n = 4. Can you discern a general form for y[n]?

3.10 (Forced Response) Find the forced response of the following systems.
(a) y[n] − 0.4y[n−1] = 3u[n]              (b) y[n] − 0.4y[n−1] = (0.5)^n
(c) y[n] + 0.4y[n−1] = (0.5)^n            (d) y[n] − 0.5y[n−1] = cos(nπ/2)
[Hints and Suggestions: For part (a), 3u[n] = 3, n ≥ 0, and implies that the forced response (or its shifted version) is constant. So, choose y_F[n] = C = y_F[n−1]. For part (d), pick y_F[n] = A cos(0.5nπ) + B sin(0.5nπ), expand terms like cos[0.5π(n−1)] using trigonometric identities, and compare the coefficients of cos(0.5nπ) and sin(0.5nπ) to generate two equations to solve for A and B.]

3.11 (Zero-State Response) Find the zero-state response of the following systems.
(a) y[n] − 0.5y[n−1] = 2u[n]              (b) y[n] − 0.4y[n−1] = (0.5)^n u[n]
(c) y[n] − 0.4y[n−1] = (0.4)^n u[n]       (d) y[n] − 0.5y[n−1] = cos(nπ/2)
[Hints and Suggestions: Here, zero-state implies y[−1] = 0. Part (c) requires y_F[n] = Cn(0.4)^n because the root of the characteristic equation is 0.4.]

3.12 (Zero-State Response) Consider the system y[n] − 0.5y[n−1] = x[n]. Find its zero-state response to the following inputs.
(a) x[n] = u[n]          (b) x[n] = (0.5)^n u[n]       (c) x[n] = cos(0.5nπ)u[n]
(d) x[n] = (−1)^n u[n]   (e) x[n] = (j)^n u[n]         (f) x[n] = (j)^n u[n] + (−j)^n u[n]
[Hints and Suggestions: For part (e), pick the forced response as y_F[n] = C(j)^n. This will give a complex response because the input is complex. For part (f), x[n] simplifies to a sinusoid by using j = e^{jπ/2} and Euler's relation.]
3.13 (Zero-State Response) Find the zero-state response of the following systems.
(a) y[n] − 1.1y[n−1] + 0.3y[n−2] = 2u[n]       (b) y[n] + 0.7y[n−1] + 0.1y[n−2] = (0.5)^n
(c) y[n] − 0.9y[n−1] + 0.2y[n−2] = (0.5)^n     (d) y[n] − 0.25y[n−2] = cos(nπ/2)
[Hints and Suggestions: Zero-state implies y[−1] = y[−2] = 0. For part (b), use y_F[n] = C(0.5)^n, but for part (c), pick y_F[n] = Cn(0.5)^n because one root of the characteristic equation is 0.5.]

3.14 (System Response) Let y[n] − 0.5y[n−1] = x[n], with y[−1] = 1. Find the response of this system due to the following inputs for n ≥ 0.
(a) x[n] = 2u[n]           (b) x[n] = (0.25)^n u[n]      (c) x[n] = n(0.25)^n u[n]
(d) x[n] = (0.5)^n u[n]    (e) x[n] = n(0.5)^n u[n]      (f) x[n] = (0.5)^n cos(0.5nπ)u[n]
[Hints and Suggestions: For part (c), pick y_F[n] = (C + Dn)(0.25)^n and compare coefficients of like powers of n to solve for C and D. For part (d), pick y_F[n] = Cn(0.5)^n because the root of the characteristic equation is 0.5. Part (e) requires y_F[n] = n(C + Dn)(0.5)^n for the same reason.]

3.15 (System Response) For the system realization shown in Figure P3.15, find the response to the following inputs and initial conditions.
(a) x[n] = u[n], y[−1] = 0              (b) x[n] = u[n], y[−1] = 4
(c) x[n] = (0.5)^n u[n], y[−1] = 0      (d) x[n] = (0.5)^n u[n], y[−1] = 6
(e) x[n] = (−0.5)^n u[n], y[−1] = 0     (f) x[n] = (−0.5)^n u[n], y[−1] = 2
[Figure P3.15: System realization for Problem 3.15 (a first-order feedback structure with a delay z^{−1} and a gain of 0.5); diagram not reproduced.]
[Hints and Suggestions: For part (e), pick the forced response as y_F[n] = Cn(−0.5)^n.]

3.16 (System Response) Find the response of the following systems.
(a) y[n] − 0.4y[n−1] = 2(0.5)^{n−1} u[n−1], y[−1] = 0
(b) y[n] − 0.4y[n−1] = (0.4)^n u[n] + 2(0.5)^{n−1} u[n−1], y[−1] = 2.5
(c) y[n] − 0.4y[n−1] = n(0.5)^n u[n] + 2(0.5)^{n−1} u[n−1], y[−1] = 2.5
[Hints and Suggestions: Start with y[n] − 0.4y[n−1] = 2(0.5)^n, y[−1] = 0, and find its zero-state response. Then use superposition and time invariance as required. For the input (0.4)^n of part (b), assume y_F[n] = Cn(0.4)^n. For the input n(0.5)^n of part (c), assume y_F[n] = (A + Bn)(0.5)^n.]

3.17 (System Response) Find the impulse response of the following filters.
(a) y[n] = x[n] − x[n−1] (differencing operation)
(b) y[n] = 0.5x[n] + 0.5x[n−1] (averaging operation)
(c) y[n] = (1/N) Σ_{k=0}^{N−1} x[n−k], N = 3 (moving average)
(d) y[n] = [2/(N(N+1))] Σ_{k=0}^{N−1} (N−k) x[n−k], N = 3 (weighted moving average)
(e) y[n] − αy[n−1] = (1 − α)x[n], N = 3, α = (N−1)/(N+1) (exponential averaging)
3.18 (System Response) It is known that the response of the system y[n] + αy[n−1] = x[n], α ≠ 0, is given by y[n] = [5 + 3(0.5)^n]u[n].
(a) Identify the natural response and forced response.
(b) Identify the values of α and y[−1].
(c) Identify the zero-input response and zero-state response.
(d) Identify the input x[n].

3.19 (System Response) It is known that the response of the system y[n] + 0.5y[n−1] = x[n] is described by y[n] = [5(0.5)^n + 3(−0.5)^n]u[n].
(a) Identify the zero-input response and zero-state response.
(b) What is the zero-input response of the system y[n] + 0.5y[n−1] = x[n] if y[−1] = 10?
(c) What is the response of the relaxed system y[n] + 0.5y[n−1] = x[n−2]?
(d) What is the response of the relaxed system y[n] + 0.5y[n−1] = x[n−1] + 2x[n]?

3.20 (System Response) It is known that the response of the system y[n] + αy[n−1] = x[n] is described by y[n] = (5 + 2n)(0.5)^n u[n].
(a) Identify the zero-input response and zero-state response.
(b) What is the zero-input response of the system y[n] + αy[n−1] = x[n] if y[−1] = 10?
(c) What is the response of the relaxed system y[n] + αy[n−1] = x[n−1]?
(d) What is the response of the relaxed system y[n] + αy[n−1] = 2x[n−1] + x[n]?
(e) What is the complete response of the system y[n] + αy[n−1] = x[n] + 2x[n−1] if y[−1] = 4?

3.21 (System Response) Find the response of the following systems.
(a) y[n] + 0.1y[n−1] − 0.3y[n−2] = 2u[n], y[−1] = 0, y[−2] = 0
(b) y[n] − 0.9y[n−1] + 0.2y[n−2] = (0.5)^n, y[−1] = 1, y[−2] = 4
(c) y[n] + 0.7y[n−1] + 0.1y[n−2] = (0.5)^n, y[−1] = 0, y[−2] = 3
(d) y[n] − 0.25y[n−2] = (0.4)^n, y[−1] = 0, y[−2] = 3
(e) y[n] − 0.25y[n−2] = (0.5)^n, y[−1] = 0, y[−2] = 0
[Hints and Suggestions: For parts (b) and (e), pick y_F[n] = Cn(0.5)^n because one root of the characteristic equation is 0.5.]

3.22 (System Response) Sketch a realization for each system, assuming zero initial conditions. Then evaluate the complete response from the information given. Check your answer by computing the first few values by recursion.
(a) y[n] − 0.4y[n−1] = x[n], x[n] = (0.5)^n u[n], y[−1] = 0
(b) y[n] − 0.4y[n−1] = 2x[n] + x[n−1], x[n] = (0.5)^n u[n], y[−1] = 0
(c) y[n] − 0.4y[n−1] = 2x[n] + x[n−1], x[n] = (0.5)^n u[n], y[−1] = 5
(d) y[n] + 0.5y[n−1] = x[n] − x[n−1], x[n] = (0.5)^n u[n], y[−1] = 2
(e) y[n] + 0.5y[n−1] = x[n] − x[n−1], x[n] = (0.5)^n u[n], y[−1] = 0
[Hints and Suggestions: For parts (b)–(c), use the results of part (a) plus linearity (superposition) and time invariance.]

3.23 (System Response) For each system, evaluate the natural, forced, and total response. Assume that y[−1] = 0, y[−2] = 1. Check your answer for the total response by computing its first few values by recursion.
(a) y[n] + 4y[n−1] + 3y[n−2] = u[n]                       (b) {1 − 0.5z^{−1}}y[n] = (0.5)^n cos(0.5nπ)u[n]
(c) y[n] + 4y[n−1] + 8y[n−2] = cos(nπ)u[n]                (d) {(1 + 2z^{−1})^2}y[n] = n(2)^n u[n]
(e) {1 + (3/4)z^{−1} + (1/8)z^{−2}}y[n] = (1/3)^n u[n]    (f) {1 + 0.5z^{−1} + 0.25z^{−2}}y[n] = cos(0.5nπ)u[n]
[Hints and Suggestions: For part (b), pick y_F[n] = (0.5)^n [A cos(0.5nπ) + B sin(0.5nπ)], expand terms like cos[0.5π(n−1)] using trigonometric identities, and compare the coefficients of cos(0.5nπ) and sin(0.5nπ) to generate two equations to solve for A and B. For part (d), pick y_F[n] = (C + Dn)(2)^n and compare like powers of n to solve for C and D.]
3.24 (System Response) For each system, evaluate the zero-state, zero-input, and total response. Assume that y[−1] = 0, y[−2] = 1.
(a) y[n] + 4y[n−1] + 4y[n−2] = 2^n u[n]       (b) {z^2 + 4z + 4}y[n] = 2^n u[n]
[Hints and Suggestions: In part (b), y[n+2] + 4y[n+1] + 4y[n] = (2)^n u[n]. By time invariance, y[n] + 4y[n−1] + 4y[n−2] = (2)^{n−2} u[n−2], and we shift the zero-state response of part (a) by 2 units (n → n−2) and add to the zero-input response to get the result.]

3.25 (System Response) For each system, set up a difference equation and compute the zero-state, zero-input, and total response, assuming x[n] = u[n] and y[−1] = y[−2] = 1.
(a) {1 − z^{−1} − 2z^{−2}}y[n] = x[n]                   (b) {z^2 − z − 2}y[n] = x[n]
(c) {1 − (3/4)z^{−1} + (1/8)z^{−2}}y[n] = x[n]          (d) {1 − (3/4)z^{−1} + (1/8)z^{−2}}y[n] = {1 + z^{−1}}x[n]
(e) {1 − 0.25z^{−2}}y[n] = x[n]                         (f) {z^2 − 0.25}y[n] = {2z^2 + 1}x[n]
[Hints and Suggestions: For part (b), use the result of part (a) and time invariance to get the answer as y_zs[n−2] + y_zi[n]. For part (d), use the result of part (c) to get the answer as y_zi[n] + y_zs[n] + y_zs[n−1]. The answer for part (f) may be similarly obtained from part (e).]

3.26 (Impulse Response by Recursion) Find the impulse response h[n] by recursion up to n = 4 for each of the following systems.
(a) y[n] − y[n−1] = 2x[n]                     (b) y[n] − 3y[n−1] + 6y[n−2] = x[n−1]
(c) y[n] − 2y[n−3] = x[n−1]                   (d) y[n] − y[n−1] + 6y[n−2] = nx[n−1] + 2x[n−3]
[Hints and Suggestions: For the impulse response, x[n] = 1 for n = 0 and x[n] = 0 for n ≠ 0.]

3.27 (Analytical Form for Impulse Response) Classify each filter as recursive or FIR (nonrecursive), and causal or noncausal, and find an expression for its impulse response h[n].
(a) y[n] = x[n] + x[n−1] + x[n−2]             (b) y[n] = x[n+1] + x[n] + x[n−1]
(c) y[n] + 2y[n−1] = x[n]                     (d) y[n] + 2y[n−1] = x[n−1]
(e) y[n] + 2y[n−1] = 2x[n] + 6x[n−1]          (f) y[n] + 2y[n−1] = x[n+1] + 4x[n] + 6x[n−1]
(g) {1 + 4z^{−1} + 3z^{−2}}y[n] = {z^{−2}}x[n]    (h) {z^2 + 4z + 4}y[n] = {z + 3}x[n]
(i) {z^2 + 4z + 8}y[n] = x[n]                 (j) y[n] + 4y[n−1] + 4y[n−2] = x[n] − x[n+2]
[Hints and Suggestions: To find the impulse response for the recursive filters, assume y[0] = 1 and (if required) y[−1] = y[−2] = ··· = 0. If the right-hand side of the recursive filter equation is anything but x[n], start with the single input x[n] and then use superposition and time invariance to get the result for the required input. The results for (d)–(f) can be found from the results of (c) in this way.]

3.28 (Stability) Investigate the causality and stability of the following right-sided systems.
(a) y[n] = x[n−1] + x[n] + x[n+1]             (b) y[n] = x[n] + x[n−1] + x[n−2]
(c) y[n] − 2y[n−1] = x[n]                     (d) y[n] − 0.2y[n−1] = x[n] − 2x[n+2]
(e) y[n] + y[n−1] + 0.5y[n−2] = x[n]          (f) y[n] − y[n−1] + y[n−2] = x[n] − x[n+1]
(g) y[n] − 2y[n−1] + y[n−2] = x[n] − x[n−3]   (h) y[n] − 3y[n−1] + 2y[n−2] = 2x[n+3]
[Hints and Suggestions: Remember that FIR filters are always stable, and for right-sided systems, every root of the characteristic equation must have a magnitude (absolute value) less than 1.]
3.29 (System Interconnections) Two systems are said to be in cascade if the output of the first system acts as the input to the second. Find the response of the following cascaded systems if the input is a unit step and the systems are described as follows. In which instances does the response differ when the order of cascading is reversed? Can you use this result to justify that the order in which the systems are cascaded does not matter in finding the overall response if both systems are LTI?
(a) System 1: y[n] = x[n] − x[n−1]         System 2: y[n] = 0.5y[n−1] + x[n]
(b) System 1: y[n] = 0.5y[n−1] + x[n]      System 2: y[n] = x[n] − x[n−1]
(c) System 1: y[n] = x^2[n]                System 2: y[n] = 0.5y[n−1] + x[n]
(d) System 1: y[n] = 0.5y[n−1] + x[n]      System 2: y[n] = x^2[n]

3.30 (Systems in Cascade and Parallel) Consider the realization of Figure P3.30.
[Figure P3.30: System realization for Problem 3.30 (a structure with three delays z^{−1} and gains α and β); diagram not reproduced.]
(a) Find its impulse response if α = β. Is the overall system FIR or IIR?
(b) Find its difference equation and impulse response if α ≠ β. Is the overall system FIR or IIR?
(c) Find its difference equation and impulse response if α = β = 1. What is the function of the overall system?

3.31 (Difference Equations from Impulse Response) Find the difference equations describing the following systems.
(a) h[n] = δ[n] + 2δ[n−1]                  (b) h[n] = {2, ⇓3, 1}
(c) h[n] = (0.3)^n u[n]                    (d) h[n] = (0.5)^n u[n] − (−0.5)^n u[n]
[Hints and Suggestions: For part (c), the left-hand side of the difference equation is y[n] − 0.3y[n−1]. So, h[n] − 0.3h[n−1], simplified to get impulses, leads to the right-hand side. For part (d), start with the left-hand side as y[n] − 0.25y[n−2].]

3.32 (Difference Equations from Impulse Response) A system is described by the impulse response h[n] = (−1)^n u[n]. Find the difference equation of this system. Then find the difference equation of the inverse system. Does the inverse system describe an FIR filter or an IIR filter? What function does it perform?

3.33 (Difference Equations) For the filter realization shown in Figure P3.33, find the difference equation relating y[n] and x[n] if the impulse response of the filter is given by
(a) h[n] = δ[n] − δ[n−1]                   (b) h[n] = 0.5δ[n] + 0.5δ[n−1]
[Figure P3.33: Filter realization for Problem 3.33 (a first-order structure containing an unlabeled filter block, a delay z^{−1}, and a summing junction); diagram not reproduced.]
3.34 (Difference Equations from Differential Equations) This problem assumes some familiarity with analog theory. Consider an analog system described by y″(t) + 3y′(t) + 2y(t) = 2u(t).
(a) Confirm that this describes a stable analog system.
(b) Convert this to a difference equation using the backward Euler algorithm and check the stability of the resulting digital filter.
(c) Convert this to a difference equation using the forward Euler algorithm and check the stability of the resulting digital filter.
(d) Which algorithm is better in terms of preserving stability? Can the results be generalized to any arbitrary analog system?

3.35 (Inverse Systems) Are the following systems invertible? If not, explain why; if invertible, find the inverse system.
(a) y[n] = x[n] − x[n−1] (differencing operation)
(b) y[n] = (1/3)(x[n] + x[n−1] + x[n−2]) (moving average operation)
(c) y[n] = 0.5x[n] + x[n−1] + 0.5x[n−2] (weighted moving average operation)
(d) y[n] − αy[n−1] = (1 − α)x[n], 0 < α < 1 (exponential averaging operation)
(e) y[n] = cos(nπ)x[n] (modulation)
(f) y[n] = cos(x[n])
(g) y[n] = e^{x[n]}
[Hints and Suggestions: The inverse system is found by switching input and output and rearranging. Only one of these systems is not invertible.]

3.36 (An Echo System and Its Inverse) An echo system is described by y[n] = x[n] + 0.5x[n−N]. Assume that the echo arrives after 1 ms and the sampling rate is 2 kHz.
(a) What is the value of N? Sketch a realization of this echo system.
(b) What is the impulse response and step response of this echo system?
(c) Find the difference equation of the inverse system. Then, sketch its realization and find its impulse response and step response.

3.37 (Reverb) A reverb filter is described by y[n] = x[n] + 0.25y[n−N]. Assume that the echoes arrive every millisecond and the sampling rate is 2 kHz.
(a) What is the value of N? Sketch a realization of this reverb filter.
(b) What is the impulse response and step response of this reverb filter?
(c) Find the difference equation of the inverse system. Then, sketch its realization and find its impulse response and step response.

3.38 (Periodic Signal Generators) Find the difference equation of a filter whose impulse response is a periodic sequence with first period x[n] = {⇓1, 2, 3, 4, 6, 7, 8}. Sketch a realization for this filter.
3.39 (Recursive and IIR Filters) The terms recursive and IIR are not always synonymous. A recursive filter could in fact have a finite impulse response. Use recursion to find the impulse response h[n] for each of the following recursive filters. Which filters (if any) describe IIR filters?
(a) y[n] − y[n−1] = x[n] − x[n−2]
(b) y[n] − y[n−1] = x[n] − x[n−1] − 2x[n−2] + 2x[n−3]

3.40 (Recursive Forms of FIR Filters) An FIR filter may always be recast in recursive form by the simple expedient of including identical factors on the left-hand and right-hand side of its difference equation in operational form. For example, the filter y[n] = {1 − z^{−1}}x[n] is FIR, but the identical filter {1 + z^{−1}}y[n] = {1 + z^{−1}}{1 − z^{−1}}x[n] has the difference equation y[n] + y[n−1] = x[n] − x[n−2] and can be implemented recursively. Find two different recursive difference equations (with different orders) for each of the following filters.
(a) y[n] = x[n] − x[n−2]                   (b) h[n] = {1, ⇓2, 1}

3.41 (Nonrecursive Forms of IIR Filters) An FIR filter may always be exactly represented in recursive form, but we can only approximately represent an IIR filter by an FIR filter by truncating its impulse response to N terms. The larger the truncation index N, the better is the approximation. Consider the IIR filter described by y[n] − 0.8y[n−1] = x[n]. Find its impulse response h[n] and truncate it to three terms to obtain h_3[n], the impulse response of the approximate FIR equivalent. Would you expect the greatest mismatch in the response of the two filters to identical inputs to occur for lower or higher values of n? Compare the step response of the two filters up to n = 6 to justify your expectations.

3.42 (Nonlinear Systems) One way to solve nonlinear difference equations is by recursion. Consider the nonlinear difference equation y[n]y[n−1] − 0.5y^2[n−1] = 0.5Au[n].
(a) What makes this system nonlinear?
(b) Using y[−1] = 2, recursively obtain y[0], y[1], and y[2].
(c) Use A = 2, A = 4, and A = 9 in the results of part (b) to confirm that this system finds the square root of A.
(d) Repeat parts (b) and (c) with y[−1] = 1 to check whether the choice of the initial condition affects system operation.

3.43 (LTI Concepts and Stability) Argue that neither of the following describes an LTI system. Then, explain how you might check for their stability and determine which of the systems are stable.
(a) y[n] + 2y[n−1] = x[n] + x^2[n]          (b) y[n] − 0.5y[n−1] = nx[n] + x^2[n]

3.44 (Response of Causal and Noncausal Systems) A difference equation may describe a causal or noncausal system depending on how the initial conditions are prescribed. Consider a first-order system governed by y[n] + αy[n−1] = x[n].
(a) With y[n] = 0, n < 0, this describes a causal system. Assume y[−1] = 0 and find the first few terms y[0], y[1], . . . of the impulse response and step response, using recursion, and establish the general form for y[n].
(b) With y[n] = 0, n > 0, we have a noncausal system. Assume y[0] = 0 and rewrite the difference equation as y[n−1] = (−y[n] + x[n])/α to find the first few terms y[0], y[−1], y[−2], . . . of the impulse response and step response, using recursion, and establish the general form for y[n].
3.45 (Folding) For each signal x[n], sketch g[k] = x[3−k] vs. k and h[k] = x[2+k] vs. k.
(a) x[n] = {⇓1, 2, 3, 4}                   (b) x[n] = {3, 3, ⇓3, 2, 2, 2}
[Hints and Suggestions: Note that g[k] and h[k] will be plotted against the index k.]

3.46 (Closed-Form Convolution) Find the convolution y[n] = x[n] ⋆ h[n] for the following:
(a) x[n] = u[n], h[n] = u[n]
(b) x[n] = (0.8)^n u[n], h[n] = (0.4)^n u[n]
(c) x[n] = (0.5)^n u[n], h[n] = (0.5)^n {u[n+3] − u[n−4]}
(d) x[n] = α^n u[n], h[n] = α^n u[n]
(e) x[n] = α^n u[n], h[n] = β^n u[n]
(f) x[n] = α^n u[n], h[n] = rect(n/2N)
[Hints and Suggestions: The summations will be over the index k, and functions of n should be pulled out before evaluating them using tables. For (a), (b), (d), and (e), summations will be from k = 0 to k = n. For parts (c) and (f), use superposition. For (a) and (d), the sum of (1) from k = 0 to k = n equals n + 1.]

3.47 (Convolution with Impulses) Find the convolution y[n] = x[n] ⋆ h[n] of the following signals.
(a) x[n] = δ[n−1], h[n] = δ[n−1]
(b) x[n] = cos(0.25nπ), h[n] = δ[n] − δ[n−1]
(c) x[n] = cos(0.25nπ), h[n] = δ[n] − 2δ[n−1] + δ[n−2]
(d) x[n] = (−1)^n, h[n] = δ[n] + δ[n−1]
[Hints and Suggestions: Start with δ[n] ⋆ g[n] = g[n] and use linearity and time invariance.]

3.48 (Convolution) Find the convolution y[n] = x[n] ⋆ h[n] for each pair of signals.
(a) x[n] = (0.4)^n u[n], h[n] = (0.5)^n u[n]
(b) x[n] = α^n u[n], h[n] = β^n u[n]
(c) x[n] = α^n u[−n], h[n] = β^n u[−n]
(d) x[n] = α^{−n} u[−n], h[n] = β^{−n} u[−n]
[Hints and Suggestions: For parts (a) and (b), write the exponentials in the form r^n. For parts (c) and (d), find the convolution of x[−n] and h[−n] and fold the result to get y[n].]

3.49 (Convolution of Finite Sequences) Find the convolution y[n] = x[n] ⋆ h[n] for each of the following signal pairs. Use a marker to indicate the origin n = 0.
(a) x[n] = {⇓1, 2, 0, 1}                   h[n] = {⇓2, 2, 3}
(b) x[n] = {⇓0, 2, 4, 6}                   h[n] = {⇓6, 4, 2, 0}
(c) x[n] = {3, 2, ⇓1, 0, 1}                h[n] = {⇓4, 3, 2}
(d) x[n] = {3, 2, ⇓1, 1, 2}                h[n] = {4, ⇓2, 3, 2}
(e) x[n] = {3, 0, 2, 0, ⇓1, 0, 1, 0, 2}    h[n] = {4, 0, ⇓2, 0, 3, 0, 2}
(f) x[n] = {⇓0, 0, 0, 3, 1, 2}             h[n] = {4, ⇓2, 3, 2}
[Hints and Suggestions: Since the starting index of the convolution equals the sum of the starting indices of the sequences convolved, ignore markers during convolution and assign the marker as the last step.]
3.50 (Convolution of Symmetric Sequences) The convolution of sequences that are symmetric about their midpoint is also endowed with symmetry (about its midpoint). Compute y[n] = x[n] ⋆ h[n] for each pair of signals and use the results to establish the type of symmetry (about the midpoint) in the convolution if the convolved signals are both even symmetric (about their midpoint), both odd symmetric (about their midpoint), or one of each type.
(a) x[n] = {2, 1, 2}       h[n] = {1, 0, 1}
(b) x[n] = {2, 1, 2}       h[n] = {1, 1}
(c) x[n] = {2, 2}          h[n] = {1, 1}
(d) x[n] = {2, 0, −2}      h[n] = {1, 0, −1}
(e) x[n] = {2, 0, −2}      h[n] = {1, −1}
(f) x[n] = {2, −2}         h[n] = {1, −1}
(g) x[n] = {2, 1, 2}       h[n] = {1, 0, −1}
(h) x[n] = {2, 1, 2}       h[n] = {1, −1}
(i) x[n] = {2, 2}          h[n] = {1, −1}

3.51 (Properties) Let x[n] = h[n] = {⇓3, 4, 2, 1}. Compute the following:
(a) y[n] = x[n] ⋆ h[n]           (b) g[n] = x[−n] ⋆ h[−n]
(c) p[n] = x[−n] ⋆ h[n]          (d) f[n] = x[n] ⋆ h[−n]
(e) r[n] = x[n−1] ⋆ h[n+1]       (f) s[n] = x[n−1] ⋆ h[n+4]
[Hints and Suggestions: The results for (b) and (d) can be found by folding the results for (a) and (c), respectively. The result for (f) can be found by shifting the result for (e) (time invariance).]

3.52 (Properties) Let x[n] = h[n] = {⇓2, 6, 0, 4}. Compute the following:
(a) y[n] = x[2n] ⋆ h[2n]
(b) Find g[n] = x[n/2] ⋆ h[n/2], assuming zero interpolation.
(c) Find p[n] = x[n/2] ⋆ h[n], assuming step interpolation where necessary.
(d) Find r[n] = x[n] ⋆ h[n/2], assuming linear interpolation where necessary.

3.53 (Application) Consider a 2-point averaging filter whose present output equals the average of the present and previous input.
(a) Set up a difference equation for this system.
(b) What is the impulse response of this system?
(c) What is the response of this system to the sequence {⇓1, 2, 3, 4, 5}?
(d) Use convolution to show that the system performs the required averaging operation.

3.54 (Step Response) Given the impulse response h[n], find the step response s[n] of each system.
(a) h[n] = (0.5)^n u[n]                        (b) h[n] = (0.5)^n cos(nπ)u[n]
(c) h[n] = (0.5)^n cos(nπ + 0.5π)u[n]          (d) h[n] = (0.5)^n cos(nπ + 0.25π)u[n]
(e) h[n] = n(0.5)^n u[n]                       (f) h[n] = n(0.5)^n cos(nπ)u[n]
[Hints and Suggestions: Note that s[n] = x[n] ⋆ h[n], where x[n] = u[n]. In parts (b) and (f), note that cos(nπ) = (−1)^n. In part (d), expand cos(nπ + 0.25π) and use the results of part (b).]

3.55 (Convolution and System Response) Consider the system y[n] − 0.5y[n−1] = x[n].
(a) What is the impulse response h[n] of this system?
(b) Find its output if x[n] = (0.5)^n u[n] by convolution.
(c) Find its output if x[n] = (0.5)^n u[n] and y[−1] = 0 by solving the difference equation.
(d) Find its output if x[n] = (0.5)^n u[n] and y[−1] = 2 by solving the difference equation.
(e) Are any of the outputs identical? Should they be? Explain.
[Hints and Suggestions: For part (e), remember that convolution finds the zero-state response.]
3.56 (Convolution and Interpolation) Let x[n] = {⇓2, 4, 6, 8}.
(a) Find the convolution y[n] = x[n] ⋆ x[n].
(b) Find the convolution y_1[n] = x[2n] ⋆ x[2n]. Is y_1[n] related to y[n]? Should it be? Explain.
(c) Find the convolution y_2[n] = x[n/2] ⋆ x[n/2], assuming zero interpolation. Is y_2[n] related to y[n]? Should it be? Explain.
(d) Find the convolution y_3[n] = x[n/2] ⋆ x[n/2], assuming step interpolation. Is y_3[n] related to y[n]? Should it be? Explain.
(e) Find the convolution y_4[n] = x[n/2] ⋆ x[n/2], assuming linear interpolation. Is y_4[n] related to y[n]? Should it be? Explain.

3.57 (Linear Interpolation) Consider a system that performs linear interpolation by a factor of N. One way to construct such a system, as shown, is to perform up-sampling by N (zero interpolation between signal samples) and pass the up-sampled signal through a filter with impulse response h[n] whose output y[n] is the linearly interpolated signal.
x[n] --> [up-sample (zero interpolate) by N] --> [filter] --> y[n]
(a) What should h[n] be for linear interpolation by a factor of N?
(b) Let x[n] = 4 tri(0.25n). Find y_1[n] = x[n/2] by linear interpolation.
(c) Find the system output y[n] for N = 2. Does y[n] equal y_1[n]?

3.58 (Causality) Argue that the impulse response h[n] of a causal system must be zero for n < 0. Based on this result, if the input to a causal system starts at n = n_0, when does the response start?

3.59 (Stability) Investigate the causality and stability of the following systems.
(a) h[n] = (−2)^n u[n−1]              (b) y[n] = 2x[n+1] + 3x[n] − x[n−1]
(c) h[n] = (0.5)^n u[n]               (d) h[n] = {3, 2, ⇓1, 1, 2}
(e) h[n] = (0.5)^{−n} u[−n]           (f) h[n] = (0.5)^{|n|}
[Hints and Suggestions: Only one of these is unstable. For part (e), note that summing |h[n]| is equivalent to summing its folded version.]
3.60 (Numerical Convolution) The convolution y(t) of two analog signals x(t) and h(t) may be approximated by sampling each signal at intervals t_s to obtain the signals x[n] and h[n], and folding and shifting the samples of one function past the other in steps of t_s (to line up the samples). At each instant kt_s, the convolution equals the sum of the product samples multiplied by t_s. This is equivalent to using the rectangular rule to approximate the area. If x[n] and h[n] are convolved using the sum-by-column method, the columns make up the product, and their sum multiplied by t_s approximates y(t) at t = kt_s.
(a) Let x(t) = rect(t/2) and h(t) = rect(t/2). Find y(t) = x(t) ⋆ h(t) and compute y(t) at intervals of t_s = 0.5 s.
(b) Sample x(t) and h(t) at intervals of t_s = 0.5 s to obtain x[n] and h[n]. Compute y[n] = x[n] ⋆ h[n] and the convolution estimate y_R(nt_s) = t_s y[n]. Do the values of y_R(nt_s) match the exact result y(t) at t = nt_s? If not, what are the likely sources of error?
(c) Argue that the trapezoidal rule for approximating the convolution is equivalent to subtracting half the sum of the two end samples of each column from the discrete convolution result and then multiplying by t_s. Use this rule to obtain the convolution estimate y_T(nt_s). Do the values of y_T(nt_s) match the exact result y(t) at t = nt_s? If not, what are the likely sources of error?
(d) Obtain estimates based on the rectangular rule and trapezoidal rule for the convolution y(t) of x(t) = 2 tri(t) and h(t) = rect(t/2) by sampling the signals at intervals of t_s = 0.5 s. Which rule would you expect to yield a better approximation, and why?
3.61 (Convolution) Let x[n] = rect(n/2) and h[n] = rect(n/4).
(a) Find f[n] = x[n] ⋆ x[n] and g[n] = h[n] ⋆ h[n].
(b) Express these results as f[n] = A tri(n/M) and g[n] = B tri(n/K) by selecting appropriate values for the constants A, M, B, and K.
(c) Generalize the above results to show that rect(n/2N) ⋆ rect(n/2N) = (2N + 1) tri[n/(2N + 1)].

3.62 (Impulse Response of Difference Algorithms) Two systems to compute the forward difference and backward difference are described by
Forward difference: y_F[n] = x[n+1] − x[n]        Backward difference: y_B[n] = x[n] − x[n−1]
(a) What is the impulse response of each system?
(b) Which of these systems is stable? Which of these systems is causal?
(c) Find the impulse response of their parallel connection. Is the parallel system stable? Is it causal?
(d) What is the impulse response of their cascade? Is the cascaded system stable? Is it causal?

3.63 (System Response) Find the response of the following filters to the unit step x[n] = u[n], and to the alternating unit step x[n] = (−1)^n u[n], using convolution concepts.
(a) h[n] = δ[n] − δ[n−1] (differencing operation)
(b) h[n] = {⇓0.5, 0.5} (2-point average)
(c) h[n] = (1/N) Σ_{k=0}^{N−1} δ[n−k], N = 3 (moving average)
(d) h[n] = [2/(N(N+1))] Σ_{k=0}^{N−1} (N−k) δ[n−k], N = 3 (weighted moving average)
(e) y[n] − [(N−1)/(N+1)] y[n−1] = [2/(N+1)] x[n], N = 3 (exponential average)

3.64 (Convolution and Interpolation) Consider the following system with x[n] = {⇓0, 3, 9, 12, 15, 18}.
x[n] --> [zero interpolate by N] --> [filter] --> y[n]
(a) Find the response y[n] if N = 2 and the filter impulse response is h[n] = {⇓1, 1}. Show that, except for end effects, the output describes a step interpolation between the samples of x[n].
(b) Find the response y[n] if N = 3 and the filter impulse response is h[n] = {⇓1, 1, 1}. Does the output describe a step interpolation between the samples of x[n]?
(c) Pick N and h[n] if the system is to perform step interpolation by 4.
3.65 (Convolution and Interpolation) Consider the following system with x[n] = {⇓0, 3, 9, 12, 15, 18}.
x[n] --> [zero interpolate by N] --> [filter] --> y[n]
(a) Find the response y[n] if N = 2 and the filter impulse response is h[n] = tri(n/2). Show that, except for end effects, the output describes a linear interpolation between the samples of x[n].
(b) Find the response y[n] if N = 3 and the filter impulse response is h[n] = tri(n/3). Does the output describe a linear interpolation between the samples of x[n]?
(c) Pick N and h[n] if the system is to perform linear interpolation by 4.
3.66 (Interconnected Systems) Consider two systems described by
h_1[n] = δ[n] + αδ[n−1]        h_2[n] = (0.5)^n u[n]
Find the response to the input x[n] = (0.5)^n u[n] if
(a) The two systems are connected in parallel with α = 0.5.
(b) The two systems are connected in parallel with α = −0.5.
(c) The two systems are connected in cascade with α = 0.5.
(d) The two systems are connected in cascade with α = −0.5.

3.67 (Systems in Cascade and Parallel) Consider the realization of Figure P3.67.
[Figure P3.67: System realization for Problem 3.67 (a structure with three delays z^{−1} and gains α and β); diagram not reproduced.]
(a) Find its impulse response if α = β. Is the overall system FIR or IIR?
(b) Find its impulse response if α ≠ β. Is the overall system FIR or IIR?
(c) Find its impulse response if α = β = 1. What does the overall system represent?

3.68 (Cascading) The impulse response of two cascaded systems equals the convolution of their impulse responses. Does the step response s_C[n] of two cascaded systems equal s_1[n] ⋆ s_2[n], the convolution of their step responses? If not, how is s_C[n] related to s_1[n] and s_2[n]?

3.69 (Cascading) System 1 is a squaring circuit, and system 2 is an exponential averager described by h[n] = (0.5)^n u[n]. Find the output of each cascaded combination. Will their output be identical? Should it be? Explain.
(a) 2(0.5)^n u[n] --> [system 1] --> [system 2] --> y[n]
(b) 2(0.5)^n u[n] --> [system 2] --> [system 1] --> y[n]
3.70 (Cascading) System 1 is an IIR filter with the difference equation y[n] = 0.5y[n−1] + x[n], and system 2 is a filter with impulse response h[n] = δ[n] − δ[n−1]. Find the output of each cascaded combination. Will their output be identical? Should it be? Explain.
(a) 2(0.5)^n u[n] --> [system 1] --> [system 2] --> y[n]
(b) 2(0.5)^n u[n] --> [system 2] --> [system 1] --> y[n]

3.71 (Cascading) System 1 is an IIR filter with the difference equation y[n] = 0.5y[n−1] + x[n], and system 2 is a filter with impulse response h[n] = δ[n] − (0.5)^n u[n].
(a) Find the impulse response h_P[n] of their parallel connection.
(b) Find the impulse response h_12[n] of the cascade of system 1 and system 2.
(c) Find the impulse response h_21[n] of the cascade of system 2 and system 1.
(d) Are h_12[n] and h_21[n] identical? Should they be? Explain.
(e) Find the impulse response h_I[n] of a system whose parallel connection with h_12[n] yields h_P[n].

3.72 (Cascading) System 1 is a lowpass filter described by y[n] = 0.5y[n−1] + x[n], and system 2 is described by h[n] = δ[n] − 0.5δ[n−1].
(a) What is the output of the cascaded system to the input x[n] = 2(0.5)^n u[n]?
(b) What is the output of the cascaded system to the input x[n] = δ[n]?
(c) How are the two systems related?

3.73 (Convolution in Practice) Often, the convolution of a long sequence x[n] and a short sequence h[n] is performed by breaking the long signal into shorter pieces, finding the convolution of each short piece with h[n], and gluing the results together. Let x[n] = {1, 1, 2, 3, 5, 4, 3, 1} and h[n] = {4, 3, 2, 1}.
(a) Split x[n] into two equal sequences x_1[n] = {1, 1, 2, 3} and x_2[n] = {5, 4, 3, 1}.
(b) Find the convolution y_1[n] = h[n] ⋆ x_1[n].
(c) Find the convolution y_2[n] = h[n] ⋆ x_2[n].
(d) Find the convolution y[n] = h[n] ⋆ x[n].
(e) How can you find y[n] from y_1[n] and y_2[n]?
[Hints and Suggestions: For part (e), use superposition and add the shifted version of y_2[n] to y_1[n] to get y[n]. This forms the basis for the overlap-add method of convolution.]
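To see the overlap-add idea at work, here is a minimal MATLAB sketch (illustrative only; the sequences are those of this problem as printed, and the block length of 4 follows part (a)):

x = [1 1 2 3 5 4 3 1];  h = [4 3 2 1];
x1 = x(1:4);  x2 = x(5:8);
y1 = conv(x1, h);                    % block 1 result (length 7)
y2 = conv(x2, h);                    % block 2 result (length 7)
y  = zeros(1, length(x) + length(h) - 1);
y(1:7)  = y1;                        % block 1 placed at n = 0
y(5:11) = y(5:11) + y2;              % block 2 shifted by 4 samples and added
isequal(y, conv(x, h))               % matches the direct convolution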
3.74 (Periodic Convolution) Find the regular convolution y[n] = x[n] ⋆ h[n] of one period of each pair of periodic signals. Then, use wraparound to compute the periodic convolution y_p[n] = x[n] ⊛ h[n]. In each case, specify the minimum number of padding zeros we must use if we wish to find the regular convolution from the periodic convolution of the zero-padded signals.
(a) x[n] = {⇓1, 2, 0, 1}        h[n] = {⇓2, 2, 3, 0}
(b) x[n] = {⇓0, 2, 4, 6}        h[n] = {⇓6, 4, 2, 0}
(c) x[n] = {3, 2, ⇓1, 0, 1}     h[n] = {⇓4, 3, 2, 0, 0}
(d) x[n] = {3, 2, 1, ⇓1, 2}     h[n] = {4, 2, 3, ⇓2, 0}
[Hints and Suggestions: First assign the marker for the regular convolution. After wraparound, this also corresponds to the marker for the periodic convolution.]
3.75 (Periodic Convolution) Find the periodic convolution y_p[n] = x[n] ⊛ h[n] for each pair of signals using the circulant matrix for x[n].
(a) x[n] = {⇓1, 2, 0, 1}        h[n] = {⇓2, 2, 3, 0}
(b) x[n] = {⇓0, 2, 4, 6}        h[n] = {⇓6, 4, 2, 0}

3.76 (Periodic Convolution) Consider a system whose impulse response is h[n] = (0.5)^n u[n]. Show that one period of its periodic extension with period N is given by
h_pe[n] = (0.5)^n / [1 − (0.5)^N],   0 ≤ n ≤ N−1
Use this result to find the response of this system to the following periodic inputs.
(a) x[n] = cos(nπ)              (b) x[n] = {⇓1, 1, 0, 0}, with N = 4
(c) x[n] = cos(0.5nπ)           (d) x[n] = (0.5)^n, 0 ≤ n ≤ 3, with N = 4
[Hints and Suggestions: In each case, compute N samples of h_pe[n] and then get the periodic convolution. For example, the period of x[n] in part (a) is N = 2.]

3.77 (Correlation) For each pair of signals, compute the autocorrelation r_xx[n], the autocorrelation r_hh[n], the cross-correlation r_xh[n], and the cross-correlation r_hx[n]. For each result, indicate the location of the origin n = 0 by a marker.
(a) x[n] = {⇓1, 2, 0, 1}        h[n] = {⇓2, 2, 3}
(b) x[n] = {⇓0, 2, 4, 6}        h[n] = {⇓6, 4, 2}
(c) x[n] = {3, 2, ⇓1, 2}        h[n] = {⇓4, 3, 2}
(d) x[n] = {3, 2, ⇓1, 1, 2}     h[n] = {4, ⇓2, 3, 2}
[Hints and Suggestions: Use convolution to get the correlation results. For example, r_xh[n] = x[n] ⋆ h[−n], and the marker for the result is based on x[n] and h[−n] (the sequences convolved).]

3.78 (Correlation) Let x[n] = rect[(n−4)/2] and h[n] = rect[n/4].
(a) Find the autocorrelation r_xx[n].
(b) Find the autocorrelation r_hh[n].
(c) Find the cross-correlation r_xh[n].
(d) Find the cross-correlation r_hx[n].
(e) How are the results of parts (c) and (d) related?

3.79 (Correlation) Find the correlation r_xh[n] of the following signals.
(a) x[n] = α^n u[n], h[n] = α^n u[n]
(b) x[n] = nα^n u[n], h[n] = α^n u[n]
(c) x[n] = rect(n/2N), h[n] = rect(n/2N)
[Hints and Suggestions: In parts (a) and (b), each correlation will cover two ranges. For n ≥ 0, the signals overlap over n ≤ k ≤ ∞, and for n < 0, the overlap is for 0 ≤ k ≤ ∞. For part (c), x[n] and h[n] are identical and even symmetric, and their correlation equals their convolution.]

3.80 (Periodic Correlation) For each pair of periodic signals described for one period, compute the periodic autocorrelations r_pxx[n] and r_phh[n], and the periodic cross-correlations r_pxh[n] and r_phx[n]. For each result, indicate the location of the origin n = 0 by a marker.
(a) x[n] = {⇓1, 2, 0, 1}        h[n] = {⇓2, 2, 3, 0}
(b) x[n] = {⇓0, 2, 4, 6}        h[n] = {⇓6, 4, 2, 0}
(c) x[n] = {3, 2, ⇓1, 2}        h[n] = {0, ⇓4, 3, 2}
(d) x[n] = {3, 2, ⇓1, 1, 2}     h[n] = {4, ⇓2, 3, 2, 0}
[Hints and Suggestions: First get the regular correlation (by regular convolution) and then use wraparound. For example, r_xh[n] = x[n] ⋆ h[−n], and the marker for the result is based on x[n] and h[−n] (the sequences convolved). Then, use wraparound to get r_pxh[n] (the marker may get wrapped around in some cases).]
3.81 (Mean and Variance from Autocorrelation) The mean value m_x of a random signal x[n] (with nonzero mean value) may be computed from its autocorrelation function r_xx[n] as m_x^2 = lim_{|n|→∞} r_xx[n]. The variance of x[n] is then given by σ_x^2 = r_xx(0) − m_x^2. Find the mean, variance, and average power of a random signal whose autocorrelation function is
r_xx[n] = 10 (1 + 2n^2)/(2 + 5n^2)

COMPUTATION AND DESIGN

3.82 (Numerical Integration Algorithms) Numerical integration algorithms approximate the area y[n] from y[n−1] or y[n−2] (one or more time steps away). Consider the following integration algorithms.
(a) y[n] = y[n−1] + t_s x[n] (rectangular rule)
(b) y[n] = y[n−1] + (t_s/2)(x[n] + x[n−1]) (trapezoidal rule)
(c) y[n] = y[n−1] + (t_s/12)(5x[n] + 8x[n−1] − x[n−2]) (Adams-Moulton rule)
(d) y[n] = y[n−2] + (t_s/3)(x[n] + 4x[n−1] + x[n−2]) (Simpson's rule)
(e) y[n] = y[n−3] + (3t_s/8)(x[n] + 3x[n−1] + 3x[n−2] + x[n−3]) (Simpson's three-eighths rule)
Use each of the rules to approximate the area of x(t) = sinc(t), 0 ≤ t ≤ 3, with t_s = 0.1 s and t_s = 0.3 s, and compare with the expected result of 0.53309323761827. How does the choice of the time step t_s affect the results? Which algorithm yields the most accurate results?

3.83 (System Response) Use the Matlab routine filter to obtain and plot the response of the filter described by y[n] = 0.25(x[n] + x[n−1] + x[n−2] + x[n−3]) to the following inputs and comment on your results.
(a) x[n] = 1, 0 ≤ n ≤ 60
(b) x[n] = 0.1n, 0 ≤ n ≤ 60
(c) x[n] = sin(0.1nπ), 0 ≤ n ≤ 60
(d) x[n] = 0.1n + sin(0.5nπ), 0 ≤ n ≤ 60
(e) x[n] = Σ_k δ[n − 5k], 0 ≤ n ≤ 60
(f) x[n] = Σ_k δ[n − 4k], 0 ≤ n ≤ 60
3.84 (System Response) Use the Matlab routine filter to obtain and plot the response of the filter described by y[n] − y[n−4] = 0.25(x[n] + x[n−1] + x[n−2] + x[n−3]) to the following inputs and comment on your results.
(a) x[n] = 1, 0 ≤ n ≤ 60
(b) x[n] = 0.1n, 0 ≤ n ≤ 60
(c) x[n] = sin(0.1nπ), 0 ≤ n ≤ 60
(d) x[n] = 0.1n + sin(0.5nπ), 0 ≤ n ≤ 60
(e) x[n] = Σ_k δ[n − 5k], 0 ≤ n ≤ 60
(f) x[n] = Σ_k δ[n − 4k], 0 ≤ n ≤ 60

3.85 (System Response) Use Matlab to obtain and plot the response of the following systems over the range 0 ≤ n ≤ 199.
(a) y[n] = x[n/3], x[n] = (0.9)^n u[n] (assume zero interpolation)
(b) y[n] = cos(0.2nπ)x[n], x[n] = cos(0.04nπ) (modulation)
(c) y[n] = [1 + cos(0.2nπ)]x[n], x[n] = cos(0.04nπ) (modulation)

3.86 (System Response) Use Matlab to obtain and plot the response of the following filters, using direct commands (where possible) and also using the routine filter, and compare your results. Assume that the input is given by x[n] = 0.1n + sin(0.1nπ), 0 ≤ n ≤ 60. Comment on your results.
(a) y[n] = (1/N) Σ_{k=0}^{N−1} x[n−k], N = 4 (moving average)
(b) y[n] = [2/(N(N+1))] Σ_{k=0}^{N−1} (N−k)x[n−k], N = 4 (weighted moving average)
(c) y[n] − αy[n−1] = (1 − α)x[n], N = 4, α = (N−1)/(N+1) (exponential average)

3.87 (System Response) Use Matlab to obtain and plot the response of the following filters, using direct commands and using the routine filter, and compare your results. Use an input that consists of the sum of the signal x[n] = 0.1n + sin(0.1nπ), 0 ≤ n ≤ 60, and uniformly distributed random noise with a mean of 0. Comment on your results.
(a) y[n] = (1/N) Σ_{k=0}^{N−1} x[n−k], N = 4 (moving average)
(b) y[n] = [2/(N(N+1))] Σ_{k=0}^{N−1} (N−k)x[n−k], N = 4 (weighted moving average)
(c) y[n] − αy[n−1] = (1 − α)x[n], N = 4, α = (N−1)/(N+1) (exponential averaging)

3.88 (System Response) Use the Matlab routine filter to obtain and plot the response of the following FIR filters. Assume that x[n] = sin(nπ/8), 0 ≤ n ≤ 60. Comment on your results. From the results, can you describe the function of these filters?
(a) y[n] = x[n] − x[n−1] (first difference)
(b) y[n] = x[n] − 2x[n−1] + x[n−2] (second difference)
(c) y[n] = (1/3)(x[n] + x[n−1] + x[n−2]) (moving average)
(d) y[n] = 0.5x[n] + x[n−1] + 0.5x[n−2] (weighted average)

3.89 (System Response in Symbolic Form) Determine the response y[n] of the following filters and plot over 0 ≤ n ≤ 30.
(a) The step response of y[n] − 0.5y[n−1] = x[n]
(b) The impulse response of y[n] − 0.5y[n−1] = x[n]
(c) The zero-state response of y[n] − 0.5y[n−1] = (0.5)^n u[n]
(d) The complete response of y[n] − 0.5y[n−1] = (0.5)^n u[n], y[−1] = 4
(e) The complete response of y[n] + y[n−1] + 0.5y[n−2] = (0.5)^n u[n], y[−1] = 4, y[−2] = 3

3.90 (Inverse Systems and Echo Cancellation) A signal x(t) is passed through the echo-generating system y(t) = x(t) + 0.9x(t − τ) + 0.8x(t − 2τ), with τ = 93.75 ms. The resulting echo signal y(t) is sampled at S = 8192 Hz to obtain the sampled signal y[n].
(a) The difference equation of a digital filter that generates the output y[n] from x[n] may be written as y[n] = x[n] + 0.9x[n−N] + 0.8x[n−2N]. What is the value of the index N?
(b) What is the difference equation of an echo-canceling filter (inverse filter) that could be used to recover the input signal x[n]?
(c) The echo signal is supplied on the author's website as echosig.mat. Load this signal into Matlab (using the command load echosig). Listen to this signal using the Matlab command sound. Can you hear the echoes? Can you make out what is being said?
(d) Filter the echo signal using your inverse filter and listen to the filtered signal. Have you removed the echoes? Can you make out what is being said? Do you agree with what is being said?

3.91 (Nonrecursive Forms of IIR Filters) An FIR filter may always be exactly represented in recursive form, but we can only approximately represent an IIR filter by an FIR filter by truncating its impulse response to N terms. The larger the truncation index N, the better is the approximation. Consider the IIR filter described by y[n] − 0.8y[n−1] = x[n]. Find its impulse response h[n] and truncate it to 20 terms to obtain h_A[n], the impulse response of the approximate FIR equivalent. Would you expect the greatest mismatch in the response of the two filters to identical inputs to occur for lower or higher values of n?
(a) Use the Matlab routine filter to find and compare the step response of each filter up to n = 15. Are there any differences? Should there be? Repeat by extending the response to n = 30. Are there any differences? For how many terms does the response of the two systems stay identical, and why?
(b) Use the Matlab routine filter to find and compare the response to x[n] = 1, 0 ≤ n ≤ 10, for each filter up to n = 15. Are there any differences? Should there be? Repeat by extending the response to n = 30. Are there any differences? For how many terms does the response of the two systems stay identical, and why?

3.92 (Convolution of Symmetric Sequences) The convolution of sequences that are symmetric about their midpoint is also endowed with symmetry (about its midpoint). Use the Matlab command conv to find the convolution of the following sequences and establish the type of symmetry (about the midpoint) in the convolution.
(a) x[n] = sin(0.2nπ), −10 ≤ n ≤ 10        h[n] = sin(0.2nπ), −10 ≤ n ≤ 10
(b) x[n] = sin(0.2nπ), −10 ≤ n ≤ 10        h[n] = cos(0.2nπ), −10 ≤ n ≤ 10
(c) x[n] = cos(0.2nπ), −10 ≤ n ≤ 10        h[n] = cos(0.2nπ), −10 ≤ n ≤ 10
(d) x[n] = sinc(0.2n), −10 ≤ n ≤ 10        h[n] = sinc(0.2n), −10 ≤ n ≤ 10
3.93 (Extracting Periodic Signals Buried in Noise) Extraction of periodic signals buried in noise requires autocorrelation (to identify the period) and cross-correlation (to recover the signal itself).
(a) Generate the signal x[n] = sin(0.1nπ), 0 ≤ n ≤ 499. Add some uniform random noise (with a noise amplitude of 2 and a mean of 0) to obtain the noisy signal s[n]. Plot each signal. Can you identify any periodicity from the plot of x[n]? If so, what is the period N? Can you identify any periodicity from the plot of s[n]?
(b) Obtain the periodic autocorrelation r_px[n] of x[n] and plot. Can you identify any periodicity from the plot of r_px[n]? If so, what is the period N? Is it the same as the period of x[n]?
(c) Use the value of N found above (or identify N from x[n] if not) to generate the 500-sample impulse train i[n] = Σ δ[n − kN], 0 ≤ n ≤ 499. Find the periodic cross-correlation y[n] of s[n] and i[n]. Choose a normalizing factor that makes the peak value of y[n] unity. How is the normalizing factor related to the signal length and the period N?
(d) Plot y[n] and x[n] on the same plot. Is y[n] a close match to x[n]? Explain how you might improve the results.
Chapter 4
z-TRANSFORM ANALYSIS
4.0 Scope and Objectives
This chapter deals with the z-transform as a method of system analysis in a transformed domain. Even
though its genesis was outlined in the previous chapter, we develop the z-transform as an independent
transformation method in order to keep the discussion self-contained. We concentrate on the operational
properties of the z-transform and its applications in systems analysis. Connections with other transform
methods and system analysis methods are explored in later chapters.
4.1 The Two-Sided z-Transform
The two-sided z-transform X(z) of a discrete signal x[n] is defined as

X(z) = Σ_{k=−∞}^{∞} x[k] z^{−k}    (two-sided z-transform)    (4.1)

The relation between x[n] and X(z) is denoted symbolically by

x[n] ⇔ X(z)    (4.2)

Here, x[n] and X(z) form a transform pair, and the double arrow implies a one-to-one correspondence between the two.

4.1.1 What the z-Transform Reveals
The complex quantity z generalizes the concept of digital frequency F or Ω to the complex domain and is usually described in polar form as

z = |r|e^{j2πF} = |r|e^{jΩ}    (4.3)

Values of the complex quantity z may be displayed graphically in the z-plane in terms of its real and imaginary parts or in terms of its magnitude and angle.
The defining relation for the z-transform is a power series (Laurent series) in z. The term for each index k is the product of the sample value x[k] and z^{−k}.
For the sequence x[n] = {7, 3, ⇓1, 4, −8, 5}, for example, the z-transform may be written as

X(z) = 7z^2 + 3z + 1 + 4z^{−1} − 8z^{−2} + 5z^{−3}
Comparing x[n] and X(z), we observe that the quantity z^{−1} plays the role of a unit delay operator. The sample location n = 2 in x[n], for example, corresponds to the term with z^{−2} in X(z). In concept, then, it is not hard to go back and forth between a sequence and its z-transform if all we are given is a finite number of samples.

REVIEW PANEL 4.1
The Two-Sided z-Transform of Finite Sequences Is a Power Series in z

DRILL PROBLEM 4.1
(a) Let x[n] = {⇓2, 1, 0, 4}. Find its z-transform X(z).
(b) Let x[n] = {2, −3, ⇓1, 0, 4}. Find its z-transform X(z).
(c) Let X(z) = 3z^2 + z − 3z^{−1} + 5z^{−2}. Find x[n].
Answers: (a) 2 + z^{−1} + 4z^{−3}  (b) 2z^2 − 3z + 1 + 4z^{−2}  (c) x[n] = {3, 1, ⇓0, −3, 5}
Since the defining relation for X(z) describes a power series, it may not converge for all z. The values of z for which it does converge define the region of convergence (ROC) for X(z). Two completely different sequences may produce the same two-sided z-transform X(z), but with different regions of convergence. It is important that we specify the ROC associated with each X(z), especially when dealing with the two-sided z-transform.

4.1.2 Some z-Transform Pairs Using the Defining Relation
Table 4.1 lists the z-transforms of some useful signals. We provide some examples using the defining relation to find z-transforms. For finite-length sequences, the z-transform may be written as a polynomial in z. For sequences with a large number of terms, the polynomial form can get to be unwieldy unless we can find closed-form solutions.

EXAMPLE 4.1 (The z-Transform from the Defining Relation)
(a) Let x[n] = δ[n]. Its z-transform is X(z) = 1. The ROC is the entire z-plane.

(b) Let x[n] = 2δ[n+1] + δ[n] − 5δ[n−1] + 4δ[n−2]. This describes the sequence x[n] = {2, ⇓1, −5, 4}. Its z-transform is evaluated as X(z) = 2z + 1 − 5z^{−1} + 4z^{−2}. No simplifications are possible. The ROC is the entire z-plane, except z = 0 and z = ∞ (or 0 < |z| < ∞).

(c) Let x[n] = u[n] − u[n−N]. This represents a sequence of N samples, and its z-transform may be written as

X(z) = 1 + z^{−1} + z^{−2} + ··· + z^{−(N−1)}

Its ROC is |z| > 0 (the entire z-plane except z = 0). A closed-form result for X(z) may be found using the defining relation as follows:

X(z) = Σ_{k=0}^{N−1} z^{−k} = (1 − z^{−N})/(1 − z^{−1}),   z ≠ 1
Table 4.1 A Short Table of z-Transform Pairs

Entry   Signal                     z-Transform                                      ROC
Finite Sequences
1       δ[n]                       1                                                all z
2       u[n] − u[n−N]              (1 − z^{−N})/(1 − z^{−1})                        z ≠ 0
Causal Signals
3       u[n]                       z/(z − 1)                                        |z| > 1
4       α^n u[n]                   z/(z − α)                                        |z| > |α|
5       (−α)^n u[n]                z/(z + α)                                        |z| > |α|
6       n u[n]                     z/(z − 1)^2                                      |z| > 1
7       n α^n u[n]                 αz/(z − α)^2                                     |z| > |α|
8       cos(nΩ) u[n]               (z^2 − z cos Ω)/(z^2 − 2z cos Ω + 1)             |z| > 1
9       sin(nΩ) u[n]               (z sin Ω)/(z^2 − 2z cos Ω + 1)                   |z| > 1
10      α^n cos(nΩ) u[n]           (z^2 − αz cos Ω)/(z^2 − 2αz cos Ω + α^2)         |z| > |α|
11      α^n sin(nΩ) u[n]           (αz sin Ω)/(z^2 − 2αz cos Ω + α^2)               |z| > |α|
Anti-Causal Signals
12      −u[−n−1]                   z/(z − 1)                                        |z| < 1
13      −n u[−n−1]                 z/(z − 1)^2                                      |z| < 1
14      −α^n u[−n−1]               z/(z − α)                                        |z| < |α|
15      −n α^n u[−n−1]             αz/(z − α)^2                                     |z| < |α|
(d) Let x[n] = u[n]. We evaluate its z-transform using the defining relation as follows:

X(z) = Σ_{k=0}^{∞} z^{−k} = Σ_{k=0}^{∞} (z^{−1})^k = 1/(1 − z^{−1}) = z/(z − 1),   ROC: |z| > 1

Its ROC is |z| > 1 and is based on the fact that the geometric series Σ_{k=0}^{∞} r^k converges only if |r| < 1.

(e) Let x[n] = α^n u[n]. Using the defining relation, its z-transform and ROC are

X(z) = Σ_{k=0}^{∞} α^k z^{−k} = Σ_{k=0}^{∞} (α/z)^k = 1/[1 − (α/z)] = z/(z − α),   ROC: |z| > |α|

Its ROC (|z| > |α|) is also based on the fact that the geometric series Σ_{k=0}^{∞} r^k converges only if |r| < 1.
DRILL PROBLEM 4.2
(a) Let x[n] = (0.5)^n u[n]. Find its z-transform X(z) and ROC.
(b) Let y[n] = (−0.5)^n u[n]. Find its z-transform Y(z) and ROC.
(c) Let g[n] = −(0.5)^n u[−n−1]. Find its z-transform G(z) and ROC.
Answers: (a) X(z) = z/(z − 0.5), |z| > 0.5  (b) Y(z) = z/(z + 0.5), |z| > 0.5  (c) G(z) = z/(z − 0.5), |z| < 0.5
4.1.3 More on the ROC
For a finite sequence x[n], the z-transform X(z) is a polynomial in z or z^{−1} and converges (is finite) for all z, except z = 0 if X(z) contains terms of the form z^{−k} (or x[n] is nonzero for n > 0), and/or z = ∞ if X(z) contains terms of the form z^k (or x[n] is nonzero for n < 0). Thus, the ROC for finite sequences is the entire z-plane, except perhaps for z = 0 and/or z = ∞, as applicable.
In general, if X(z) is a rational function in z, as is often the case, its ROC actually depends on the one- or two-sidedness of x[n], as illustrated in Figure 4.1.
The ROC excludes all pole locations (denominator roots) where X(z) becomes infinite. As a result, the ROC of right-sided signals is |z| > |p|_max and lies exterior to a circle of radius |p|_max, the magnitude of the largest pole. The ROC of causal signals, with x[n] = 0, n < 0, excludes the origin and is given by ∞ ≥ |z| > |p|_max. Similarly, the ROC of a left-sided signal x[n] is |z| < |p|_min and lies interior to a circle of radius |p|_min, the smallest pole magnitude of X(z). Finally, the ROC of a two-sided signal x[n] is |p|_min < |z| < |p|_max, an annulus whose radii correspond to the smallest and largest pole locations in X(z). We use inequalities of the form |z| < |α| (and not |z| ≤ |α|), for example, because X(z) may not converge at the boundary |z| = |α|.
[Figure 4.1 The ROC (shown shaded) of the z-transform for various sequences: outside a circle for right-sided signals, inside a circle for left-sided signals, and an annulus for two-sided signals. Plots in the z-plane (Re[z] vs. Im[z]) not reproduced.]
REVIEW PANEL 4.2
The ROC of the z-Transform X(z) Determines the Nature of the Signal x[n]
Finite-length x[n]: ROC of X(z) is all the z-plane, except perhaps for z = 0 and/or z = ∞.
Right-sided x[n]: ROC of X(z) is outside a circle whose radius is the largest pole magnitude.
Left-sided x[n]: ROC of X(z) is inside a circle whose radius is the smallest pole magnitude.
Two-sided x[n]: ROC of X(z) is an annulus bounded by the largest and smallest pole radius.
Why We Must Specify the ROC
Consider the signal y[n] = −α^n u[−n−1], which is nonzero only for n = −1, −2, . . . . The two-sided z-transform of y[n], using a change of variables, can be written as

Y(z) = Σ_{k=−∞}^{−1} (−α^k) z^{−k} = −Σ_{m=1}^{∞} (z/α)^m = −(z/α)/[1 − (z/α)] = z/(z − α),   ROC: |z| < |α|

The ROC of Y(z) is |z| < |α|. Recall that the z-transform of x[n] = α^n u[n] is X(z) = z/(z − α). This is identical to Y(z), but the ROC of X(z) is |z| > |α|. So, we have a situation where two entirely different signals may possess an identical z-transform, and the only way to distinguish between them is by their ROC. In other words, we cannot uniquely identify a signal from its transform alone. We must also specify the ROC. In this book, we shall assume a right-sided signal if no ROC is specified.
EXAMPLE 4.2 (Identifying the ROC)
(a) Let x[n] = {4, 3, ⇓2, 6}. The ROC of X(z) is 0 < |z| < ∞ and excludes z = 0 and z = ∞ because x[n] is nonzero for n < 0 and n > 0.

(b) Let X(z) = z/(z − 2) + z/(z + 3).
Its ROC depends on the nature of x[n].
If x[n] is assumed right-sided, the ROC is |z| > 3 (because |p|_max = 3).
If x[n] is assumed left-sided, the ROC is |z| < 2 (because |p|_min = 2).
If x[n] is assumed two-sided, the ROC is 2 < |z| < 3.
The region |z| < 2 and |z| > 3 does not correspond to a valid region of convergence because we must find a region that is common to both terms.
DRILL PROBLEM 4.3
(a) Let X(z) = (z + 0.5)/z. What is its ROC?
(b) Let Y(z) = (z + 1)/[(z − 0.1)(z + 0.5)]. What is its ROC if y[n] is right-sided?
(c) Let G(z) = z/(z + 2) + z/(z − 1). What is its ROC if g[n] is two-sided?
(d) Let H(z) = (z + 3)/[(z − 2)(z + 1)]. What is its ROC if h[n] is left-sided?
Answers: (a) |z| ≠ 0  (b) |z| > 0.5  (c) 1 < |z| < 2  (d) |z| < 1
4.2 Properties of the Two-Sided z-Transform
The z-transform is a linear operation and obeys superposition. The properties of the z-transform, listed in Table 4.2, are based on the linear nature of the z-transform operation.

Table 4.2 Properties of the Two-Sided z-Transform

Entry   Property        Signal                  z-Transform
1       Shifting        x[n−N]                  z^{−N} X(z)
2       Reflection      x[−n]                   X(1/z)
3       Anti-causal     x[−n]u[−n−1]            X(1/z) − x[0]  (for causal x[n])
4       Scaling         α^n x[n]                X(z/α)
5       Times-n         n x[n]                  −z dX(z)/dz
6       Times-cos       cos(nΩ) x[n]            0.5[X(ze^{jΩ}) + X(ze^{−jΩ})]
7       Times-sin       sin(nΩ) x[n]            j0.5[X(ze^{jΩ}) − X(ze^{−jΩ})]
8       Convolution     x[n] ⋆ h[n]             X(z)H(z)
Time Shift: To prove the time-shift property of the two-sided z-transform, we use a change of variables. We start with the pair x[n] ⇔ X(z). If y[n] = x[n−N], its z-transform is

Y(z) = Σ_{k=−∞}^{∞} x[k−N] z^{−k}    (4.4)

With the change of variable m = k − N, the new summation index m still ranges from −∞ to ∞ (since N is finite), and we obtain

Y(z) = Σ_{m=−∞}^{∞} x[m] z^{−(m+N)} = z^{−N} Σ_{m=−∞}^{∞} x[m] z^{−m} = z^{−N} X(z)    (4.5)
The factor z^{−N} with X(z) induces a right shift of N in x[n].

REVIEW PANEL 4.3
Time-Shift Property for the Two-Sided z-Transform: x[n−N] ⇔ z^{−N} X(z)
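As a quick numerical check (an illustrative MATLAB sketch with arbitrarily chosen sample values, shift, and test point, none of which come from the text), delaying a finite sequence by N multiplies its z-transform by z^{−N}:

x  = [2 1 -5 4];  N = 3;                    % assumed test sequence and shift
z0 = 0.7 + 0.4j;                            % any test point in the z-plane
Xz = sum(x .* z0.^(-(0:3)));                % X(z) at z = z0
y  = [zeros(1,N) x];                        % y[n] = x[n-N]
Yz = sum(y .* z0.^(-(0:length(y)-1)));      % Y(z) at z = z0
abs(Yz - z0^(-N)*Xz)                        % essentially zero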
DRILL PROBLEM 4.4
(a) Let X(z) = 2 + 5z^{−1}. Find the z-transform of y[n] = x[n−3].
(b) Use the result (0.5)^n u[n] ⇔ z/(z − 0.5) to find g[n] if G(z) = 1/(z − 0.5).
Answers: (a) Y(z) = 2z^{−3} + 5z^{−4}  (b) g[n] = (0.5)^{n−1} u[n−1]
Times-n: The times-n property is established by taking derivatives, to yield

X(z) = Σ_{k=−∞}^{∞} x[k] z^{−k}        dX(z)/dz = Σ_{k=−∞}^{∞} d/dz {x[k] z^{−k}} = Σ_{k=−∞}^{∞} (−k) x[k] z^{−(k+1)}    (4.6)

Multiplying both sides by −z, we obtain

−z dX(z)/dz = Σ_{k=−∞}^{∞} k x[k] z^{−k}    (4.7)

This represents the transform of nx[n].

REVIEW PANEL 4.4
The Times-n Property: nx[n] ⇔ −z dX(z)/dz
DRILL PROBLEM 4.5
(a) Let X(z) = 2 + 5z^{−1} − 4z^{−2}. Find the z-transform of y[n] = nx[n].
(b) Let G(z) = z/(z − 0.5), ROC: |z| > 0.5. Find the z-transform of h[n] = ng[n] and its ROC.
Answers: (a) Y(z) = 5z^{−1} − 8z^{−2}  (b) H(z) = 0.5z/(z − 0.5)^2, |z| > 0.5
Scaling: The scaling property follows from the transform of y[n] = α^n x[n]:

Y(z) = Σ_{k=−∞}^{∞} α^k x[k] z^{−k} = Σ_{k=−∞}^{∞} x[k] (z/α)^{−k} = X(z/α)    (4.8)

If the ROC of X(z) is |z| > |K|, the scaling property changes the ROC of Y(z) to |z| > |αK|. In particular, if α = −1, we obtain the useful result (−1)^n x[n] ↔ X(−z). This result says that if we change the sign of the alternating (odd-indexed) samples of x[n] to get y[n], its z-transform is Y(z) = X(−z) and has the same ROC.

REVIEW PANEL 4.5
The Scaling Property: α^n x[n] ↔ X(z/α)   and   (−1)^n x[n] ↔ X(−z)

DRILL PROBLEM 4.6
(a) Let X(z) = 2 − 3z^{−2}. Find the z-transform of y[n] = (2)^n x[n] and its ROC.
(b) Let G(z) = z/(z − 0.5), |z| > 0.5. Find the z-transform of h[n] = (−0.5)^n g[n] and its ROC.
(c) Let F(z) = 2 + 5z^{−1} − 4z^{−2}. Find the z-transform of p[n] = (−1)^n f[n] and its ROC.
Answers: (a) 2 − 12z^{−2}, z ≠ 0  (b) z/(z + 0.25), |z| > 0.25  (c) 2 − 5z^{−1} − 4z^{−2}, z ≠ 0
If x[n] is multiplied by e^{jnΩ}, or (e^{jΩ})^n, we obtain the pair e^{jnΩ} x[n] ↔ X(z e^{−jΩ}). An extension of this result, using Euler's relation, leads to the times-cos and times-sin properties:

cos(nΩ) x[n] = 0.5 x[n](e^{jnΩ} + e^{−jnΩ}) ↔ 0.5[X(z e^{jΩ}) + X(z e^{−jΩ})]    (4.9)

sin(nΩ) x[n] = −j0.5 x[n](e^{jnΩ} − e^{−jnΩ}) ↔ j0.5[X(z e^{jΩ}) − X(z e^{−jΩ})]    (4.10)

The ROC is not affected by the times-cos and times-sin properties.

DRILL PROBLEM 4.7
(a) Let X(z) = z/(z − 1), |z| > 1. Find the z-transform of y[n] = cos(0.5nπ) x[n] and its ROC.
(b) Let G(z) = z/(z − 0.5), |z| > 0.5. Find the z-transform of h[n] = sin(0.5nπ) g[n] and its ROC.
Answers: (a) Y(z) = z²/(z² + 1), |z| > 1  (b) H(z) = 0.5z/(z² + 0.25), |z| > 0.5
Convolution: The convolution property rests on the fact that convolution in the time domain corresponds to multiplication in the transformed domain. The z-transforms of sequences are polynomials, and the multiplication of two polynomials corresponds to the convolution of their coefficient sequences. This property finds extensive use in the analysis of systems in the transformed domain.

REVIEW PANEL 4.6
The Convolution Property: x[n] ⋆ h[n] ↔ X(z)H(z)

Folding: With x[n] ↔ X(z) and y[n] = x[−n], we use k → −k in the defining relation to give

Y(z) = Σ_{k=−∞}^{∞} x[−k] z^{−k} = Σ_{k=−∞}^{∞} x[k] z^{k} = Σ_{k=−∞}^{∞} x[k] (1/z)^{−k} = X(1/z)    (4.11)

If the ROC of x[n] is |z| > |α|, the ROC of the folded signal x[−n] becomes |1/z| > |α|, or |z| < 1/|α|.

REVIEW PANEL 4.7
The Folding Property of the Two-Sided z-Transform
x[−n] ↔ X(1/z)   (the ROC changes from |z| > |α| to |z| < 1/|α|)

DRILL PROBLEM 4.8
(a) Let X(z) = 2 + 3z^{−1}, z ≠ 0. Find the z-transform of y[n] = x[−n] and its ROC.
(b) Let G(z) = z/(z − 0.5), |z| > 0.5. Find the z-transform of h[n] = g[−n] and its ROC.
Answers: (a) Y(z) = 3z + 2, |z| < ∞  (b) H(z) = 1/(1 − 0.5z), |z| < 2
The Folding Property and Symmetric Signals
The folding property is useful in checking for signal symmetry from the z-transform. For a signal x[n] with even symmetry about the origin n = 0, we have x[n] = x[−n], and thus X(z) = X(1/z). Similarly, if x[n] has odd symmetry about n = 0, we have x[n] = −x[−n], and thus X(z) = −X(1/z). If x[n] is symmetric about its midpoint but the center of symmetry is not n = 0, we find that X(z) = z^{−M} X(1/z) for even symmetry and X(z) = −z^{−M} X(1/z) for odd symmetry. The factor z^{−M}, where M is an integer, accounts for the shift of the center of symmetry from the origin.

REVIEW PANEL 4.8
A Property of the z-Transform of Symmetric Sequences
Even symmetry: x[n] = x[M − n] ⇒ X(z) = z^{−M} X(1/z)   Odd symmetry: x[n] = −x[M − n] ⇒ X(z) = −z^{−M} X(1/z)

DRILL PROBLEM 4.9
(a) Let x[n] = {⇓0, 1, 5, 1}. Show that X(z) = z^{−M} X(1/z) and find M.
(b) Let y[n] = {−2, ⇓2}. Show that Y(z) = −z^{−M} Y(1/z) and find M.
(c) Let g[n] = {−3, ⇓0, 3}. Show that G(z) = −z^{−M} G(1/z) and find M.
Answers: (a) M = 4  (b) M = −1  (c) M = 0
The Folding Property and Anti-Causal Signals
The folding property is also useful in finding the transform of anti-causal signals. From the causal signal x[n]u[n] ↔ X(z) (with ROC |z| > |α|), we find the transform of x[−n]u[−n] as X(1/z) (whose ROC is |z| < 1/|α|). The anti-causal signal y[n] = x[−n]u[−n−1] (which excludes the sample at n = 0) can then be written as y[n] = x[−n]u[−n] − x[0]δ[n], as illustrated in Figure 4.2.

[Figure 4.2 Finding the z-transform of an anti-causal signal from a causal version]

With x[n]u[n] ↔ X(z), |z| > |α|, the z-transform of y[n] = x[−n]u[−n−1] follows as

y[n] = x[−n]u[−n−1] ↔ Y(z) = X(1/z) − x[0],   |z| < 1/|α|    (4.12)

REVIEW PANEL 4.9
How to Find the z-Transform of a Left-Sided Signal from Its Right-Sided Version
If x[n]u[n] ↔ X(z), |z| > |α|, then x[−n]u[−n−1] ↔ X(1/z) − x[0], |z| < 1/|α|.

DRILL PROBLEM 4.10
(a) Let x[n]u[n] ↔ 2 + 5z^{−1}, z ≠ 0. Find the z-transform of y[n] = x[−n]u[−n−1] and its ROC.
(b) Let f[n]u[n] ↔ (z + 1)/(z − 0.5), |z| > 0.5. Find the z-transform of g[n] = f[−n]u[−n−1] and its ROC.
Answers: (a) Y(z) = 5z, |z| < ∞  (b) G(z) = 1.5z/(1 − 0.5z), |z| < 2
EXAMPLE 4.3 (z-Transforms Using Properties)
(a) Using the times-n property, the z-transform of y[n] = n u[n] is

Y(z) = −z d/dz [z/(z − 1)] = −z [−z/(z − 1)² + 1/(z − 1)] = z/(z − 1)²

(b) With x[n] = α^n n u[n], we use scaling to obtain the z-transform:

X(z) = (z/α)/[(z/α) − 1]² = αz/(z − α)²

(c) We find the transform of the N-sample exponential pulse x[n] = α^n (u[n] − u[n − N]). We let y[n] = u[n] − u[n − N]. Its z-transform is

Y(z) = (1 − z^{−N})/(1 − z^{−1}),   z ≠ 0

Then, the z-transform of x[n] = α^n y[n] becomes

X(z) = [1 − (z/α)^{−N}]/[1 − (z/α)^{−1}],   z ≠ 0

(d) The z-transforms of x[n] = cos(nΩ)u[n] and y[n] = sin(nΩ)u[n] are found by applying the times-cos and times-sin properties to u[n] ↔ z/(z − 1):

X(z) = 0.5 [ ze^{jΩ}/(ze^{jΩ} − 1) + ze^{−jΩ}/(ze^{−jΩ} − 1) ] = (z² − z cos Ω)/(z² − 2z cos Ω + 1)

Y(z) = j0.5 [ ze^{jΩ}/(ze^{jΩ} − 1) − ze^{−jΩ}/(ze^{−jΩ} − 1) ] = z sin Ω/(z² − 2z cos Ω + 1)

(e) The z-transforms of f[n] = α^n cos(nΩ)u[n] and g[n] = α^n sin(nΩ)u[n] follow from the results of part (d) and the scaling property:

F(z) = [(z/α)² − (z/α)cos Ω]/[(z/α)² − 2(z/α)cos Ω + 1] = (z² − αz cos Ω)/(z² − 2αz cos Ω + α²)

G(z) = (z/α)sin Ω/[(z/α)² − 2(z/α)cos Ω + 1] = αz sin Ω/(z² − 2αz cos Ω + α²)

(f) We use the folding property to find the transform of y[n] = α^{−n} u[−n − 1]. We start with the transform pair x[n] = α^n u[n] ↔ z/(z − α), ROC: |z| > |α|. With x[0] = 1, we find

y[n] = α^{−n} u[−n − 1] ↔ X(1/z) − x[0] = (1/z)/[(1/z) − α] − 1 = αz/(1 − αz),   ROC: |z| < 1/|α|

If we replace α by 1/α and change the sign of the result, we get

−α^n u[−n − 1] ↔ z/(z − α),   ROC: |z| < |α|

This is listed as a standard transform pair in tables of z-transforms.

(g) We use the folding property to find the transform of x[n] = α^{|n|}, |α| < 1 (a two-sided decaying exponential). We write this as x[n] = α^n u[n] + α^{−n} u[−n] − δ[n] (a one-sided decaying exponential plus its folded version, less the extra sample included at the origin), as illustrated in Figure E4.3G.

[Figure E4.3G The signal for Example 4.3(g)]

Its z-transform then becomes

X(z) = z/(z − α) + (1/z)/[(1/z) − α] − 1 = z/(z − α) − z/[z − (1/α)],   ROC: |α| < |z| < 1/|α|

Note that the ROC is an annulus, corresponding to a two-sided sequence, and describes a valid region only if |α| < 1.
4.3 Poles, Zeros, and the z-Plane
The z-transform of many signals is a rational function of the form

X(z) = N(z)/D(z) = (B_M z^M + B_{M−1} z^{M−1} + ⋯ + B_2 z² + B_1 z + B_0)/(A_N z^N + A_{N−1} z^{N−1} + ⋯ + A_2 z² + A_1 z + A_0)    (4.13)

Denoting the roots of N(z) by z_i, i = 1, 2, ..., M, and the roots of D(z) by p_k, k = 1, 2, ..., N, we may also express X(z) in factored form as

X(z) = K N(z)/D(z) = K (z − z_1)(z − z_2) ⋯ (z − z_M) / [(z − p_1)(z − p_2) ⋯ (z − p_N)]    (4.14)

The roots of N(z) are termed zeros, and the roots of D(z) are termed poles. A plot of the poles (shown as ×) and zeros (shown as o) in the z-plane constitutes a pole-zero plot, and provides a visual picture of the root locations. For multiple roots, we indicate their multiplicity next to the root location on the plot. Clearly, we can also reconstruct X(z) in its entirety from a pole-zero plot of the root locations, but only if the value of the multiplicative constant K is also displayed on the plot.
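Pole-zero descriptions are easy to check numerically. The short Python sketch below is only an illustration (it is not part of the text's development, and the coefficient lists are an arbitrary example); it extracts the zeros, poles, and the constant K of a rational X(z) from its numerator and denominator coefficients.

```python
import numpy as np

# X(z) = N(z)/D(z), coefficients listed in descending powers of z.
# Arbitrary example: N(z) = 2z^2 + 2z, D(z) = z^3 - 0.25z^2 - 0.125z
num = [2.0, 2.0, 0.0]
den = [1.0, -0.25, -0.125, 0.0]

zeros = np.roots(num)      # roots of N(z)
poles = np.roots(den)      # roots of D(z)
K = num[0] / den[0]        # multiplicative constant in the factored form

print("zeros:", zeros)
print("poles:", poles)
print("K    :", K)
```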
EXAMPLE 4.4 (Pole-Zero Plots)
(a) Let H(z) = 2z(z + 1)/[(z − 1/3)(z² + 1/4)(z² + 4z + 5)].
The numerator degree is 2. The two zeros are z = 0 and z = −1.
The denominator degree is 5. The five finite poles are at z = 1/3, z = ±j/2, and z = −2 ± j.
The multiplicative factor is K = 2. The pole-zero plot is shown in Figure E4.4(a).

[Figure E4.4 Pole-zero plots for Example 4.4(a and b)]

(b) What is the z-transform corresponding to the pole-zero pattern of Figure E4.4(b)? Does it represent a symmetric signal?
If we let X(z) = K N(z)/D(z), the four zeros (at z = ±j0.5 and z = ±j2) correspond to the numerator

N(z) = (z − j0.5)(z + j2)(z + j0.5)(z − j2) = z⁴ + 4.25z² + 1

The two poles at the origin correspond to the denominator D(z) = z². With K = 1, the z-transform is given by

X(z) = K N(z)/D(z) = (z⁴ + 4.25z² + 1)/z² = z² + 4.25 + z^{−2}

Checking for symmetry, we find that X(z) = X(1/z), and thus x[n] is even symmetric. In fact, x[n] = δ[n + 2] + 4.25δ[n] + δ[n − 2] = {1, 0, ⇓4.25, 0, 1}. We also note that each zero is paired with its reciprocal (j0.5 with −j2, and −j0.5 with j2), a characteristic of symmetric sequences.
4.4 The Transfer Function
The response y[n] of a system with impulse response h[n] to an arbitrary input x[n] is given by the convolution y[n] = x[n] ⋆ h[n]. Since the convolution operation transforms to a product, we have

Y(z) = X(z)H(z)   or   H(z) = Y(z)/X(z)    (4.15)

The time-domain and z-domain equivalence of these operations is illustrated in Figure 4.3.

[Figure 4.3 System description in the time domain (output = convolution x[n] ⋆ h[n]) and in the z-domain (output = product X(z)H(z))]

The transfer function is defined only for relaxed LTI systems, either as the ratio of the transformed output Y(z) and transformed input X(z), or as the z-transform of the system impulse response h[n].
A relaxed LTI system is also described by the difference equation

y[n] + A_1 y[n − 1] + ⋯ + A_N y[n − N] = B_0 x[n] + B_1 x[n − 1] + ⋯ + B_M x[n − M]    (4.16)

Its z-transform yields the transfer function

H(z) = Y(z)/X(z) = (B_0 + B_1 z^{−1} + ⋯ + B_M z^{−M})/(1 + A_1 z^{−1} + ⋯ + A_N z^{−N})    (4.17)

The transfer function is thus a ratio of polynomials in z. An LTI system may be described by its transfer function, its impulse response, its difference equation, or its pole-zero plot; given one form, it is possible to obtain any of the other forms.

REVIEW PANEL 4.10
The Transfer Function of Relaxed LTI Systems Is a Rational Function of z
H(z) = Y(z)/X(z)  (transformed output over transformed input)       H(z) = Z{h[n]}  (z-transform of the impulse response)

DRILL PROBLEM 4.11
(a) Find the transfer function of the digital filter described by y[n] − 0.4y[n − 1] = 2x[n].
(b) Find the difference equation of the digital filter described by H(z) = (z − 1)/(z + 0.5).
(c) Find the difference equation of the digital filter described by h[n] = (0.5)^n u[n] − δ[n].
Answers: (a) H(z) = 2z/(z − 0.4)  (b) y[n] + 0.5y[n − 1] = x[n] − x[n − 1]  (c) y[n] − 0.5y[n − 1] = 0.5x[n − 1]
The poles of a transfer function H(z) are called natural modes or natural frequencies. The poles of
Y (z) = H(z)X(z) determine the form of the system response. Clearly, the natural frequencies in H(z) will
always appear in the system response, unless they are canceled by any corresponding zeros in X(z). The
zeros of H(z) may be regarded as the (complex) frequencies that are blocked by the system.
4.5 Interconnected Systems
The z-transform is well suited to the study of interconnected LTI systems. Figure 4.4 shows the interconnection of two relaxed systems in cascade and in parallel.

[Figure 4.4 Cascade and parallel systems and their equivalents: a cascade of H_1(z) and H_2(z) is equivalent to H_1(z)H_2(z); a parallel connection is equivalent to H_1(z) + H_2(z)]

The overall transfer function of a cascaded system is the product of the individual transfer functions. For n systems in cascade, the overall impulse response h_C[n] is the convolution of the individual impulse responses h_1[n], h_2[n], .... Since the convolution operation transforms to a product, we have

H_C(z) = H_1(z)H_2(z) ⋯ H_n(z)   (for n systems in cascade)    (4.18)

We can also factor a given transfer function H(z) into the product of first-order and second-order transfer functions and realize H(z) in cascaded form.
For systems in parallel, the overall transfer function is the sum of the individual transfer functions. For n systems in parallel,

H_P(z) = H_1(z) + H_2(z) + ⋯ + H_n(z)   (for n systems in parallel)    (4.19)

We can also use partial fractions to express a given transfer function H(z) as the sum of first-order and/or second-order subsystems, and realize H(z) as a parallel combination.

REVIEW PANEL 4.11
Overall Impulse Response and Transfer Function of Systems in Cascade and Parallel
Cascade: Convolve the individual impulse responses. Multiply the individual transfer functions.
Parallel: Add the individual impulse responses. Add the individual transfer functions.
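These combination rules map directly onto polynomial arithmetic, as the minimal sketch below shows (the two first-order sections are arbitrary illustrations, not taken from the text).

```python
import numpy as np

# H1(z) = z/(z - 0.5) and H2(z) = z/(z + 0.25), coefficients in descending powers of z
b1, a1 = [1.0, 0.0], [1.0, -0.5]
b2, a2 = [1.0, 0.0], [1.0, 0.25]

# Cascade: multiply numerators and denominators
bc, ac = np.polymul(b1, b2), np.polymul(a1, a2)

# Parallel: bring to a common denominator and add
bp = np.polyadd(np.polymul(b1, a2), np.polymul(b2, a1))
ap = np.polymul(a1, a2)

print("cascade :", bc, "/", ac)
print("parallel:", bp, "/", ap)
```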
EXAMPLE 4.5 (Systems in Cascade and Parallel)
(a) Two digital filters are described by h_1[n] = α^n u[n] and h_2[n] = (−α)^n u[n]. The transfer function of their cascade is H_C(z) and of their parallel combination is H_P(z). How are H_C(z) and H_P(z) related?
The transfer functions of the two filters are H_1(z) = z/(z − α) and H_2(z) = z/(z + α). Thus,

H_C(z) = H_1(z)H_2(z) = z²/(z² − α²)       H_P(z) = H_1(z) + H_2(z) = 2z²/(z² − α²)

So, H_P(z) = 2H_C(z).

(b) Is the cascade or parallel combination of two linear-phase filters also linear phase? Explain.
Linear-phase filters are described by symmetric impulse response sequences.
The impulse response of their cascade is also symmetric because it is the convolution of two symmetric sequences. So, the cascade of two linear-phase filters is always linear phase.
The impulse response of their parallel combination is the sum of their impulse responses. Since the sum of symmetric sequences is not always symmetric (unless both are odd symmetric or both are even symmetric), the parallel combination of two linear-phase filters is not always linear phase.

DRILL PROBLEM 4.12
(a) Find the transfer function of the parallel connection of two filters described by h_1[n] = {⇓1, 3} and h_2[n] = {2, ⇓−1}.
(b) The transfer function of the cascade of two filters is H_C(z) = 1. If the impulse response of one filter is h_1[n] = 2(0.5)^n u[n], find the impulse response of the second.
(c) Two filters are described by y[n] − 0.4y[n − 1] = x[n] and h_2[n] = 2(0.4)^n u[n]. Find the transfer functions of their parallel combination and of their cascaded combination.
Answers: (a) 2z + 3z^{−1}  (b) {⇓0.5, −0.25}  (c) H_P(z) = 3z/(z − 0.4), H_C(z) = 2z²/(z − 0.4)²
4.6 Transfer Function Realization
The realization of digital filters described by transfer functions parallels the realization based on difference equations. The nonrecursive and recursive filters described by

H_N(z) = B_0 + B_1 z^{−1} + ⋯ + B_M z^{−M}       y[n] = B_0 x[n] + B_1 x[n − 1] + ⋯ + B_M x[n − M]    (4.20)

H_R(z) = 1/(1 + A_1 z^{−1} + ⋯ + A_N z^{−N})       y[n] = −A_1 y[n − 1] − ⋯ − A_N y[n − N] + x[n]    (4.21)

can be realized using the feed-forward (nonrecursive) structure and the feedback (recursive) structure shown in Figure 4.5.
Now, consider the general difference equation

y[n] = −A_1 y[n − 1] − ⋯ − A_N y[n − N] + B_0 x[n] + B_1 x[n − 1] + ⋯ + B_N x[n − N]    (4.22)

[Figure 4.5 Realization of a nonrecursive (left) and recursive (right) digital filter]
We choose M = N with no loss of generality, since some of the coefficients B_k may always be set to zero. The transfer function (with M = N) then becomes

H(z) = (B_0 + B_1 z^{−1} + ⋯ + B_N z^{−N})/(1 + A_1 z^{−1} + A_2 z^{−2} + ⋯ + A_N z^{−N}) = H_N(z)H_R(z)    (4.23)

The transfer function H(z) = H_N(z)H_R(z) is thus the product of the transfer functions of a recursive and a nonrecursive system. Its realization is a cascade of the realizations of the recursive and nonrecursive portions, as shown in Figure 4.6(a). This form describes a direct form I realization. It uses 2N delay elements to realize an Nth-order difference equation and is therefore not very efficient.

[Figure 4.6 Direct form I (left) and canonical, or direct form II (right), realization of a digital filter]

Since LTI systems can be cascaded in any order, we can switch the recursive and nonrecursive parts to get the structure of Figure 4.6(b). This structure suggests that each pair of feed-forward and feedback signals can be obtained from a single delay element instead of two. This allows us to use only N delay elements and results in the direct form II, or canonic, realization. The term canonic implies a realization with the minimum number of delay elements.
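The direct form II structure translates almost line for line into code. The sketch below is a bare-bones illustration (not optimized, and the example filter is an arbitrary one): a single delay line w[·] feeds both the feedback and feed-forward sums.

```python
def direct_form_2(b, a, x):
    """Filter x with H(z) = B(z)/A(z) (a[0] assumed to be 1) in direct form II."""
    N = max(len(a), len(b)) - 1
    b = list(b) + [0.0] * (N + 1 - len(b))
    a = list(a) + [0.0] * (N + 1 - len(a))
    w = [0.0] * (N + 1)                   # delay-line contents w[n], w[n-1], ...
    y = []
    for xn in x:
        w[0] = xn - sum(a[k] * w[k] for k in range(1, N + 1))   # feedback sum
        y.append(sum(b[k] * w[k] for k in range(N + 1)))        # feed-forward sum
        for k in range(N, 0, -1):         # shift the delay line by one sample
            w[k] = w[k - 1]
    return y

# Example: y[n] - 0.5 y[n-1] = x[n] + x[n-1]
print(direct_form_2([1, 1], [1, -0.5], [1, 0, 0, 0]))   # impulse response: 1, 1.5, 0.75, 0.375
```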
If M and N are not equal, some of the coefficients (A_k or B_k) will equal zero, and the signal paths corresponding to these coefficients will simply be absent from the filter realization.
REVIEW PANEL 4.12
Digital Filter Realization
FIR: No feedback paths IIR: Both feed-forward and feedback paths
4.6.1 Transposed Realization
The direct form II also yields a transposed realization if we turn the realization around (interchanging the input and the output), replace summing junctions by nodes (and vice versa), and reverse the direction of signal flow. Such a realization is developed in Figure 4.7.

[Figure 4.7 Direct form II (left) and transposed (right) realization of a digital filter]
EXAMPLE 4.6 (Direct Form II and Transposed Realizations)
Consider a system described by 2y[n] − y[n − 2] − 4y[n − 3] = 3x[n − 2]. Its transfer function is

H(z) = 3z^{−2}/(2 − z^{−2} − 4z^{−3}) = 1.5z/(z³ − 0.5z − 2)

This is a third-order system. To sketch its direct form II and transposed realizations, we compare H(z) with the generic third-order transfer function

H(z) = (B_0 z³ + B_1 z² + B_2 z + B_3)/(z³ + A_1 z² + A_2 z + A_3)

The nonzero constants are B_2 = 1.5, A_2 = −0.5, and A_3 = −2. Using these, we obtain the direct form II and transposed realizations shown in Figure E4.6.

[Figure E4.6 Direct form II (left) and transposed (right) realization of the system for Example 4.6]
4.6.2 Cascaded and Parallel Realization
The overall transfer function of a cascaded system is the product of the individual transfer functions:

H_C(z) = H_1(z)H_2(z) ⋯ H_n(z)   (for n systems in cascade)    (4.24)

As a consequence, we can factor a given transfer function H(z) into a product of several transfer functions and realize H(z) in cascaded form. Typically, an Nth-order transfer function is realized as a cascade of second-order sections (with an additional first-order section if N is odd).
For systems in parallel, the overall transfer function is the sum of the individual transfer functions. For n systems in parallel,

H_P(z) = H_1(z) + H_2(z) + ⋯ + H_n(z)   (for n systems in parallel)    (4.25)

As a result, we may use partial fractions to express a given transfer function H(z) as the sum of first-order and/or second-order subsystems, and realize H(z) as a parallel combination.

EXAMPLE 4.7 (System Realization)
(a) Find a cascaded realization for H(z) = z²(6z − 2)/[(z − 1)(z² − (1/6)z − 1/6)].
This system may be realized as the cascade H(z) = H_1(z)H_2(z), as shown in Figure E4.7A, where

H_1(z) = z²/(z² − (1/6)z − 1/6)       H_2(z) = (6z − 2)/(z − 1)

[Figure E4.7A Cascade realization of the system for Example 4.7(a)]

(b) Find a parallel realization for H(z) = z²/[(z − 1)(z − 0.5)].
Using partial fractions, we find H(z) = 2z/(z − 1) − z/(z − 0.5) = H_1(z) − H_2(z).
The two subsystems H_1(z) and H_2(z) may now be used to obtain the parallel realization shown in Figure E4.7B.

[Figure E4.7B Parallel realization of the system for Example 4.7(b)]
4.7 Causality and Stability of LTI Systems
In the time domain, a causal system requires a causal impulse response h[n] with h[n] = 0, n < 0. If H(z) describes the transfer function of this system, the number of zeros cannot exceed the number of poles. In other words, the degree of the numerator polynomial of H(z) cannot exceed the degree of the denominator polynomial. This means that the transfer function of a causal system must be proper, and its ROC must lie outside a circle of finite radius.
For an LTI system to be BIBO (bounded-input, bounded-output) stable, every bounded input must result in a bounded output. In the time domain, BIBO stability of an LTI system requires an absolutely summable impulse response h[n]. For a causal system, this is equivalent to requiring the poles of the transfer function H(z) to lie entirely within the unit circle in the z-plane. This equivalence stems from the following observations:
Poles outside the unit circle (|z| > 1) lead to exponential growth even if the input is bounded.
Example: H(z) = z/(z − 3) results in the growing exponential (3)^n u[n].
Multiple poles on the unit circle always result in polynomial growth.
Example: H(z) = 1/[z(z − 1)²] produces a ramp function in h[n].
Simple (non-repeated) poles on the unit circle can also lead to an unbounded response.
Example: A simple pole at z = 1 leads to an H(z) with a factor z/(z − 1). If X(z) also contains a pole at z = 1, the response Y(z) will contain the term z/(z − 1)² and exhibit polynomial growth.
None of these types of time-domain terms is absolutely summable, and their presence leads to system instability. Formally, for BIBO stability, all the poles of H(z) must lie inside (and exclude) the unit circle |z| = 1. This is both a necessary and a sufficient condition for the stability of causal systems.
If a system has simple (non-repeated) poles on the unit circle, it is sometimes called marginally stable. If a system has all its poles and zeros inside the unit circle, it is called a minimum-phase system.
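For a causal filter, this stability test reduces to checking that every denominator root has magnitude less than one. A quick sketch (the coefficient sets below are arbitrary examples chosen to give one stable and one unstable case):

```python
import numpy as np

def is_stable_causal(a):
    """True if all poles (roots of the denominator, descending powers of z) lie inside the unit circle."""
    return bool(np.all(np.abs(np.roots(a)) < 1.0))

print(is_stable_causal([1.0, -0.3, -0.4]))   # poles at 0.8 and -0.5  -> True
print(is_stable_causal([1.0, -2.5, 1.0]))    # poles at 2 and 0.5     -> False
```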
4.7.1 Stability and the ROC
For any LTI system, causal or otherwise, to be stable, the ROC must include the unit circle. The various situations are illustrated in Figure 4.8.

[Figure 4.8 The ROC of stable systems (shown shaded) always includes the unit circle: causal stable system, two-sided stable system, and anti-causal stable system]

The stability of a causal system requires all the poles to lie inside the unit circle; the ROC then includes the unit circle. The stability of an anti-causal system requires all the poles to lie outside the unit circle; the ROC once again includes the unit circle. Similarly, the ROC of a stable system with a two-sided impulse response is an annulus that includes the unit circle, and all its poles lie outside this annulus. The poles inside the inner circle of the annulus contribute the causal portion of the impulse response, while the poles outside the outer circle of the annulus make up the anti-causal portion of the impulse response.

REVIEW PANEL 4.13
The ROC of Stable LTI Systems Always Includes the Unit Circle
Stable, causal system: All the poles must lie inside the unit circle.
Stable, anti-causal system: All the poles must lie outside the unit circle.
Stability from the impulse response: h[n] must be absolutely summable (Σ|h[k]| < ∞).
EXAMPLE 4.8 (Stability of a Recursive Filter)
(a) Let H(z) = z/(z − α).
If the ROC is |z| > |α|, its impulse response is h[n] = α^n u[n], and the system is causal.
For stability, we require |α| < 1 (for the ROC to include the unit circle).

(b) Let H(z) = z/(z − α), as before.
If the ROC is |z| < |α|, its impulse response is h[n] = −α^n u[−n − 1], and the system is anti-causal.
For stability, we require |α| > 1 (for the ROC to include the unit circle).

DRILL PROBLEM 4.13
(a) Is the filter described by H(z) = (2z + 1)/[(z − 0.5)(z + 0.5)], |z| > 0.5, stable? Causal?
(b) Is the filter described by H(z) = (2z + 1)/[(z − 1.5)(z + 0.5)], |z| > 1.5, stable? Causal?
(c) Is the filter described by H(z) = (2z + 1)/[(z − 1.5)(z + 0.5)], 0.5 < |z| < 1.5, stable? Causal?
Answers: (a) Stable, causal  (b) Unstable, causal  (c) Stable, two-sided
4.7.2 Inverse Systems
The inverse system corresponding to a transfer function H(z) is denoted by H^{−1}(z), and is defined as

H^{−1}(z) = H_I(z) = 1/H(z)    (4.26)

The cascade of a system and its inverse has a transfer function of unity:

H_C(z) = H(z)H^{−1}(z) = 1       h_C[n] = δ[n]    (4.27)

This cascaded system is called an identity system, and its impulse response equals h_C[n] = δ[n]. The inverse system can be used to undo the effect of the original system. We can also describe h_C[n] by the convolution h[n] ⋆ h_I[n]. It is far easier to find the inverse of a system in the transformed domain.

REVIEW PANEL 4.14
Relating the Impulse Response and Transfer Function of a System and Its Inverse
If the system is described by H(z) and h[n], and its inverse by H_I(z) and h_I[n], then
H(z)H_I(z) = 1       h[n] ⋆ h_I[n] = δ[n]
EXAMPLE 4.9 (Inverse Systems)
(a) Consider a system with the difference equation y[n] + α y[n − 1] = x[n] + β x[n − 1].
To find the inverse system, we evaluate H(z) and take its reciprocal. Thus,

H(z) = (1 + β z^{−1})/(1 + α z^{−1})       H_I(z) = 1/H(z) = (1 + α z^{−1})/(1 + β z^{−1})

The difference equation of the inverse system is y[n] + β y[n − 1] = x[n] + α x[n − 1]. Note that, in general, the inverse of an IIR filter is also an IIR filter.

(b) Consider an FIR filter whose system equation is y[n] = x[n] + 2x[n − 1] + 3x[n − 2].
To find the inverse system, we evaluate H(z) and take its reciprocal. Thus,

H(z) = 1 + 2z^{−1} + 3z^{−2}       H_I(z) = 1/H(z) = 1/(1 + 2z^{−1} + 3z^{−2})

The difference equation of the inverse system is y[n] + 2y[n − 1] + 3y[n − 2] = x[n]. Note that the inverse of an FIR filter is an IIR filter.

DRILL PROBLEM 4.14
(a) The inverse of an FIR filter is always an IIR filter. True or false? If false, give a counterexample.
(b) The inverse of an IIR filter is always an FIR filter. True or false? If false, give a counterexample.
Answers: (a) True  (b) False. For example, the inverse of H(z) = (z − 0.4)/(z − 0.5) is also IIR.
4.8 The Inverse z-Transform
The formal inversion relation that yields x[n] from X(z) actually involves complex integration, and is described by

x[n] = (1/j2π) ∮_Γ X(z) z^{n−1} dz    (4.28)

Here, Γ describes a counterclockwise contour of integration (such as the unit circle) that encloses the origin. Evaluation of this integral requires a knowledge of complex variable theory. In this text, we pursue simpler alternatives, which include long division and partial fraction expansion.

4.8.1 Inverse z-Transform of Finite Sequences
For finite-length sequences, X(z) has a polynomial form that immediately reveals the required sequence x[n]. The ROC can also be discerned from the polynomial form of X(z).

EXAMPLE 4.10 (Inverse Transform of Sequences)
(a) Let X(z) = 3z^{−1} + 5z^{−3} + 2z^{−4}. This transform corresponds to a causal sequence. We recognize x[n] as a sum of shifted impulses given by

x[n] = 3δ[n − 1] + 5δ[n − 3] + 2δ[n − 4]

This sequence can also be written as x[n] = {⇓0, 3, 0, 5, 2}.

(b) Let X(z) = 2z² − 5z + 5z^{−1} − 2z^{−2}. This transform corresponds to a noncausal sequence. Its inverse transform is written, by inspection, as

x[n] = {2, −5, ⇓0, 5, −2}

Comment: Since X(z) = −X(1/z), x[n] should possess odd symmetry. It does.
4.8.2 Inverse z-Transform by Long Division
A second method requires X(z) as a rational function (a ratio of polynomials), along with its ROC. For a right-sided signal (whose ROC is |z| > |α|), we arrange the numerator and denominator in descending powers of z and use long division to obtain a power series in decreasing powers of z, whose inverse transform corresponds to the right-sided sequence.
For a left-sided signal (whose ROC is |z| < |α|), we arrange the numerator and denominator in ascending powers of z and use long division to obtain a power series in increasing powers of z, whose inverse transform corresponds to the left-sided sequence.
This approach, however, becomes cumbersome if more than just the first few terms of x[n] are required. It is not often that the first few terms of the resulting sequence allow its general nature or form to be discerned. If we regard the rational z-transform as a transfer function H(z) = P(z)/Q(z), the method of long division is simply equivalent to finding the first few terms of its impulse response recursively from its difference equation.

REVIEW PANEL 4.15
Finding Inverse Transforms of X(z) = N(z)/D(z) by Long Division
Right-sided: Put N(z), D(z) in descending powers of z. Obtain a power series in decreasing powers of z.
Left-sided: Put N(z), D(z) in ascending powers of z. Obtain a power series in increasing powers of z.
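As noted above, for a right-sided inverse the long division is equivalent to running the difference equation recursively with an impulse input, which is trivial to code. A sketch (using the same H(z) as the next example, written in powers of z^{-1}; SciPy assumed available):

```python
import numpy as np
from scipy.signal import lfilter

# H(z) = (z - 4)/(z^2 - z + 1) = (z^-1 - 4 z^-2)/(1 - z^-1 + z^-2)
b = [0.0, 1.0, -4.0]
a = [1.0, -1.0, 1.0]

x = np.zeros(6); x[0] = 1.0
print(lfilter(b, a, x))    # 0, 1, -3, -4, -1, 3  (first terms of the right-sided h[n])
```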
EXAMPLE 4.11 (Inverse Transforms by Long Division)
(a) We find the right-sided inverse of H(z) = (z − 4)/(1 − z + z²).
We arrange both polynomials in descending powers of z and use long division:

(z − 4) ÷ (z² − z + 1) = z^{−1} − 3z^{−2} − 4z^{−3} − ⋯

(the successive remainders are −3 − z^{−1}, then −4z^{−1} + 3z^{−2}, then −z^{−2} + 4z^{−3}, and so on). This leads to H(z) = z^{−1} − 3z^{−2} − 4z^{−3} + ⋯. The sequence h[n] can be written as

h[n] = δ[n − 1] − 3δ[n − 2] − 4δ[n − 3] + ⋯   or   h[n] = {⇓0, 1, −3, −4, ...}

(b) We could also have found the inverse by setting up the difference equation corresponding to H(z) = Y(z)/X(z), to give

y[n] − y[n − 1] + y[n − 2] = x[n − 1] − 4x[n − 2]

With x[n] = δ[n], its impulse response h[n] satisfies

h[n] − h[n − 1] + h[n − 2] = δ[n − 1] − 4δ[n − 2]

With h[−1] = h[−2] = 0 (a relaxed system), we recursively obtain the first few values of h[n] as

n = 0:  h[0] = h[−1] − h[−2] + δ[−1] − 4δ[−2] = 0 − 0 + 0 − 0 = 0
n = 1:  h[1] = h[0] − h[−1] + δ[0] − 4δ[−1] = 0 − 0 + 1 − 0 = 1
n = 2:  h[2] = h[1] − h[0] + δ[1] − 4δ[0] = 1 − 0 + 0 − 4 = −3
n = 3:  h[3] = h[2] − h[1] + δ[2] − 4δ[1] = −3 − 1 + 0 + 0 = −4

These are identical to the values obtained using long division in part (a).

(c) We find the left-sided inverse of H(z) = (z − 4)/(1 − z + z²).
We now arrange both polynomials in ascending powers of z and use long division:

(−4 + z) ÷ (1 − z + z²) = −4 − 3z + z² + ⋯

(the successive remainders are −3z + 4z², then z² + 3z³, then 4z³ − z⁴, and so on). Thus, H(z) = −4 − 3z + z² + ⋯. The sequence h[n] can then be written as

h[n] = −4δ[n] − 3δ[n + 1] + δ[n + 2] + ⋯   or   h[n] = {..., 1, −3, ⇓−4}

(d) We could also have found the inverse by writing the difference equation in the form

h[n − 2] = h[n − 1] − h[n] + δ[n − 1] − 4δ[n − 2]

With h[1] = h[2] = 0, we can generate h[0], h[−1], h[−2], ..., recursively, to obtain the same result as in part (c).

DRILL PROBLEM 4.15
(a) Let H(z) = (z − 4)/(z² + 1), |z| > 1. Find the first few terms of h[n] by long division.
(b) Let H(z) = (z − 4)/(z² + 4), |z| < 2. Find the first few terms of h[n] by long division.
Answers: (a) h[n] = {⇓0, 1, −4, −1, 4, ...}  (b) h[n] = {..., −0.0625, 0.25, 0.25, ⇓−1}
4.8.3 Inverse z-Transform from Partial Fractions
A much more useful method for inverting the z-transform relies on its partial fraction expansion into terms whose inverse transforms can be identified using a table of transform pairs. This is analogous to finding inverse Laplace transforms, but with one major difference. Since the z-transforms of the standard sequences in Table 4.1 involve the factor z in the numerator, it is more convenient to perform the partial fraction expansion for Y(z) = X(z)/z rather than for X(z). We then multiply through by z to obtain terms describing X(z) in a form ready for inversion. This also implies that, for partial fraction expansion, it is Y(z) = X(z)/z, and not X(z), that must be a proper rational function. The form of the expansion depends on the nature of the poles (denominator roots) of Y(z). The constants in the partial fraction expansion are often called residues.

Distinct Linear Factors
If Y(z) contains only distinct poles, we express it in the form

Y(z) = P(z)/[(z + p_1)(z + p_2) ⋯ (z + p_N)] = K_1/(z + p_1) + K_2/(z + p_2) + ⋯ + K_N/(z + p_N)    (4.29)

To find the mth coefficient K_m, we multiply both sides by (z + p_m) to get

(z + p_m)Y(z) = K_1 (z + p_m)/(z + p_1) + ⋯ + K_m + ⋯ + K_N (z + p_m)/(z + p_N)    (4.30)

With both sides evaluated at z = −p_m, we obtain K_m as

K_m = (z + p_m)Y(z) |_{z = −p_m}    (4.31)

In general, Y(z) will contain terms with real residues and terms with complex conjugate residues, and may be written as

Y(z) = K_1/(z + p_1) + K_2/(z + p_2) + ⋯ + A_1/(z + r_1) + A_1*/(z + r_1*) + A_2/(z + r_2) + A_2*/(z + r_2*) + ⋯    (4.32)

For a real root, the residue (coefficient) will also be real. For each pair of complex conjugate roots, the residues will also be complex conjugates, and we thus need compute only one of them.

Repeated Factors
If the denominator of Y(z) contains the repeated term (z + r)^k, the partial fraction expansion corresponding to the repeated terms has the form

Y(z) = (other terms) + A_0/(z + r)^k + A_1/(z + r)^{k−1} + ⋯ + A_{k−1}/(z + r)    (4.33)

Observe that the constants A_j ascend in index j from 0 to k − 1, whereas the denominators (z + r)^n descend in power n from k to 1. Their evaluation requires (z + r)^k Y(z) and its derivatives. We successively find

A_0 = (z + r)^k Y(z) |_{z=−r}       A_1 = d/dz [(z + r)^k Y(z)] |_{z=−r}
A_2 = (1/2!) d²/dz² [(z + r)^k Y(z)] |_{z=−r}       A_n = (1/n!) dⁿ/dzⁿ [(z + r)^k Y(z)] |_{z=−r}    (4.34)

Even though this process allows us to find the coefficients independently of each other, the algebra in finding the derivatives can become tedious if the multiplicity k of the roots exceeds 2 or 3. Table 4.3 lists some transform pairs useful for inversion of the z-transform of causal signals by partial fractions.
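Partial fraction expansions of this kind can also be cross-checked numerically. SciPy's residuez works with polynomials in z^{-1} and returns residues r_i for terms of the form r_i/(1 − p_i z^{-1}), each of which corresponds to r_i (p_i)^n u[n] for a causal inverse. A sketch using X(z) = z/[(z − 0.25)(z − 0.5)] (the same X(z) that appears later in Example 4.13):

```python
from scipy.signal import residuez

# X(z) = z/[(z - 0.25)(z - 0.5)] = z^-1 / (1 - 0.75 z^-1 + 0.125 z^-2)
b = [0.0, 1.0]
a = [1.0, -0.75, 0.125]

r, p, k = residuez(b, a)
print(r)   # residues: 4 and -4 (up to ordering)
print(p)   # poles   : 0.5 and 0.25
# Causal inverse: x[n] = 4(0.5)^n u[n] - 4(0.25)^n u[n]
```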
Table 4.3 Inverse z-Transform of Partial Fraction Expansion (PFE) Terms

Note 1: Where applicable, K̂ = K e^{jφ} = C + jD.
Note 2: For anti-causal sequences, we get the signal −x[n]u[−n − 1], where x[n] is as listed.

Entry  PFE Term X(z)                                                  Causal Signal x[n], n ≥ 0
1      z/(z − α)                                                      α^n
2      z/(z − α)²                                                     n α^{n−1}
3      z/(z − α)^{N+1}   (N > 1)                                      [n(n−1)⋯(n−N+1)/N!] α^{n−N}
4      zK̂/(z − αe^{jΩ}) + zK̂*/(z − αe^{−jΩ})                          2K α^n cos(nΩ + φ) = 2α^n [C cos(nΩ) − D sin(nΩ)]
5      zK̂/(z − αe^{jΩ})² + zK̂*/(z − αe^{−jΩ})²                        2K n α^{n−1} cos[(n − 1)Ω + φ]
6      zK̂/(z − αe^{jΩ})^{N+1} + zK̂*/(z − αe^{−jΩ})^{N+1}              2K [n(n−1)⋯(n−N+1)/N!] α^{n−N} cos[(n − N)Ω + φ]

REVIEW PANEL 4.16
Partial Fraction Expansion of Y(z) = X(z)/z Depends on Its Poles (Denominator Roots)
Distinct roots: Y(z) = P(z)/Π_{m=1}^{N}(z + p_m) = Σ_{m=1}^{N} K_m/(z + p_m), where K_m = (z + p_m)Y(z)|_{z=−p_m}
Repeated root: Y(z) = [1/(z + r)^k] P(z)/Π_{m=1}^{N}(z + p_m) = Σ_{m=1}^{N} K_m/(z + p_m) + Σ_{n=0}^{k−1} A_n/(z + r)^{k−n},
where A_n = (1/n!) dⁿ/dzⁿ [(z + r)^k Y(z)]|_{z=−r}
EXAMPLE 4.12 (Inverse Transform of Right-Sided Signals)
(a) (Non-Repeated Roots) We find the causal inverse of X(z) = 1/[(z − 0.25)(z − 0.5)].
We first form Y(z) = X(z)/z and expand it into partial fractions, to obtain

Y(z) = X(z)/z = 1/[z(z − 0.25)(z − 0.5)] = 8/z − 16/(z − 0.25) + 8/(z − 0.5)

Multiplying through by z, we get

X(z) = 8 − 16z/(z − 0.25) + 8z/(z − 0.5)       x[n] = 8δ[n] − 16(0.25)^n u[n] + 8(0.5)^n u[n]

Its first few samples, x[0] = 0, x[1] = 0, x[2] = 1, and x[3] = 0.75, can be checked by long division.
Comment: An alternative approach (not recommended) is to expand X(z) itself as

X(z) = −4/(z − 0.25) + 4/(z − 0.5)       x[n] = −4(0.25)^{n−1} u[n − 1] + 4(0.5)^{n−1} u[n − 1]

Its inverse requires the shifting property. This form is functionally equivalent to the previous one. For example, we find that x[0] = 0, x[1] = 0, x[2] = 1, and x[3] = 0.75, as before.

(b) (Repeated Roots) We find the causal inverse of X(z) = z/[(z − 1)²(z − 2)].
We obtain Y(z) = X(z)/z and set up its partial fraction expansion as

Y(z) = X(z)/z = 1/[(z − 1)²(z − 2)] = A/(z − 2) + K_0/(z − 1)² + K_1/(z − 1)

The constants in the partial fraction expansion are

A = 1/(z − 1)² |_{z=2} = 1       K_0 = 1/(z − 2) |_{z=1} = −1       K_1 = d/dz [1/(z − 2)] |_{z=1} = −1

Substituting into Y(z) and multiplying through by z, we get

X(z) = z/(z − 2) − z/(z − 1)² − z/(z − 1)       x[n] = (2)^n u[n] − n u[n] − u[n] = (2^n − n − 1)u[n]

The first few values, x[0] = 0, x[1] = 0, x[2] = 1, x[3] = 4, and x[4] = 11, can easily be checked by long division.

(c) (Complex Roots) We find the causal inverse of X(z) = (z² − 3z)/[(z − 2)(z² − 2z + 2)].
We set up the partial fraction expansion for Y(z) = X(z)/z as

Y(z) = X(z)/z = (z − 3)/[(z − 2)(z − 1 − j)(z − 1 + j)] = A/(z − 2) + K̂/(z − √2 e^{jπ/4}) + K̂*/(z − √2 e^{−jπ/4})

We evaluate the constants A and K̂:

A = (z − 3)/(z² − 2z + 2) |_{z=2} = −0.5       K̂ = (z − 3)/[(z − 2)(z − 1 + j)] |_{z=1+j} = 0.7906 e^{−j71.56°} = 0.25 − j0.75

Multiplying through by z, we get

X(z) = −0.5z/(z − 2) + zK̂/(z − √2 e^{jπ/4}) + zK̂*/(z − √2 e^{−jπ/4})

The inverse of the first term is easy. For the remaining pair, we use entry 4 of the table for inversion of partial fraction forms, with α = √2, Ω = π/4, K = 0.7906, φ = −71.56°, to give

x[n] = −0.5(2)^n u[n] + 2(0.7906)(√2)^n cos(nπ/4 − 71.56°) u[n]

With C = 0.25 and D = −0.75, this may also be expressed in the alternative form

x[n] = −0.5(2)^n u[n] + 2(√2)^n [0.25 cos(nπ/4) + 0.75 sin(nπ/4)] u[n]
(d) (Inverse Transform of Quadratic Forms) We find the causal inverse of X(z) = z/(z² + 4).
The numerator suggests the generic form x[n] = B α^n sin(nΩ)u[n], because

B α^n sin(nΩ)u[n] ↔ B zα sin Ω/(z² − 2zα cos Ω + α²)

Comparing denominators, we find that α² = 4 and 2α cos Ω = 0. Thus, α = ±2. If we pick α = 2, we get 4 cos Ω = 0, or cos Ω = 0, and thus Ω = π/2.
Finally, comparing numerators, Bzα sin Ω = z, or B = 0.5. Thus,

x[n] = B α^n sin(nΩ)u[n] = 0.5(2)^n sin(nπ/2)u[n]

(e) (Inverse Transform of Quadratic Forms) Let X(z) = (z² + z)/(z² − 2z + 4).
The quadratic numerator suggests the form x[n] = A α^n cos(nΩ)u[n] + B α^n sin(nΩ)u[n], because

A α^n cos(nΩ)u[n] ↔ A(z² − zα cos Ω)/(z² − 2zα cos Ω + α²)       B α^n sin(nΩ)u[n] ↔ B zα sin Ω/(z² − 2zα cos Ω + α²)

Comparing denominators, we find α² = 4 and 2α cos Ω = 2. Thus, α = ±2. If we pick α = 2, we get cos Ω = 0.5, or Ω = π/3.
Now, A(z² − zα cos Ω) = A(z² − z) and Bzα sin Ω = Bz√3. We express the numerator of X(z) as a sum of these forms to get z² + z = (z² − z) + 2z = (z² − z) + (2/√3)(z√3), with A = 1 and B = 2/√3 = 1.1547. Thus,

x[n] = A α^n cos(nΩ)u[n] + B α^n sin(nΩ)u[n] = (2)^n cos(nπ/3)u[n] + 1.1547(2)^n sin(nπ/3)u[n]

The formal approach is to use partial fractions. With z² − 2z + 4 = (z − 2e^{jπ/3})(z − 2e^{−jπ/3}), we find

X(z)/z = (z + 1)/(z² − 2z + 4) = K̂/(z − 2e^{jπ/3}) + K̂*/(z − 2e^{−jπ/3})

We find K̂ = 0.7638 e^{−j49.11°} = 0.5 − j0.5774, and entry 4 of the table for inversion of partial fraction forms (with α = 2, Ω = π/3) gives

x[n] = 1.5275(2)^n cos(nπ/3 − 49.11°)u[n] = (2)^n cos(nπ/3)u[n] + 1.1547(2)^n sin(nπ/3)u[n]

The second form of this result is identical to what was found earlier.
4.8.4 The ROC and Inversion
We have so far been assuming right-sided sequences whenever no ROC is given. Only when the ROC is specified do we obtain a unique sequence from X(z). Sometimes, the ROC may be specified indirectly, for example by requiring the system to be stable. Since the ROC of a stable system includes the unit circle, this gives us a clue to the type of inverse we require.

EXAMPLE 4.13 (Inversion and the ROC)
(a) Find all possible inverse transforms of X(z) = z/[(z − 0.25)(z − 0.5)].
The partial fraction expansion of Y(z) = X(z)/z leads to X(z) = −4z/(z − 0.25) + 4z/(z − 0.5).
1. If the ROC is |z| > 0.5, x[n] is causal and stable, and we obtain
x[n] = −4(0.25)^n u[n] + 4(0.5)^n u[n]
2. If the ROC is |z| < 0.25, x[n] is anti-causal and unstable, and we obtain
x[n] = 4(0.25)^n u[−n − 1] − 4(0.5)^n u[−n − 1]
3. If the ROC is 0.25 < |z| < 0.5, x[n] is two-sided and unstable. This ROC is valid only if −4z/(z − 0.25) describes a causal sequence (ROC |z| > 0.25) and 4z/(z − 0.5) describes an anti-causal sequence (ROC |z| < 0.5). With this in mind, we obtain
x[n] = −4(0.25)^n u[n] − 4(0.5)^n u[−n − 1]

(b) Find the unique inverse transforms of the following, assuming each system is stable:

H_1(z) = z/[(z − 0.4)(z + 0.6)]       H_2(z) = 2.5z/[(z − 0.5)(z + 2)]       H_3(z) = 5z/[(z − 2)(z + 3)]

Partial fraction expansion leads to

H_1(z) = z/(z − 0.4) − z/(z + 0.6)       H_2(z) = z/(z − 0.5) − z/(z + 2)       H_3(z) = z/(z − 2) − z/(z + 3)

To find the appropriate inverse, the key is to recognize that the ROC must include the unit circle. Looking at the pole locations, we see the following:
H_1(z) is stable if its ROC is |z| > 0.6. Its inverse is causal, with h_1[n] = (0.4)^n u[n] − (−0.6)^n u[n].
H_2(z) is stable if its ROC is 0.5 < |z| < 2. Its inverse is two-sided, with h_2[n] = (0.5)^n u[n] + (−2)^n u[−n − 1].
H_3(z) is stable if its ROC is |z| < 2. Its inverse is anti-causal, with h_3[n] = −(2)^n u[−n − 1] + (−3)^n u[−n − 1].
4.9 The One-Sided z-Transform
The one-sided z-transform is particularly useful in the analysis of causal LTI systems. It is defined by

X(z) = Σ_{k=0}^{∞} x[k] z^{−k}   (one-sided z-transform)    (4.35)

The lower limit of zero in the summation implies that the one-sided z-transform of an arbitrary signal x[n] and that of its causal version x[n]u[n] are identical. Most of the properties of the two-sided z-transform also apply to the one-sided version.

REVIEW PANEL 4.17
The Scaling, Times-n, and Convolution Properties of the One-Sided z-Transform
α^n x[n] ↔ X(z/α)       n x[n] ↔ −z dX(z)/dz       x[n] ⋆ h[n] ↔ X(z)H(z)

However, the shifting property of the two-sided z-transform must be modified for use with right-sided signals that are nonzero for n < 0. We also develop new properties, such as the initial value theorem and the final value theorem, that are unique to the one-sided z-transform. These properties are summarized in Table 4.4.

Table 4.4 Properties Unique to the One-Sided z-Transform

Property             Signal             One-Sided z-Transform
Right shift          x[n − 1]           z^{−1}X(z) + x[−1]
                     x[n − 2]           z^{−2}X(z) + z^{−1}x[−1] + x[−2]
                     x[n − N]           z^{−N}X(z) + z^{−(N−1)}x[−1] + z^{−(N−2)}x[−2] + ⋯ + x[−N]
Left shift           x[n + 1]           zX(z) − zx[0]
                     x[n + 2]           z²X(z) − z²x[0] − zx[1]
                     x[n + N]           z^N X(z) − z^N x[0] − z^{N−1}x[1] − ⋯ − zx[N − 1]
Switched periodic    x_p[n]u[n]         X_1(z)/(1 − z^{−N})   (x_1[n] is the first period of x_p[n])

Initial value theorem: x[0] = lim_{z→∞} X(z)
Final value theorem: lim_{n→∞} x[n] = lim_{z→1} (z − 1)X(z)
EXAMPLE 4.14 (Properties of the One-Sided z-Transform)
(a) Find the z-transform of x[n] = n(4)^{0.5n} u[n].
We rewrite this as x[n] = n(2)^n u[n] to get X(z) = 2z/(z − 2)².

(b) Find the z-transform of x[n] = (2)^{n+1} u[n − 1].
We rewrite this as x[n] = (2)²(2)^{n−1} u[n − 1] to get X(z) = z^{−1}(4z)/(z − 2) = 4/(z − 2).

(c) Let x[n] ↔ X(z) = 4z/(z + 0.5)², with ROC: |z| > 0.5. Find the z-transforms of the signals h[n] = n x[n] and y[n] = x[n] ⋆ x[n].
By the times-n property, H(z) = −z X′(z), which gives

H(z) = −z [−8z/(z + 0.5)³ + 4/(z + 0.5)²] = (4z² − 2z)/(z + 0.5)³

By the convolution property, Y(z) = X²(z) = 16z²/(z + 0.5)⁴.

(d) Let (4)^n u[n] ↔ X(z). Find the signals corresponding to F(z) = X²(z) and G(z) = X(2z).
By the convolution property, f[n] = (4)^n u[n] ⋆ (4)^n u[n] = (n + 1)(4)^n u[n].
By the scaling property, G(z) = X(2z) = X(z/0.5) corresponds to the signal g[n] = (0.5)^n x[n]. Thus, we have g[n] = (2)^n u[n].
4.9.1 The Right-Shift Property of the One-Sided z-Transform
The one-sided z-transforms of a sequence x[n] and of its causal version x[n]u[n] are identical. A right shift of x[n] brings samples for n < 0 into the range n ≥ 0, as illustrated in Figure 4.9, and leads to the transform pairs

x[n − 1] ↔ z^{−1}X(z) + x[−1]       x[n − 2] ↔ z^{−2}X(z) + z^{−1}x[−1] + x[−2]    (4.36)

[Figure 4.9 Illustrating the right-shift property of the one-sided z-transform]

These results generalize to

x[n − N] ↔ z^{−N}X(z) + z^{−(N−1)}x[−1] + z^{−(N−2)}x[−2] + ⋯ + x[−N]    (4.37)

For causal signals (for which x[n] = 0, n < 0), this result reduces to x[n − N] ↔ z^{−N}X(z).

REVIEW PANEL 4.18
The Right-Shift Property of the One-Sided z-Transform
x[n − 1] ↔ z^{−1}X(z) + x[−1]       x[n − 2] ↔ z^{−2}X(z) + z^{−1}x[−1] + x[−2]
4.9.2 The Left-Shift Property of the One-Sided z-Transform
A left shift of x[n]u[n] moves samples for n ≥ 0 into the range n < 0, and these samples no longer contribute to the z-transform of the causal portion, as illustrated in Figure 4.10.

[Figure 4.10 Illustrating the left-shift property of the one-sided z-transform]

This leads to the transform pairs

x[n + 1] ↔ zX(z) − zx[0]       x[n + 2] ↔ z²X(z) − z²x[0] − zx[1]    (4.38)

By successively shifting x[n]u[n] to the left, we obtain the general relation

x[n + N] ↔ z^N X(z) − z^N x[0] − z^{N−1}x[1] − ⋯ − zx[N − 1]    (4.39)

The right-shift and left-shift properties of the one-sided z-transform form the basis for finding the response of causal LTI systems with nonzero initial conditions.

REVIEW PANEL 4.19
The Left-Shift Property of the One-Sided z-Transform
x[n + 1] ↔ zX(z) − zx[0]       x[n + 2] ↔ z²X(z) − z²x[0] − zx[1]
EXAMPLE 4.15 (Using the Shift Properties)
(a) Using the right-shift property and superposition, the z-transform of the first difference of a causal signal x[n] (so that x[−1] = 0) is

y[n] = x[n] − x[n − 1] ↔ X(z) − z^{−1}X(z) = (1 − z^{−1})X(z)

(b) Consider the signal x[n] = α^n. Its one-sided z-transform is identical to that of α^n u[n] and equals X(z) = z/(z − α). If y[n] = x[n − 1], the right-shift property, with N = 1, yields

Y(z) = z^{−1}X(z) + x[−1] = 1/(z − α) + α^{−1}

The additional term α^{−1} arises because x[n] is not causal.

(c) (The Left-Shift Property) Consider the shifted step u[n + 1]. Its one-sided z-transform should be identical to that of u[n], since u[n] and u[n + 1] are identical for n ≥ 0.
With u[n] ↔ z/(z − 1) and u[0] = 1, the left-shift property gives

u[n + 1] ↔ zU(z) − zu[0] = z²/(z − 1) − z = z/(z − 1)

(d) With y[n] = α^n u[n] ↔ z/(z − α) and y[0] = 1, the left-shift property gives

y[n + 1] = α^{n+1} u[n + 1] ↔ z [z/(z − α)] − z = αz/(z − α)
4.9.3 The Initial Value Theorem and Final Value Theorem
The initial value theorem and the final value theorem apply only to the one-sided z-transform and to the proper part X(z) of a rational z-transform.

REVIEW PANEL 4.20
The Initial Value Theorem and Final Value Theorem for the One-Sided z-Transform
Initial value: x[0] = lim_{z→∞} X(z)       Final value: x[∞] = lim_{z→1} (z − 1)X(z)
The final value theorem is meaningful only if the poles of (z − 1)X(z) lie inside the unit circle.

With X(z) described by x[0] + x[1]z^{−1} + x[2]z^{−2} + ⋯, it should be obvious that only x[0] survives as z → ∞, and the initial value equals x[0] = lim_{z→∞} X(z).
To find the final value, we evaluate (z − 1)X(z) at z = 1. This yields meaningful results only when the poles of (z − 1)X(z) have magnitudes smaller than unity (lie within the unit circle in the z-plane). As a result:
1. x[∞] = 0 if all the poles of X(z) lie within the unit circle (since x[n] will then contain only exponentially damped terms).
2. x[∞] is constant if there is a single pole at z = 1 (since x[n] will then include a step).
3. x[∞] is indeterminate if there are complex conjugate poles on the unit circle (since x[n] will then include sinusoids). The final value theorem can yield absurd results if used in this case.

EXAMPLE 4.16 (Initial and Final Value Theorems)
Let X(z) = z(z − 2)/[(z − 1)(z − 0.5)]. We then find
The initial value: x[0] = lim_{z→∞} X(z) = lim_{z→∞} (1 − 2z^{−1})/[(1 − z^{−1})(1 − 0.5z^{−1})] = 1
The final value: lim_{n→∞} x[n] = lim_{z→1} (z − 1)X(z) = lim_{z→1} z(z − 2)/(z − 0.5) = −2
4.9.4 The z-Transform of Switched Periodic Signals
Consider a causal signal x[n] = x_p[n]u[n], where x_p[n] is periodic with period N. If x_1[n] describes the first period of x[n] and has the z-transform X_1(z), then the z-transform of x[n] can be found as the superposition of the z-transforms of the shifted versions of x_1[n]:

X(z) = X_1(z) + z^{−N}X_1(z) + z^{−2N}X_1(z) + ⋯ = X_1(z)[1 + z^{−N} + z^{−2N} + ⋯]    (4.40)

Expressing the geometric series in closed form, we obtain

X(z) = X_1(z)/(1 − z^{−N}) = z^N X_1(z)/(z^N − 1)    (4.41)

REVIEW PANEL 4.21
The z-Transform of a Switched Periodic Signal x_p[n]u[n] with Period N
X(z) = X_1(z)/(1 − z^{−N})   (X_1(z) is the z-transform of the first period x_1[n])

EXAMPLE 4.17 (z-Transform of Switched Periodic Signals)
(a) Find the z-transform of a switched periodic signal whose first period is x_1[n] = {⇓0, 1, −2}.
The period of x[n] is N = 3. We then find the z-transform of x[n] as

X(z) = X_1(z)/(1 − z^{−N}) = (z^{−1} − 2z^{−2})/(1 − z^{−3})

(b) Find the z-transform of x[n] = sin(0.5nπ)u[n].
The digital frequency of x[n] is F = 1/4, so N = 4. The first period of x[n] is x_1[n] = {⇓0, 1, 0, −1}. The z-transform of x[n] is thus

X(z) = X_1(z)/(1 − z^{−N}) = (z^{−1} − z^{−3})/(1 − z^{−4}) = z^{−1}/(1 + z^{−2})

(c) Find the causal signal corresponding to X(z) = (2 + z^{−1})/(1 − z^{−3}).
Comparing with the z-transform of a switched periodic signal, we recognize N = 3 and X_1(z) = 2 + z^{−1}. Thus, the first period of x[n] is {⇓2, 1, 0}.

(d) Find the causal signal corresponding to X(z) = z^{−1}/(1 + z^{−3}).
We first rewrite X(z) as

X(z) = z^{−1}(1 − z^{−3})/[(1 + z^{−3})(1 − z^{−3})] = (z^{−1} − z^{−4})/(1 − z^{−6})

Comparing with the z-transform of a switched periodic signal, we recognize the period as N = 6 and X_1(z) = z^{−1} − z^{−4}. Thus, the first period of x[n] is {⇓0, 1, 0, 0, −1, 0}.
4.10 The z-Transform and System Analysis
The one-sided z-transform serves as a useful tool for analyzing LTI systems described by difference equations or transfer functions. The key, of course, is that the solution methods are much simpler in the transformed domain, because convolution transforms to multiplication. Naturally, the time-domain response requires an inverse transformation, a penalty exacted by all methods that work in a transformed domain.

4.10.1 Systems Described by Difference Equations
For a system described by a difference equation, the solution is based on transforming the difference equation using the shift property, incorporating the effect of initial conditions (if present), and then inverse transforming using partial fractions to obtain the time-domain response. The response may be separated into its zero-state component (due only to the input) and its zero-input component (due only to the initial conditions) in the z-domain itself.

REVIEW PANEL 4.22
Solving Difference Equations Using the z-Transform
Relaxed system: Transform the difference equation, then find Y(z) and its inverse.
Not relaxed: Transform using the shift property and the initial conditions. Find Y(z) and its inverse.
EXAMPLE 4.18 (Solution of Difference Equations)
(a) Solve the difference equation y[n] − 0.5y[n − 1] = 2(0.25)^n u[n] with y[−1] = −2.
Transformation using the right-shift property yields

Y(z) − 0.5(z^{−1}Y(z) + y[−1]) = 2z/(z − 0.25)       Y(z) = z(z + 0.25)/[(z − 0.25)(z − 0.5)]

We use partial fractions to get

Y(z)/z = (z + 0.25)/[(z − 0.25)(z − 0.5)] = −2/(z − 0.25) + 3/(z − 0.5)

Multiplying through by z and taking inverse transforms, we obtain

Y(z) = −2z/(z − 0.25) + 3z/(z − 0.5)       y[n] = [−2(0.25)^n + 3(0.5)^n]u[n]

(b) Let y[n + 1] − 0.5y[n] = 2(0.25)^{n+1} u[n + 1] with y[−1] = −2.
We transform the difference equation using the left-shift property. The solution will require y[0].
By recursion, with n = −1, we obtain y[0] − 0.5y[−1] = 2, or y[0] = 2 + 0.5y[−1] = 2 − 1 = 1.
Let x[n] = (0.25)^n u[n]. Then, by the left-shift property x[n + 1] ↔ zX(z) − zx[0] (with x[0] = 1),

(0.25)^{n+1} u[n + 1] ↔ z [z/(z − 0.25)] − z = 0.25z/(z − 0.25)

We now transform the difference equation using the left-shift property:

zY(z) − zy[0] − 0.5Y(z) = 0.5z/(z − 0.25)       Y(z) = z(z + 0.25)/[(z − 0.25)(z − 0.5)]

This is identical to the result of part (a), and thus y[n] = −2(0.25)^n + 3(0.5)^n, as before.
Comment: By time invariance, this represents the same system as in part (a).

(c) (Zero-Input and Zero-State Response) Let y[n] − 0.5y[n − 1] = 2(0.25)^n u[n], with y[−1] = −2.
Upon transformation using the right-shift property, we obtain

Y(z) − 0.5(z^{−1}Y(z) + y[−1]) = 2z/(z − 0.25)       (1 − 0.5z^{−1})Y(z) = 2z/(z − 0.25) − 1

1. Zero-state response: For the zero-state response, we assume zero initial conditions, to obtain

(1 − 0.5z^{−1})Y_zs(z) = 2z/(z − 0.25)       Y_zs(z) = 2z²/[(z − 0.25)(z − 0.5)]

Upon partial fraction expansion, we obtain

Y_zs(z)/z = 2z/[(z − 0.25)(z − 0.5)] = −2/(z − 0.25) + 4/(z − 0.5)

Multiplying through by z and inverse transforming the result, we get

Y_zs(z) = −2z/(z − 0.25) + 4z/(z − 0.5)       y_zs[n] = −2(0.25)^n u[n] + 4(0.5)^n u[n]

2. Zero-input response: For the zero-input response, we set the input (the right-hand side) to zero and use the right-shift property to get

Y_zi(z) − 0.5(z^{−1}Y_zi(z) + y[−1]) = 0       Y_zi(z) = −z/(z − 0.5)

This is easily inverted to give y_zi[n] = −(1/2)^n u[n].

3. Total response: We find the total response as

y[n] = y_zs[n] + y_zi[n] = −2(0.25)^n u[n] + 3(0.5)^n u[n]
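Closed-form solutions like these are easy to sanity-check by brute-force recursion. The sketch below reruns part (a) numerically (a verification aid only, not part of the analytical method):

```python
# y[n] - 0.5 y[n-1] = 2 (0.25)^n u[n],  with y[-1] = -2
N = 6
y_prev = -2.0                              # initial condition y[-1]
recursive, closed_form = [], []
for n in range(N):
    y = 0.5 * y_prev + 2 * 0.25**n         # run the difference equation directly
    recursive.append(y)
    closed_form.append(-2 * 0.25**n + 3 * 0.5**n)
    y_prev = y
print(recursive)       # matches the closed-form values below
print(closed_form)
```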
4.10.2 Systems Described by the Transfer Function
The response Y(z) of a relaxed LTI system equals the product X(z)H(z) of the transformed input and the transfer function. It is often much easier to work with the transfer function description of a linear system. If we let H(z) = N(z)/D(z), the zero-state response Y(z) of a relaxed system to an input X(z) may be expressed as Y(z) = X(z)H(z) = X(z)N(z)/D(z). If the system is not relaxed, the initial conditions result in an additional contribution, the zero-input response Y_zi(z), which may be written as Y_zi(z) = N_zi(z)/D(z). To evaluate Y_zi(z), we first set up the system difference equation and then use the shift property to transform it in the presence of the initial conditions.

REVIEW PANEL 4.23
System Analysis Using the Transfer Function
Zero-state response: Evaluate Y(z) = X(z)H(z) and take the inverse transform.
Zero-input response: Find the difference equation. Transform it using the shift property and the initial conditions. Find the response in the z-domain and take the inverse transform.
EXAMPLE 4.19 (System Response from the Transfer Function)
(a) (A Relaxed System) Let H(z) = 3z/(z − 0.4). To find the zero-state response of this system to x[n] = (0.4)^n u[n], we first transform the input to X(z) = z/(z − 0.4). Then,

Y(z) = H(z)X(z) = 3z²/(z − 0.4)²       y[n] = 3(n + 1)(0.4)^n u[n]

(b) (Step Response) Let H(z) = 4z/(z − 0.5).
To find its step response, we let x[n] = u[n]. Then X(z) = z/(z − 1), and the output equals

Y(z) = H(z)X(z) = [4z/(z − 0.5)][z/(z − 1)] = 4z²/[(z − 1)(z − 0.5)]

Using partial fraction expansion of Y(z)/z, we obtain

Y(z)/z = 4z/[(z − 1)(z − 0.5)] = 8/(z − 1) − 4/(z − 0.5)

Thus,

Y(z) = 8z/(z − 1) − 4z/(z − 0.5)       y[n] = 8u[n] − 4(0.5)^n u[n]

The first term in y[n] is the steady-state response, which can be found much more easily, as described shortly.

(c) (A Second-Order System) Let H(z) = z²/(z² − (1/6)z − 1/6). Let the input be x[n] = 4u[n] and the initial conditions be y[−1] = 0, y[−2] = 12.

1. Zero-state and zero-input response: The zero-state response is found directly from H(z) as

Y_zs(z) = X(z)H(z) = 4z³/[(z² − (1/6)z − 1/6)(z − 1)] = 4z³/[(z − 1/2)(z + 1/3)(z − 1)]

Partial fractions of Y_zs(z)/z and inverse transformation give

Y_zs(z) = −2.4z/(z − 1/2) + 0.4z/(z + 1/3) + 6z/(z − 1)       y_zs[n] = −2.4(1/2)^n u[n] + 0.4(−1/3)^n u[n] + 6u[n]

To find the zero-input response, we first set up the difference equation. We start with

H(z) = Y(z)/X(z) = z²/(z² − (1/6)z − 1/6),   or   (z² − (1/6)z − 1/6)Y(z) = z²X(z)

This gives

(1 − (1/6)z^{−1} − (1/6)z^{−2})Y(z) = X(z)       y[n] − (1/6)y[n − 1] − (1/6)y[n − 2] = x[n]

We now set the right-hand side to zero (for zero input) and transform this equation, using the right-shift property, to obtain the zero-input response from

Y_zi(z) − (1/6)(z^{−1}Y_zi(z) + y[−1]) − (1/6)(z^{−2}Y_zi(z) + z^{−1}y[−1] + y[−2]) = 0

With y[−1] = 0 and y[−2] = 12, this simplifies to

Y_zi(z) = 2z²/(z² − (1/6)z − 1/6) = 2z²/[(z − 1/2)(z + 1/3)]

Partial fraction expansion of Y_zi(z)/z and inverse transformation lead to

Y_zi(z) = 1.2z/(z − 1/2) + 0.8z/(z + 1/3)       y_zi[n] = 1.2(1/2)^n u[n] + 0.8(−1/3)^n u[n]

Finally, we find the total response as

y[n] = y_zs[n] + y_zi[n] = −1.2(1/2)^n u[n] + 1.2(−1/3)^n u[n] + 6u[n]

2. Natural and forced response: By inspection, the natural and forced components of y[n] are

y_N[n] = −1.2(1/2)^n u[n] + 1.2(−1/3)^n u[n]       y_F[n] = 6u[n]

Comment: Alternatively, we could transform the system difference equation directly, to obtain

Y(z) − (1/6)(z^{−1}Y(z) + y[−1]) − (1/6)(z^{−2}Y(z) + z^{−1}y[−1] + y[−2]) = 4z/(z − 1)

This simplifies to

Y(z) = z²(6z − 2)/[(z − 1)(z² − (1/6)z − 1/6)] = −1.2z/(z − 1/2) + 1.2z/(z + 1/3) + 6z/(z − 1)

The steady-state response corresponds to terms of the form z/(z − 1) (step functions). For this example, Y_F(z) = 6z/(z − 1) and y_F[n] = 6u[n].
Since the poles of (z − 1)Y(z) lie within the unit circle, y_F[n] can also be found by the final value theorem:

y_F[n] = lim_{z→1} (z − 1)Y(z) = lim_{z→1} z²(6z − 2)/(z² − (1/6)z − 1/6) = 6
4.10.3 Forced and Steady-State Response from the Transfer Function
In the time domain, the forced response is found by assuming that it has the same form as the input
and then satisfying the dierence equation describing the LTI system. If the LTI system is described
by its transfer function H(z), the forced response may also be found by evaluating H(z) at the complex
frequency z
0
of the input. For example, the input x[n] = K
n
cos(n + ) has a complex frequency given
by z
0
= e
j
. Once H(z
0
) = H
0
e
j
0
is evaluated as a complex quantity, the forced response equals
y
F
[n] = KH
0

n
cos(n++
0
). For multiple inputs, we simply add the forced response due to each input.
For dc and sinusoidal inputs (with = 1), the forced response is also called the steady-state response.
c Ashok Ambardar, September 1, 2003
162 Chapter 4 z-Transform Analysis
EXAMPLE 4.20 (Finding the Forced Response)
(a) Find the steady-state response of the a lter described by H(z) =
z
z 0.4
to the input x[n] =
cos(0.6n).
The complex input frequency is z
0
= e
j0.6
. We evaluate H(z) at z = z
0
to give
H(z)

z=e
j0.6
=
e
j0.6
e
j0.6
0.4
= 0.843e
j18.7

The steady-state response is thus y


F
[n] = 0.843 cos(0.6n 18.7

).
(b) Find the forced response of the system H(z) =
z
z 0.4
to the input x[n] = 5(0.6)
n
.
The complex input frequency is z
0
= 0.6. We evaluate H(z) at z = 0.6 to give
H(z)

z=0.6
=
0.6
0.6 0.4
= 3
The forced response is thus y
F
[n] = (3)(5)(0.6)
n
= 15(0.6)
n
.
(c) Find the forced response of the system H(z) =
3z
2
z
2
z + 1
. The input contains two components and
is given by x[n] = x
1
[n] +x
2
[n] = (0.6)
n
+ 2(0.4)
n
cos(0.5n 100

).
The forced response will be the sum of the forced component y
1
[n] due to x
1
[n] and y
2
[n] due to x
2
[n].
The complex input frequency of x
1
[n] is z
0
= 0.6. We evaluate H(z) at z = 0.6 to give
H(z)

z=0.6
=
3(0.36)
0.36 0.6 + 1
= 1.4211
So, y
1
[n] = 1.4211(0.6)
n
.
The complex input frequency of x
2
[n] is z
0
= 0.4e
j0.5
= j0.4. We evaluate H(z) at z = j0.4 to give
H(z)

z=j0.4
=
3(0.16)
0.16 j0.4 + 1
= 0.5159e
j154.54

So, y
2
[n] = (0.5159)(2)(0.4)
n
cos(0.5n 100

+ 154.54

) = 1.0318(0.4)
n
cos(0.5n + 54.54

).
Finally, by superposition, the complete forced response is
y
F
[n] = y
1
[n] +y
2
[n] = 1.4211(0.6)
n
+ 1.0318(0.4)
n
cos(0.5n + 54.54

)
c Ashok Ambardar, September 1, 2003
Chapter 4 Problems 163
CHAPTER 4 PROBLEMS
4.1 (The z-Transform of Sequences) Use the dening relation to nd the z-transform and its region
of convergence for the following:
(a) x[n] = 1, 2,

3, 2, 1 (b) y[n] = 1, 2,

0, 2, 1
(c) f[n] =

1, 1, 1, 1 (d) g[n] = 1, 1, 1,

1
4.2 (z-Transforms) Find the z-transforms and specify the ROC for the following:
(a) x[n] = (2)
n+2
u[n] (b) y[n] = n(2)
2n
u[n]
(c) f[n] = (2)
n+2
u[n 1] (d) g[n] = n(2)
n+2
u[n 1]
(e) p[n] = (n + 1)(2)
n
u[n] (f ) q[n] = (n 1)(2)
n+2
u[n]
[Hints and Suggestions: For (a), (2)
n+2
= (2)
2
(2)
n
= 4(2)
n
. For (b), (2)
2n
= ((2
2
)
n
= (4)
n
. For
(e), use superposition with (n + 1)(2)
n
= n(2)
n
+ (2)
n
.]
4.3 (The z-Transform of Sequences) Find the z-transform and its ROC for the following:
(a) x[n] = u[n + 2] u[n 2]
(b) y[n] = (0.5)
n
(u[n + 2] u[n 2])
(c) f[n] = (0.5)
|n|
(u[n + 2] u[n 2])
[Hints and Suggestions: First write each signal as a sequence of sample values.]
4.4 (z-Transforms) Find the z-transforms and specify the ROC for the following:
(a) x[n] = (0.5)
2n
u[n] (b) x[n] = n(0.5)
2n
u[n] (c) (0.5)
n
u[n]
(d) (0.5)
n
u[n] (e) (0.5)
n
u[n 1] (f ) (0.5)
n
u[n 1]
[Hints and Suggestions: For (a)(b), (0.5)
2n
= (0.5
2
)
n
= (0.25)
n
. For (c) and (f), (0.5)
n
= (2)
n
.
For (d), fold the results of (c).]
4.5 (z-Transforms) Find the z-transforms and specify the ROC for the following:
(a) x[n] = cos(
n
4


4
)u[n] (b) y[n] = (0.5)
n
cos(
n
4
)u[n]
(c) f[n] = (0.5)
n
cos(
n
4


4
)u[n] (d) g[n] = (
1
3
)
n
(u[n] u[n 4])
(e) p[n] = n(0.5)
n
cos(
n
4
)u[n] (f ) q[n] = [(0.5)
n
(0.5)
n
]nu[n]
[Hints and Suggestions: For (a), cos(0.25n 0.25) = 0.7071 cos(0.25n) + 0.7071 sin(0.25n).
For (b), start with the transform of cos(0.25n) and use properties. For (c), start with the result for
(a) and use the times-
n
property. For (e) start with the result for (b) and use the times-n property.]
4.6 (Two-Sided z-Transform) Find the z-transform X(z) and its ROC for the following:
(a) x[n] = u[n 1] (b) y[n] = (0.5)
n
u[n 1]
(c) f[n] = (0.5)
|n|
(d) g[n] = u[n 1] + (
1
3
)
n
u[n]
(e) p[n] = (0.5)
n
u[n 1] + (
1
3
)
n
u[n] (f ) q[n] = (0.5)
|n|
+ (0.5)
|n|
[Hints and Suggestions: For (a), start with the transform of u[n 1] and use the folding property.
For (c), note that (0.5)
|n|
= (0.5)
n
u[n] + (0.5)
n
u[n] [n]. For the ROC, note that (c)(f) are
two-sided signals.]
c Ashok Ambardar, September 1, 2003
164 Chapter 4 z-Transform Analysis
4.7 (The ROC) The transfer function of a system is H(z). What can you say about the ROC of H(z)
for the following cases?
(a) h[n] is a causal signal.
(b) The system is stable.
(c) The system is stable, and h[n] is a causal signal.
4.8 (Poles, Zeros, and the ROC) The transfer function of a system is H(z). What can you say about
the poles and zeros of H(z) for the following cases?
(a) The system is stable.
(b) The system is causal and stable.
(c) The system is an FIR lter with real coecients.
(d) The system is a linear-phase FIR lter with real coecients.
(e) The system is a causal, linear-phase FIR lter with real coecients.
4.9 (z-Transforms and ROC) Consider the signal x[n] =
n
u[n] +
n
u[n 1]. Find its z-transform
X(z). Will X(z) represent a valid transform for the following cases?
(a) > (b) < (c) =
4.10 (z-Transforms) Find the z-transforms (if they exist) and specify their ROC.
(a) x[n] = (2)
n
u[n] + 2
n
u[n] (b) y[n] = (0.25)
n
u[n] + 3
n
u[n]
(c) f[n] = (0.5)
n
u[n] + 2
n
u[n 1] (d) g[n] = (2)
n
u[n] + (0.5)
n
u[n 1]
(e) p[n] = cos(0.5n)u[n] (f ) q[n] = cos(0.5n + 0.25)u[n]
(g) s[n] = e
jn
u[n] (h) t[n] = e
jn/2
u[n]
(i) v[n] = e
jn/4
u[n] (j) w[n] = (

j)
n
u[n] + (

j)
n
u[n]
[Hints and Suggestions: Parts (a)(c) are two-sided signals. Part (d) does not have a valid transform.
For (f), cos(0.5n +0.25n) = 0.7071[cos(0.5n) sin(0.5n)]. For part (j), use j = e
j/2
and Eulers
relation to express w[n] as a sinusoid.]
4.11 (z-Transforms and ROC) The causal signal x[n] =
n
u[n] has the transform X(z) whose ROC is
[z[ > . Find the ROC of the z-transform of the following:
(a) y[n] = x[n 5]
(b) p[n] = x[n + 5]
(c) g[n] = x[n]
(d) h[n] = (1)
n
x[n]
(e) p[n] =
n
x[n]
4.12 (z-Transforms and ROC) The anti-causal signal x[n] =
n
u[n 1] has the transform X(z)
whose ROC is [z[ < . Find the ROC of the z-transform of the following:
(a) y[n] = x[n 5]
(b) p[n] = x[n + 5]
(c) g[n] = x[n]
(d) h[n] = (1)
n
x[n]
(e) r[n] =
n
x[n]
4.13 (z-Transforms) Find the z-transform X(z) of x[n] =
|n|
and specify the region of convergence of
X(z). Consider the cases [[ < 1 and [[ > 1 separately.
c Ashok Ambardar, September 1, 2003
Chapter 4 Problems 165
4.14 (Properties) Let x[n] = nu[n]. Find X(z), using the following:
(a) The dening relation for the z-transform
(b) The times-n property
(c) The convolution result u[n] u[n] = (n + 1)u[n + 1] and the shifting property
(d) The convolution result u[n] u[n] = (n + 1)u[n] and superposition
4.15 (Properties) The z-transform of x[n] is X(z) =
4z
(z + 0.5)
2
, [z[ > 0.5. Find the z-transform of the
following using properties and specify the region of convergence.
(a) y[n] = x[n 2] (b) d[n] = (2)
n
x[n] (c) f[n] = nx[n]
(d) g[n] = (2)
n
nx[n] (e) h[n] = n
2
x[n] (f ) p[n] = [n 2]x[n]
(g) q[n] = x[n] (h) r[n] = x[n] x[n 1] (i) s[n] = x[n] x[n]
[Hints and Suggestions: For (d)(f), use the results of (c).]
4.16 (Properties) The z-transform of x[n] = (2)
n
u[n] is X(z). Use properties to nd the time signal
corresponding to the following:
(a) Y (z) = X(2z) (b) F(z) = X(1/z) (c) G(z) = zX

(z)
(d) H(z) =
zX(z)
z 1
(e) D(z) =
zX(2z)
z 1
(f ) P(z) = z
1
X(z)
(g) Q(z) = z
2
X(2z) (h) R(z) = X
2
(z) (i) S(z) = X(z)
[Hints and Suggestions: Parts(d)(e) require the summation property.]
4.17 (Properties) The z-transform of a signal x[n] is X(z) =
4z
(z + 0.5)
2
, [z[ > 0.5. Find the z-transform
and its ROC for the following.
(a) y[n] = (1)
n
x[n] (b) f[n] = x[2n]
(c) g[n] = (j)
n
x[n] (d) h[n] = x[n + 1] +x[n 1]
[Hints and Suggestions: For part (b), nd x[n] rst and use it to get f[n] = x[2n] and F(z). For
the rest, use properties.]
4.18 (Properties) The z-transform of the signal x[n] = (2)
n
u[n] is X(z). Use properties to nd the time
signal corresponding to the following.
(a) F(z) = X(z) (b) G(z) = X(1/z) (c) H(z) = zX

(z)
4.19 (Properties) The z-transform of a causal signal x[n] is X(z) =
z
z 0.4
.
(a) Let x
e
[n] be the even part of x[n]. Without computing x[n] or x
e
[n], nd X
e
(z) and its ROC.
(b) Conrm your answer by rst computing x
e
[n] from x[n] and then nding its z-transform.
(c) Can you nd X
e
(z) if x[n] represents an anti-causal signal? Explain.
4.20 (Properties) Find the z-transform of x[n] = rect(n/2N) = u[n +N] u[n N 1]. Use this result
to evaluate the z-transform of y[n] = tri(n/M) where M = 2N + 1.
[Hints and Suggestions: Recall that rect(n/2N) rect(n/2N) = Mtri(n/M) where M = 2N + 1
and use the convolution property of z-transforms.]
c Ashok Ambardar, September 1, 2003
166 Chapter 4 z-Transform Analysis
4.21 (Poles and Zeros) Make a rough sketch of the pole and zero locations of the z-transform of each of
the signals shown in Figure P4.21.
Signal 1
n
Signal 2
n
Signal 3
n
Signal 4
n
Signal 5
n
Figure P4.21 Figure for Problem 4.21
[Hints and Suggestions: Signal 1 has only three samples. Signals 2 and 3 appear to be exponentials.
Signal 4 is a ramp. Signal 5 appears to be a sinusoid.]
4.22 (Pole-Zero Patterns and Symmetry) Plot the pole-zero patterns for each X(z). Which of these
correspond to symmetric sequences?
(a) X(z) =
z
2
+z 1
z
(b) X(z) =
z
4
+ 2z
3
+ 3z
2
+ 2z + 1
z
2
(c) X(z) =
z
4
z
3
+z 1
z
2
(d) X(z) =
(z
2
1)(z
2
+ 1)
z
2
[Hints and Suggestions: If a sequence is symmetric about its midpoint, the zeros exhibit conjugate
reciprocal symmetry. Also, X(z) = X(1/z) for symmetry about the origin.]
4.23 (Realization) Sketch the direct form I, direct form II, and transposed realization for each lter.
(a) y[n]
1
6
y[n 1]
1
2
y[n 2] = 3x[n] (b) H(z) =
z 2
z
2
0.25
(c) y[n] 3y[n 1] + 2y[n 2] = 2x[n 2] (d) H(z) =
2z
2
+z 2
z
2
1
[Hints and Suggestions: For each part, start with the generic second-order realization and delete
any signal paths corresponding to missing coecients.]
4.24 (Realization) Find the transfer function and dierence equation for each system realization shown
in Figure P4.24.
[n] y [n] x
1
z
System 1
4
3
+ +
+

+
2

[n] y [n] x
1
z
1
z
System 2
2
4
3
+ +
+

Figure P4.24 Filter realizations for Problem 4.24


[Hints and Suggestions: Compare with the generic rst-order and second-order realizations to get
the dierence equations and transfer functions.]
4.25 (Realization) Find the transfer function and dierence equation for each digital lter realization
shown in Figure P4.25.
c Ashok Ambardar, September 1, 2003
Chapter 4 Problems 167
1
z
1
z
1
z
[n] y
[n] x
[n] x
[n] y
1
z
1
z
Filter 1
2
3
4
+

+
Filter 2
4
+
+

3 2


Figure P4.25 Filter realizations for Problem 4.25
[Hints and Suggestions: For (a), compare with a generic third-order direct form II realization. For
(b), compare with a generic second-order transposed realization.]
4.26 (Inverse Systems) Find the transfer function of the inverse systems for each of the following. Which
inverse systems are causal? Which inverse systems are stable?
(a) H(z) =
z
2
+ 0.1
z
2
0.2
(b) H(z) =
z + 2
z
2
+ 0.25
(c) y[n] 0.5y[n 1] = x[n] + 2x[n 1] (d) h[n] = n(2)
n
u[n]
[Hints and Suggestions: For parts (c) and (d), set up H(z) and nd H
I
(z) = 1/H(z).]
4.27 (Causality and Stability) How can you identify whether a system is a causal and/or stable system
from the following information?
(a) Its impulse response h[n]
(b) Its transfer function H(z) and its region of convergence
(c) Its system dierence equation
(d) Its pole-zero plot
4.28 (Switched Periodic Signals) Find the z-transform of each switched periodic signal.
(a) x[n] =

2, 1, 3, 0, . . ., N = 4 (b) y[n] = cos(n/2)u[n]


(c) f[n] =

0, 1, 1, 0, 0, . . ., N = 5 (d) g[n] = cos(0.5n + 0.25)u[n]


[Hints and Suggestions: For parts (b) and (d), the period is N = 4. Write out the sequence for the
rst period to nd the transform of the periodic signal.]
4.29 (Inverse Transforms of Polynomials) Find the inverse z-transform of the following:
(a) X(z) = 2 z
1
+ 3z
3
(b) Y (z) = (2 +z
1
)
3
(c) F(z) = (z z
1
)
2
(d) G(z) = (z z
1
)
2
(2 +z)
[Hints and Suggestions: For parts (b)(d), expand the transform rst.]
4.30 (Inverse Transforms by Long Division) Assuming right-sided signals, determine the ROC of the
following z-transforms and compute the inverse transform by long division up to n = 3.
(a) X(z) =
(z + 1)
2
z
2
+ 1
(b) Y (z) =
z + 1
z
2
+ 2
(c) F(z) =
1
z
2
0.25
(d) G(z) =
1 z
2
2 +z
1
[Hints and Suggestions: Before dividing, set up the numerator and denominator as polynomials in
descending powers of z and obtain the quotient in powers of z
1
.]
c Ashok Ambardar, September 1, 2003
168 Chapter 4 z-Transform Analysis
4.31 (Inverse Transforms by Partial Fractions) Assuming right-sided signals, determine the ROC of
the following z-transforms and compute the inverse transform using partial fractions.
(a) X(z) =
z
(z + 1)(z + 2)
(b) Y (z) =
16
(z 2)(z + 2)
(c) F(z) =
3z
2
(z
2
1.5z + 0.5)(z 0.25)
(d) G(z) =
3z
3
(z
2
1.5z + 0.5)(z 0.25)
(e) P(z) =
3z
4
(z
2
1.5z + 0.5)(z 0.25)
(f ) Q(z) =
4z
(z + 1)
2
(z + 3)
[Hints and Suggestions: For part (e), P(z) is not proper. Use long division to get P(z) = 3z +P
1
(z)
and use partial fractions for P
1
(z). Only part (f) has repeated roots.]
4.32 (Inverse Transforms by Partial Fractions) Assuming right-sided signals, determine the ROC of
the following z-transforms and compute the inverse transform using partial fractions.
(a) X(z) =
z
(z
2
+z + 0.25)(z + 1)
(b) Y (z) =
z
(z
2
+z + 0.25)(z + 0.5)
(c) F(z) =
1
(z
2
+z + 0.25)(z + 1)
(d) G(z) =
z
(z
2
+z + 0.5)(z + 1)
(e) P(z) =
z
3
(z
2
z + 0.5)(z 1)
(f ) Q(z) =
z
2
(z
2
+z + 0.5)(z + 1)
(g) S(z) =
2z
(z
2
0.25)
2
(h) T(z) =
2
(z
2
0.25)
2
(i) v(z) =
z
(z
2
+ 0.25)
2
(j) w(z) =
z
2
(z
2
+ 0.25)
2
[Hints and Suggestions: Parts (a)(c) and (g)(h) have real repeated roots. Parts (d)(f) have
complex roots. Parts (i)(j) have complex repeated roots.]
4.33 (Inverse Transforms by Long Division) Assuming left-sided signals, determine the ROC of the
following z-transforms and compute the inverse transform by long division for n = 1, 2, 3.
(a) X(z) =
z
2
+ 4z
z
2
z + 2
(b) Y (z) =
z
(z + 1)
2
(c) F(z) =
z
2
z
3
+z 1
(d) G(z) =
z
3
+ 1
z
2
+ 1
[Hints and Suggestions: Before dividing, set up the numerator and denominator as polynomials in
ascending powers of z and obtain the quotient in ascending powers of z.]
4.34 (The ROC and Inverse Transforms) Let X(z) =
z
2
+ 5z
z
2
2z 3
. Which of the following describe a
valid ROC for X(z)? For each valid ROC, nd x[n], using partial fractions.
(a) [z[ < 1 (b) [z[ > 3 (c) 1 < [z[ < 3 (d) [z[ < 1 and [z[ > 3
4.35 (Inverse Transforms) For each X(z), nd the signal x[n] for each valid ROC.
(a) X(z) =
z
(z + 0.4)(z 0.6)
(b) X(z) =
3z
2
z
2
1.5z + 0.5
4.36 (Inverse Transforms) Consider the stable system described by y[n] +y[n 1] = x[n] +x[n 1].
c Ashok Ambardar, September 1, 2003
Chapter 4 Problems 169
(a) Find its causal impulse response h[n] and specify the range of and the ROC of H(z).
(b) Find its anti-causal impulse response h[n] and specify the range of and the ROC of H(z).
4.37 (Inverse Transforms) Each X(z) below represents the z-transform of a switched periodic signal
x
p
[n]u[n]. Find one period x
1
[n] of each signal. You may verify your results using inverse transformation
by long division.
(a) X(z) =
1
1 z
3
(b) X(z) =
1
1 +z
3
(c) X(z) =
1 + 2z
1
1 z
4
(d) X(z) =
3 + 2z
1
1 +z
4
[Hints and Suggestions: Set up X(z) in the form
X
1
(z)
1 z
N
in order to identify the period N and
the sequence representing the rst period x
1
[n]. For part(b), for example, multiply the numerator and
denominator by (1 z
3
).]
4.38 (Inverse Transforms) Let H(z) = z
2
(z 0.5)(2z + 4)(1 z
2
).
(a) Find its inverse transform h[n].
(b) Does h[n] show symmetry about the origin?
(c) Does h[n] describe a linear phase sequence?
[Hints and Suggestions: Expand H(z) rst.]
4.39 (Inverse Transforms) Let H(z) =
z
(z 0.5)(z + 2)
.
(a) Find its impulse response h[n] if it is known that this represents a stable system. Is this system
causal?
(b) Find its impulse response h[n] if it is known that this represents a causal system. Is this system
stable?
4.40 (Inverse Transforms) Let H(z) =
z
(z 0.5)(z + 2)
. Establish the ROC of H(z), nd its impulse
response h[n], and investigate its stability for the following:
(a) A causal h[n] (b) An anti-causal h[n] (c) A two-sided h[n]
4.41 (Convolution) Simplify each convolution using the z-transform. You may verify your results by using
time-domain convolution.
(a) y[n] =

1, 2, 0, 3 2, 0,

3 (b) y[n] = 1, 2,

0, 2, 1 1, 2,

0, 2, 1
(c) y[n] = (2)
n
u[n] (2)
n
u[n] (d) y[n] = (2)
n
u[n] (3)
n
u[n]
4.42 (Periodic Signal Generators) Find the transfer function H(z) of a lter whose impulse response
is a periodic sequence with rst period x[n] =

1, 2, 3, 4, 6, 7, 8. Find the dierence equation and


sketch a realization for this lter.
4.43 (Periodic Signal Generators) It is required to design a lter whose impulse response is a pure
cosine at a frequency of F
0
= 0.25 and unit amplitude.
(a) What is the impulse response of this lter?
(b) Find the transfer function H(z) and the dierence equation of this lter.
c Ashok Ambardar, September 1, 2003
170 Chapter 4 z-Transform Analysis
(c) Sketch a realization for this lter.
(d) Find the step response of this lter.
4.44 (Initial Value and Final Value Theorems) Assuming right-sided signals, nd the initial and nal
signal values without using inverse transforms.
(a) X(z) =
2
z
2
+
1
6
z
1
6
(b) Y (z) =
2z
2
z
2
+z + 0.25
(c) F(z) =
2z
z
2
+z 1
(d) G(z) =
2z
2
+ 0.25
(z 1)(z + 0.25)
(e) P(z) =
z + 0.25
z
2
+ 0.25
(f ) Q(z) =
2z + 1
z
2
0.5z 0.5
[Hints and Suggestions: To nd the initial value, set up each transform as the ratio of polynomials
in z
1
. The nal value is nonzero only if there is a single pole at z = 1 and all other poles are inside
the unit circle. For part (c), the nal value theorem does not hold.]
4.45 (System Representation) Find the transfer function and dierence equation for the following causal
systems. Investigate their stability, using each system representation.
(a) h[n] = (2)
n
u[n] (b) h[n] = [1 (
1
3
)
n
]u[n]
(c) h[n] = n(
1
3
)
n
u[n] (d) h[n] = 0.5[n]
(e) h[n] = [n] (
1
3
)
n
u[n] (f ) h[n] = [(2)
n
(3)
n
]u[n]
[Hints and Suggestions: To nd the dierence equation, set up H(z) as a ratio of polynomials in
z
1
, equate with Y (z)/X(z), cross multiply and nd the inverse transform.]
4.46 (System Representation) Find the dierence equation of the following causal systems. Investigate
the stability of each system.
(a) y[n] + 3y[n 1] + 2y[n 2] = 2x[n] + 3x[n 1]
(b) y[n] + 4y[n 1] + 4y[n 2] = 2x[n] + 3x[n 1]
(c) y[n] = 0.2x[n]
(d) y[n] = x[n] +x[n 1] +x[n 2]
4.47 (System Representation) Set up the system dierence equations of the following causal systems.
Investigate the stability of each system.
(a) H(z) =
3
z + 2
(b) H(z) =
1 + 2z +z
2
(1 +z
2
)(4 +z
2
)
(c) H(z) =
2
1 +z

1
2 +z
(d) H(z) =
2z
1 +z

1
2 +z
[Hints and Suggestions: Set up H(z) as a ratio of polynomials in z
1
, equate with Y (z)/X(z), cross
multiply and nd the inverse transform.]
4.48 (Zero-State Response) Find the zero-state response of the following systems, using the z-transform.
(a) y[n] 0.5y[n 1] = 2u[n] (b) y[n] 0.4y[n 1] = (0.5)
n
u[n]
(c) y[n] 0.4y[n 1] = (0.4)
n
u[n] (d) y[n] 0.5y[n 1] = cos(n/2)
[Hints and Suggestions: Use the z-transform to get Y (z) assuming zero initial conditions. Then
nd the inverse transform by partial fractions.]
c Ashok Ambardar, September 1, 2003
Chapter 4 Problems 171
4.49 (System Response) Consider the system y[n] 0.5y[n 1] = x[n]. Find its zero-state response to
the following inputs, using the z-transform.
(a) x[n] = u[n] (b) x[n] = (0.5)
n
u[n] (c) x[n] = cos(n/2)u[n]
(d) x[n] = (1)
n
u[n] (e) x[n] = (j)
n
u[n] (f ) x[n] = (

j)
n
u[n] + (

j)
n
u[n]
[Hints and Suggestions: In part (e), y[n] will be complex because the input is complex. In part (f),
using j = e
j/2
and Eulers relation, x[n] simplies to a sinusoid. Therefore, the forced response y
F
[n]
is easy to nd. Then, y[n] = K(0.5)
n
+y
F
[n] with y[1] = 0.]
4.50 (Zero-State Response) Find the zero-state response of the following systems, using the z-transform.
(a) y[n] 1.1y[n 1] + 0.3y[n 2] = 2u[n] (b) y[n] 0.9y[n 1] + 0.2y[n 2] = (0.5)
n
(c) y[n] + 0.7y[n 1] + 0.1y[n 2] = (0.5)
n
(d) y[n] 0.25y[n 2] = cos(n/2)
4.51 (System Response) Let y[n] 0.5y[n 1] = x[n], with y[1] = 1. Find the response y[n] of this
system for the following inputs, using the z-transform.
(a) x[n] = 2u[n] (b) x[n] = (0.25)
n
u[n] (c) x[n] = n(0.25)
n
u[n]
(d) x[n] = (0.5)
n
u[n] (e) x[n] = n(0.5)
n
(f ) x[n] = (0.5)
n
cos(0.5n)
4.52 (System Response) Find the response y[n] of the following systems, using the z-transform.
(a) y[n] + 0.1y[n 1] 0.3y[n 2] = 2u[n] y[1] = 0 y[2] = 0
(b) y[n] 0.9y[n 1] + 0.2y[n 2] = (0.5)
n
y[1] = 1 y[2] = 4
(c) y[n] + 0.7y[n 1] + 0.1y[n 2] = (0.5)
n
y[1] = 0 y[2] = 3
(d) y[n] 0.25y[n 2] = (0.4)
n
y[1] = 0 y[2] = 3
(e) y[n] 0.25y[n 2] = (0.5)
n
y[1] = 0 y[2] = 0
4.53 (System Response) For each system, evaluate the response y[n], using the z-transform.
(a) y[n] 0.4y[n 1] = x[n] x[n] = (0.5)
n
u[n] y[1] = 0
(b) y[n] 0.4y[n 1] = 2x[n] +x[n 1] x[n] = (0.5)
n
u[n] y[1] = 0
(c) y[n] 0.4y[n 1] = 2x[n] +x[n 1] x[n] = (0.5)
n
u[n] y[1] = 5
(d) y[n] + 0.5y[n 1] = x[n] x[n 1] x[n] = (0.5)
n
u[n] y[1] = 2
(e) y[n] + 0.5y[n 1] = x[n] x[n 1] x[n] = (0.5)
n
u[n] y[1] = 0
4.54 (System Response) Find the response y[n] of the following systems, using the z-transform.
(a) y[n] 0.4y[n 1] = 2(0.5)
n1
u[n 1] y[1] = 2
(b) y[n] 0.4y[n 1] = (0.4)
n
u[n] + 2(0.5)
n1
u[n 1] y[1] = 2.5
(c) y[n] 0.4y[n 1] = n(0.5)
n
u[n] + 2(0.5)
n1
u[n 1] y[1] = 2.5
4.55 (System Response) The transfer function of a system is H(z) =
2z(z 1)
4 + 4z +z
2
. Find its response
y[n] for the following inputs.
(a) x[n] = [n] (b) x[n] = 2[n] +[n + 1] (c) x[n] = u[n]
(d) x[n] = (2)
n
u[n] (e) x[n] = nu[n] (f ) x[n] = cos(
n
2
)u[n]
4.56 (System Analysis) Find the impulse response h[n] and the step response s[n] of the causal digital
lters described by
(a) H(z) =
4z
z 0.5
(b) y[n] + 0.5y[n 1] = 6x[n]
[Hints and Suggestions: Note that y[1] = 0. Choose x[n] = u[n] to compute the step response.]
c Ashok Ambardar, September 1, 2003
172 Chapter 4 z-Transform Analysis
4.57 (System Analysis) Find the zero-state response, zero-input response, and total response for each of
the following systems, using the z-transform.
(a) y[n]
1
4
y[n 1] = (
1
3
)
n
u[n] y[1] = 8
(b) y[n] + 1.5y[n 1] + 0.5y[n 2] = (0.5)
n
u[n] y[1] = 2 y[2] = 4
(c) y[n] +y[n 1] + 0.25y[n 2] = 4(0.5)
n
u[n] y[1] = 6 y[2] = 12
(d) y[n] y[n 1] + 0.5y[n 2] = (0.5)
n
u[n] y[1] = 1 y[2] = 2
4.58 (Steady-State Response) The transfer function of a system is H(z) =
2z(z 1)
z
2
+ 0.25
. Find its steady-
state response for the following inputs.
(a) x[n] = 4u[n] (b) x[n] = 4 cos(
n
2
+

4
)u[n]
(c) x[n] = cos(
n
2
) + sin(
n
2
) (d) x[n] = 4 cos(
n
4
) + 4 sin(
n
2
)
[Hints and Suggestions: For parts (c)(d), add the forced response due to each component.]
4.59 (Steady-State Response) The lter H(z) = A
z
z 0.5
is designed to have a steady-state response
of unity if the input is u[n] and a steady-state response of zero if the input is cos(n). What are the
values of A and ?
4.60 (Steady-State Response) The lter H(z) = A
z
z 0.5
is designed to have a steady-state response
of zero if the input is u[n] and a steady-state response of unity if the input is cos(n). What are the
values of A and ?
4.61 (System Response) Find the response of the following lters to the unit step x[n] = u[n], and to
the alternating unit step x[n] = (1)
n
u[n].
(a) h[n] = [n] [n 1] (dierencing operation)
(b) h[n] =

0.5, 0.5 (2-point average)


(c) h[n] =
1
N
N1

k=0
[n k], N = 3 (moving average)
(d) h[n] =
2
N(N+1)
N1

k=0
(N k)[n k], N = 3 (weighted moving average)
(e) y[n] y[n 1] = (1 )x[n], =
N1
N+1
, N = 3 (exponential average)
4.62 (Steady-State Response) Consider the following DSP system:
x(t) sampler digital lter H(z) ideal LPF y(t)
The input is x(t) = 2 + cos(10t) + cos(20t). The sampler is ideal and operates at a sampling rate
of S Hz. The digital lter is described by H(z) = 0.1S
z 1
z 0.5
. The ideal lowpass lter has a cuto
frequency of 0.5S Hz.
(a) What is the smallest value of S that will prevent aliasing?
(b) Let S = 40 Hz and H(z) = 1 +z
2
+z
4
. What is the steady-state output y(t)?
(c) Let S = 40 Hz and H(z) =
z
2
+ 1
z
4
+ 0.5
. What is the steady-state output y(t)?
4.63 (Response of Digital Filters) Consider the averaging lter y[n] = 0.5x[n] +x[n1] +0.5x[n2].
c Ashok Ambardar, September 1, 2003
Chapter 4 Problems 173
(a) Find its impulse response h[n] and its transfer function H(z).
(b) Find its response y[n] to the input x[n] =

2, 4, 6, 8.
(c) Find its response y[n] to the input x[n] = cos(
n
3
).
(d) Find its response y[n] to the input x[n] = cos(
n
3
) + sin(
2n
3
) + cos(
n
2
).
[Hints and Suggestions: For part (b), use convolution. For part (c), nd the steady-state response.
For part (d), add the steady state response due to each component.]
4.64 (Transfer Function) The input to a digital lter is x[n] =

1, 0.5, and the response is described


by y[n] = [n + 1] 2[n] [n 1].
(a) What is the lter transfer function H(z)?
(b) Does H(z) describe an IIR lter or FIR lter?
(c) Is the lter stable? Is it causal?
4.65 (System Analysis) Consider a system whose impulse response is h[n] = (0.5)
n
u[n]. Find its response
to the following inputs.
(a) x[n] = [n] (b) x[n] = u[n]
(c) x[n] = (0.25)
n
u[n] (d) x[n] = (0.5)
n
u[n]
(e) x[n] = cos(n) (f ) x[n] = cos(n)u[n]
(g) x[n] = cos(0.5n) (h) x[n] = cos(0.5n)u[n]
[Hints and Suggestions: For parts (e) and (g), the output is the steady-state response y
ss
[n]. For
parts (f) and (h) use y[n] = K(0.5)
n
+y
ss
[n] with y[1] = 0 and nd K.]
4.66 (System Analysis) Consider a system whose impulse response is h[n] = n(0.5)
n
u[n]. What input
x[n] will result in each of the following steady-state outputs?
(a) y[n] = cos(0.5n)
(b) y[n] = 2 + cos(0.5n)
(c) y[n] = cos
2
(0.25n)
[Hints and Suggestions: For (a), assume x[n] = Acos(0.5n) + Bsin(0.5n) and compare the
resulting steady state response with y[n]. For (b), assume x[n] = C +Acos(0.5n) +Bsin(0.5n). For
(c), note that cos
2
= 0.5 + 0.5 cos 2.]
4.67 (System Response) Consider the system y[n] 0.25y[n 2] = x[n]. Find its response y[n], using
z-transforms, for the following inputs.
(a) x[n] = 2[n 1] +u[n] (b) x[n] = 2 + cos(0.5n)
[Hints and Suggestions: For part (a), nd Y (z) and its inverse transform by partial fractions. For
part (b), add the steady state response for each component of the input.]
4.68 (System Response) The signal x[n] = (0.5)
n
u[n] is applied to a digital lter, and the response is
y[n]. Find the lter transfer function and state whether it is an IIR or FIR lter and whether it is a
linear-phase lter if the system output y[n] is the following:
(a) y[n] = [n] + 0.5[n 1]
(b) y[n] = [n] 2[n 1]
(c) y[n] = (0.5)
n
u[n]
c Ashok Ambardar, September 1, 2003
174 Chapter 4 z-Transform Analysis
4.69 (Interconnected Systems) Consider two systems whose impulse response is h
1
[n] = [n] +[n1]
and h
2
[n] = (0.5)
n
u[n]. Find the overall system transfer function and the response y[n] of the overall
system to the input x[n] = (0.5)
n
u[n], and to the input x[n] = cos(n) if
(a) The two systems are connected in parallel with = 0.5.
(b) The two systems are connected in parallel with = 0.5.
(c) The two systems are connected in cascade with = 0.5.
(d) The two systems are connected in cascade with = 0.5.
4.70 (Interconnected Systems) The transfer function H(z) of the cascade of two systems H
1
(z) and
H
2
(z) is known to be H(z) =
z
2
+ 0.25
z
2
0.25
. It is also known that the unit step response of the rst system
is [2 (0.5)
n
]u[n]. Determine H
1
(z) and H
2
(z).
4.71 (Feedback Systems) Consider the lter realization of Figure P4.71. Find the transfer function H(z)
of the overall system if the impulse response of the lter is given by
(a) h[n] = [n] [n 1]. (b) h[n] = 0.5[n] + 0.5[n 1].
Find the dierence equation relating y[n] and x[n] from H(z) and investigate the stability of the overall
system.
[n] x [n] y
1
z
+

Filter
Figure P4.71 Filter realization for Problem 4.71
4.72 (Systems in Cascade) Consider the following system:
x[n] H
1
(z) H
2
(z) H
3
(z) y[n]
It is known that h
1
[n] = 0.5(0.4)
n
u[n], H
2
(z) =
A(z +)
z +
, and h
3
[n] = [n] + 0.5[n 1]. Choose A,
, and such that the overall system represents an identity system.
[Hints and Suggestions: Set up H
1
(z)H
2
(z)H
3
(z) = 1 to nd the constants.]
4.73 (Recursive and Non-Recursive Filters) Consider two lters described by
(1) h[n] =

1, 1, 1 (2) y[n] y[n 1] = x[n] x[n 3]


(a) Find the transfer function of each lter.
(b) Find the response of each lter to the input x[n] = cos(n).
(c) Are the two lters related in any way?
4.74 (Feedback Compensation) Feedback compensation is often used to stabilize unstable lters. It is
required to stabilize the unstable lter G(z) =
6
z 1.2
by putting it in the forward path of a negative
feedback system. The feedback block has the form H(z) =

z
.
(a) What values of and are required for the overall system to have two poles at z = 0.4 and
z = 0.6? What is the overall transfer function and impulse response?
c Ashok Ambardar, September 1, 2003
Chapter 4 Problems 175
(b) What values of and are required for the overall system to have both poles at z = 0.6? What is
the overall transfer function and impulse response? How does the double pole aect the impulse
response?
[Hints and Suggestions: For the negative feedback system, the overall transfer function is given by
T(z) =
G(z)
1+G(z)H(z)
.]
4.75 (Recursive Forms of FIR Filters) An FIR lter may always be recast in recursive form by the
simple expedient of including poles and zeros at identical locations. This is equivalent to multiplying
the transfer function numerator and denominator by identical factors. For example, the lter H(z) =
1 z
1
is FIR but if we multiply the numerator and denominator by the identical term 1 + z
1
, the
new lter and its dierence equation become
H
N
(z) =
(1 z
1
)(1 +z
1
)
1 +z
1
=
1 z
2
1 +z
1
y[n] +y[n 1] = x[n] x[n 2]
The dierence equation can be implemented recursively. Find two dierent recursive dierence equa-
tions (with dierent orders) for each of the following lters.
(a) h[n] = 1,

2, 1
(b) H(z) =
z
2
2z + 1
z
2
(c) y[n] = x[n] x[n 2]
[Hints and Suggestions: Set up H(z) and multiply both numerator and denominator by identical
polynomials in z (linear, quadratic etc). Use this to nd the recursive dierence equation.]
COMPUTATION AND DESIGN
4.76 (System Response in Symbolic Form) The Matlab based routine sysresp2 (on the authors
website) returns the system response in symbolic form. Obtain the response of the following lters and
plot the response for 0 n 30.
(a) The step response of y[n] 0.5y[n 1] = x[n]
(b) The impulse response of y[n] 0.5y[n 1] = x[n]
(c) The zero-state response of y[n] 0.5y[n 1] = (0.5)
n
u[n]
(d) The complete response of y[n] 0.5y[n 1] = (0.5)
n
u[n], y[1] = 4
(e) The complete response of y[n] +y[n 1] + 0.5y[n 2] = (0.5)
n
u[n], y[1] = 4, y[2] = 3
4.77 (Steady-State Response in Symbolic Form) The Matlab based routine ssresp (on the authors
website) yields a symbolic expression for the steady-state response to sinusoidal inputs. Find the
steady-state response to the input x[n] = 2 cos(0.2n

3
) for each of the following systems and plot
the results over 0 n 50.
(a) y[n] 0.5y[n 1] = x[n]
(b) y[n] +y[n 1] + 0.5y[n 2] = 3x[n]
c Ashok Ambardar, September 1, 2003
Chapter 5
FREQUENCY DOMAIN
ANALYSIS
5.0 Scope and Objectives
This chapter develops the discrete-time Fourier transform (DTFT) as an analysis tool in the frequency
domain description of discrete-time signals and systems. It introduces the DTFT as a special case of the
z-transform, develops the properties of the DTFT, and concludes with applications of the DTFT to system
analysis and signal processing.
5.1 The DTFT from the z-Transform
The z-transform describes a discrete-time signal as a sum of weighted harmonics z
k
X(z) =

k=
x[k]z
k
=

k=
x[k]

re
j2F

k
(5.1)
where the complex exponential z = re
j2F
= re
j
includes a real weighting factor r. If we let r = 1, we
obtain z = e
j2F
= e
j
and z
k
= e
j2kF
= e
jk
. The expression for the z-transform then reduces to
X(F) =

k=
x[k]e
j2kF
X() =

k=
x[k]e
jk
(5.2)
The quantity X(F) (or X() is now a function of the frequency F (or ) alone and describes the discrete-
time Fourier transform (DTFT) of x[n] as a sum of weighted harmonics e
j2kF
= e
jk
. The DTFT is a
frequency-domain description of a discrete-time signal. The DTFT of x[n] may be viewed as its z-transform
X(z) evaluated for r = 1 (along the unit circle in the z-plane). The DTFT is also called the spectrum
and the DTFT H(F) of the system impulse response is also referred to as the frequency response or the
frequency domain transfer function.
Note that X(F) is periodic in F with unit period because X(F) = X(F + 1)
X(F + 1) =

k=
x[k]e
j2k(F+1)
=

k=
x[k]e
j2k
e
j2kF
=

k=
x[k]e
j2kF
= X(F)
The unit interval 0.5 F 0.5 (or 0 F 1) denes the principal period or central period.
Similarly, X() is periodic in with period 2 and represents a scaled (stretched by 2) version of X(F).
The principal period of X() corresponds to the interval or 0 2.
176 c Ashok Ambardar, September 1, 2003
5.1 The DTFT from the z-Transform 177
The inverse DTFT allows us to recover x[n] from one period of its DTFT and is dened by
x[n] =

1/2
1/2
X(F)e
j2nF
dF (the F-form) x[n] =
1
2

X()e
jn
d (the -form) (5.3)
We will nd it convenient to work with the F-form, especially while using the inverse transform relation,
because it rids us of factors of 2 in many situations. The discrete signal x[n] and its discrete-time Fourier
transform X(F) or X() form a unique transform pair, and their relationship is shown symbolically using
a double arrow:
x[n] dtft X(F) or x[n] dtft X() (5.4)
REVIEW PANEL 5.1
The DTFT Is a Frequency-Domain Representation of Discrete-Time Signals
Form DTFT Inverse DTFT
F-form X(F) =

k=
x[k]e
j2kF
x[n] =

1/2
1/2
X(F)e
j2nF
dF
-form X() =

k=
x[k]e
jk
x[n] =
1
2

X()e
jn
d
5.1.1 Symmetry of the Spectrum for a Real Signal
The DTFT of a real signal is, in general, complex. A plot of the magnitude of the DTFT against frequency
is called the magnitude spectrum. The magnitude of the frequency response H(F) is also called the
gain. A plot of the phase of the DTFT against frequency is called the phase spectrum. The phase spectrum
may be restricted to a 360

range (180

, 180

). Sometimes, it is more convenient to unwrap the phase (by


adding/subtracting multiples of 360

) to plot it as a monotonic function. The DTFT X(F) of a signal x[n]


may be expressed in any of the following ways
X(F) = R(F) +jI(F) = [X(F)[e
j(F)
= [X(F)[

(F) (the F-form) (5.5)


X() = R() +jI() = [X()[e
j()
= [X()[

() (the -form) (5.6)


For a real signal, the DTFT shows conjugate symmetry about F = 0 (or = 0) with
X(F) = X

(F) [X(F)[ = [X(F)[ (F) = (F) (the F-form) (5.7)


X() = X

() [X()[ = [X()[ () = () (the -form) (5.8)


Conjugate symmetry of X(F) about the origin means that its magnitude spectrum displays even symmetry
about the origin and its phase spectrum displays odd symmetry about the origin. It is easy to show that
X(F) also displays conjugate symmetry about F = 0.5. Since it is periodic, we nd it convenient to plot just
one period of X(F) over the principal period (0.5 F 0.5) with conjugate symmetry about F = 0. We
may even plot X(F) over (0 F 1) with conjugate symmetry about F = 0.5. These ideas are illustrated
in Figure 5.1.
Similarly, X() shows conjugate symmetry about the origin = 0, and about = , and may also
be plotted only over its principal period ( ) (with conjugate symmetry about = 0) or over
(0 2) (with conjugate symmetry about = ). The principal period for each form is illustrated in
Figure 5.2.
c Ashok Ambardar, September 1, 2003
178 Chapter 5 Frequency Domain Analysis
The DTFT is periodic, with period F = 1
0.5 0.5
Magnitude
1 1 1.5
F
Phase
F 1
1
1.5
0.5
0.5
F = 0.5 Odd symmetry about
F = 0.5 Even symmetry about
F
0.5 1
Magnitude
F
0.5 1
Phase
F = 0 Even symmetry about
Odd symmetry about F = 0
F
0.5 0.5
Magnitude
F 0.5
0.5
Phase
Figure 5.1 Illustrating the symmetry in the DTFT spectrum of real signals
X
p
() -form for
= 2 F
= 2 F

0

0
2
Principal period
X
p
(F) F-form for
0.5 0.5 0
F
0 0.5
F
1
Principal period
Figure 5.2 Various ways of plotting the DTFT spectrum
REVIEW PANEL 5.2
The DTFT Is Always Periodic and Shows Conjugate Symmetry for Real Signals
F-form: X(F) is periodic with unit period and conjugate symmetric about F = 0 and F = 0.5.
-form: X() is periodic with period 2 and conjugate symmetric about = 0 and = .
Plotting: It is sucient to plot the DTFT over one period (0.5 F 0.5 or ).
DRILL PROBLEM 5.1
(a) If X(F)[
F=0.2
= 2e
j/3
, nd X(F)[
F=0.2
, X(F)[
F=0.8
.
(b) If X(F)[
F=0.2
= 2e
j/3
, nd X(F)[
F=3.2
, X(F)[
F=5.8
.
Answers: (a) 2e
j/3
, 2e
j/3
(b) 2e
j/3
, 2e
j/3
If we know the spectrum of a real signal over the half-period 0 < F 0.5 (or 0 < ), we can use
conjugate symmetry about the origin to obtain the spectrum for one full period and replicate this to generate
the periodic spectrum. For this reason, the highest useful frequency present in the spectrum is F = 0.5 or
= . For sampled signals, this also corresponds to an analog frequency of 0.5S Hz (half the sampling
frequency).
If a real signal x[n] is even symmetric about n = 0, its DTFT X(F) is always real and even symmetric
in F, and has the form X(F) = A(F). If a real signal x[n] is odd symmetric, X(F) is always imaginary
and odd symmetric in F, and has the form X(F) = jA(F). A real symmetric signal is called a linear-
phase signal. The real quantity A(F) (which may not always be positive for all frequencies) is called the
amplitude spectrum. For a linear-phase signal, it is much more convenient to plot the amplitude (not
c Ashok Ambardar, September 1, 2003
5.1 The DTFT from the z-Transform 179
magnitude) spectrum because its phase is then just zero or just 90

(and uncomplicated by phase jumps


of 180

whenever A(F) changes sign).


REVIEW PANEL 5.3
The DTFT of Real Symmetric Signals Is Purely Real or Purely Imaginary
Even symmetric x[n] dtft A(F) or X() (real)
Odd symmetric x[n] dtft jA(F) or jA() (imaginary)
Plotting: It is convenient to plot just the amplitude spectrum A(F) or A().
DRILL PROBLEM 5.2
(a) Let x[n] =

3, 1, 3, 4. Is X(F) periodic? Is X(F) real? Is X(F) conjugate symmetric?


(b) Let x[n] = 2, 1,

3, 1, 2. Is X(F) periodic? Is X(F) real? Is X(F) conjugate symmetric?


(c) Let x[n] = 2, 1,

0, 1, 2. Is X(F) periodic? Is X(F) real? Is X(F) conjugate symmetric?


(d) Let x[n] = 2, j,

0, j, 2. Is X(F) periodic? Is X(F) real? Is X(F) conjugate symmetric?


Answers: (a) Yes, no, yes (b) Yes, yes, yes (c) Yes, no, yes (d) Yes, no, no
5.1.2 Some DTFT Pairs
The DTFT is a summation. It always exists if the summand x[n]e
j2nF
= x[n]e
jn
is absolutely integrable.
Since [e
j2nF
[ = [e
jn
[ = 1, the DTFT of absolutely summable signals always exists. The DTFT of
sinusoids, steps, or constants, which are not absolutely summable, includes impulses (of the continuous
kind). The DTFT of signals that grow exponentially or faster does not exist. A list of DTFT pairs appears
in Table 5.1. Both the F-form and -form are shown. The transforms are identical (with = 2F), except
for the extra factors of 2 in the impulsive terms of the -form (in the transform of the constant, the step
function, and the sinusoid). The reason for these extra factors is the scaling property of impulses that says
(F) = (/2) = 2().
REVIEW PANEL 5.4
Dierences Between the F-Form and -Form of the DTFT
If the DTFT contains no impulses: H(F) and H() are related by = 2F.
If the DTFT contains impulses: Replace (F) by 2() (and 2F by elsewhere) to get H().
EXAMPLE 5.1 (DTFT from the Dening Relation)
(a) The DTFT of x[n] = [n] follows immediately from the denition as
X(F) =

k=
x[k]e
j2kF
=

k=
[k]e
j2kF
= 1
(b) The DTFT of the sequence x[n] = 1, 0, 3, 2 also follows from the denition as
X(F) =

k=
x[k]e
j2kF
= 1 + 3e
j4F
2e
j6F
c Ashok Ambardar, September 1, 2003
180 Chapter 5 Frequency Domain Analysis
Table 5.1 Some Useful DTFT Pairs
Note: In all cases, we assume [[ < 1.
Entry Signal x[n] The F-Form: X(F) The -Form: X()
1 [n] 1 1
2
n
u[n], [ < 1[
1
1 e
j2F
1
1 e
j
3 n
n
u[n], [ < 1[
e
j2F
(1 e
j2F
)
2
e
j
(1 e
j
)
2
4 (n + 1)
n
u[n], [ < 1[
1
(1 e
j2F
)
2
1
(1 e
j
)
2
5
|n|
, [ < 1
1
2
1 2cos(2F) +
2
1
2
1 2cos +
2
6 1 (F) 2()
7 cos(2nF
0
) = cos(n
0
) 0.5[(F +F
0
) +(F F
0
)] [( +
0
) +(
0
)]
8 sin(2nF
0
) = sin(n
0
) j0.5[(F +F
0
) (F F
0
)] j[( +
0
) (
0
)]
9
2F
C
sinc(2nF
C
) =
sin(n
C
)
n
rect

F
2F
C

rect


2
C

10 u[n]
0.5(F) +
1
1 e
j2F
() +
1
1 e
j
In the -form, we have
X() =

k=
x[k]e
jk
= 1 + 3e
j2
2e
j3
For nite sequences, the DTFT can be written just by inspection. Each term is the product of a sample
value at index n and the exponential e
j2nF
(or e
jn
).
(c) The DTFT of the exponential signal x[n] =
n
u[n] follows from the denition and the closed form for
the resulting geometric series:
X(F) =

k=0

k
e
j2kF
=

k=0

e
j2F

k
=
1
1 e
j2F
, [[ < 1
The sum converges only if [e
j2F
[ < 1, or [[ < 1 (since [e
j2F
[ = 1). In the -form,
c Ashok Ambardar, September 1, 2003
5.1 The DTFT from the z-Transform 181
X() =

k=0

k
e
jk
=

k=0

e
j

k
=
1
1 e
j
, [[ < 1
(d) The signal x[n] = u[n] is a limiting form of
n
u[n] as 1 but must be handled with care, since u[n]
is not absolutely summable. In fact, X(F) also includes an impulse (now an impulse train due to the
periodic spectrum). Over the principal period,
X(F) =
1
1 e
j2F
+ 0.5(F) (F-form) X() =
1
1 e
j
+() (-form)
DRILL PROBLEM 5.3
(a) Let x[n] =

3, 1, 3, 4. Find X(F) and compute X(F) at F = 0.2.


(b) Let x[n] = 2,

3, 0, 1. Find X(F) and compute X(F) at F = 0 and F = 0.5.


(c) Let x[n] = 8(0.6)
n
u[n]. Find X(F) and compute X(F) at F = 0, 0.25, 0.5.
Answers: (a) 2.38e
j171

(b) 6, 2 (c) 20, 6.86e


j31

, 5
5.1.3 Relating the z-Transform and DTFT
The DTFT describes a signal as a sum of weighted harmonics, or complex exponentials. However, it cannot
handle exponentially growing signals. The z-transform overcomes these shortcomings by using exponentially
weighted harmonics in its denition. The z-transform may be viewed as a generalization of the DTFT to
complex frequencies.
For absolutely summable signals, the DTFT is simply the one-sided z-transform with z = e
j2F
. The
DTFT of signals that are not absolutely summable almost invariably contains impulses. However, for such
signals, the z-transform equals just the non-impulsive portion of the DTFT, with z = e
j2F
. In other words,
for absolutely summable signals we can always nd their z-transform from the DTFT, but we cannot always
nd their DTFT from the z-transform.
REVIEW PANEL 5.5
Relating the z-Transform and the DTFT
From X(z) to DTFT: If x[n] is absolutely summable, simply replace z by e
j2F
(or e
j
).
From DTFT to X(z): Delete impulsive terms in DTFT and replace e
j2F
(or e
j
) by z.
EXAMPLE 5.2 (The z-Transform and DTFT)
(a) The signal x[n] =
n
u[n], [[ < 1, is absolutely summable. Its DTFT equals X
p
(F) =
1
1 e
j2F
.
We can nd the z-transform of x[n] from its DTFT as X(z) =
1
1 z
1
=
z
z
. We can also nd
the DTFT from the z-transform by reversing the steps.
(b) The signal x[n] = u[n] is not absolutely summable. Its DTFT is X
p
(F) =
1
1 e
j2F
+ 0.5(F).
We can nd the z-transform of u[n] as the impulsive part in the DTFT, with e
j2F
= z, to give
X(z) =
1
1 z
1
=
z
z 1
. However, we cannot recover the DTFT from its z-transform in this case.
c Ashok Ambardar, September 1, 2003
182 Chapter 5 Frequency Domain Analysis
5.2 Properties of the DTFT
The properties of the DTFT are summarized in Table 5.2. The proofs of most of the properties follow from
the dening relations if we start with the basic transform pair x[n] dtft X(F).
Table 5.2 Properties of the DTFT
Property DT Signal Result (F-Form) Result (-Form)
Folding x[n] X(F) = X

(F) X() = X

()
Time shift x[n m] e
j2mF
X(F) e
jm
X()
Frequency shift e
j2nF
0
x[n] X(F F
0
) X(
0
)
Half-period shift (1)
n
x[n] X(F 0.5) X( )
Modulation cos(2nF
0
)x[n] 0.5[X(F +F
0
) +X(F F
0
)] 0.5[( +
0
) +X(
0
)]
Convolution x[n] y[n] X(F)Y (F) X()Y ()
Product x[n]y[n] X(F) (Y (F)
1
2
[X() (Y ()]
Times-n nx[n]
j
2
dX(F)
dF
j
dX()
d
Parsevals relation

k=
x
2
[k] =

1
[X(F)[
2
dF =
1
2

2
[X()[
2
d
Central ordinates x[0] =

1
X(F) dF =
1
2

2
X() d X(0) =

n=
x[n]
X(F)

F=0.5
= X()

=
=

n=
(1)
n
x[n]
5.2.1 Folding
With x[n] dtft X(F), the DTFT of the signal y[n] = x[n] may be written (using a change of
variable) as
Y (F) =

k=
x[k]e
j2kF
=
. .. .
m=k

m=
x[m]e
j2mF
= X(F) (5.9)
A folding of x[n] to x[n] results in a folding of X(F) to X(F). For real signals, X(F) = X

(F) implying
an identical magnitude spectrum and reversed phase.
REVIEW PANEL 5.6
Folding x[n] to x[n] Results in Folding X(F) to X(F)
The magnitude spectrum stays the same, and only the phase is reversed (changes sign).
c Ashok Ambardar, September 1, 2003
5.2 Properties of the DTFT 183
DRILL PROBLEM 5.4
(a) Let X(F) = 4 2e
j4F
. Find the DTFT of y[n] = x[n] and compute at F = 0.2.
(b) For a signal x[n], we nd X(F)[
F=0.2
= 2e
j/3
. Compute the DTFT of y[n] = x[n] at F = 0.2, 1.8.
Answers: (a) 4 2e
j4F
, 5.74e
j12

(b) 2e
j/3
, 2e
j/3
.
5.2.2 Time Shift of x[n]
With x[n] dtft X(F), the DTFT of the signal y[n] = x[n m] may be written (using a change of
variable) as
Y (F) =

k=
x[k m]e
j2kF
=
. .. .
l=km

l=
x[l]e
j2(l+m)F
= X(F)e
j2mF
(5.10)
A time shift of x[n] to x[n m] does not aect the magnitude spectrum. It augments the phase spectrum
by (F) = 2mF (or () = m), which varies linearly with frequency.
REVIEW PANEL 5.7
Time Shift Property of the DTFT
x[n m] dtft X(F)e
j2mF
or X()e
jm
A time delay by m adds a linear-phase component (2mF or m) to the phase.
DRILL PROBLEM 5.5
(a) Let X(F) = 4 2e
j4F
. If y[n] = x[n 2], compute Y (F) at F = 0.2, 0.3.
(b) If g[n] = h[n 2], nd the phase dierence

G(F)

H(F) at F = 0.2, 0.4.


Answers: (a) 4e
j4F
2e
j8F
, 5.74e
j84

, 3.88e
j43

(b) 144

, 216

.
5.2.3 Frequency Shift of X(F)
By duality, a frequency shift of X(F) to X(F F
0
) yields the signal x[n]e
j2nF
0
.
Half-period Frequency Shift
If X(F) is shifted by 0.5 to X(F 0.5), then x[n] changes to e
jn
= (1)
n
x[n]. Thus, samples of x[n] at
odd index values (n = 1, 3, 5, . . .) change sign.
REVIEW PANEL 5.8
Frequency Shift Property of the DTFT
x[n]e
j2nF
0
dtft X(F F
0
) or X(
0
) (
0
= 2F
0
)
(1)
n
x[n] dtft X(F 0.5) or X( )
DRILL PROBLEM 5.6
(a) Let X(F) = 4 2e
j4F
. If y[n] = (1)
n
x[n], compute Y (F) at F = 0.2, 0.4.
(b) Let X(F) = 4 2e
j4F
. If y[n] = (j)
n
x[n], compute Y (F) at F = 0.2, 0.4. [Hint: j = e
j/2
]
Answers: (a) 5.74e
j12

, 3.88e
j29

(b) 2.66e
j26

, 4.99e
j22

c Ashok Ambardar, September 1, 2003


184 Chapter 5 Frequency Domain Analysis
5.2.4 Modulation
Using the frequency-shift property and superposition gives the modulation property

e
j2nF0
+e
j2nF0
2

x[n] = cos(2nF
0
)x[n] dtft
X(F +F
0
) +X(F F
0
)
2
(5.11)
Modulation results in a spreading of the original spectrum.
REVIEW PANEL 5.9
Modulation by cos(2nF
0
): The DTFT Gets Halved, Centered at F
0
, and Added
cos(2nF
0
)x[n] dtft
X(F +F
0
) +X(F F
0
)
2
or
X( +
0
) +X(
0
)
2
(
0
= 2F
0
)
DRILL PROBLEM 5.7
(a) The central period of X(F) is dened by X(F) = 1, [F[ < 0.1 and zero elsewhere. Consider the signal
y[n] = x[n] cos(0.2n). Sketch Y (F) and evaluate at F = 0, 0.1, 0.3, 0.4.
(b) The central period of X(F) is dened by X(F) = 1, [F[ < 0.1 and zero elsewhere. Consider the signal
y[n] = x[n] cos(0.5n). Sketch Y (F) and evaluate at F = 0, 0.1, 0.3, 0.4.
(c) The central period of X(F) is dened by X(F) = 1, [F[ < 0.25 and zero elsewhere. Consider the
signal y[n] = x[n] cos(0.2n). Sketch Y (F) and evaluate at F = 0, 0.1, 0.3, 0.4.
Answers: (a) 0.5, 0.5, 0, 0 (b) 0, 0.5, 0.5, 0 (c) 1, 1, 0.5, 0
5.2.5 Convolution
The regular convolution of discrete-time signals results in the product of their DTFTs. This result follows
from the fact that the DTFT may be regarded as a polynomial in powers of e
j2F
and discrete convolution
corresponds to polynomial multiplication. If two discrete signals are multiplied together, the DTFT of the
product corresponds to the periodic convolution of the individual DTFTs. In other words, multiplication in
one domain corresponds to convolution in the other.
REVIEW PANEL 5.10
Multiplication in One Domain Corresponds to Convolution in the Other
x[n] h[n] dtft X(F)H(F) x[n]h[n] dtft X(F) (H(F) (the F-form)
x[n] h[n] dtft X()H() x[n]h[n] dtft
1
2
X() (H() (the -form)
DRILL PROBLEM 5.8
(a) Let X(F) = 4 2e
j4F
. If y[n] = x[n] x[n], compute Y (F) at F = 0.2, 0.4.
(b) Let X(F) = 4 2e
j4F
. If y[n] = x[n 2] x[n], compute Y (F) at F = 0.2, 0.4.
(c) The central period of X(F) is dened by X(F) = 1, [F[ < 0.2 and zero elsewhere. Consider the signal
y[n] = x
2
[n]. Sketch Y (F) and evaluate at F = 0, 0.1, 0.2, 0.3.
Answers: (a) 32.94e
j24

, 15.06e
j59

(b) 32.94e
j72

, 15.06e
j72

(c) 0.4, 0.3, 0.2, 0.1


c Ashok Ambardar, September 1, 2003
5.2 Properties of the DTFT 185
5.2.6 The times-n property:
With x[n] dtft X(F), dierentiation of the dening DTFT relation gives
dX(F)
dF
=

k=
(j2k)x[k]e
j2kF
(5.12)
The corresponding signal is (j2n)x[n], and thus the DTFT of y[n] = nx[n] is Y (F) =
j
2
d X(F)
dF
.
REVIEW PANEL 5.11
The Times-n Property: Multiply x[n] by n dtft Dierentiate the DTFT
nx[n] dtft
j
2
d X(F)
dF
or j
d X()
d
DRILL PROBLEM 5.9
(a) Let X(F) = 4 2e
j4F
. If y[n] = nx[n], nd Y (F).
(b) Let X(F) = 4 2e
j4F
. If y[n] = nx[n 2], nd Y (F).
(c) Let X(F) = 4 2e
j4F
. If y[n] = (n 2)x[n], nd Y (F).
(d) Let X(F) =
1
4 2e
j4F
. If y[n] = nx[n], nd Y (F).
Answers: (a) 4e
j4F
(b) 8e
j4F
8e
j8F
(c) 8 (d)
4e
j4F
(4 2e
j4F
)
2
5.2.7 Parsevals relation
The DTFT is an energy-conserving transform, and the signal energy may be found from either the signal
x[n] or from one period of its periodic magnitude spectrum [X(F)[ using Parsevals theorem

k=
x
2
[k] =

1/2
1/2
[X(F)[
2
dF =
1
2

[X()[
2
d (Parsevals relation) (5.13)
REVIEW PANEL 5.12
Parsevals Theorem: We can Find the Signal Energy from x[n] or Its Magnitude Spectrum

k=
x
2
[k] =

1/2
1/2
[X(F)[
2
dF =
1
2

[X()[
2
d
DRILL PROBLEM 5.10
(a) Let X(F) = 5, [F[ < 0.2 and zero elsewhere in the central period. Find its total signal energy and its
signal energy in the frequency range [F[ 0.15.
(b) Let X(F) =
6
4 2e
j2F
. Find its signal energy.
Answers: (a) 10, 7.5 (b) 3
c Ashok Ambardar, September 1, 2003
186 Chapter 5 Frequency Domain Analysis
5.2.8 Central ordinate theorems
The DTFT obeys the central ordinate relations (found by substituting F = 0 (or = 0) in the DTFT or
n = 0 in the IDTFT.
x[0] =

1/2
1/2
X(F) dF =
1
2

X() d X(0) =

n=
x[n] (central ordinates) (5.14)
With F = 0.5 (or = ), we also have the useful result
X(F)

F=0.5
= X()

=
=

n=
(1)
n
x[n] (5.15)
The central ordinate theorems allow us to nd the dc gain (at F = 0) and high frequency gain (at F = 0.5)
without having to formally evaluate the DTFT.
REVIEW PANEL 5.13
Central Ordinate Theorems
x[0] =

1/2
1/2
X(F)dF =
1
2

X()d X(0) =

n=
x[n] X(F)

F=0.5
= X()

=
=

n=
(1)
n
x[n]
DRILL PROBLEM 5.11
(a) Let X(F) = 5, [F[ < 0.2 and zero elsewhere in the central period. Find the value of x[n] at n = 0.
(b) Let x[n] = 9(0.8)
n
u[n]. What is the value of X(F) at F = 0 and F = 0.5.
(c) What is the dc gain and high-frequency gain of the lter described by h[n] =

1, 2, 3, 4.
Answers: (a) 2 (b) 45, 5 (c) 10, 2
EXAMPLE 5.3 (Some DTFT Pairs Using the Properties)
(a) The DTFT of x[n] = n
n
u[n], [ < 1[ may be found using the times-n property as
X(F) =
j
2
d
dF

1
1 e
j2F

=
e
j2F
(1 e
j2F
)
2
In the -form,
X() = j
d
d

1
1 e
j

=
e
j
(1 e
j
)
2
(b) The DTFT of the signal x[n] = (n + 1)
n
u[n] may be found if we write x[n] = n
n
u[n] +
n
u[n], and
use superposition, to give
X(F) =
e
j2F
(1 e
j2F
)
2
+
1
1 e
j2F
=
1
(1 e
j2F
)
2
In the -form,
X() =
e
j
(1 e
j
)
2
+
1
1 e
j
=
1
(1 e
j
)
2
c Ashok Ambardar, September 1, 2003
5.2 Properties of the DTFT 187
By the way, if we recognize that x[n] =
n
u[n]
n
u[n], we can also use the convolution property to
obtain the same result.
(c) To nd DTFT of the N-sample exponential pulse x[n] =
n
, 0 n < N, express it as x[n] =

n
(u[n] u[n N]) =
n
u[n]
N

nN
u[n N] and use the shifting property to get
X(F) =
1
1 e
j2F

N
e
j2FN
1 e
j2F
=
1 (e
j2F
)
N
1 e
j2F
In the -form,
X() =
1
1 e
j

N
e
jN
1 e
j
=
1 (e
j
)
N
1 e
j
(d) The DTFT of the two-sided decaying exponential x[n] =
|n|
, [[ < 1, may be found by rewriting this
signal as x[n] =
n
u[n] +
n
u[n] [n] and using the folding property to give
X(F) =
1
1 e
j2F
+
1
1 e
j2F
1
Simplication leads to the result
X(F) =
1
2
1 2cos(2F) +
2
or X() =
1
2
1 2cos +
2
(e) (Properties of the DTFT)
Find the DTFT of x[n] = 4(0.5)
n+3
u[n] and y[n] = n(0.4)
2n
u[n].
1. For x[n], we rewrite it as x[n] = 4(0.5)
3
(0.5)
n
u[n] to get
X(F) =
0.5
1 0.5e
j2F
or X() =
0.5
1 0.5e
j
2. For y[n], we rewrite it as y[n] = n(0.16)
n
u[n] to get
Y (F) =
0.16e
j2F
(1 0.16e
j2F
)
2
or Y () =
0.16e
j
(1 0.16e
j
)
2
(f ) (Properties of the DTFT)
Let x[n] dtft
4
2 e
j2F
= X(F).
Find the DTFT of y[n] = nx[n], c[n] = x[n], g[n] = x[n] x[n], and h[n] = (1)
n
x[n].
c Ashok Ambardar, September 1, 2003
188 Chapter 5 Frequency Domain Analysis
1. By the times-n property,
Y (F) =
j
2
d
dF
X(F) =
4(j/2)(j2e
j2F
)
(2 e
j2F
)
2
=
4e
j2F
(2 e
j2F
)
2
In the -form,
Y () = j
d
d
X() =
4(j/2)(j2e
j
)
(2 e
j
)
2
=
4e
j
(2 e
j
)
2
2. By the folding property,
C(F) = X(F) =
4
2 e
j2F
or C() = X() =
4
2 e
j
3. By the convolution property,
G(F) = X
2
(F) =
16
(2 e
j2F
)
2
or G() = X
2
() =
16
(2 e
j
)
2
4. By the modulation property,
H(F) = X(F 0.5) =
4
2 e
j2(F0.5)
=
4
2 +e
j2F
In the -form,
H() = X( ) =
4
2 e
j()
=
4
2 +e
j
(g) (Properties of the DTFT)
Let X(F) dtft (0.5)
n
u[n] = x[n]. Find the time signals corresponding to
Y (F) = X(F) (X(F) H(F) = X(F + 0.4) +X(F 0.4) G(F) = X
2
(F)
1. By the convolution property, y[n] = x
2
[n] = (0.25)
n
u[n].
2. By the modulation property, h[n] = 2 cos(2nF
0
)x[n] = 2(0.5)
n
cos(0.8n)u[n] (where F
0
= 0.4).
3. By the convolution property, g[n] = x[n] x[n] = (0.5)
n
u[n] (0.5)u[n] = (n + 1)(0.5)
n
u[n].
5.3 The DTFT of Discrete-Time Periodic Signals
There is a unique relationship between the description of signals in the time domain and their spectra in
the frequency domain. One useful result is that sampling in one domain results in a periodic extension
in the other and vice versa. If a time-domain signal is made periodic by replication, the transform of the
periodic signal is an impulse sampled version of the original transform divided by the replication period.
The frequency spacing of the impulses equals the reciprocal of the period. For example, consider a periodic
analog signal x(t) with period T whose one period is x
1
(t) with Fourier transform X
1
(f). When x
1
(t) is
replicated every T units to generate the periodic signal x(t), the Fourier transform X(f) of the periodic
signal x(t) becomes an impulse train of the form

X
1
(kf
0
)(f kf
0
). The frequency spacing f
0
of the
c Ashok Ambardar, September 1, 2003
5.3 The DTFT of Discrete-Time Periodic Signals 189
The frequency spacing $f_0$ of the impulses is given by $f_0 = \frac{1}{T}$, the reciprocal of the period. The impulse strengths are found by sampling $X_1(f)$ at intervals of $f_0 = \frac{1}{T}$ and dividing by the period $T$ to give $\frac{1}{T}X_1(kf_0)$. These impulse strengths $\frac{1}{T}X_1(kf_0)$ also define the Fourier series coefficients of the periodic signal $x(t)$.

Similarly, consider a discrete periodic signal $x[n]$ with period $N$ whose one period is $x_1[n],\ 0 \le n \le N-1$, with DTFT $X_1(F)$. When $x_1[n]$ is replicated every $N$ samples to generate the discrete periodic signal $x[n]$, the DTFT $X(F)$ of the periodic signal $x[n]$ becomes an impulse train of the form $\sum_k X_1(kF_0)\,\delta(F - kF_0)$. The frequency spacing $F_0$ of the impulses is given by $F_0 = \frac{1}{N}$, the reciprocal of the period. The impulse strengths are found by sampling $X_1(F)$ at intervals of $F_0 = \frac{1}{N}$ and dividing by the period $N$ to give $\frac{1}{N}X_1(kF_0)$. Since $X_1(F)$ is periodic, so too is $X(F)$, and one period of $X(F)$ contains $N$ impulses described by
$$X(F) = \frac{1}{N}\sum_{k=0}^{N-1} X_1(kF_0)\,\delta(F - kF_0) \qquad \text{(over one period } 0 \le F < 1) \qquad (5.16)$$
By convention, one period is chosen to cover the range $0 \le F < 1$ (and not the central period) in order to correspond to the summation index range $0 \le n \le N-1$. Note that $X(F)$ exhibits conjugate symmetry about $k = 0$ (corresponding to $F = 0$ or $\Omega = 0$) and $k = \frac{N}{2}$ (corresponding to $F = 0.5$ or $\Omega = \pi$).
REVIEW PANEL 5.14
The DTFT of x[n] (Period N) Is a Periodic Impulse Train (N Impulses per Period)
If $x[n]$ is periodic with period $N$ and its one period is $x_1[n] \;\overset{\rm dtft}{\longleftrightarrow}\; X_1(F)$, then
$$x[n] \;\overset{\rm dtft}{\longleftrightarrow}\; X(F) = \frac{1}{N}\sum_{k=0}^{N-1} X_1(kF_0)\,\delta(F - kF_0) \qquad (N \text{ impulses per period } 0 \le F < 1)$$
EXAMPLE 5.4 (DTFT of Periodic Signals)
Let one period of $x_p[n]$ be given by $x_1[n] = \{3, 2, 1, 2\}$, with $N = 4$.
Then, $X_1(F) = 3 + 2e^{-j2\pi F} + e^{-j4\pi F} + 2e^{-j6\pi F}$.
The four samples of $X_1(kF_0)$ over $0 \le k \le 3$ are
$$X_1(kF_0) = 3 + 2e^{-j2\pi k/4} + e^{-j4\pi k/4} + 2e^{-j6\pi k/4} = \{8, 2, 0, 2\}$$
The DTFT of the periodic signal $x_p[n]$ for one period $0 \le F < 1$ is thus
$$X(F) = \frac{1}{4}\sum_{k=0}^{3} X_1(kF_0)\,\delta\!\left(F - \frac{k}{4}\right) = 2\delta(F) + 0.5\,\delta\!\left(F - \frac{1}{4}\right) + 0.5\,\delta\!\left(F - \frac{3}{4}\right) \qquad \text{(over one period } 0 \le F < 1)$$
The signal $x_p[n]$ and its DTFT $X(F)$ are shown in Figure E5.4. Note that the DTFT is conjugate symmetric about $F = 0$ (or $k = 0$) and $F = 0.5$ (or $k = 0.5N = 2$).
Figure E5.4 The periodic signal $x_p[n]$ for Example 5.4 and its DTFT $X_p(F)$
DRILL PROBLEM 5.12
Find the DTFT of a periodic signal $x[n]$ over $0 \le F < 1$ if its one period is given by
(a) $x_1[n] = \{4, 0, 0, 0\}$  (b) $x_1[n] = \{4, 4, 4, 4\}$  (c) $x_1[n] = \{4, 0, 4, 0\}$
Answers: (a) $\sum_{k=0}^{3}\delta(F - 0.25k)$  (b) $4\delta(F)$  (c) $2\delta(F) + 2\delta(F - 0.5)$
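Since the impulse strengths are just $\frac{1}{N}X_1(kF_0)$, they can be checked with an FFT of one period divided by $N$. A minimal sketch (assuming NumPy) for the sequences of Example 5.4 and Drill Problem 5.12:

import numpy as np

for x1 in ([3, 2, 1, 2], [4, 0, 0, 0], [4, 4, 4, 4], [4, 0, 4, 0]):
    N = len(x1)
    strengths = np.fft.fft(x1) / N        # (1/N) X_1(k F_0), k = 0, ..., N-1
    print(x1, np.round(strengths, 4))

# {3,2,1,2} -> [2, 0.5, 0, 0.5]: impulses 2 d(F) + 0.5 d(F-1/4) + 0.5 d(F-3/4)
# {4,0,0,0} -> [1,1,1,1];  {4,4,4,4} -> [4,0,0,0];  {4,0,4,0} -> [2,0,2,0]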
5.3.1 The DFS and DFT
The discrete Fourier transform (DFT) of the signal $x_1[n]$ is defined as the sampled version of its DTFT
$$X_{\rm DFT}[k] = X_1(F)\Big|_{F = kF_0 = k/N} = \sum_{n=0}^{N-1} x_1[n]\,e^{-j2\pi nk/N}, \qquad k = 0, 1, 2, \ldots, N-1 \qquad (5.17)$$
The $N$-sample sequence that results when we divide the DFT by $N$ defines the discrete Fourier series (DFS) coefficients of the periodic signal $x[n]$ whose one period is $x_1[n]$:
$$X_{\rm DFS}[k] = \frac{1}{N}\,X_1(F)\Big|_{F = kF_0 = k/N} = \frac{1}{N}\sum_{n=0}^{N-1} x_1[n]\,e^{-j2\pi nk/N}, \qquad k = 0, 1, 2, \ldots, N-1 \qquad (5.18)$$
Note that the DFT and DFS differ only by a factor of $N$, with $X_{\rm DFT}[k] = N X_{\rm DFS}[k]$. The DFS result may be linked to the Fourier series coefficients of a periodic signal with period $T$:
$$X[k] = \frac{1}{T}\int_{0}^{T} x_1(t)\,e^{-j2\pi kt/T}\,dt$$
Here, $x_1(t)$ describes one period of the periodic signal. The discrete version of this result using a sampling interval of $t_s$ allows us to set $t = nt_s$, $dt = t_s$, and $x_1(t) = x_1(nt_s) = x_1[n]$. Assuming $N$ samples per period ($T = Nt_s$), we replace the integral over one period (from $t = 0$ to $t = T$) by a summation over $N$ samples (from $n = 0$ to $n = N-1$) to get the required expression for the DFS.
REVIEW PANEL 5.15
The DFT Is a Sampled Version of the DTFT
If $x_1[n]$ is an $N$-sample sequence, its $N$-sample DFT is $X_{\rm DFT}[k] = X_1(F)\big|_{F = k/N},\ k = 0, 1, 2, \ldots, N-1$.
The DFS coefficients of a periodic signal $x[n]$ whose one period is $x_1[n]$ are $X_{\rm DFS}[k] = \frac{1}{N}X_{\rm DFT}[k]$.
EXAMPLE 5.5 (The DFT, DFS and DTFT)
Let $x_1[n] = \{1, 0, 2, 0, 3\}$ describe one period of a periodic signal $x[n]$.
The DTFT of $x_1[n]$ is $X_1(F) = 1 + 2e^{-j4\pi F} + 3e^{-j8\pi F}$.
The period of $x[n]$ is $N = 5$. The discrete Fourier transform (DFT) of $x_1[n]$ is
$$X_{\rm DFT}[k] = X_1(F)\Big|_{F = k/N} = \left(1 + 2e^{-j4\pi F} + 3e^{-j8\pi F}\right)\Big|_{F = k/5}, \qquad k = 0, 1, \ldots, 4$$
We find that
$$X_{\rm DFT}[k] = \{6,\ 0.3090 + j1.6776,\ -0.8090 + j3.6655,\ -0.8090 - j3.6655,\ 0.3090 - j1.6776\}$$
The discrete Fourier series (DFS) coefficients of $x[n]$ are given by $X_{\rm DFS}[k] = \frac{1}{N}X_{\rm DFT}[k]$. We get
$$X_{\rm DFS}[k] = \{1.2,\ 0.0618 + j0.3355,\ -0.1618 + j0.7331,\ -0.1618 - j0.7331,\ 0.0618 - j0.3355\}$$
The DTFT $X(F)$ of the periodic signal $x[n]$, for one period $0 \le F < 1$, is then
$$X(F) = \frac{1}{5}\sum_{k=0}^{4} X_1\!\left(\frac{k}{5}\right)\delta\!\left(F - \frac{k}{5}\right) \qquad \text{(over one period } 0 \le F < 1)$$
Note that each of the transforms $X_{\rm DFS}[k]$, $X_{\rm DFT}[k]$, and $X(F)$ is conjugate symmetric about both $k = 0$ and $k = 0.5N = 2.5$.
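A quick numerical check of these values is a one-liner, since the FFT computes exactly the sum in Eq. (5.17). A sketch assuming NumPy:

import numpy as np

x1 = np.array([1, 0, 2, 0, 3])            # one period of x[n], N = 5
N = len(x1)

X_dft = np.fft.fft(x1)                    # samples of the DTFT at F = k/N
X_dfs = X_dft / N                         # discrete Fourier series coefficients

print(np.round(X_dft, 4))                 # [6, 0.309+1.6776j, -0.809+3.6655j, ...]
print(np.round(X_dfs, 4))                 # [1.2, 0.0618+0.3355j, -0.1618+0.7331j, ...]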
DRILL PROBLEM 5.13
Find the DFT of the following signals:
(a) $x[n] = \{4, 0, 0, 0\}$  (b) $x[n] = \{4, 4, 4, 4\}$  (c) $x[n] = \{4, 4, 0, 0\}$
Answers: (a) $\{4, 4, 4, 4\}$  (b) $\{16, 0, 0, 0\}$  (c) $\{8,\ 4 - j4,\ 0,\ 4 + j4\}$
5.4 The Inverse DTFT
For a finite sequence whose DTFT $X(F)$ is a polynomial in $e^{-j2\pi F}$ (or $e^{-j\Omega}$), the inverse DTFT $x[n]$ corresponds to the sequence of the polynomial coefficients. In many other situations, $X(F)$ can be expressed as a ratio of polynomials in $e^{-j2\pi F}$ (or $e^{-j\Omega}$). This allows us to split $X(F)$ into a sum of simpler terms (using partial fraction expansion) and find the inverse transform of these simpler terms through a table look-up. Only in special cases, or as a last resort if all else fails, do we need to fall back on the brute-force method of finding the inverse DTFT from the defining relation. Some examples follow.
EXAMPLE 5.6 (The Inverse DTFT)
(a) Let $X(F) = 1 + 3e^{-j4\pi F} - 2e^{-j6\pi F}$.
Its inverse DTFT is simply $x[n] = \delta[n] + 3\delta[n-2] - 2\delta[n-3]$, or $x[n] = \{1, 0, 3, -2\}$.
(b) Let $X(\Omega) = \dfrac{2e^{-j\Omega}}{1 - 0.25e^{-j2\Omega}}$. We factor the denominator and use partial fractions to get
$$X(\Omega) = \frac{2e^{-j\Omega}}{(1 - 0.5e^{-j\Omega})(1 + 0.5e^{-j\Omega})} = \frac{2}{1 - 0.5e^{-j\Omega}} - \frac{2}{1 + 0.5e^{-j\Omega}}$$
We then find $x[n] = 2(0.5)^n u[n] - 2(-0.5)^n u[n]$.
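The partial-fraction result can be verified numerically by comparing the truncated DTFT of the claimed $x[n]$ with $X(\Omega)$ on a frequency grid. A sketch assuming NumPy (the 60-sample truncation is an arbitrary but ample choice, since $(0.5)^n$ decays quickly):

import numpy as np

W = np.linspace(-np.pi, np.pi, 400)                  # digital frequency Omega
X = 2 * np.exp(-1j * W) / (1 - 0.25 * np.exp(-2j * W))

n = np.arange(60)
x = 2 * 0.5**n - 2 * (-0.5)**n                       # claimed inverse DTFT
X_check = x @ np.exp(-1j * np.outer(n, W))           # truncated DTFT of x[n]

print(np.max(np.abs(X - X_check)))                   # negligibly small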
(c) An ideal differentiator is described by $H(F) = j2\pi F,\ |F| < 0.5$. Its magnitude and phase spectrum are shown in Figure E5.6C.
Figure E5.6C DTFT of the ideal differentiator for Example 5.6(c): magnitude $2\pi|F|$ and phase $\pm\pi/2$ radians over $|F| < 0.5$
To find its inverse $h[n]$, we note that $h[0] = 0$ since $H(F)$ is odd. For $n \ne 0$, we also use the odd symmetry of $H(F)$ in the inverse DTFT to obtain
$$h[n] = \int_{-1/2}^{1/2} j2\pi F\,[\cos(2\pi nF) + j\sin(2\pi nF)]\,dF = -4\pi\int_{0}^{1/2} F\sin(2\pi nF)\,dF$$
Using tables and simplifying the result, we get
$$h[n] = -4\pi\left[\frac{\sin(2\pi nF) - 2\pi nF\cos(2\pi nF)}{(2\pi n)^2}\right]_{0}^{1/2} = \frac{\cos(\pi n)}{n}$$
Since $H(F)$ is odd and imaginary, $h[n]$ is odd symmetric, as expected.
(d) A Hilbert transformer shifts the phase of a signal by $-90^\circ$. Its magnitude and phase spectrum are shown in Figure E5.6D.
Figure E5.6D DTFT of the Hilbert transformer for Example 5.6(d): unit magnitude and phase $\mp\pi/2$ radians for $F \gtrless 0$ over $|F| < 0.5$
Its DTFT is given by $H(F) = -j\,{\rm sgn}(F),\ |F| < 0.5$. This is imaginary and odd. To find its inverse $h[n]$, we note that $h[0] = 0$ and
$$h[n] = \int_{-1/2}^{1/2} -j\,{\rm sgn}(F)\,[\cos(2\pi nF) + j\sin(2\pi nF)]\,dF = 2\int_{0}^{1/2}\sin(2\pi nF)\,dF = \frac{1 - \cos(\pi n)}{\pi n}$$
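Both closed forms can be checked by evaluating the inverse-DTFT integral numerically. The sketch below (assuming NumPy; the 20000-point midpoint rule is an arbitrary choice) compares the numerical integrals with $\cos(\pi n)/n$ and $[1 - \cos(\pi n)]/(\pi n)$ for a few values of $n$.

import numpy as np

M = 20000
F = np.linspace(-0.5, 0.5, M, endpoint=False) + 0.5 / M   # midpoint grid over one period
dF = 1.0 / M
n = np.arange(1, 6)

cases = [
    ("differentiator", 1j * 2 * np.pi * F, np.cos(np.pi * n) / n),
    ("Hilbert",        -1j * np.sign(F),   (1 - np.cos(np.pi * n)) / (np.pi * n)),
]
for name, H, h_closed in cases:
    # h[n] = integral over one period of H(F) exp(j 2 pi F n) dF  (midpoint rule)
    h_num = np.array([np.sum(H * np.exp(2j * np.pi * F * k)) * dF for k in n])
    print(name, np.round(h_num.real, 4), np.round(h_closed, 4))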
DRILL PROBLEM 5.14
(a) Let $X(F) = 4 - 2e^{-j4\pi F}$. Find $x[n]$.
(b) Let $X(F) = (4 - 2e^{-j4\pi F})^2$. Find $x[n]$.
(c) Let $X(F) = 2,\ |F| < 0.2$. Find $x[n]$.
Answers: (a) $\{4, 0, -2\}$  (b) $\{16, 0, -16, 0, 4\}$  (c) $0.8\,{\rm sinc}(0.4n)$
5.5 The Frequency Response
The time-domain response y[n] of a relaxed discrete-time LTI system with impulse response h[n] to the input
x[n] is given by the convolution
$$y[n] = x[n] \star h[n] \qquad (5.19)$$
Since convolution transforms to multiplication, transformation results in
$$Y(F) = X(F)H(F) \qquad \text{or} \qquad Y(\Omega) = X(\Omega)H(\Omega) \qquad (5.20)$$
The frequency response, or steady-state transfer function, then equals
$$H(F) = \frac{Y(F)}{X(F)} \qquad \text{or} \qquad H(\Omega) = \frac{Y(\Omega)}{X(\Omega)} \qquad (5.21)$$
We emphasize that the frequency response is defined only for a relaxed LTI system, either as the ratio $Y(F)/X(F)$ (or $Y(\Omega)/X(\Omega)$) of the DTFT of the output $y[n]$ to that of the input $x[n]$, or as the DTFT of the impulse response $h[n]$. The equivalence between the time-domain and frequency-domain operations is illustrated in Figure 5.3.
Figure 5.3 The equivalence between the time domain and the frequency domain: the output is the convolution $y[n] = x[n] \star h[n]$ of the input and the impulse response in the time domain, and the product $Y(F) = X(F)H(F)$ of the input transform and the transfer function in the frequency domain
A relaxed LTI system may also be described by the difference equation
$$y[n] + A_1 y[n-1] + \cdots + A_N y[n-N] = B_0 x[n] + B_1 x[n-1] + \cdots + B_M x[n-M] \qquad (5.22)$$
The DTFT results in the transfer function, or frequency response, given by
$$H(F) = \frac{Y(F)}{X(F)} = \frac{B_0 + B_1 e^{-j2\pi F} + \cdots + B_M e^{-j2\pi MF}}{1 + A_1 e^{-j2\pi F} + \cdots + A_N e^{-j2\pi NF}} \qquad (5.23)$$
In the $\Omega$-form,
$$H(\Omega) = \frac{Y(\Omega)}{X(\Omega)} = \frac{B_0 + B_1 e^{-j\Omega} + \cdots + B_M e^{-jM\Omega}}{1 + A_1 e^{-j\Omega} + \cdots + A_N e^{-jN\Omega}} \qquad (5.24)$$
The transfer function is thus a ratio of polynomials in $e^{-j2\pi F}$ (or $e^{-j\Omega}$).
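Evaluating Eq. (5.23) on a frequency grid is easy to code directly from the coefficients. Below is a small helper sketch, assuming NumPy; the function name `freq_response` and the coefficient ordering are illustrative choices (scipy.signal.freqz offers a library routine for the same computation, with frequency in radians/sample).

import numpy as np

def freq_response(B, A, F):
    """H(F) from difference-equation coefficients B = [B0,...,BM], A = [1, A1,...,AN],
    evaluated at digital frequencies F (cycles/sample), as in Eq. (5.23)."""
    F = np.atleast_1d(np.asarray(F, dtype=float))
    kmax = max(len(B), len(A))
    E = np.exp(-2j * np.pi * np.outer(F, np.arange(kmax)))   # e^{-j 2 pi F k}
    num = E[:, :len(B)] @ np.asarray(B, dtype=complex)
    den = E[:, :len(A)] @ np.asarray(A, dtype=complex)
    return num / den

# Example: y[n] - y[n-2] = 2x[n] - 4x[n-1]  ->  H(F) = (2 - 4e^{-j2piF}) / (1 - e^{-j4piF})
print(freq_response([2, -4], [1, 0, -1], [0.1, 0.2, 0.3]))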
REVIEW PANEL 5.16
The Frequency Response Is a Frequency-Domain Description of a Relaxed LTI System
The transfer function H(F) equals the transform of the system impulse response h[n].
It is also defined as $H(F) = Y(F)/X(F)$, the ratio of the transformed output and transformed input.
EXAMPLE 5.7 (Frequency Response of a Recursive Filter)
Let $y[n] = \alpha y[n-1] + x[n],\ 0 < \alpha < 1$. To find its frequency response, we transform the difference equation and find $H(z)$ as
$$H(z) = \frac{Y(z)}{X(z)} = \frac{1}{1 - \alpha z^{-1}}$$
Next, we let $z = e^{j2\pi F}$ and evaluate $H(F)$ as
$$H(F) = \frac{1}{1 - \alpha e^{-j2\pi F}} = \frac{1}{1 - \alpha\cos(2\pi F) + j\alpha\sin(2\pi F)}$$
The magnitude and phase of $H(F)$ then equal
$$|H(F)| = \left[\frac{1}{1 - 2\alpha\cos(2\pi F) + \alpha^2}\right]^{1/2} \qquad\quad \phi(F) = -\tan^{-1}\left[\frac{\alpha\sin(2\pi F)}{1 - \alpha\cos(2\pi F)}\right]$$
Typical plots of the magnitude and phase are shown in Figure E5.7 over the principal period $(-0.5, 0.5)$. Note the conjugate symmetry (even symmetry of $|H(F)|$ and odd symmetry of $\phi(F)$).

Figure E5.7 Magnitude and phase of $H(F)$ for Example 5.7
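These expressions are easy to verify against a direct evaluation of $H(F) = 1/(1 - \alpha e^{-j2\pi F})$ on a frequency grid. A sketch assuming NumPy, with $\alpha = 0.5$ as an arbitrary choice:

import numpy as np

alpha = 0.5                                     # any 0 < alpha < 1
F = np.linspace(-0.5, 0.5, 11)                  # principal period
H = 1.0 / (1 - alpha * np.exp(-2j * np.pi * F))

mag_formula = 1.0 / np.sqrt(1 - 2 * alpha * np.cos(2 * np.pi * F) + alpha**2)
phase_formula = -np.arctan2(alpha * np.sin(2 * np.pi * F),
                            1 - alpha * np.cos(2 * np.pi * F))

print(np.max(np.abs(np.abs(H) - mag_formula)))      # ~0: magnitude matches
print(np.max(np.abs(np.angle(H) - phase_formula)))  # ~0: phase matches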
DRILL PROBLEM 5.15
(a) Find the frequency response $H(F)$ of the filter described by $h[n] = \{4, 0, -2\}$.
(b) Find the frequency response $H(F)$ of the filter described by $h[n] = 2(0.5)^n u[n] - \delta[n]$.
(c) Find the frequency response $H(F)$ of the filter described by $y[n] - y[n-2] = 2x[n] - 4x[n-1]$.
Answers: (a) $4 - 2e^{-j4\pi F}$  (b) $\dfrac{1 + 0.5e^{-j2\pi F}}{1 - 0.5e^{-j2\pi F}}$  (c) $\dfrac{2 - 4e^{-j2\pi F}}{1 - e^{-j4\pi F}}$
5.6 System Analysis Using the DTFT
In concept, the DTFT may be used to find the zero-state response (ZSR) of relaxed LTI systems to arbitrary inputs. All it requires is the system transfer function $H(F)$ and the DTFT $X(F)$ of the input $x[n]$. We first find the response as $Y(F) = H(F)X(F)$ in the frequency domain, and then obtain the time-domain response $y[n]$ by using the inverse DTFT. We emphasize, once again, that the DTFT cannot handle the effect of initial conditions.
EXAMPLE 5.8 (The DTFT in System Analysis)
(a) Consider a system described by $y[n] = \alpha y[n-1] + x[n]$. To find the response of this system to the input $\alpha^n u[n]$, we first set up the transfer function as $H(\Omega) = \dfrac{1}{1 - \alpha e^{-j\Omega}}$. Next, we find the DTFT of $x[n]$ as $X(\Omega) = \dfrac{1}{1 - \alpha e^{-j\Omega}}$ and multiply the two to obtain
$$Y(\Omega) = H(\Omega)X(\Omega) = \frac{1}{(1 - \alpha e^{-j\Omega})^2}$$
Its inverse transform gives the response as $y[n] = (n+1)\alpha^n u[n]$. We could, of course, also use convolution to obtain $y[n] = h[n] \star x[n]$ directly in the time domain.
(b) Consider the system described by $y[n] = 0.5y[n-1] + x[n]$. Its response to the step $x[n] = 4u[n]$ is found using $Y(F) = H(F)X(F)$:
$$Y(F) = H(F)X(F) = \frac{1}{1 - 0.5e^{-j2\pi F}}\left[\frac{4}{1 - e^{-j2\pi F}} + 2\delta(F)\right]$$
We separate terms and use the product property of impulses to get
$$Y(F) = \frac{4}{(1 - 0.5e^{-j2\pi F})(1 - e^{-j2\pi F})} + 4\delta(F)$$
Splitting the first term into partial fractions, we obtain
$$Y(F) = \frac{-4}{1 - 0.5e^{-j2\pi F}} + \left[\frac{8}{1 - e^{-j2\pi F}} + 4\delta(F)\right]$$
The response $y[n]$ then equals $y[n] = -4(0.5)^n u[n] + 8u[n]$. The first term represents the transient response, and the second term describes the steady-state response, which can be found much more easily, as we now show.
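The result can be confirmed by simply iterating the difference equation from rest and comparing with the closed form. A minimal sketch assuming NumPy:

import numpy as np

N = 20
x = 4 * np.ones(N)                     # x[n] = 4 u[n]
y = np.zeros(N)
for k in range(N):                     # y[n] = 0.5 y[n-1] + x[n], relaxed system
    y[k] = (0.5 * y[k-1] if k else 0.0) + x[k]

n = np.arange(N)
y_closed = 8 - 4 * 0.5**n              # -4(0.5)^n u[n] + 8 u[n]
print(np.max(np.abs(y - y_closed)))    # ~0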
DRILL PROBLEM 5.16
(a) Let $H(F) = 4 - 2e^{-j4\pi F}$. If the input is $x[n] = 4\delta[n] + 2\delta[n-2]$, find the response $y[n]$.
(b) A filter is described by $h[n] = 2(0.5)^n u[n] - \delta[n]$. The input is $X(F) = 1 - 0.5e^{-j2\pi F}$. Find $y[n]$.
(c) A filter is described by $y[n] - 0.5y[n-1] = 2x[n]$. The input is $x[n] = 2(0.4)^n u[n]$. Find $y[n]$.
Answers: (a) $16\delta[n] - 4\delta[n-4]$  (b) $\delta[n] + 0.5\delta[n-1]$  (c) $[20(0.5)^n - 16(0.4)^n]\,u[n]$
5.6.1 The Steady-State Response to Discrete-Time Harmonics
The DTFT is much better suited to finding the steady-state response to discrete-time harmonics. Since everlasting harmonics are eigensignals of discrete-time linear systems, the response is simply a harmonic at the input frequency whose magnitude and phase are changed by the system function $H(F)$. We evaluate $H(F)$ at the input frequency, multiply its magnitude by the input magnitude to obtain the output magnitude, and add its phase to the input phase to obtain the output phase. The steady-state response is useful primarily for stable systems, for which the natural response does indeed decay with time.
The response of an LTI system to a sinusoid (or harmonic) is called the steady-state response and is a sinusoid (or harmonic) at the input frequency.
If the input is $x[n] = A\cos(2\pi F_0 n + \theta)$, the steady-state response is $y_{\rm ss}[n] = AH_0\cos(2\pi F_0 n + \theta + \phi_0)$, where $H_0$ and $\phi_0$ are the gain and phase of the frequency response $H(F)$ evaluated at the input frequency $F = F_0$. If the input consists of harmonics at different frequencies, we use superposition of the individual time-domain responses.
We can also find the steady-state component from the total response (using z-transforms), but this defeats the whole purpose if all we are after is the steady-state component.
REVIEW PANEL 5.17
Finding the Steady-State Response of an LTI System to a Sinusoidal Input
Input: $x[n] = A\cos(2\pi nF_0 + \theta)$   Transfer function: Evaluate $H(F)$ at $F = F_0$ as $H_0\angle\phi_0$.
Steady-state output: $y_{\rm ss}[n] = AH_0\cos(2\pi nF_0 + \theta + \phi_0)$
EXAMPLE 5.9 (The DTFT and Steady-State Response)
(a) Consider a system described by $y[n] = 0.5y[n-1] + x[n]$. We find its steady-state response to the sinusoidal input $x[n] = 10\cos(0.5\pi n + 60^\circ)$. The transfer function $H(F)$ is given by
$$H(F) = \frac{1}{1 - 0.5e^{-j2\pi F}}$$
We evaluate $H(F)$ at the input frequency, $F = 0.25$:
$$H(F)\Big|_{F=0.25} = \frac{1}{1 - 0.5e^{-j\pi/2}} = \frac{1}{1 + 0.5j} = 0.4\sqrt{5}\,\angle{-26.6^\circ} = 0.8944\,\angle{-26.6^\circ}$$
The steady-state response then equals
$$y_{\rm ss}[n] = 10(0.4\sqrt{5})\cos(0.5\pi n + 60^\circ - 26.6^\circ) = 8.9443\cos(0.5\pi n + 33.4^\circ)$$
Note that if the input were $x[n] = 10\cos(0.5\pi n + 60^\circ)\,u[n]$ (switched on at $n = 0$), the steady-state component would still be identical to what we calculated, but the total response would differ and require a different method (such as z-transforms) to obtain.
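Simulating the difference equation confirms that, after the transient dies out, the output settles to this sinusoid. A sketch assuming NumPy (the 400-sample run length and the 50-sample settling allowance are arbitrary choices):

import numpy as np

F0, A, phi = 0.25, 10.0, np.radians(60)             # x[n] = 10 cos(0.5 pi n + 60 deg)
n = np.arange(400)
x = A * np.cos(2 * np.pi * F0 * n + phi)

y = np.zeros_like(x)
for k in range(len(n)):                             # y[n] = 0.5 y[n-1] + x[n]
    y[k] = (0.5 * y[k-1] if k else 0.0) + x[k]

H0 = 1.0 / (1 - 0.5 * np.exp(-2j * np.pi * F0))     # H(F) at the input frequency
y_ss = A * np.abs(H0) * np.cos(2 * np.pi * F0 * n + phi + np.angle(H0))

print(np.max(np.abs(y[50:] - y_ss[50:])))           # tiny once the transient has decayed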
(b) Consider a system described by $h[n] = (0.8)^n u[n]$. We find its steady-state response to the step input $x[n] = 4u[n]$. The transfer function $H(F)$ is given by
$$H(F) = \frac{1}{1 - 0.8e^{-j2\pi F}}$$
We evaluate $H(F)$ at the input frequency $F = 0$ (corresponding to dc):
$$H(F)\Big|_{F=0} = \frac{1}{1 - 0.8} = 5$$
The steady-state response is then $y_{\rm ss}[n] = (5)(4) = 20$.
(c) Let $H(z) = \dfrac{2z - 1}{z^2 + 0.5z + 0.5}$. We find its steady-state response to $x[n] = 6u[n]$.
With $z = e^{j2\pi F}$, we obtain the frequency response $H(F)$ as
$$H(F) = \frac{2e^{j2\pi F} - 1}{e^{j4\pi F} + 0.5e^{j2\pi F} + 0.5}$$
Since the input is a constant for $n \ge 0$, the input frequency is $F = 0$.
At this frequency, $H(F)\big|_{F=0} = 0.5$. Then, $y_{\rm ss}[n] = (6)(0.5) = 3$.
(d) Design a 3-point FIR filter with impulse response $h[n] = \{\alpha, \beta, \alpha\}$ (with the middle sample at $n = 0$) that completely blocks the frequency $F = \frac{1}{3}$ and passes the frequency $F = 0.125$ with unit gain. What is the dc gain of this filter?
The filter transfer function is $H(F) = \alpha e^{j2\pi F} + \beta + \alpha e^{-j2\pi F} = \beta + 2\alpha\cos(2\pi F)$.
From the information given, we have
$$H(\tfrac{1}{3}) = 0 = \beta + 2\alpha\cos(\tfrac{2\pi}{3}) = \beta - \alpha \qquad\quad H(0.125) = 1 = \beta + 2\alpha\cos(\tfrac{2\pi}{8}) = \beta + \sqrt{2}\,\alpha$$
This gives $\alpha = \beta = 0.4142$ and $h[n] = \{0.4142, 0.4142, 0.4142\}$.
The dc gain of this filter is $H(0) = \sum h[n] = 3(0.4142) = 1.2426$.
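The design is easy to verify by evaluating $H(F) = \beta + 2\alpha\cos(2\pi F)$ at the three frequencies of interest. A minimal sketch assuming NumPy:

import numpy as np

alpha = beta = np.sqrt(2) - 1             # 0.4142..., solves the two design equations

def H(F):
    # frequency response of the 3-point filter {alpha, beta, alpha} centred at n = 0
    return beta + 2 * alpha * np.cos(2 * np.pi * F)

print(H(1/3), H(0.125), H(0))             # ~0, ~1, ~1.2426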
DRILL PROBLEM 5.17
(a) Let $H(F) = 4 - 2e^{-j4\pi F}$. If the input is $x[n] = 4\cos(0.4\pi n)$, what is the response $y[n]$?
(b) A filter is described by $h[n] = 2(0.5)^n u[n] - \delta[n]$. The input is $x[n] = 1 + \cos(0.5\pi n)$. Find $y[n]$.
(c) A filter is described by $y[n] = x[n] + \alpha x[n-1] + \beta x[n-2]$. Choose the values of $\alpha$ and $\beta$ such that the input $x[n] = 1 + 4\cos(\pi n)$ results in the output $y[n] = 4$.
Answers: (a) $22.96\cos(0.4\pi n + 12^\circ)$  (b) $3 + \cos(0.5\pi n - 53^\circ)$  (c) $\alpha = 2,\ \beta = 1$
5.7 Connections
A relaxed LTI system may be described by its difference equation, its impulse response, its transfer function, its frequency response, its pole-zero plot, or even its realization. Depending on what is required, one form may be better suited than others and, given one form, we should be able to obtain the others using time-domain methods and/or frequency-domain transformations. The connections are summarized below:
1. Given the transfer function $H(z)$, we can use it directly to generate a pole-zero plot. We can also use it to find the frequency response $H(F)$ by the substitution $z = e^{j2\pi F}$. The frequency response will allow us to sketch the gain and phase. The inverse z-transform of $H(z)$ leads directly to the impulse response $h[n]$. Finally,