
Signals & Systems

Index

1. Basics of Signals & Systems
2. L.T.I. Systems
3. Laplace Transform
4. Fourier Series & Fourier Transform
5. Z Transform & Sampling Theorem
Basics of Signals & Systems

Properties of Signals

A signal can be classified as periodic or aperiodic; discrete-time or continuous-time; discrete- or continuous-valued; or as a power or energy signal. Each of these terms is defined below. In addition, the signal-to-noise ratio of a signal corrupted by noise is defined.

Periodic / Aperiodic:

A periodic signal repeats itself at regular intervals. In general, any signal x(t) for
which

x(t) = x(t+T)

for all t is said to be periodic.

The fundamental period of the signal is the smallest positive value of T for which the above equation is satisfied. If a signal is not periodic, then it is aperiodic.

Symmetric / Asymmetric:
hr
There are two types of signal symmetry: odd and even. A signal x(t) has odd symmetry if and only if x(-t) = -x(t) for all t. It has even symmetry if and only if x(-t) = x(t).

Continuous and Discrete Signals and Systems



A continuous signal is a mathematical function of an independent variable that ranges over the real numbers. Signals are required to be uniquely defined everywhere except for a finite number of points.


• A continuous-time signal is one which is defined for all values of time. A continuous-time signal does not need to be continuous (in the mathematical sense) at all points in time. A continuous-time signal contains values for all real numbers along the X-axis. It is denoted by x(t).
• Basically, signals are detectable quantities which are used to convey some information about time-varying physical phenomena. Some examples of signals are human speech, temperature, pressure, and stock prices.
• Electrical signals, normally expressed in the form of voltage or current waveforms, are some of the easiest signals to generate and process.

Example: A rectangular wave is discontinuous at several points, but it is a continuous-time signal.

Discrete / Continuous-Time Signals:

A continuous-time signal is defined for all values of t. A discrete-time signal is only defined for discrete values of t = ..., t-1, t0, t1, ..., tn, tn+1, ... It is uncommon for the spacing between tn and tn+1 to change with n. The spacing is most often some constant value referred to as the sampling interval (sampling period),

Ts = tn+1 - tn.

It is convenient to express discrete-time signals as x(nTs) = x[n].

That is, if x(t) is a continuous-time signal, then x[n] can be considered as the nth sample of x(t).

Sampling of a continuous-time signal x(t) to yield the discrete-time signal x[n] is an important step in the process of digitizing a signal.
Energy and Power Signal:

When the strength of a signal is measured, it is usually the signal power or signal
energy that is of interest.

The signal power of x(t) is defined as

Px = lim (T→∞) (1/2T) ∫ from -T to T |x(t)|² dt

and the signal energy as

Ex = ∫ from -∞ to ∞ |x(t)|² dt

A signal for which Px is finite and non-zero is known as a power signal.
A signal for which Ex is finite and non-zero is known as an energy signal.
Px is also known as the mean-square value of the signal.
Signal power is often expressed in the units of decibels (dB).
• The decibel is defined as

P(dB) = 10 log10(Px / P0)

• where P0 is a reference power level, usually equal to one squared SI unit of the signal.
• For example, if the signal is a voltage, then P0 is equal to one square volt (1 V²).

• A signal can be an energy signal or a power signal, but it cannot be both. Also, a signal can be neither an energy signal nor a power signal.
• As an example, the sinusoidal test signal of amplitude A,

x(t)=Asin(ωt)

has energy Ex that tends to infinity and power

Px = A²/2

or in decibels: 10 log10(A²/2) = 20 log10(A) - 3 dB.

The signal is thus a power signal.
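As a quick numerical check, here is a minimal Python sketch (amplitude and frequency are assumed example values) confirming that the mean-square value of A sin(ωt) approaches A²/2:

import numpy as np

# Sketch: average power of x(t) = A*sin(w*t) over many periods ~ A**2 / 2.
A = 2.0
w = 2 * np.pi * 50                  # assumed angular frequency (50 Hz)
t = np.linspace(0, 1.0, 1_000_000)  # 1 s of the signal, finely sampled
x = A * np.sin(w * t)

Px = np.mean(x**2)                  # mean-square value
print(Px, A**2 / 2)                 # ~2.0 vs 2.0
print(10 * np.log10(Px))            # in dB: 20*log10(A) - 3 ~ 3.01 dB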

Signal to Noise Ratio:

Any measurement of a signal necessarily contains some random noise in addition to the signal. In the case of additive noise, the measurement is

x(t)=s(t)+n(t)

where s(t) is the signal component and n(t) is the noise component.

The signal-to-noise ratio is defined as

SNR = Ps / Pn

or, in decibels,

SNR(dB) = 10 log10(Ps / Pn)

where Ps is the power of the signal component and Pn is the power of the noise component.
The signal to noise ratio is an indication of how much noise is contained in a
measurement.
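A minimal sketch of estimating the SNR of a simulated measurement x(t) = s(t) + n(t); the signal and noise levels below are illustrative assumptions:

import numpy as np

# Sketch: SNR of a noisy measurement x = s + n.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100_000)
s = np.sin(2 * np.pi * 100 * t)          # signal component (power 1/2)
n = 0.1 * rng.standard_normal(t.size)    # additive noise (power ~0.01)
x = s + n                                # the measurement

Ps, Pn = np.mean(s**2), np.mean(n**2)
print(Ps / Pn)                           # SNR as a power ratio, ~50
print(10 * np.log10(Ps / Pn))            # SNR in dB, ~17 dB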

Standard Continuous Time Signals


• Impulse Signal

x(t) = A δ(t), where δ(t) = ∞ at t = 0 and δ(t) = 0 for t ≠ 0,

and the impulse has unit area:

∫ from -∞ to ∞ δ(t) dt = 1

When A = 1 (unit impulse area), x(t) is the unit impulse signal δ(t).
• Step Signal

x(t) = A u(t), where u(t) = 1 for t ≥ 0 and u(t) = 0 for t < 0.

When A = 1 the signal is the unit step signal u(t).

• Ramp Signal

x(t) = A t for t ≥ 0, and x(t) = 0 for t < 0.

Unit Ramp Signal (A = 1): r(t) = t for t ≥ 0, and 0 otherwise.

• Parabolic Signal

x(t) = A t²/2 for t ≥ 0, and x(t) = 0 for t < 0.

Unit Parabolic Signal (A = 1): x(t) = t²/2 for t ≥ 0, and 0 otherwise.

• Unit Pulse Signal

Π(t) = 1 for |t| ≤ 1/2, and 0 otherwise.
Sinusoidal Signal

• Co-sinusoidal Signal:

x(t) = A cos(ω0t + ϕ)

where ω0 is the angular frequency in rad/sec, f0 is the frequency in cycles/sec or Hz, and T is the time period in seconds, with ω0 = 2πf0 = 2π/T.

When ϕ = 0, x(t) = A cos(ω0t). When ϕ is positive, the waveform is shifted to the left (advanced); when ϕ is negative, it is shifted to the right (delayed).

• Sinusoidal Signal:

x(t) = A sin(ω0t + ϕ)

where ω0 is the angular frequency in rad/sec, f0 is the frequency in cycles/sec or Hz, and T is the time period in seconds.

When ϕ = 0, x(t) = A sin(ω0t). Again, a positive ϕ advances the waveform and a negative ϕ delays it.

Exponential Signal:

• Real Exponential Signal

x(t) = A e^(bt), where A and b are real. The signal grows with t for b > 0, decays for b < 0, and is constant for b = 0.

• Complex Exponential Signal

x(t) = A e^(jω0t)

The complex exponential signal can be represented in a complex plane by a rotating vector, which rotates with a constant angular velocity of ω0 rad/sec.
• Exponentially Rising/Decaying Sinusoidal Signal

x(t) = A e^(±at) sin(ω0t), with a > 0 (rising for +a, decaying for -a).

• Triangular Pulse Signal

Δ(t/τ) = 1 - 2|t|/τ for |t| ≤ τ/2, and 0 otherwise.

• Signum Signal

sgn(t) = +1 for t > 0, -1 for t < 0, and 0 at t = 0.

• Sinc Signal

sinc(t) = sin(πt)/(πt)

• Gaussian Signal

x(t) = A e^(-at²), a > 0.

Important points:

• The sinusoidal and complex exponential signals are always periodic.


• The sum of two periodic signals is also periodic if the ratio of their
fundamental periods is a rational number.
• Ideally, an impulse signal is a signal with infinite magnitude and zero
duration.
• Practically, an impulse signal is a signal with large magnitude and short
duration.
Classification of Continuous Time Signal: The continuous time signal can be
classified as

1. Deterministic and Non-deterministic Signals:


o The signal that can be completely specified by a mathematical equation is called a deterministic signal. The step, ramp, exponential, and sinusoidal signals are examples of deterministic signals.
o The signal whose characteristics are random in nature is called a non-deterministic signal. Noise signals from sources such as electronic amplifiers and oscillators are examples of non-deterministic signals.

2. Periodic and Non-periodic Signals:

o A periodic signal will have a definite pattern that repeats again and again over a certain period of time:

x(t+T) = x(t)

o A signal that does not repeat in this way is non-periodic (aperiodic).

3. Symmetric (even) and Anti-symmetric (odd) Signals

When a signal exhibits symmetry with respect to t = 0, then it is called an even signal:

x(-t) = x(t)

When a signal exhibits anti-symmetry with respect to t = 0, then it is called an odd signal:

x(-t) = -x(t)

Let x(t) = xe(t) + xo(t), where

xe(t) = (1/2)[x(t) + x(-t)] is the even part of x(t), and

xo(t) = (1/2)[x(t) - x(-t)] is the odd part of x(t).
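A short numerical sketch of this decomposition, using x(t) = exp(t) as an assumed example (its even and odd parts are known to be cosh(t) and sinh(t)):

import numpy as np

# Sketch: even/odd decomposition of a sampled signal on a symmetric axis.
t = np.linspace(-1, 1, 201)          # time axis symmetric about t = 0
x = np.exp(t)                        # example signal, neither even nor odd

xe = (x + x[::-1]) / 2               # x(-t) is x reversed on this axis
xo = (x - x[::-1]) / 2

print(np.allclose(x, xe + xo))       # True: the parts sum back to x
print(np.allclose(xe, np.cosh(t)),   # even part of exp(t) is cosh(t)
      np.allclose(xo, np.sinh(t)))   # odd part of exp(t) is sinh(t)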

Discrete-Time Signals
jk

The discrete signal is a function of a discrete independent variable. In a discrete-time signal, the independent variable (time) takes only discrete values. A digital signal is the same as a discrete signal except that the magnitude of the signal is also quantized. Basically, discrete-time signals can be obtained by sampling a continuous-time signal. A discrete-time signal is denoted as x(n) or x[n].
Standard Discrete Time Signals

• Digital Impulse Signal or Unit Sample Sequence

Impulse signal: δ[n] = 1 for n = 0, and δ[n] = 0 for n ≠ 0.

• Unit Step Signal

u[n] = 1 for n ≥ 0, and u[n] = 0 for n < 0.

• Ramp Signal

Ramp signal: r[n] = n for n ≥ 0, and r[n] = 0 for n < 0.

• Exponential Signal

Exponential signal: x[n] = aⁿ (often restricted to n ≥ 0 as x[n] = aⁿ u[n]).

• Discrete-Time Sinusoidal Signal

x[n] = A sin(ω0n + ϕ)

A discrete-time sinusoid is periodic only if its frequency f0 = ω0/2π is a rational number.
Discrete-time sinusoids whose frequencies are separated by an integer multiple of 2π are identical.
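A brief sketch generating these standard sequences with NumPy (the range of n and the base a = 0.8 are arbitrary choices):

import numpy as np

# Sketch: the standard discrete-time signals on n = -5..5.
n = np.arange(-5, 6)

delta = (n == 0).astype(float)        # unit sample: 1 at n = 0, else 0
u     = (n >= 0).astype(float)        # unit step
r     = n * (n >= 0)                  # unit ramp: n for n >= 0, else 0
expo  = 0.8**n * (n >= 0)             # causal exponential a**n * u[n]
sinus = np.sin(0.25 * np.pi * n)      # w0 = pi/4, f0 = 1/8 rational -> periodic

print(delta, u, r, sep="\n")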

Operations in Continuous Time Signals:

Periodic & Non-Periodic Signals:


• A signal is a periodic signal if it completes a pattern within a measurable time frame, called a period, and repeats that pattern over identical subsequent periods.
• The period is the smallest value of T satisfying g(t + T) = g(t) for all t. The
period is defined so because if g(t + T) = g(t) for all t, it can be verified
that g(t + T') = g(t) for all t where T' = 2T, 3T, 4T, ... In essence, it's the
smallest amount of time it takes for the function to repeat itself. If the
period of a function is finite, the function is called "periodic".
• Functions that never repeat themselves have an infinite period, and are
known as "aperiodic functions".
Even & Odd Signals:

A function is even if it is symmetric about the y-axis, while a signal is odd if it is anti-symmetric about the y-axis.

Even Signal: f(x) = f(-x)

Odd Signal: f(x) = -f(-x)


Note: Some functions are neither even nor odd. Any such function f(x) can be expressed as the sum of an even function and an odd function.

Invertibility and Inverse Systems:

A system is invertible if distinct inputs result in distinct outputs. As shown in the figure for the continuous-time case, if a system is invertible, then an inverse system exists that, when cascaded with the original system, produces an output w(t) equal to the input x(t) of the first system.

An example of an invertible continuous-time system is y(t) = 2x(t),

for which the inverse system is w(t) = (1/2) y(t).

Causal System:

A system is causal if the output depends only on the input at the present time and in the past. Such systems are often referred to as non-anticipative, as the system output does not anticipate future values of the input. Consequently, if two inputs to a causal system are identical up to some point in time t0, then the corresponding outputs must also be equal up to this same time.

y1(t) = 2x(t) + x(t-1) + [x(t)]² ⇒ Causal system

y2(t) = 2x(t) + x(t-1) + x(t+2) ⇒ Non-causal system

Homogeneity (Scaling):

A system is said to be homogeneous if, for any input signal x(t) and any scalar a, the response to the scaled input a·x(t) is the equally scaled output a·y(t); that is, when the input signal is scaled, the output signal is scaled by the same factor.
hr
Time-Shifting / Time Reversal / Time Scaling:

Time-Shifting:

Time shifting can be understood as shifting the signal in time. When a constant t0 is added to the time argument, x(t + t0), we obtain the advanced signal; when we subtract it, x(t - t0), we get the delayed signal.

Time Scaling:

Due to scaling in time, the signal x(at) may shrink (|a| > 1) or stretch (|a| < 1), depending on the numerical value of the scaling factor a.

Time Inversion:

Time inversion refers to flipping the signal about the y-axis: y(t) = x(-t).
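The three operations can be illustrated with a short sketch; the decaying-exponential example signal is an assumption:

import numpy as np

# Sketch: time shifting, scaling and inversion of x(t) = exp(-t)*u(t).
def x(t):
    return np.exp(-t) * (t >= 0)

t = np.linspace(-5, 5, 1001)

delayed  = x(t - 2)      # delay by 2: the edge at t = 0 moves to t = 2
advanced = x(t + 2)      # advance by 2: the edge moves to t = -2
scaled   = x(2 * t)      # a = 2 > 1: the signal shrinks (compresses)
flipped  = x(-t)         # time inversion: mirror image about the y-axis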
L.T.I. Systems

Linear Time-Invariant System:

Linear time-invariant systems (LTI systems) are a class of systems used



in signals and systems that are both linear and time-invariant. Linear systems are
systems whose outputs for a linear combination of inputs are the same as a
linear combination of individual responses to those inputs. Time-invariant
systems are systems where the output does not depend on when input was
applied. These properties make LTI systems easy to represent and understand
graphically.

Linear systems have the property that the output is linearly related to the input.
Changing the input in a linear way will change the output in the same linear way.
So if the input x1(t) produces the output y1(t) and the input x2(t) produces the
output y2(t), then linear combinations of those inputs will produce linear
combinations of those outputs. The input {x1(t)+x2(t)} will produce the
output {y1(t)+y2(t)}. Further, the input {a1x1(t)+a2x2(t)} will produce the
output {a1y1(t)+a2y2(t)} for some constants a1 and a2.

In other words, for a system T acting on signals x1(t) and x2(t) with outputs y1(t) and y2(t):

Homogeneity Principle: T{a x1(t)} = a y1(t)

Superposition Principle: T{x1(t) + x2(t)} = y1(t) + y2(t)
Thus, the entirety of an LTI system can be described by a single function called
its impulse response. This function exists in the time domain of the system. For
an arbitrary input, the output of an LTI system is the convolution of the input
signal with the system's impulse response.

Conversely, the LTI system can also be described by its transfer function. The
transfer function is the Laplace transform of the impulse response. This

transformation changes the function from the time domain to the frequency
domain. This transformation is important because it turns differential
equations into algebraic equations, and turns convolution into multiplication.

In the frequency domain, the output is the product of the transfer function with
the transformed input. The shift from time to frequency is illustrated in the
following image:
Homogeneity, additivity, and shift-invariance may, at first, sound a bit abstract, but they are very useful. To characterize a shift-invariant linear system, we need to measure only one thing: the way the system responds to a unit impulse. This response is called the impulse response function of the system. Once we've measured this function, we can (in principle) predict how the system will respond to any other possible stimulus.
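A minimal sketch of this idea: the assumed impulse response below is a 3-point moving average, and the output for an arbitrary input is predicted purely by convolution:

import numpy as np

# Sketch: the impulse response h completely characterizes the LTI system.
h = np.ones(3) / 3                     # assumed impulse response (moving average)
x = np.array([1.0, 2.0, 3.0, 4.0])     # an arbitrary input

y = np.convolve(x, h)                  # response predicted from h alone
print(y)                               # [0.333 1. 2. 3. 2.333 1.333]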
hr
Introduction to Convolution

There is no single answer to what convolution "is". In "Signals and Systems" we probably first saw convolution in connection with linear time-invariant systems and the impulse response of such a system. This multitude of interpretations and applications is somewhat like the situation with the definite integral.

To pursue the analogy with the integral, in pretty much all applications of the integral there is a general method at work:

• Cut the problem into small pieces where it can be solved approximately.
• Sum up the solution for the pieces, and pass to a limit.

Convolution Theorem

F(g∗f)(s)=Fg(s)Ff(s)

• In other notation: If f(t)⇔ F(s) and g(t) ⇔ G(s) then (g∗f)(t)⇔ G(s)F(s)
• In words: Convolution in the time domain corresponds to multiplication in
the frequency domain.

• For the integral to make sense, i.e., to be able to evaluate g(t-x) at points outside the interval from 0 to 1, we would need to assume that g is periodic. That is not an issue in the present case, where we assume that f(t) and g(t) are defined for all t, so the factors in the integral are defined everywhere.
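A numerical sketch of the theorem using the DFT as a stand-in for the Fourier transform (both sequences are zero-padded so the DFT of the full linear convolution equals the product of the DFTs):

import numpy as np

# Sketch: F(g*f) = Fg . Ff, checked with the FFT.
f = np.array([1.0, 2.0, 3.0])
g = np.array([0.5, -1.0, 2.0, 1.0])

conv = np.convolve(f, g)                     # time-domain convolution
N = conv.size                                # = len(f) + len(g) - 1
lhs = np.fft.fft(conv)                       # transform of the convolution
rhs = np.fft.fft(f, N) * np.fft.fft(g, N)    # product of the transforms
print(np.allclose(lhs, rhs))                 # True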

Convolution in the Frequency Domain

The convolution theorem states that

F(g ∗ f) = Fg · Ff

The same relation carries over to the inverse Fourier transform:

F⁻¹(g ∗ f) = F⁻¹g · F⁻¹f

and, dually,

F(gf)(s) = (Fg ∗ Ff)(s)

• Multiplication in the time domain corresponds to convolution in the frequency domain.

By applying the duality formula

F(Ff)(s) = f(-s), or F(Ff) = f⁻ without the variable:

• To derive the identity F(gf) = Fg ∗ Ff, we assume for convenience h = Ff and k = Fg.

Then we can write F(gf) = k ∗ h.

• The one thing we know is how to take the Fourier transform of a convolution, so, in the present notation, F(k∗h) = (Fk)(Fh).

But now Fk = FFg = g⁻ and likewise Fh = FFf = f⁻.

So F(k∗h) = g⁻f⁻ = (gf)⁻, or gf = F(k∗h)⁻.

Now, finally, take the Fourier transform of both sides of this last equation and use the FF identity:

F(gf) = F(F(k∗h)⁻) = k∗h = Fg ∗ Ff

Note: Here we prove F(gf)(s) = (Fg∗Ff)(s) rather than F(g∗f) = (Ff)(Fg) because it seems more "natural" to multiply signals in the time domain and see what effect this has in the frequency domain. So why not work with F(gf) directly? Because if you write the integral for F(gf), there is nothing you can do with it to get toward Fg ∗ Ff.

There is also often a general method of convolutions: usually there is something that has to do with smoothing and averaging, understood broadly. You see this in both the continuous case and the discrete case.

Some of you who have seen convolution in earlier courses have probably heard the expression "flip and drag".

Meaning of Flip & Drag:

• Fix a value t. The graph of the function g(x-t) has the same shape as g(x) but shifted to the right by t. Then forming g(t - x) flips the graph (left-right) about the line x = t.
• If the most interesting or important features of g(x) are near x = 0, e.g., if it is sharply peaked there, then those features are shifted to x = t for the function g(t - x) (but there is the extra "flip" to keep in mind). Multiply f(x) and g(t - x) and integrate with respect to x.
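The description above translates directly into code; here is a deliberately literal (not efficient) flip-and-drag implementation of discrete convolution, checked against np.convolve:

import numpy as np

def flip_and_drag(f, g):
    # y[t] = sum_k f[k] * g[t - k]: indexing g at t - k walks g backwards
    # (the "flip") while t slides the flipped copy along (the "drag").
    n_out = len(f) + len(g) - 1
    y = np.zeros(n_out)
    for t in range(n_out):
        for k, fk in enumerate(f):
            j = t - k
            if 0 <= j < len(g):
                y[t] += fk * g[j]
    return y

f, g = np.array([1.0, 2.0, 3.0]), np.array([1.0, 1.0])
print(flip_and_drag(f, g))     # [1. 3. 5. 3.]
print(np.convolve(f, g))       # same result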

Averaging

• I prefer to think of the convolution operation as using one function to smooth and average the other. Say g is used to smooth f in g∗f. In many common applications g(x) is a positive function, concentrated near 0, with total area 1.
• Like a sharply peaked Gaussian, for example (stay tuned). Then g(t-x) is concentrated near t and still has area 1. For a fixed t, form the integral

(g ∗ f)(t) = ∫ g(t-x) f(x) dx

• The last expression is like a weighted average of the values of f(x) near x = t, weighted by the values of (the flipped and shifted) g. That is the averaging part of the convolution: computing the convolution g∗f at t replaces the value f(t) by a weighted average of the values of f near t.

Smoothing

• Again take the case of an averaging-type function g(t), as above. At a given value of t, (g ∗ f)(t) is a weighted average of values of f near t.
• Now move t a little to a point t0. Then (g∗f)(t0) is a weighted average of values of f near t0, which will include values of f that entered into the average near t.
• Thus the values of the convolutions (g∗f)(t) and (g∗f)(t0) will likely be closer to each other than are the values f(t) and f(t0). That is, (g∗f)(t) is "smoothing" f as t varies: there is less of a change between values of the convolution than between values of f.
hr
Other identities of Convolution

It’s not hard to combine the various rules we have and develop an algebra of
convolutions. Such identities can be of great use — it beats calculating integrals.
Here’s an assortment. (Lower and uppercase letters are Fourier pairs.)

• (f·g)∗(h·k)(t) ⇔ (F∗G)·(H∗K)(s)
• {(f(t)+g(t))·(h(t)+k(t))} ⇔ [(F+G)∗(H+K)](s)
• f(t)·(g∗h)(t) ⇔ F∗(G·H)(s)

Properties of Convolution

Here we explain the properties of convolution in both the continuous and discrete domains:

• Associative
• Commutative
• Distributive properties
• As an LTI system is completely specified by its impulse response, we look into the conditions on the impulse response for the LTI system to obey properties like memory, stability, invertibility, and causality.
• The convolution theorem in discrete and continuous time is as follows:

For a discrete system: y[n] = x[n] ∗ h[n] = Σ (k = -∞ to ∞) x[k] h[n-k]

For a continuous system: y(t) = x(t) ∗ h(t) = ∫ (-∞ to ∞) x(τ) h(t-τ) dτ
We shall now discuss the important properties of convolution for LTI systems.

1) Commutative Property:

• In Discrete time: x[n]∗h[n] ⇔ h[n]∗x[n]

Proof: Since we know that y[n] = x[n]∗h[n],

y[n] = Σ (k = -∞ to ∞) x[k] h[n-k]

Let us assume n - k = l, so

y[n] = Σ (l = -∞ to ∞) x[n-l] h[l] = h[n]∗x[n]

• So it is clear from the derived expression that x[n]∗h[n] ⇔ h[n]∗x[n].
• In Continuous time:

Proof: y(t) = ∫ (-∞ to ∞) x(τ) h(t-τ) dτ; substituting λ = t - τ gives

y(t) = ∫ (-∞ to ∞) x(t-λ) h(λ) dλ

So x(t)∗h(t) ⇔ h(t)∗x(t).

2. Distributive Property

By this property we conclude that convolution is distributive over addition.

• Discrete time: x[n]∗{α h1[n] + β h2[n]} = α{x[n]∗h1[n]} + β{x[n]∗h2[n]}, where α and β are constants.
• Continuous time: x(t)∗{α h1(t) + β h2(t)} = α{x(t)∗h1(t)} + β{x(t)∗h2(t)}, where α and β are constants.

3. Associative Property

• Discrete Time:

[x[n] ∗ h1[n]] ∗ h2[n] = x[n] ∗ (h1[n] ∗ h2[n])

• In Continuous Time:

[x(t) ∗ h1(t)] ∗ h2(t) = x(t) ∗ [h1(t) ∗ h2(t)]

If systems are connected in cascade, the overall impulse response of the system is:

h(t) = h1(t) ∗ h2(t)
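A quick numerical check (a sketch, with arbitrary finite sequences) of the commutative, distributive, and associative properties:

import numpy as np

x  = np.array([1.0, -2.0, 3.0])
h1 = np.array([0.5, 1.0, 0.0])
h2 = np.array([2.0, 0.0, -1.0])

# Commutative: x*h = h*x
print(np.allclose(np.convolve(x, h1), np.convolve(h1, x)))
# Distributive: x*(h1 + h2) = x*h1 + x*h2
print(np.allclose(np.convolve(x, h1 + h2),
                  np.convolve(x, h1) + np.convolve(x, h2)))
# Associative (cascade): (x*h1)*h2 = x*(h1*h2)
print(np.allclose(np.convolve(np.convolve(x, h1), h2),
                  np.convolve(x, np.convolve(h1, h2))))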

4. Invertibility
A system is said to be invertible if there exists an inverse system which, when connected in series with the original system, produces an output identical to the input:

(x ∗ δ)[n] = x[n]

(x ∗ h ∗ h⁻¹)[n] = x[n]

(h ∗ h⁻¹)[n] = δ[n]

5. Causality

• Discrete Time: h[n] = 0 for n < 0
• Continuous Time: h(t) = 0 for t < 0

6. Stability

• Discrete Time: Σ (n = -∞ to ∞) |h[n]| < ∞
• Continuous Time: ∫ (-∞ to ∞) |h(t)| dt < ∞

Laplace Transform
The Laplace transform is a very important tool for analysing electrical networks: it converts integro-differential equations into algebraic equations by taking a problem posed in the time domain into the (complex) frequency domain.

X(s) = ∫ (-∞ to ∞) x(t) e^(-st) dt

is also called the bilateral or two-sided Laplace transform. If x(t) is defined for t ≥ 0 [i.e., if x(t) is causal], then

X(s) = ∫ (0 to ∞) x(t) e^(-st) dt

is also called the unilateral or one-sided Laplace transform.

The following are advantages of adopting the Laplace transform:

• Analysis of general R-L-C circuits becomes easier.
• Natural and forced responses can be easily analyzed.
• The circuit can be analyzed with impedances.
• Stability analysis becomes straightforward.

Statement of Laplace Transform

• The direct Laplace transform, or the Laplace integral, of a function f(t) defined for 0 ≤ t < ∞ is an ordinary calculus integration problem for the given function f(t).
• Its Laplace transform is the function, denoted F(s) = L{f}(s), defined by

F(s) = ∫ (0 to ∞) f(t) e^(-st) dt

A causal signal x(t) is said to be of exponential order if a real, positive constant σ (where σ is the real part of s) exists such that the function e^(-σt)|x(t)| approaches zero as t approaches infinity.

For a causal signal, if lim e^(-σt)|x(t)| = 0 for σ > σc and lim e^(-σt)|x(t)| = ∞ for σ < σc, then σc is called the abscissa of convergence (where σc is a point on the real axis in the s-plane).

The set of values of s for which the integral

∫ (0 to ∞) |x(t) e^(-st)| dt

converges is called the Region of Convergence (ROC).

• For a causal signal, the ROC includes all points on the s-plane to the right of the abscissa of convergence.
• For an anti-causal signal, the ROC includes all points on the s-plane to the left of the abscissa of convergence.
• For a two-sided signal, the ROC includes all points on the s-plane in the region between the two abscissae of convergence.

Properties of the ROC

The region of convergence has the following properties

• ROC consists of strips parallel to the jω-axis in the s-plane.
• ROC does not contain any poles.
• If x(t) is a finite-duration signal (x(t) ≠ 0 only for t1 < t < t2) and is absolutely integrable, the ROC is the entire s-plane.
• If x(t) is a right-sided signal (x(t) = 0 for t < t0), the ROC is of the form Re{s} > max{Re{pk}}.
• If x(t) is a left-sided signal (x(t) = 0 for t > t0), the ROC is of the form Re{s} < min{Re{pk}}.
• If x(t) is a double-sided signal, the ROC is of the form p1 < Re{s} < p2.
• If the ROC includes the jω-axis, the Fourier transform exists and the system is stable.

Inverse Laplace Transform

• It is the process of finding x(t) given X(s):

x(t) = L⁻¹{X(s)}

There are two methods to obtain the inverse Laplace transform:

• Inversion using the complex line integral.
• Inversion using a standard Laplace transform table (after partial-fraction expansion).

Note A: Derivatives in t → multiplication by s.

Note B: Multiplication by t → derivatives in s.
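As a sketch of the partial-fraction route, SciPy's residue can expand a rational X(s); the transform below is an assumed example chosen so that the table inversion is easy:

import numpy as np
from scipy.signal import residue

# Sketch: X(s) = (s + 3)/(s^2 + 3s + 2) = 2/(s + 1) - 1/(s + 2).
b = [1, 3]                  # numerator coefficients
a = [1, 3, 2]               # denominator coefficients

r, p, k = residue(b, a)     # residues, poles, direct term
print(r, p)                 # residues 2 and -1 at poles -1 and -2 (order may vary)

# Each simple-pole term r/(s - p) inverts, from the table, to r*e^(p*t)*u(t),
# so here x(t) = 2*exp(-t) - exp(-2*t) for t >= 0:
t = np.linspace(0, 5, 6)
x = sum(ri * np.exp(pi * t) for ri, pi in zip(r, p)).real
print(x)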
Laplace Transform of Some Standard Signals

Some Standard Laplace Transform Pairs

δ(t) ⇔ 1 (ROC: entire s-plane)
u(t) ⇔ 1/s (ROC: Re{s} > 0)
t u(t) ⇔ 1/s² (ROC: Re{s} > 0)
tⁿ u(t) ⇔ n!/sⁿ⁺¹ (ROC: Re{s} > 0)
e^(-at) u(t) ⇔ 1/(s+a) (ROC: Re{s} > -a)
t e^(-at) u(t) ⇔ 1/(s+a)² (ROC: Re{s} > -a)
sin(ω0t) u(t) ⇔ ω0/(s²+ω0²) (ROC: Re{s} > 0)
cos(ω0t) u(t) ⇔ s/(s²+ω0²) (ROC: Re{s} > 0)
e^(-at) sin(ω0t) u(t) ⇔ ω0/((s+a)²+ω0²) (ROC: Re{s} > -a)
e^(-at) cos(ω0t) u(t) ⇔ (s+a)/((s+a)²+ω0²) (ROC: Re{s} > -a)

Properties of Laplace Transform

Linearity: a x1(t) + b x2(t) ⇔ a X1(s) + b X2(s)
Time shifting: x(t - t0) u(t - t0) ⇔ e^(-st0) X(s)
Shifting in s: e^(s0t) x(t) ⇔ X(s - s0)
Time scaling: x(at) ⇔ (1/|a|) X(s/a)
Differentiation in time: dx(t)/dt ⇔ s X(s) - x(0⁻)
Integration in time: ∫ (0 to t) x(τ) dτ ⇔ X(s)/s
Differentiation in s: t x(t) ⇔ -dX(s)/ds
Convolution: x1(t) ∗ x2(t) ⇔ X1(s) X2(s)
Initial value theorem: x(0⁺) = lim (s→∞) s X(s)
Final value theorem: lim (t→∞) x(t) = lim (s→0) s X(s)
Key Points

• The convolution theorem of the Laplace transform says that the Laplace transform of the convolution of two time-domain signals is given by the product of the Laplace transforms of the individual signals.
• The zeros and poles are the critical complex frequencies at which a rational function of s takes the two extreme values zero and infinity, respectively.

Fourier Series & Fourier Transform



Fourier Theorem

Any arbitrary continuous-time signal x(t), which is periodic with a fundamental period T0, can be expressed as a series of harmonically related sinusoids whose frequencies are multiples of the fundamental frequency or first harmonic. In other words, any periodic function x(t) can be represented by an infinite series of sinusoids called the Fourier series.

The periodic waveform is expressed in the form of Fourier series, while a non-
periodic waveform may be expressed by the Fourier transform.

The different forms of the Fourier series are given as follows.


(i) Trigonometric Fourier series

(ii) Complex exponential Fourier series

(iii) Polar or harmonic form Fourier series.

Trigonometric Fourier Series

Any arbitrary periodic function x(t) with fundamental period T0 can be expressed as follows.

x(t) = a0 + Σ (n = 1 to ∞) [an cos(nω0t) + bn sin(nω0t)]    ------(i)

This is called the trigonometric Fourier series representation of signal x(t). Here, ω0 = 2π/T0 is the fundamental frequency of x(t), and coefficients a0, an, and bn are referred to as the trigonometric continuous-time Fourier series (CTFS) coefficients. The coefficients are calculated as follows.

Fourier Series Coefficient

a0 = (1/T0) ∫ over T0 x(t) dt    ------(ii)

an = (2/T0) ∫ over T0 x(t) cos(nω0t) dt    ------(iii)

bn = (2/T0) ∫ over T0 x(t) sin(nω0t) dt    ------(iv)

From equation (ii), it is clear that coefficient a0 represents the average or mean
value (also referred to as the dc component) of signal x(t).

In these formulas, the limits of integration are either (-T0/2 to +T0/2) or (0 to T0). In general, the limits of integration can be over any one period of the signal, i.e., from t1 to t1 + T0, where t1 is any time instant.
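A minimal numerical sketch of these coefficient integrals, applied to an assumed test signal whose coefficients are known in advance:

import numpy as np

# Sketch: x(t) = 1 + cos(w0*t) + 0.5*sin(2*w0*t) has a0 = 1, a1 = 1, b2 = 0.5.
T0 = 2.0
w0 = 2 * np.pi / T0
t = np.linspace(0, T0, 10_001)
x = 1 + np.cos(w0 * t) + 0.5 * np.sin(2 * w0 * t)

a0 = np.trapz(x, t) / T0
an = [2 / T0 * np.trapz(x * np.cos(k * w0 * t), t) for k in (1, 2, 3)]
bn = [2 / T0 * np.trapz(x * np.sin(k * w0 * t), t) for k in (1, 2, 3)]
print(a0)        # ~1.0
print(an, bn)    # ~[1, 0, 0] and ~[0, 0.5, 0]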

Trigonometric Fourier Series Coefficients for Symmetrical Signals


If the periodic signal x(t) possesses some symmetry, then the continuous-time Fourier series (CTFS) coefficients become easier to obtain. The various types of symmetry and the corresponding simplification of the Fourier series coefficients are discussed below.

Consider the Fourier series representation of a periodic signal x(t) defined in equation (i).
Even Symmetry: x(t) = x(-t)

If x(t) is an even function, then the product x(t) sin(nω0t) is odd, and the integration in equation (iv) becomes zero. That is, bn = 0 for all n, and the Fourier series representation reduces to

x(t) = a0 + Σ (n = 1 to ∞) an cos(nω0t)

For example, a signal x(t) with even symmetry has bn = 0, and its Fourier series expansion contains cosine terms only.

The trigonometric Fourier series representation of even signals contains cosine terms only. The constant a0 may or may not be zero.

Odd Symmetry: x(t) = -x(-t)

If x(t) is an odd function, then the product x(t) cos(nω0t) is also odd, and the integration in equation (iii) becomes zero, i.e., an = 0 for all n. Also, a0 = 0, because an odd symmetric function has a zero average value. The Fourier series representation reduces to

x(t) = Σ (n = 1 to ∞) bn sin(nω0t)

For example, a signal x(t) with odd symmetry has an = a0 = 0, and its Fourier series expansion contains sine terms only.
Fourier Sine Series

The Fourier sine series can be written as

S(x) = b1 sin(x) + b2 sin(2x) + b3 sin(3x) + ··· = Σ (n = 1 to ∞) bn sin(nx)    ------(2)

• The sum S(x) will inherit all three properties:
• (i) Periodic: S(x + 2π) = S(x); (ii) Odd: S(-x) = -S(x); (iii) S(0) = S(π) = 0.
• Our first step is to compute from S(x) the number bk that multiplies sin(kx).

Suppose S(x) = Σ bn sin(nx). Multiply both sides by sin(kx) and integrate from 0 to π in the sine series in equation (2).

• On the right side, all integrals are zero except for n = k. Here the property of "orthogonality" dominates: the sines make 90° angles in function space when their inner products are integrals from 0 to π.
• Orthogonality for the sine series:

∫ (0 to π) sin(nx) sin(kx) dx = 0 for n ≠ k    ------(3)

• Zero comes quickly if we integrate cos(mx) from 0 to π: ∫ (0 to π) cos(mx) dx = 0 - 0 = 0 for any integer m ≠ 0.
• Writing sin(nx) sin(kx) as (1/2)cos((n-k)x) - (1/2)cos((n+k)x) and integrating cos(mx) with m = n-k and m = n+k proves the orthogonality of the sines.
• The exception is when n = k. Then we are integrating sin²(kx) = 1/2 - (1/2)cos(2kx):

∫ (0 to π) sin²(kx) dx = π/2, so bk = (2/π) ∫ (0 to π) S(x) sin(kx) dx    ------(4)
om
• Notice that S(x) sin(kx) is even (equal integrals from -π to 0 and from 0 to π).
• We will immediately consider the most important example of a Fourier sine series. S(x) is an odd square wave with SW(x) = 1 for 0 < x < π. It is an odd function with period 2π that vanishes at x = 0 and x = π.

Example:

As given above, find the Fourier sine coefficients bk of the square wave SW(x).

Solution:

For k = 1, 2, ..., use the formula for the sine coefficient with S(x) = 1 between 0 and π:

bk = (2/π) ∫ (0 to π) sin(kx) dx = (2/π)·(1 - cos(kπ))/k

• The even-numbered coefficients b2k are all zero because cos(2kπ) = cos(0) = 1.
• The odd-numbered coefficients bk = 4/πk decrease at the rate 1/k.
• We will see that same 1/k decay rate for all functions formed from smooth pieces and jumps. Putting those coefficients 4/πk and zero into the Fourier sine series for SW(x) gives

SW(x) = (4/π)[sin(x) + sin(3x)/3 + sin(5x)/5 + ···]

Fourier Cosine Series

The cosine series applies to even functions with C(-x) = C(x):

C(x) = a0 + Σ (n = 1 to ∞) an cos(nx)    -----(5)

Cosine has period 2π. The figure above shows two even functions: the repeating ramp RR(x) and the up-down train UD(x) of delta functions.

• That sawtooth ramp RR is the integral of the square wave. The delta functions in UD give the derivative of the square wave. RR and UD will be valuable examples, one smoother than SW and one less smooth.
• First, we find formulas for the cosine coefficients a0 and ak. The constant term a0 is the average value of the function C(x):

a0 = (1/π) ∫ (0 to π) C(x) dx = (1/2π) ∫ (-π to π) C(x) dx    -----(6)

• We will integrate the cosine series from 0 to π. On the right side, the integral of a0 is a0π (divide both sides by π). All other integrals are zero. Multiplying instead by cos(kx) and integrating gives

ak = (2/π) ∫ (0 to π) C(x) cos(kx) dx

• Again, the integral over a full period from -π to π (also 0 to 2π) is just doubled.

Orthogonality Relations of Fourier Series

From the Fourier series representation, we concluded that a periodic signal can be written as

F(x) = a0 + Σ (n = 1 to ∞) an cos(nx) + Σ (n = 1 to ∞) bn sin(nx)    -------(7)

The conditions of orthogonality are as follows:

∫ (-π to π) cos(nx) cos(kx) dx = 0 for n ≠ k, and = π for n = k ≠ 0

∫ (-π to π) sin(nx) sin(kx) dx = 0 for n ≠ k, and = π for n = k ≠ 0

∫ (-π to π) sin(nx) cos(kx) dx = 0 for all n, k

Proof of the orthogonality relations:

This is just a straightforward calculation using the periodicity of sine and cosine and either (or both) of these two methods: rewrite the products with the product-to-sum trigonometric identities, or write the sines and cosines in terms of complex exponentials.
Energy in Function = Energy in Coefficients

There is also another important equation (the energy identity) that comes from integrating (F(x))². When we square the Fourier series of F(x) and integrate from -π to π, all the "cross-terms" drop out. The only nonzero integrals come from 1², cos²(kx) and sin²(kx), multiplied by a0², ak², bk²:

∫ (-π to π) (F(x))² dx = 2π a0² + π Σ (k = 1 to ∞) (ak² + bk²)

• The energy in F(x) equals the energy in the coefficients.
• The left-hand side is like the length squared of a vector, except the vector is a function.
• The right-hand side comes from an infinitely long vector of a's and b's.
• The lengths are equal, which says that the Fourier transform from function to vector is like an orthogonal matrix.
• Normalized by the constants √(2π) and √π, we have an orthonormal basis in function space.

Complex Fourier Series


• In place of separate formulas for a0, ak, and bk, we may consider one formula for all the complex coefficients ck.
• The function F(x) is now allowed to be complex. The Discrete Fourier Transform will be much simpler when we use N complex exponentials for a vector.

The exponential form of the Fourier series of a periodic signal x(t) with period T0 is defined as

x(t) = Σ (n = -∞ to ∞) cn e^(jnω0t)

where ω0 is the fundamental frequency given as ω0 = 2π/T0. The exponential Fourier series coefficients cn are calculated from the following expression:

cn = (1/T0) ∫ over T0 x(t) e^(-jnω0t) dt

Note that c0 = a0 is still the average of F(x), because e⁰ = 1. The orthogonality of e^(inx) and e^(ikx) can be checked by integrating.
hr
Example:

Compute the Fourier series of f(t), where f(t) is the square wave with period 2π defined over one period by

f(t) = 1 for 0 < t < π, and f(t) = -1 for -π < t < 0.

The graph over several periods is shown below.

Solution:

Computing a Fourier series means computing its Fourier coefficients. We do this using the integral formulas for the coefficients given with Fourier's theorem in the previous note. For convenience, we repeat the theorem here:

an = (1/π) ∫ (-π to π) f(t) cos(nt) dt,  bn = (1/π) ∫ (-π to π) f(t) sin(nt) dt

By applying these formulas to the above waveform, we have to split the integrals into two pieces corresponding to where f(t) is +1 and where it is -1. Because f(t) is odd, an = 0 for all n.

Thus, for n ≠ 0:

bn = (2/nπ)(1 - cos(nπ)) = (2/nπ)(1 - (-1)ⁿ), i.e. bn = 4/nπ for odd n and bn = 0 for even n.

For n = 0: a0 = 0, since the waveform has zero average value.

We have used the simplification cos(nπ) = (-1)ⁿ to get a nice formula for the coefficients bn. This then gives the Fourier series for f(t):

f(t) = (4/π)[sin(t) + sin(3t)/3 + sin(5t)/5 + ···]
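A short sketch of the partial sums of this series; it also exhibits the Gibbs phenomenon (the peak overshoot near the jumps approaches about 1.18 and does not die out as more harmonics are added):

import numpy as np

# Sketch: partial sums (4/pi) * sum over odd n of sin(n*t)/n of the square wave.
t = np.linspace(-np.pi, np.pi, 4001)

def partial_sum(t, n_max):
    n = np.arange(1, n_max + 1, 2)                 # odd harmonics only
    return (4 / np.pi) * np.sum(np.sin(np.outer(n, t)) / n[:, None], axis=0)

for n_max in (1, 9, 99):
    print(n_max, partial_sum(t, n_max).max())      # peak tends to ~1.18, not 1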

Fourier Transform:

Fourier transform is a transformation technique that transforms non-periodic signals from the continuous-time domain to the corresponding frequency domain. The Fourier transform of a continuous-time non-periodic signal x(t) is defined as

X(jω) = F{x(t)} = ∫ (-∞ to ∞) x(t) e^(-jωt) dt

where X(jω) is the frequency-domain representation of the signal x(t), and F denotes the Fourier transformation. The variable ω is the radian frequency in rad/sec. Sometimes X(jω) is also written as X(ω).

If the frequency is represented in terms of cyclic frequency f (in Hz), then the above equation is written as

X(f) = ∫ (-∞ to ∞) x(t) e^(-j2πft) dt

Note:

The signal x(t) and its Fourier transform X(jω) are said to form a Fourier transform pair, denoted as

x(t) ⇔ X(jω)

Existence of Fourier Transform:

A function x(t) has a unique Fourier transform if the following conditions are satisfied, which are also referred to as the Dirichlet conditions:

Dirichlet Conditions:

(i) x(t) is absolutely integrable. That is,

∫ (-∞ to ∞) |x(t)| dt < ∞

(ii) x(t) has a finite number of maxima and minima and a finite number of discontinuities within any finite interval.

The above conditions are only sufficient, not necessary, for the signal to be Fourier transformable. For example, the signals u(t), r(t), and cos(ω0t) are not absolutely integrable but still possess Fourier transforms.

Magnitude and Phase Spectrum:

The Fourier transform X(jω) of a signal x(t) is, in general, complex and can be expressed as

X(jω) = |X(jω)| e^(j∠X(jω))

The plot of |X(jω)| versus ω is called the magnitude spectrum of x(t), and the plot of ∠X(jω) versus ω is called the phase spectrum. The amplitude (magnitude) and phase spectra are together called the Fourier spectrum, which is nothing but the frequency response of X(jω) for the frequency range -∞ < ω < ∞.

Inverse Fourier Transform:

The inverse Fourier transform of X(jω) is given as

x(t) = (1/2π) ∫ (-∞ to ∞) X(jω) e^(jωt) dω

This method of calculating the inverse Fourier transform seems difficult, as it involves integration. There is another method to obtain the inverse Fourier transform using partial fractions. Let a rational Fourier transform be given as

X(jω) = N(jω)/D(jω)

X(jω) can be expressed as a ratio of two factorized polynomials in jω as shown below.

X(jω) = N(jω) / [(jω - p1)(jω - p2)···(jω - pn)]

By the partial fraction expansion technique, the above can be expressed as shown below.

X(jω) = k1/(jω - p1) + k2/(jω - p2) + ··· + kn/(jω - pn)

where k1, k2, ..., kn are calculated depending on whether the roots are real and simple, repeated, or complex.

Properties of Fourier Transform:

There are some properties of the continuous-time Fourier transform (CTFT) based on the transformation of signals, which are listed below.

a. Linearity:

The linearity property states that a linear combination of signals in the time domain is equivalent to the linear combination of their Fourier transforms in the frequency domain:

a x1(t) + b x2(t) ⇔ a X1(jω) + b X2(jω)

where a and b are any arbitrary constants.

b. Time Shifting:

The time-shifting property states that a delay of t0 in the time domain is equivalent to multiplication of the Fourier transform by e^(-jωt0):

x(t - t0) ⇔ e^(-jωt0) X(jω)

It implies that the amplitude spectrum of the original signal does not change, but the phase spectrum is modified by a factor of -ωt0.
c. Conjugation and Conjugate Symmetry:

x*(t) ⇔ X*(-jω)

d. Time Scaling:

The time-scaling property states that time compression of a signal in the time domain is equivalent to expansion in the frequency domain and vice versa:

x(at) ⇔ (1/|a|) X(jω/a)
e. Differentiation in Time-Domain:

The time differentiation property states that differentiation in the time domain is equivalent to multiplication by jω in the frequency domain:

dx(t)/dt ⇔ jω X(jω)

f. Integration in Time-Domain:

∫ (-∞ to t) x(τ) dτ ⇔ X(jω)/(jω) + π X(0) δ(ω)

g. Differentiation in Frequency Domain:

The differentiation of the Fourier transform in the frequency domain is equivalent to multiplication of the time-domain signal by -jt:

dX(jω)/dω ⇔ -jt x(t), equivalently t x(t) ⇔ j dX(jω)/dω


h. Frequency Shifting:

The frequency-shifting property states that a shift of ω0 in frequency is equivalent to multiplying the time-domain signal by e^(jω0t):

e^(jω0t) x(t) ⇔ X(j(ω - ω0))

i. Duality Property:

If x(t) ⇔ X(jω), then X(jt) ⇔ 2π x(-ω).
j. Time Convolution:

Convolution between two signals in the time domain is equivalent to multiplication of the Fourier transforms of the two signals in the frequency domain:

x1(t) ∗ x2(t) ⇔ X1(jω) X2(jω)

k. Frequency Convolution:

Convolution in the frequency domain (with a normalization factor of 2π) is equivalent to multiplication of the signals in the time domain:

x1(t) x2(t) ⇔ (1/2π) [X1(jω) ∗ X2(jω)]
l. Area Under x(t):

If X(jω) is the Fourier transform of x(t), then

∫ (-∞ to ∞) x(t) dt = X(0)

that is, the area under a time function x(t) is equal to the value of its Fourier transform evaluated at ω = 0.

m. Area Under X(jω):

If X(jω) is the Fourier transform of x(t), then

x(0) = (1/2π) ∫ (-∞ to ∞) X(jω) dω

n. Parseval's Energy Theorem:

If X(jω) is the Fourier transform of an energy signal x(t), then

Ex = ∫ (-∞ to ∞) |x(t)|² dt = (1/2π) ∫ (-∞ to ∞) |X(jω)|² dω

where Ex is the total energy of the signal x(t).
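A quick numerical sketch of the discrete (DFT) analogue of Parseval's theorem, sum |x[n]|² = (1/N) sum |X[k]|²:

import numpy as np

# Sketch: energy computed in the time domain equals energy in the DFT bins.
rng = np.random.default_rng(1)
x = rng.standard_normal(1024)
X = np.fft.fft(x)

print(np.sum(np.abs(x)**2))               # energy in the samples
print(np.sum(np.abs(X)**2) / x.size)      # same value from the spectrum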

Z Transform & Sampling Theorem

Sampling Theorem

The sampling process is usually described in the time domain. In this process, an
analog signal is converted into a corresponding sequence of samples that are
usually spaced uniformly in time. Consider an arbitrary signal x(t) of finite energy,
which is specified for all time as shown in figure 1(a).
Suppose that we sample the signal x(t) instantaneously and at a uniform rate, once every TS seconds, as shown in figure 1(b). Consequently, we obtain an infinite sequence of samples spaced TS seconds apart and denoted by {x(nTS)}, where n takes on all possible integer values.

Thus, we define the following terms:

1. Sampling Period: The time interval between two consecutive samples is referred to as the sampling period. In figure 1(b), TS is the sampling period.
2. Sampling Rate: The reciprocal of the sampling period is referred to as
sampling rate, i.e.

fS = 1/TS


The sampling theorem provides both a method of reconstruction of the original signal from the sampled values and a precise upper bound on the sampling interval required for distortionless reconstruction. It states that:

• A band-limited signal of finite energy, which has no frequency components higher than W Hertz, is completely described by specifying the values of the signal at instants of time separated by 1/(2W) seconds.
• A band-limited signal of finite energy, which has no frequency components higher than W Hertz, may be completely recovered from a knowledge of its samples taken at the rate of 2W samples per second.
Aliasing & Anti-aliasing

• Aliasing is the effect of violating the Nyquist-Shannon sampling theorem. During sampling, the baseband spectrum of the sampled signal is mirrored to every multiple of the sampling frequency. These mirrored spectra are called aliases.
• The easiest way to prevent aliasing is the application of a steep-sloped low-pass filter with half the sampling frequency before the conversion. Aliasing can be avoided by keeping fS > 2fmax.
• The sampling rate for an analog signal must be at least two times as high as the highest frequency in the analog signal in order to avoid aliasing. To ensure this, the analog signal is filtered by a low-pass filter prior to being sampled; this filter is called an anti-aliasing filter. Sometimes the reconstruction filter after a digital-to-analog converter is also called an anti-aliasing filter. A numerical illustration follows below.
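As mentioned above, a small numerical illustration: a 60 Hz sinusoid sampled at fS = 100 Hz (below the Nyquist rate of 120 Hz) yields exactly the same samples as a 40 Hz alias, up to sign:

import numpy as np

# Sketch: undersampling 60 Hz at fs = 100 Hz aliases it to |fs - 60| = 40 Hz.
fs = 100.0
n = np.arange(50)
t = n / fs

x60 = np.sin(2 * np.pi * 60 * t)     # the undersampled 60 Hz tone
x40 = np.sin(2 * np.pi * 40 * t)     # the 40 Hz alias
print(np.allclose(x60, -x40))        # True: the sample sets coincide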

Explanation of Sampling Theorem

Consider a message signal m(t) bandlimited to W, i.e.

M(f) = 0 for |f| ≥ W

Then the sampling frequency fS required to reconstruct the bandlimited waveform without any error is given by

fS ≥ 2W

Nyquist Rate

Nyquist rate is defined as the minimum sampling frequency allowed to reconstruct a bandlimited waveform without error, i.e.

fN = min {fS} = 2W

Where W is the message signal bandwidth, and fS is the sampling frequency.

Nyquist Interval

The reciprocal of the Nyquist rate is called the Nyquist interval (measured in seconds), i.e.

TN = 1/fN = 1/(2W)

where fN is the Nyquist rate, and W is the message signal bandwidth.

The Z-transform of a discrete-time signal x[n] is defined as

X(z) = Σ (n = -∞ to ∞) x[n] z⁻ⁿ

where z = r·e^(jω).

• The discrete-time Fourier transform (DTFT) is obtained by evaluating the Z-transform at z = e^(jω), i.e., on the unit circle.
• The z-transform defined above has a two-sided summation. It is called the bilateral or two-sided Z-transform.

Unilateral (one-sided) z-transform

• The unilateral z-transform of a sequence x[n] is defined as

X(z) = Σ (n = 0 to ∞) x[n] z⁻ⁿ
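A small sketch checking the standard pair aⁿu[n] ⇔ 1/(1 - a/z) by summing the (truncated) series at a point inside the ROC |z| > |a|:

import numpy as np

# Sketch: X(z) = sum a**n * z**(-n), n >= 0, converges for |z| > |a|.
a = 0.5
z = 1.2 * np.exp(1j * 0.3)            # |z| = 1.2 > |a| = 0.5, inside the ROC
n = np.arange(200)                    # truncation; the series decays fast

series = np.sum(a**n / z**n)
closed = 1 / (1 - a / z)
print(np.allclose(series, closed))    # True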
Region of Convergence (ROC):

• ROC is the region where the z-transform converges. The z-transform is an infinite power series, and the series does not converge for all values of z.

Significance of ROC

• ROC gives an idea about the values of z for which the z-transform can be calculated.
• ROC can be used to determine the causality of the system.
• ROC can be used to determine the stability of the system.

Summary of ROC of Discrete-Time Signals

Characteristic Families of Signals and Corresponding ROC

• Finite-duration sequence: the ROC is the entire z-plane, except possibly z = 0 and/or z = ∞.
• Right-sided sequence, e.g. x[n] = aⁿu[n]: the ROC is the exterior of a circle, |z| > |a|.
• Left-sided sequence, e.g. x[n] = -aⁿu[-n-1]: the ROC is the interior of a circle, |z| < |a|.
• Two-sided sequence: the ROC is an annulus, r1 < |z| < r2.

Note: X(z) = Z{x(n)}; X1(z) = Z{x1(n)}; X2(z) = Z{x2(n)}; Y(z) = Z{y(n)}

Summary of Properties of z-Transform:

Linearity: a x1(n) + b x2(n) ⇔ a X1(z) + b X2(z)
Time shifting: x(n - k) ⇔ z⁻ᵏ X(z)
Scaling in z: aⁿ x(n) ⇔ X(z/a)
Time reversal: x(-n) ⇔ X(1/z)
Differentiation in z: n x(n) ⇔ -z dX(z)/dz
Convolution: x1(n) ∗ x2(n) ⇔ X1(z) X2(z)
Initial value theorem (causal x(n)): x(0) = lim (z→∞) X(z)
Final value theorem: lim (n→∞) x(n) = lim (z→1) (z - 1) X(z)

Impulse Response and Location of Poles


(Figures: impulse response plots corresponding to different pole locations in the z-plane.)
