
Signals and Systems - Quick Guide

What is Signal?
A signal is a time-varying physical phenomenon which is intended to convey information.

OR

Signal is a function of time.

OR

A signal is a function of one or more independent variables which contains some information.

Example: voice signal, video signal, signals on telephone wires etc.

Note: Noise is also a signal, but the information conveyed by noise is unwanted, hence it is
considered undesirable.

What is System?
A system is a device or combination of devices which can operate on signals and produce a
corresponding response. The input to a system is called excitation and the output from it is
called response.

For one or more inputs, the system can have one or more outputs.

Example: Communication System


Signals Basic Types


Here are a few basic signals:

Unit Step Function


The unit step function is denoted by u(t). It is defined as

u(t) = 1 for t ⩾ 0
u(t) = 0 for t < 0

It is widely used as a test signal. The unit step has unit amplitude for all t ⩾ 0.

Unit Impulse Function

The unit impulse function is denoted by δ(t), and it is defined as

δ(t) = 1 for t = 0
δ(t) = 0 for t ≠ 0

∫_{−∞}^{t} δ(τ) dτ = u(t)

δ(t) = du(t)/dt

Ramp Signal
The ramp signal is denoted by r(t), and it is defined as

r(t) = t for t ⩾ 0
r(t) = 0 for t < 0

∫ u(t) dt = ∫ 1 dt = t = r(t)

u(t) = dr(t)/dt

The unit ramp has unit slope.

Parabolic Signal

The parabolic signal can be defined as

x(t) = t²/2 for t ⩾ 0
x(t) = 0 for t < 0

∬ u(t) dt dt = ∫ r(t) dt = ∫ t dt = t²/2 = parabolic signal


⇒ u(t) = d²x(t)/dt²

⇒ r(t) = dx(t)/dt

Signum Function

The signum function is denoted by sgn(t). It is defined as

sgn(t) = 1 for t > 0
sgn(t) = 0 for t = 0
sgn(t) = −1 for t < 0

sgn(t) = 2u(t) − 1

Exponential Signal

The exponential signal is of the form x(t) = e^{αt}.

The shape of the exponential is defined by α.

Case (i): if α = 0, then x(t) = e⁰ = 1.

Case (ii): if α < 0, i.e. negative, then x(t) = e^{αt} is a decaying exponential.

Case (iii): if α > 0, i.e. positive, then x(t) = e^{αt} is a rising exponential.

Rectangular Signal

Let it be denoted as x(t); it is defined by the rectangular pulse shown in the accompanying diagram.

Triangular Signal

Let it be denoted as x(t); it is defined by the triangular pulse shown in the accompanying diagram.


Sinusoidal Signal

A sinusoidal signal is of the form x(t) = A cos(ω0t ± ϕ) or A sin(ω0t ± ϕ)

where the fundamental period T0 = 2π/ω0.

Sinc Function

It is denoted as sinc(t) and it is defined as

sinc(t) = sin(πt)/(πt)

sinc(t) = 0 for t = ±1, ±2, ±3, ...

Sampling Function


It is denoted as sa(t) and it is defined as

sa(t) = sin(t)/t

sa(t) = 0 for t = ±π, ±2π, ±3π, ...
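These standard test signals are easy to generate numerically. The following sketch is illustrative only (it assumes NumPy is available; the time axis and sample count are arbitrary choices) and builds the step, ramp, parabolic, signum, sinc and sampling signals on a sampled axis:

```python
import numpy as np

t = np.linspace(-5, 5, 1001)               # sampled time axis

u    = np.where(t >= 0, 1.0, 0.0)           # unit step u(t)
r    = np.where(t >= 0, t, 0.0)             # ramp r(t) = t for t >= 0
par  = np.where(t >= 0, t**2 / 2, 0.0)      # parabolic signal t^2/2 for t >= 0
sgn  = np.sign(t)                           # signum function
sinc = np.sinc(t)                           # np.sinc is sin(pi t)/(pi t), zero at t = ±1, ±2, ...
sa   = np.where(t != 0, np.sin(t) / t, 1.0) # sampling function sa(t) = sin(t)/t

# sgn(t) = 2u(t) - 1 holds everywhere except at t = 0
assert np.allclose(sgn[t != 0], 2 * u[t != 0] - 1)
```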

Signals Classification
Signals are classified into the following categories:

Continuous Time and Discrete Time Signals

Deterministic and Non-deterministic Signals

Even and Odd Signals

Periodic and Aperiodic Signals

Energy and Power Signals

Real and Imaginary Signals

Continuous Time and Discrete Time Signals

A signal is said to be continuous when it is defined for all instants of time.

A signal is said to be discrete when it is defined only at discrete instants of time.


Deterministic and Non-deterministic Signals

A signal is said to be deterministic if there is no uncertainty with respect to its value at any
instant of time. Or, signals which can be defined exactly by a mathematical formula are
known as deterministic signals.

A signal is said to be non-deterministic if there is uncertainty with respect to its value at


some instant of time. Non-deterministic signals are random in nature hence they are called
random signals. Random signals cannot be described by a mathematical equation. They
are modelled in probabilistic terms.

Even and Odd Signals

A signal is said to be even when it satisfies the condition x(t) = x(-t)

Example 1: t², t⁴, cos t, etc.

Let x(t) = t²

x(−t) = (−t)² = t² = x(t)

∴ t² is an even function.

Example 2: As shown in the following diagram, the rectangle function satisfies x(t) = x(−t), so it is also an even function.

A signal is said to be odd when it satisfies the condition x(t) = −x(−t)

Example: t, t³, sin t, etc.

Let x(t) = sin t

x(−t) = sin(−t) = −sin t = −x(t)

∴ sin t is an odd function.

Any function ƒ(t) can be expressed as the sum of its even part ƒe(t) and its odd part ƒo(t):

ƒ(t) = ƒe(t) + ƒo(t)

where

ƒe(t) = ½[ƒ(t) + ƒ(−t)] and ƒo(t) = ½[ƒ(t) − ƒ(−t)]
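Numerically, the even and odd parts are obtained by combining a sampled signal with its time-reversed copy. A minimal sketch (illustrative, assuming NumPy and a time axis that is symmetric about t = 0, so that reversing the array corresponds to x(−t)):

```python
import numpy as np

t = np.linspace(-4, 4, 801)        # symmetric axis, so x(-t) is just an array flip
x = np.sin(t) + t**2               # any test signal

x_rev = x[::-1]                    # x(-t)
xe = 0.5 * (x + x_rev)             # even part  1/2 [x(t) + x(-t)]
xo = 0.5 * (x - x_rev)             # odd part   1/2 [x(t) - x(-t)]

assert np.allclose(xe + xo, x)     # the decomposition reconstructs x(t)
assert np.allclose(xe, xe[::-1])   # even part satisfies xe(t) = xe(-t)
assert np.allclose(xo, -xo[::-1])  # odd part satisfies  xo(t) = -xo(-t)
```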

Periodic and Aperiodic Signals

A signal is said to be periodic if it satisfies the condition x(t) = x(t + T) or x(n) = x(n + N).

Where


T = fundamental time period,

1/T = f = fundamental frequency.

The above signal will repeat for every time interval T0 hence it is periodic with period T0.

Energy and Power Signals


A signal is said to be energy signal when it has finite energy.

Energy E = ∫_{−∞}^{∞} x²(t) dt

A signal is said to be a power signal when it has finite power.

Power P = lim_{T→∞} (1/2T) ∫_{−T}^{T} x²(t) dt

NOTE: A signal cannot be both an energy signal and a power signal simultaneously. Also, a signal may be
neither an energy nor a power signal.

Power of energy signal = 0

Energy of power signal = ∞
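These definitions can be checked numerically by approximating the integrals with sums. The sketch below is illustrative (it assumes NumPy; the chosen signals, window length and step size are arbitrary): it estimates the energy of a decaying exponential and the power of a sinusoid.

```python
import numpy as np

dt = 1e-4
t  = np.arange(0, 50, dt)

x_energy = np.exp(-t)                    # energy signal e^{-t} u(t)
E = np.sum(x_energy**2) * dt             # E = ∫ x²(t) dt, approaches 0.5

T = t[-1]
x_power = np.cos(2 * np.pi * 5 * t)      # power signal cos(2π·5t)
P = np.sum(x_power**2) * dt / T          # average power over a long window, approaches 0.5

print(E, P)
```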

Real and Imaginary Signals


A signal is said to be real when it satisfies the condition x(t) = x*(t)

A signal is said to be imaginary when it satisfies the condition x(t) = −x*(t)

Example:

If x(t) = 3 then x*(t) = 3* = 3, so x(t) is a real signal.

If x(t) = 3j then x*(t) = (3j)* = −3j = −x(t), hence x(t) is an imaginary signal.


Note: For a real signal, imaginary part should be zero. Similarly for an imaginary signal,
real part should be zero.

Signals Basic Operations


There are two variable parameters in general:

Amplitude
Time

The following operation can be performed with amplitude:

Amplitude Scaling

C x(t) is an amplitude-scaled version of x(t), whose amplitude is scaled by a factor C.

Addition
Addition of two signals is nothing but addition of their corresponding amplitudes. This can
be best explained by using the following example:


As seen from the diagram above,

-10 < t < -3 amplitude of z(t) = x1(t) + x2(t) = 0 + 2 = 2

-3 < t < 3 amplitude of z(t) = x1(t) + x2(t) = 1 + 2 = 3

3 < t < 10 amplitude of z(t) = x1(t) + x2(t) = 0 + 2 = 2

Subtraction
Subtraction of two signals is nothing but the subtraction of their corresponding amplitudes. This
can be best explained by the following example:


As seen from the diagram above,

-10 < t < -3 amplitude of z (t) = x1(t) - x2(t) = 0 - 2 = -2

-3 < t < 3 amplitude of z (t) = x1(t) - x2(t) = 1 - 2 = -1

3 < t < 10 amplitude of z (t) = x1(t) - x2(t) = 0 - 2 = -2

Multiplication
Multiplication of two signals is nothing but multiplication of their corresponding amplitudes.
This can be best explained by the following example:


As seen from the diagram above,

-10 < t < -3 amplitude of z (t) = x1(t) × x2(t) = 0 × 2 = 0

-3 < t < 3 amplitude of z (t) = x1(t) × x2(t) = 1 × 2 = 2

3 < t < 10 amplitude of z (t) = x1(t) × x2(t) = 0 × 2 = 0

The following operations can be performed with time:

Time Shifting
x(t ± t0) is time shifted version of the signal x(t).

x (t + t0) → negative shift

x (t - t0) → positive shift

Time Scaling


x(At) is a time-scaled version of the signal x(t), where A is always positive.

|A| > 1 → Compression of the signal

|A| < 1 → Expansion of the signal

Note: u(at) = u(t); time scaling is not applicable to the unit step function.

Time Reversal

x(-t) is the time reversal of the signal x(t).
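For sampled signals these operations reduce to simple array manipulations. A small illustrative sketch (assuming NumPy; the test pulse and the shift/scale amounts are made up for the example):

```python
import numpy as np

t = np.linspace(-5, 5, 1001)
x = np.where(np.abs(t) <= 1, 1.0, 0.0)               # rectangular pulse as the test signal

y_scaled   = 3 * x                                    # amplitude scaling: C x(t) with C = 3
z_sum      = x + y_scaled                             # addition of two signals on the same axis
x_shifted  = np.where(np.abs(t - 2) <= 1, 1.0, 0.0)   # x(t - 2): positive (right) shift
x_compress = np.where(np.abs(2 * t) <= 1, 1.0, 0.0)   # x(2t): compression, |A| > 1
x_reversed = x[::-1]                                  # x(-t): time reversal on a symmetric axis
```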

Systems Classification
Systems are classified into the following categories:

Linear and Non-linear Systems

Time Variant and Time Invariant Systems

Linear Time Variant and Linear Time Invariant Systems

Static and Dynamic Systems

Causal and Non-causal Systems

Invertible and Non-Invertible Systems

Stable and Unstable Systems


Linear and Non-linear Systems


A system is said to be linear when it satisfies the superposition and homogeneity principles.
Consider two systems with inputs x1(t), x2(t), and outputs y1(t), y2(t) respectively.
Then, according to the superposition and homogeneity principles,

T [a1 x1(t) + a2 x2(t)] = a1 T[x1(t)] + a2 T[x2(t)]

∴ T [a1 x1(t) + a2 x2(t)] = a1 y1(t) + a2 y2(t)

From the above expression, it is clear that the response of the overall system is equal to the
sum of the responses to the individual inputs.

Example:

y(t) = x²(t)

Solution:

y1(t) = T[x1(t)] = x1²(t)

y2(t) = T[x2(t)] = x2²(t)

T [a1 x1(t) + a2 x2(t)] = [a1 x1(t) + a2 x2(t)]²

which is not equal to a1 y1(t) + a2 y2(t). Hence the system is non-linear.
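The failure of superposition for y(t) = x²(t) can also be seen numerically. The following sketch is illustrative (assuming NumPy; the test inputs and the weights a1, a2 are arbitrary choices):

```python
import numpy as np

def system(x):
    return x**2                              # the system under test: y(t) = x²(t)

t  = np.linspace(0, 1, 100)
x1 = np.sin(2 * np.pi * t)
x2 = np.cos(2 * np.pi * t)
a1, a2 = 2.0, 3.0

lhs = system(a1 * x1 + a2 * x2)              # T[a1 x1 + a2 x2]
rhs = a1 * system(x1) + a2 * system(x2)      # a1 y1 + a2 y2

print(np.allclose(lhs, rhs))                 # False: superposition fails, so the system is non-linear
```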

Time Variant and Time Invariant Systems


A system is said to be time variant if its input and output characteristics vary with time.
Otherwise, the system is considered as time invariant.

The condition for time invariant system is:

y (n , t) = y(n-t)

The condition for time variant system is:

y (n , t) ≠ y(n-t)

Where y (n , t) = T[x(n-t)] = input change


y (n-t) = output change

Example:

y(n) = x(-n)

y(n, t) = T[x(n-t)] = x(-n-t)

y(n-t) = x(-(n-t)) = x(-n + t)

∴ y(n, t) ≠ y(n-t). Hence, the system is time variant.

Linear Time Variant (LTV) and Linear Time Invariant (LTI) Systems
If a system is both linear and time variant, then it is called linear time variant (LTV) system.

If a system is both linear and time Invariant then that system is called linear time invariant
(LTI) system.

Static and Dynamic Systems


Static system is memory-less whereas dynamic system is a memory system.

Example 1: y(t) = 2 x(t)

For present value t=0, the system output is y(0) = 2x(0). Here, the output is only
dependent upon present input. Hence the system is memory less or static.

Example 2: y(t) = 2 x(t) + 3 x(t-3)

For present value t=0, the system output is y(0) = 2x(0) + 3x(-3).

Here x(-3) is past value for the present input for which the system requires memory to
get this output. Hence, the system is a dynamic system.

Causal and Non-Causal Systems

A system is said to be causal if its output depends upon present and past inputs, and does
not depend upon future input.

For non causal system, the output depends upon future inputs also.

Example 1: y(t) = 2 x(t) + 3 x(t-3)

For present value t=1, the system output is y(1) = 2x(1) + 3x(-2).


Here, the system output only depends upon present and past inputs. Hence, the system is
causal.

Example 2: y(t) = 2 x(t) + 3 x(t-3) + 6 x(t + 3)

For present value t=1, the system output is y(1) = 2x(1) + 3x(-2) + 6x(4) Here, the
system output depends upon future input. Hence the system is non-causal system.

Invertible and Non-Invertible systems

A system is said to be invertible if the input of the system appears at the output.

Y(S) = X(S) H1(S) H2(S)

= X(S) H1(S) · 1/H1(S)      since H2(S) = 1/H1(S)

∴ Y(S) = X(S)

→ y(t) = x(t)

Hence, the system is invertible.

If y(t) ≠ x(t), then the system is said to be non-invertible.

Stable and Unstable Systems


The system is said to be stable only when the output is bounded for bounded input. For a
bounded input, if the output is unbounded in the system then it is said to be unstable.

Note: For a bounded signal, amplitude is finite.

Example 1: y(t) = x²(t)

Let the input be u(t) (unit step, a bounded input); then the output y(t) = u²(t) = u(t), which is a
bounded output.

Hence, the system is stable.

Example 2: y (t) = ∫ x(t) dt

Let the input be u(t) (unit step, a bounded input); then the output y(t) = ∫ u(t) dt = ramp
signal, which is unbounded because the amplitude of the ramp is not finite; it goes to infinity as
t → ∞.


Hence, the system is unstable.

Signals Analysis

Analogy Between Vectors and Signals


There is a perfect analogy between vectors and signals.

Vector
A vector contains magnitude and direction. The name of the vector is denoted by bold face
type and their magnitude is denoted by light face type.

Example: V is a vector with magnitude V. Consider two vectors V1 and V2 as shown in the
following diagram. Let the component of V1 along V2 be given by C12 V2. The
component of a vector V1 along the vector V2 can be obtained by taking a perpendicular
from the end of V1 to the vector V2, as shown in the diagram:

The vector V1 can be expressed in terms of vector V2

V1 = C12 V2 + Ve

Where Ve is the error vector.

But this is not the only way of expressing the vector V1 in terms of V2. The alternate
possibilities are:

V1 = C1 V2 + Ve1

V1 = C2 V2 + Ve2


The error signal is minimum for large component value. If C 12=0, then two signals are said
to be orthogonal.

Dot Product of Two Vectors

V1 . V2 = V1.V2 cosθ

θ = Angle between V1 and V2

V1 . V2 =V2.V1

The component of V1 along V2 = V1 cos θ = (V1 · V2)/V2

From the diagram, the component of V1 along V2 = C12 V2

(V1 · V2)/V2 = C12 V2

⇒ C12 = (V1 · V2)/V2²

Signal
The concept of orthogonality can be applied to signals. Let us consider two signals f1(t)
and f2(t). Similar to vectors, you can approximate f1(t) in terms of f2(t) as

f1(t) = C12 f2(t) + fe(t)   for (t1 < t < t2)

⇒ fe(t) = f1(t) − C12 f2(t)

One possible way of minimizing the error is to integrate it over the interval t1 to t2:

(1/(t2 − t1)) ∫_{t1}^{t2} [fe(t)] dt


(1/(t2 − t1)) ∫_{t1}^{t2} [f1(t) − C12 f2(t)] dt

However, this step also does not reduce the error to an appreciable extent. This can be
corrected by taking the square of the error function:

ε = (1/(t2 − t1)) ∫_{t1}^{t2} [fe(t)]² dt

⇒ ε = (1/(t2 − t1)) ∫_{t1}^{t2} [f1(t) − C12 f2(t)]² dt

where ε is the mean square value of the error signal. To find the value of C12 which minimizes
the error, set

dε/dC12 = 0

⇒ d/dC12 [ (1/(t2 − t1)) ∫_{t1}^{t2} [f1(t) − C12 f2(t)]² dt ] = 0

⇒ (1/(t2 − t1)) ∫_{t1}^{t2} [ d/dC12 f1²(t) − 2 f1(t) f2(t) d/dC12 (C12) + f2²(t) d/dC12 (C12²) ] dt = 0

The derivatives of the terms which do not contain C12 are zero.

⇒ ∫_{t1}^{t2} −2 f1(t) f2(t) dt + 2 C12 ∫_{t1}^{t2} [f2²(t)] dt = 0

⇒ C12 = ∫_{t1}^{t2} f1(t) f2(t) dt / ∫_{t1}^{t2} f2²(t) dt

If the component is zero, then the two signals are said to be orthogonal.

Put C12 = 0 to get the condition for orthogonality:

0 = ∫_{t1}^{t2} f1(t) f2(t) dt / ∫_{t1}^{t2} f2²(t) dt

∫_{t1}^{t2} f1(t) f2(t) dt = 0
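As a quick numerical illustration of this condition (a sketch only, assuming NumPy; ω0 = 1 rad/s and one full period are arbitrary choices), sin t and cos t are orthogonal over (0, 2π), while a signal is never orthogonal to itself:

```python
import numpy as np

t  = np.linspace(0, 2 * np.pi, 100001)   # one period of ω0 = 1 rad/s
dt = t[1] - t[0]

f1 = np.sin(t)
f2 = np.cos(t)

print(np.trapz(f1 * f2, dx=dt))   # ≈ 0  -> sin and cos are orthogonal over (0, 2π)
print(np.trapz(f1 * f1, dx=dt))   # ≈ π  -> the integral of f1² is the (non-zero) energy over the interval
```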

Orthogonal Vector Space


A complete set of orthogonal vectors is referred to as orthogonal vector space. Consider
a three dimensional vector space as shown below:


Consider a vector A at a point (X1, Y1, Z1). Consider three unit vectors (VX, VY, VZ) in the
direction of X, Y, Z axis respectively. Since these unit vectors are mutually orthogonal, it
satisfies that

VX . VX = VY . VY = VZ . VZ = 1

VX . VY = VY . VZ = VZ . VX = 0

You can write above conditions as

Va · Vb = 1 for a = b
Va · Vb = 0 for a ≠ b

The vector A can be represented in terms of its components and unit vectors as

A = X1 VX + Y1 VY + Z1 VZ . . . . . . . . . . . . . . . . (1)

Any vectors in this three dimensional space can be represented in terms of these three unit
vectors only.

If you consider n dimensional space, then any vector A in that space can be represented as

A = X1 VX + Y1 VY + Z1 VZ +. . . +N 1 VN . . . . . (2)

As the magnitude of the unit vectors is unity, for any vector A:

The component of A along x axis = A.VX

The component of A along Y axis = A.VY

The component of A along Z axis = A.VZ

Similarly, for n dimensional space, the component of A along some G axis

= A. V G. . . . . . . . . . . . . . . (3)

Substitute equation 2 in equation 3.


⇒ C G = ( X1 VX + Y1 VY + Z1 VZ +. . . +G1 VG . . . +N 1 VN ) VG

= X1 VX VG + Y1 VY VG + Z1 VZ VG +. . . +G1 VG VG . . . +N 1 VN VG

= G1 since VG VG = 1

If VG · VG ≠ 1, i.e. VG · VG = k, then

A · VG = G1 (VG · VG) = G1 k

G1 = (A · VG)/k

Orthogonal Signal Space


Let us consider a set of n mutually orthogonal functions x1(t), x2(t)... xn(t) over the
interval t1 to t2. As these functions are orthogonal to each other, any two signals xj(t),
xk(t) have to satisfy the orthogonality condition. i.e.

∫_{t1}^{t2} xj(t) xk(t) dt = 0   where j ≠ k

Let ∫_{t1}^{t2} xk²(t) dt = Kk

Let a function f(t) be approximated with this orthogonal signal space by adding the
components along mutually orthogonal signals, i.e.

f(t) = C1 x1(t) + C2 x2(t) + ... + Cn xn(t) + fe(t)

= Σ_{r=1}^{n} Cr xr(t) + fe(t)

fe(t) = f(t) − Σ_{r=1}^{n} Cr xr(t)

Mean square error ε = (1/(t2 − t1)) ∫_{t1}^{t2} [fe(t)]² dt

= (1/(t2 − t1)) ∫_{t1}^{t2} [f(t) − Σ_{r=1}^{n} Cr xr(t)]² dt

The components which minimize the mean square error can be found from

dε/dC1 = dε/dC2 = ... = dε/dCk = 0

Let us consider dε/dCk = 0:

d/dCk [ (1/(t2 − t1)) ∫_{t1}^{t2} [f(t) − Σ_{r=1}^{n} Cr xr(t)]² dt ] = 0

All terms that do not contain Ck are zero, i.e. in the summation the r = k term remains and all
other terms are zero.

∫_{t1}^{t2} −2 f(t) xk(t) dt + 2 Ck ∫_{t1}^{t2} [xk²(t)] dt = 0

⇒ Ck = ∫_{t1}^{t2} f(t) xk(t) dt / ∫_{t1}^{t2} xk²(t) dt

⇒ ∫_{t1}^{t2} f(t) xk(t) dt = Ck Kk

Mean Square Error


The average of the square of the error function fe(t) is called the mean square error. It is denoted
by ε (epsilon).

ε = (1/(t2 − t1)) ∫_{t1}^{t2} [fe(t)]² dt

= (1/(t2 − t1)) ∫_{t1}^{t2} [f(t) − Σ_{r=1}^{n} Cr xr(t)]² dt

= (1/(t2 − t1)) [ ∫_{t1}^{t2} f²(t) dt + Σ_{r=1}^{n} Cr² ∫_{t1}^{t2} xr²(t) dt − 2 Σ_{r=1}^{n} Cr ∫_{t1}^{t2} xr(t) f(t) dt ]

You know that Cr² ∫_{t1}^{t2} xr²(t) dt = Cr ∫_{t1}^{t2} xr(t) f(t) dt = Cr² Kr

ε = (1/(t2 − t1)) [ ∫_{t1}^{t2} f²(t) dt + Σ_{r=1}^{n} Cr² Kr − 2 Σ_{r=1}^{n} Cr² Kr ]

= (1/(t2 − t1)) [ ∫_{t1}^{t2} f²(t) dt − Σ_{r=1}^{n} Cr² Kr ]

∴ ε = (1/(t2 − t1)) [ ∫_{t1}^{t2} f²(t) dt − ( C1² K1 + C2² K2 + ... + Cn² Kn ) ]

The above equation is used to evaluate the mean square error.

Closed and Complete Set of Orthogonal Functions


Let us consider a set of n mutually orthogonal functions x1(t), x2(t), ..., xn(t) over the
interval t1 to t2. This is called a closed and complete set when there exists no function f(t)
satisfying the condition ∫_{t1}^{t2} f(t) xk(t) dt = 0.

If a function f(t) satisfies the equation ∫_{t1}^{t2} f(t) xk(t) dt = 0 for k = 1, 2, ..., then f(t) is
said to be orthogonal to each and every function of the orthogonal set. This set is incomplete
without f(t); it becomes a closed and complete set when f(t) is included.


f(t) can be approximated with this orthogonal set by adding the components along
mutually orthogonal signals i.e.

f (t) = C1 x1 (t) + C2 x2 (t)+. . . +Cn xn (t) + fe (t)

If the infinite series C1 x1 (t) + C2 x2 (t)+. . . +Cn xn (t) converges to f(t) then mean
square error is zero.

Orthogonality in Complex Functions


If f1(t) and f2(t) are two complex functions, then f1(t) can be expressed in terms of f2(t)
as

f1(t) = C12 f2(t)   (with negligible error)

where C12 = ∫_{t1}^{t2} f1(t) f2*(t) dt / ∫_{t1}^{t2} |f2(t)|² dt

and f2*(t) is the complex conjugate of f2(t).

If f1(t) and f2(t) are orthogonal, then C12 = 0:

∫_{t1}^{t2} f1(t) f2*(t) dt / ∫_{t1}^{t2} |f2(t)|² dt = 0

⇒ ∫_{t1}^{t2} f1(t) f2*(t) dt = 0

The above equation represents the orthogonality condition for complex functions.

Fourier Series
Jean Baptiste Joseph Fourier, a French mathematician and physicist, was born in
Auxerre, France. He initiated the study of Fourier series, Fourier transforms and their applications to
problems of heat transfer and vibrations. The Fourier series, the Fourier transform and
Fourier's law are named in his honour.

Jean Baptiste Joseph Fourier (21 March 1768 to 16 May 1830)


Fourier series
To represent any periodic signal x(t), Fourier developed an expression called the Fourier series.
It is expressed in terms of an infinite sum of sines and cosines or exponentials. The Fourier series uses the
orthogonality condition.

Fourier Series Representation of Continuous Time Periodic Signals

A signal is said to be periodic if it satisfies the condition x(t) = x(t + T) or x(n) = x(n + N),

where T = fundamental time period and

ω0 = fundamental frequency = 2π/T.

There are two basic periodic signals:

x(t) = cos ω0t (sinusoidal) &

x(t) = e^{jω0t} (complex exponential)

These two signals are periodic with period T = 2π/ω0.

A set of harmonically related complex exponentials can be represented as {ϕk(t)}:

ϕk(t) = {e^{jkω0t}} = {e^{jk(2π/T)t}}   where k = 0, ±1, ±2, ..., ±n, ...    ...(1)

All these signals are periodic with period T.

According to the orthogonal signal space approximation, a function x(t) can be approximated with these
mutually orthogonal functions as

x(t) = Σ_{k=−∞}^{∞} ak e^{jkω0t}    ...(2)

where ak = Fourier coefficient = coefficient of approximation.

This signal x(t) is also periodic with period T.

Equation 2 represents the Fourier series representation of the periodic signal x(t).

The term k = 0 is the constant (dc) term.


The term k = ±1, with fundamental frequency ω0, is called the 1st harmonic.

The term k = ±2, with frequency 2ω0, is called the 2nd harmonic, and so on.

The term k = ±n, with frequency nω0, is called the nth harmonic.

Deriving Fourier Coefficient

We know that x(t) = Σ_{k=−∞}^{∞} ak e^{jkω0t}    ...(1)

Multiply both sides by e^{−jnω0t}. Then

x(t) e^{−jnω0t} = Σ_{k=−∞}^{∞} ak e^{jkω0t} · e^{−jnω0t}

Integrate both sides over one period:

∫_0^T x(t) e^{−jnω0t} dt = ∫_0^T Σ_{k=−∞}^{∞} ak e^{jkω0t} · e^{−jnω0t} dt

= ∫_0^T Σ_{k=−∞}^{∞} ak e^{j(k−n)ω0t} dt

∫_0^T x(t) e^{−jnω0t} dt = Σ_{k=−∞}^{∞} ak ∫_0^T e^{j(k−n)ω0t} dt    ...(2)

By Euler's formula,

∫_0^T e^{j(k−n)ω0t} dt = ∫_0^T cos((k−n)ω0t) dt + j ∫_0^T sin((k−n)ω0t) dt

∫_0^T e^{j(k−n)ω0t} dt = T for k = n, and 0 for k ≠ n

Hence in equation 2 the integral is zero for all values of k except k = n. Put k = n in
equation 2:

⇒ ∫_0^T x(t) e^{−jnω0t} dt = an T

⇒ an = (1/T) ∫_0^T x(t) e^{−jnω0t} dt

Replace n by k:

⇒ ak = (1/T) ∫_0^T x(t) e^{−jkω0t} dt

∴ x(t) = Σ_{k=−∞}^{∞} ak e^{jkω0t}

where ak = (1/T) ∫_0^T x(t) e^{−jkω0t} dt
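The coefficient formula can be evaluated numerically for a concrete periodic signal. The sketch below is illustrative (assuming NumPy; the ±1 square wave and ω0 = 1 are arbitrary choices): it computes a few coefficients and compares them with the well-known closed form ak = 2/(jπk) for odd k and 0 for even k.

```python
import numpy as np

T = 2 * np.pi                        # fundamental period, so ω0 = 1
t = np.linspace(0, T, 200001)
x = np.sign(np.sin(t))               # ±1 square wave

def a(k):
    # a_k = (1/T) ∫_0^T x(t) e^{-jkω0 t} dt, evaluated with the trapezoidal rule
    return np.trapz(x * np.exp(-1j * k * t), t) / T

for k in range(1, 6):
    print(k, a(k))                   # ≈ 2/(jπk) for odd k, ≈ 0 for even k
```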

Fourier Series Properties


These are the properties of the Fourier series (below, ⟷ denotes a Fourier series coefficient pair):

Linearity Property

If x(t) ⟷ fxn & y(t) ⟷ fyn

then the linearity property states that

a x(t) + b y(t) ⟷ a fxn + b fyn

Time Shifting Property

If x(t) ⟷ fxn

then the time shifting property states that

x(t − t0) ⟷ e^{−jnω0 t0} fxn

Frequency Shifting Property

If x(t) ⟷ fxn

then the frequency shifting property states that

e^{j n0 ω0 t} · x(t) ⟷ fx(n − n0)

Time Reversal Property

If x(t) ⟷ fxn

then the time reversal property states that

x(−t) ⟷ f−xn

Time Scaling Property

If x(t) ⟷ fxn

then the time scaling property states that

x(at) ⟷ fxn

Time scaling changes the frequency components from ω0 to aω0.

Differentiation and Integration Properties

If x(t) ⟷ fxn

then the differentiation property states that

dx(t)/dt ⟷ jnω0 · fxn

and the integration property states that

∫ x(t) dt ⟷ fxn/(jnω0)

Multiplication and Convolution Properties

If x(t) ⟷ fxn & y(t) ⟷ fyn

then the multiplication property states that

x(t) · y(t) ⟷ T fxn ∗ fyn

and the convolution property states that

x(t) ∗ y(t) ⟷ T fxn · fyn

Conjugate and Conjugate Symmetry Properties

If x(t) ⟷ fxn

then the conjugate property states that

x*(t) ⟷ f*xn

The conjugate symmetry property for a real-valued time signal states that

f*xn = f−xn

and the conjugate symmetry property for an imaginary-valued time signal states that

f*xn = −f−xn

Fourier Series Types

Trigonometric Fourier Series (TFS)


sin nω0t and sin mω0t are orthogonal over the interval (t0, t0 + 2π/ω0). So
sin ω0t, sin 2ω0t, ... forms an orthogonal set. This set is not complete without {cos nω0t},
because the cosine set is also orthogonal to the sine set. So to complete this set we must
include both cosine and sine terms. Now the complete orthogonal set contains all the cosine
and sine terms, i.e. {sin nω0t, cos nω0t} where n = 0, 1, 2, ...

∴ Any function x(t) in the interval (t0, t0 + 2π/ω0) can be represented as

x(t) = a0 cos 0ω0t + a1 cos 1ω0t + a2 cos 2ω0t + ... + an cos nω0t + ...
+ b0 sin 0ω0t + b1 sin 1ω0t + ... + bn sin nω0t + ...

= a0 + a1 cos 1ω0t + a2 cos 2ω0t + ... + an cos nω0t + ...
+ b1 sin 1ω0t + ... + bn sin nω0t + ...

∴ x(t) = a0 + Σ_{n=1}^{∞} (an cos nω0t + bn sin nω0t)   (t0 < t < t0 + T)

The above equation represents trigonometric Fourier series representation of x(t).


where a0 = ∫_{t0}^{t0+T} x(t)·1 dt / ∫_{t0}^{t0+T} 1² dt = (1/T) ∫_{t0}^{t0+T} x(t) dt

an = ∫_{t0}^{t0+T} x(t) cos nω0t dt / ∫_{t0}^{t0+T} cos² nω0t dt

bn = ∫_{t0}^{t0+T} x(t) sin nω0t dt / ∫_{t0}^{t0+T} sin² nω0t dt

Here ∫_{t0}^{t0+T} cos² nω0t dt = ∫_{t0}^{t0+T} sin² nω0t dt = T/2

∴ an = (2/T) ∫_{t0}^{t0+T} x(t) cos nω0t dt

bn = (2/T) ∫_{t0}^{t0+T} x(t) sin nω0t dt

Exponential Fourier Series (EFS)


Consider a set of complex exponential functions {e^{jnω0t}} (n = 0, ±1, ±2, ...) which is
orthogonal over the interval (t0, t0 + T), where T = 2π/ω0. This is a complete set, so it is
possible to represent any function f(t) as shown below:

f(t) = F0 + F1 e^{jω0t} + F2 e^{j2ω0t} + ... + Fn e^{jnω0t} + ...
+ F−1 e^{−jω0t} + F−2 e^{−j2ω0t} + ... + F−n e^{−jnω0t} + ...

∴ f(t) = Σ_{n=−∞}^{∞} Fn e^{jnω0t}   (t0 < t < t0 + T)    ...(1)

Equation 1 represents the exponential Fourier series representation of a signal f(t) over the
interval (t0, t0 + T). The Fourier coefficient is given as

Fn = ∫_{t0}^{t0+T} f(t) (e^{jnω0t})* dt / ∫_{t0}^{t0+T} e^{jnω0t} (e^{jnω0t})* dt

= ∫_{t0}^{t0+T} f(t) e^{−jnω0t} dt / ∫_{t0}^{t0+T} e^{−jnω0t} e^{jnω0t} dt

= ∫_{t0}^{t0+T} f(t) e^{−jnω0t} dt / ∫_{t0}^{t0+T} 1 dt = (1/T) ∫_{t0}^{t0+T} f(t) e^{−jnω0t} dt

∴ Fn = (1/T) ∫_{t0}^{t0+T} f(t) e^{−jnω0t} dt

Relation Between Trigonometric and Exponential Fourier Series


Consider a periodic signal x(t); the TFS and EFS representations are given below, respectively:

x(t) = a0 + Σ_{n=1}^{∞} (an cos nω0t + bn sin nω0t)    ...(1)

x(t) = Σ_{n=−∞}^{∞} Fn e^{jnω0t}

= F0 + F1 e^{jω0t} + F2 e^{j2ω0t} + ... + Fn e^{jnω0t} + ...
+ F−1 e^{−jω0t} + F−2 e^{−j2ω0t} + ... + F−n e^{−jnω0t} + ...

= F0 + F1(cos ω0t + j sin ω0t) + F2(cos 2ω0t + j sin 2ω0t) + ... + Fn(cos nω0t + j sin nω0t) + ...
+ F−1(cos ω0t − j sin ω0t) + F−2(cos 2ω0t − j sin 2ω0t) + ... + F−n(cos nω0t − j sin nω0t) + ...

= F0 + (F1 + F−1) cos ω0t + (F2 + F−2) cos 2ω0t + ... + j(F1 − F−1) sin ω0t + j(F2 − F−2) sin 2ω0t + ...

∴ x(t) = F0 + Σ_{n=1}^{∞} ((Fn + F−n) cos nω0t + j(Fn − F−n) sin nω0t)    ...(2)

Compare equations 1 and 2:

a0 = F0

an = Fn + F−n

bn = j(Fn − F−n)

Similarly,

Fn = ½(an − jbn)

F−n = ½(an + jbn)

Fourier Transforms
The main drawback of the Fourier series is that it is only applicable to periodic signals. There are
some naturally produced signals that are non-periodic (aperiodic), which we cannot
represent using a Fourier series. To overcome this shortcoming, Fourier developed a
mathematical model to transform signals between the time (or spatial) domain and the frequency
domain and vice versa, which is called the Fourier transform.

The Fourier transform has many applications in physics and engineering, such as the analysis of LTI
systems, radar, astronomy, signal processing, etc.

Deriving Fourier transform from Fourier series


Consider a periodic signal f(t) with period T. The complex Fourier series representation of
f(t) is given as

f(t) = Σ_{k=−∞}^{∞} ak e^{jkω0t}

= Σ_{k=−∞}^{∞} ak e^{j(2π/T0)kt}    ...(1)

Let 1/T0 = Δf; then equation 1 becomes

f(t) = Σ_{k=−∞}^{∞} ak e^{j2πkΔf t}    ...(2)

But you know that

ak = (1/T0) ∫_{t0}^{t0+T} f(t) e^{−jkω0t} dt

Substitute in equation 2:

(2) ⇒ f(t) = Σ_{k=−∞}^{∞} (1/T0) [ ∫_{t0}^{t0+T} f(t) e^{−jkω0t} dt ] e^{j2πkΔf t}

Let t0 = −T/2:

= Σ_{k=−∞}^{∞} [ ∫_{−T/2}^{T/2} f(t) e^{−j2πkΔf t} dt ] e^{j2πkΔf t} · Δf

In the limit as T → ∞, Δf approaches the differential df, kΔf becomes a continuous
variable f, and the summation becomes integration:

f(t) = lim_{T→∞} { Σ_{k=−∞}^{∞} [ ∫_{−T/2}^{T/2} f(t) e^{−j2πkΔf t} dt ] e^{j2πkΔf t} · Δf }

= ∫_{−∞}^{∞} [ ∫_{−∞}^{∞} f(t) e^{−j2πft} dt ] e^{j2πft} df

f(t) = ∫_{−∞}^{∞} F[ω] e^{jωt} dω

where F[ω] = ∫_{−∞}^{∞} f(t) e^{−j2πft} dt

The Fourier transform of a signal f(t) is

F[ω] = ∫_{−∞}^{∞} f(t) e^{−jωt} dt

and the inverse Fourier transform is

f(t) = ∫_{−∞}^{∞} F[ω] e^{jωt} dω
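The forward transform can be approximated by discretizing the defining integral. The sketch below is illustrative only (assuming NumPy; the pulse width, amplitude and frequency grid are arbitrary choices): it evaluates the Fourier transform of a gate pulse and compares it with the AT·Sa(ωT/2) result quoted in the next section.

```python
import numpy as np

A, Tw = 1.0, 2.0                             # gate amplitude and width (assumed values)
dt = 1e-3
t  = np.arange(-5, 5, dt)
x  = np.where(np.abs(t) <= Tw / 2, A, 0.0)   # gate (rectangular) pulse

w  = np.linspace(-20, 20, 401)               # frequency grid in rad/s
# F(ω) = ∫ x(t) e^{-jωt} dt, approximated by a Riemann sum
F  = np.array([np.sum(x * np.exp(-1j * wi * t)) * dt for wi in w])

Sa = np.where(w != 0, np.sin(w * Tw / 2) / (w * Tw / 2), 1.0)
print(np.max(np.abs(F - A * Tw * Sa)))       # small: the numerical FT matches A·T·Sa(ωT/2)
```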

Fourier Transform of Basic Functions


Let us go through Fourier Transform of basic functions:

FT of GATE Function

F[ω] = AT Sa(ωT/2)

FT of Impulse Function

FT[δ(t)] = ∫_{−∞}^{∞} δ(t) e^{−jωt} dt

= e^{−jωt} |_{t=0}

= e⁰ = 1

∴ FT[δ(t)] = 1

FT of Unit Step Function

U(ω) = πδ(ω) + 1/jω

FT of Exponentials

e^{−at} u(t) ⟷ 1/(a + jω)

e^{−a|t|} ⟷ 2a/(a² + ω²)

e^{jω0t} ⟷ δ(ω − ω0)

FT of Signum Function

sgn(t) ⟷ 2/(jω)

Conditions for Existence of Fourier Transform


Any function f(t) can be represented by using the Fourier transform only when the function
satisfies Dirichlet's conditions, i.e.

The function f(t) has a finite number of maxima and minima.

There must be a finite number of discontinuities in the signal f(t) in the given interval
of time.

It must be absolutely integrable in the given interval of time, i.e.

∫_{−∞}^{∞} |f(t)| dt < ∞

Discrete Time Fourier Transforms (DTFT)


The discrete-time Fourier transform (DTFT), or the Fourier transform of a discrete-time
sequence x[n], is a representation of the sequence in terms of the complex exponential
sequence e^{jωn}.

The DTFT of the sequence x[n] is given by


X(ω) = Σ_{n=−∞}^{∞} x(n) e^{−jωn}    ...(1)

Here, X(ω) is a complex function of the real frequency variable ω and it can be written as

X(ω) = Xre(ω) + j Ximg(ω)

where Xre(ω) and Ximg(ω) are the real and imaginary parts of X(ω) respectively.

Xre(ω) = |X(ω)| cos θ(ω)

Ximg(ω) = |X(ω)| sin θ(ω)

|X(ω)|² = |Xre(ω)|² + |Ximg(ω)|²

X(ω) can also be represented as X(ω) = |X(ω)| e^{jθ(ω)}

where θ(ω) = arg X(ω).

|X(ω)| and θ(ω) are called the magnitude and phase spectra of X(ω).

Inverse Discrete-Time Fourier Transform


x(n) = (1/2π) ∫_{−π}^{π} X(ω) e^{jωn} dω    ...(2)

Convergence Condition:

The infinite series in equation 1 may or may not converge. It converges when x(n) is absolutely
summable, i.e. when

Σ_{n=−∞}^{∞} |x(n)| < ∞

An absolutely summable sequence always has finite energy, but a finite-energy sequence
is not necessarily absolutely summable.

Fourier Transforms Properties


Here are the properties of the Fourier transform (below, ⟷ denotes a Fourier transform pair):

Linearity Property

If x(t) ⟷ X(ω) & y(t) ⟷ Y(ω)

then the linearity property states that

a x(t) + b y(t) ⟷ a X(ω) + b Y(ω)

Time Shifting Property

If x(t) ⟷ X(ω)

then the time shifting property states that

x(t − t0) ⟷ e^{−jωt0} X(ω)

Frequency Shifting Property

If x(t) ⟷ X(ω)

then the frequency shifting property states that

e^{jω0t} · x(t) ⟷ X(ω − ω0)

Time Reversal Property

If x(t) ⟷ X(ω)

then the time reversal property states that

x(−t) ⟷ X(−ω)

Time Scaling Property

If x(t) ⟷ X(ω)

then the time scaling property states that

x(at) ⟷ (1/|a|) X(ω/a)

Differentiation and Integration Properties

If x(t) ⟷ X(ω)

then the differentiation property states that

dx(t)/dt ⟷ jω · X(ω)

dⁿx(t)/dtⁿ ⟷ (jω)ⁿ · X(ω)

and the integration property states that

∫ x(t) dt ⟷ (1/jω) X(ω)

∭ ... ∫ x(t) dt ⟷ (1/(jω)ⁿ) X(ω)

Multiplication and Convolution Properties

If x(t) ⟷ X(ω) & y(t) ⟷ Y(ω)

then the multiplication property states that

x(t) · y(t) ⟷ X(ω) ∗ Y(ω)

and the convolution property states that

x(t) ∗ y(t) ⟷ X(ω) · Y(ω)

Distortion Less Transmission


Transmission is said to be distortion-less if the input and output have identical wave
shapes. i.e., in distortion-less transmission, the input x(t) and output y(t) satisfy the
condition:

y (t) = Kx(t - td)

Where td = delay time and

k = constant.

Take Fourier transform on both sides

FT[ y (t)] = FT[Kx(t - td)]

= K FT[x(t - td)]

According to time shifting property,

= K X(ω) e^{−jωtd}

∴ Y(ω) = K X(ω) e^{−jωtd}

Thus, distortionless transmission of a signal x(t) through a system with impulse response
h(t) is achieved when


|H(ω)| = K   (amplitude response)

Φ(ω) = −ωtd = −2πf td   (phase response)

A physical transmission system may have amplitude and phase responses as shown
below:

Hilbert Transform
The Hilbert transform of a signal x(t) is defined as the transform in which the phase angle of all
components of the signal is shifted by ±90°.

The Hilbert transform of x(t) is represented by x̂(t), and it is given by

x̂(t) = (1/π) ∫_{−∞}^{∞} x(k)/(t − k) dk

The inverse Hilbert transform is given by

x(t) = −(1/π) ∫_{−∞}^{∞} x̂(k)/(t − k) dk

x(t) and x̂(t) are called a Hilbert transform pair.

Properties of the Hilbert Transform

A signal x(t) and its Hilbert transform x̂(t) have

The same amplitude spectrum.

The same autocorrelation function.

The same energy spectral density.

x(t) and x̂(t) are orthogonal.

The Hilbert transform of x̂(t) is −x(t).

If the Fourier transform exists, then the Hilbert transform also exists for energy and power
signals.
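In practice the Hilbert transform of a sampled signal is usually obtained from the analytic signal. A short sketch (illustrative, assuming NumPy and SciPy; the tone frequency and sampling rate are arbitrary) recovers the expected result that the Hilbert transform of cos(ω0t) is sin(ω0t):

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t  = np.arange(0, 1, 1 / fs)
x  = np.cos(2 * np.pi * 50 * t)

analytic = hilbert(x)            # analytic signal x(t) + j x̂(t)
x_hat    = np.imag(analytic)     # Hilbert transform x̂(t)

# x̂(t) ≈ sin(2π·50·t): every component has its phase shifted by -90°
print(np.max(np.abs(x_hat - np.sin(2 * np.pi * 50 * t))))
```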

Convolution and Correlation

Convolution
Convolution is a mathematical operation used to express the relation between input and
output of an LTI system. It relates input, output and impulse response of an LTI system as

y(t) = x(t) ∗ h(t)

Where y (t) = output of LTI

x (t) = input of LTI

h (t) = impulse response of LTI

There are two types of convolutions:

Continuous convolution

Discrete convolution

Continuous Convolution

y(t) = x(t) ∗ h(t)

= ∫_{−∞}^{∞} x(τ) h(t − τ) dτ

(or)

= ∫_{−∞}^{∞} x(t − τ) h(τ) dτ

Discrete Convolution


y(n) = x(n) ∗ h(n)

= Σ_{k=−∞}^{∞} x(k) h(n − k)

(or)

= Σ_{k=−∞}^{∞} x(n − k) h(k)

By using convolution we can find zero state response of the system.
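For finite-length sequences the convolution sum is directly available in NumPy. A minimal sketch (illustrative; the input sequence and impulse response are made up for the example):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])   # input sequence x(n)
h = np.array([0.5, 0.5])        # impulse response h(n) of a simple averaging system

y = np.convolve(x, h)           # y(n) = Σ_k x(k) h(n - k)
print(y)                        # [0.5 1.5 2.5 1.5], length = len(x) + len(h) - 1
```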

Deconvolution
Deconvolution is reverse process to convolution widely used in signal and image
processing.

Properties of Convolution

Commutative Property
x1 (t) ∗ x2 (t) = x2 (t) ∗ x1 (t)

Distributive Property
x1 (t) ∗ [x2 (t) + x3 (t)] = [x1 (t) ∗ x2 (t)] + [x1 (t) ∗ x3 (t)]

Associative Property
x1 (t) ∗ [x2 (t) ∗ x3 (t)] = [x1 (t) ∗ x2 (t)] ∗ x3 (t)

Shifting Property
x1 (t) ∗ x2 (t) = y(t)

x1 (t) ∗ x2 (t − t 0 ) = y(t − t 0 )

x1 (t − t 0 ) ∗ x2 (t) = y(t − t 0 )

x1 (t − t 0 ) ∗ x2 (t − t 1 ) = y(t − t 0 − t 1 )

Convolution with Impulse


x1 (t) ∗ δ(t) = x(t)

x1 (t) ∗ δ(t − t 0 ) = x(t − t 0 )

Convolution of Unit Steps


u(t) ∗ u(t) = r(t)

u(t − T1 ) ∗ u(t − T2 ) = r(t − T1 − T2 )

u(n) ∗ u(n) = [n + 1]u(n)

Scaling Property

If x(t) ∗ h(t) = y(t)

then x(at) ∗ h(at) = (1/|a|) y(at)

Differentiation of Output

If y(t) = x(t) ∗ h(t)

then dy(t)/dt = (dx(t)/dt) ∗ h(t)

or

dy(t)/dt = x(t) ∗ (dh(t)/dt)

Note:

Convolution of two causal sequences is causal.

Convolution of two anti causal sequences is anti causal.

Convolution of two unequal-length rectangles results in a trapezium.

Convolution of two equal-length rectangles results in a triangle.

Convolving a function with the unit step is equivalent to integrating that function.

Example: You know that u(t) ∗ u(t) = r(t)

According to the above note, u(t) ∗ u(t) = ∫ u(t) dt = ∫ 1 dt = t = r(t)

Here, you get the result just by integrating u(t).

Limits of Convoluted Signal


If two signals are convoluted then the resulting convoluted signal has following range:

Sum of lower limits < t < sum of upper limits

Ex: find the range of convolution of signals given below


Here, we have two rectangles of unequal length to convolute, which results a trapezium.

The range of convoluted signal is:

Sum of lower limits < t < sum of upper limits

−1 + −2 < t < 2 + 2

−3 < t < 4

Hence the result is a trapezium with duration 7.

Area of Convoluted Signal


The area under convoluted signal is given by Ay = Ax Ah

Where Ax = area under input signal

Ah = area under impulse response

Ay = area under output signal


Proof: y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ

Take integration on both sides:

∫ y(t) dt = ∫ ∫_{−∞}^{∞} x(τ) h(t − τ) dτ dt

= ∫ x(τ) dτ [ ∫ h(t − τ) dt ]

We know that area of any signal is the integration of that signal itself.

∴ Ay = Ax Ah

DC Component
DC component of any signal is given by

DC component = area of the signal / period of the signal

Ex: what is the dc component of the resultant convoluted signal given below?


Here area of x1(t) = length × breadth = 1 × 3 = 3

area of x2(t) = length × breadth = 1 × 4 = 4

area of convoluted signal = area of x1(t) × area of x2(t) = 3 × 4 = 12

Duration of the convoluted signal = sum of lower limits < t < sum of upper limits

= -1 + -2 < t < 2+2

= -3 < t < 4

Period=7

∴ DC component of the convoluted signal = area of the signal / period of the signal = 12/7

Discrete Convolution
Let us see how to calculate discrete convolution:

i. To calculate discrete linear convolution:

Convolute two sequences x[n] = {a,b,c} & h[n] = [e,f,g]

Convoluted output = [ ea, eb+fa, ec+fb+ga, fc+gb, gc]

Note: if any two sequences have m, n number of samples respectively, then the resulting
convoluted sequence will have [m+n-1] samples.


Example: Convolute two sequences x[n] = {1,2,3} & h[n] = {-1,2,2}

Convoluted output y[n] = [ -1, -2+2, -3+4+2, 6+4, 6]

= [-1, 0, 3, 10, 6]

Here x[n] contains 3 samples and h[n] is also having 3 samples so the resulting sequence
having 3+3-1 = 5 samples.

ii. To calculate periodic or circular convolution:

Periodic convolution is valid for discrete Fourier transform. To calculate periodic convolution
all the samples must be real. Periodic or circular convolution is also called as fast
convolution.

If two sequences of length m, n respectively are convoluted using circular convolution then
resulting sequence having max [m,n] samples.

Ex: convolute two sequences x[n] = {1,2,3} & h[n] = {-1,2,2} using circular convolution

Normal Convoluted output y[n] = [ -1, -2+2, -3+4+2, 6+4, 6].

= [-1, 0, 3, 10, 6]

Here x[n] contains 3 samples and h[n] also has 3 samples. Hence the resulting sequence
obtained by circular convolution must have max[3,3]= 3 samples.

Now, to get the periodic convolution result, keep the first 3 samples (as the period is 3) of the
normal convolution and add the remaining two samples to the first samples, as shown below:


∴ Circular convolution result y[n] = [9 6 3]
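Both results above can be reproduced in a few lines. The sketch below is illustrative (assuming NumPy): it computes the linear convolution and then folds the tail back modulo the sequence length to obtain the circular convolution.

```python
import numpy as np

x = np.array([1, 2, 3])
h = np.array([-1, 2, 2])

y_lin = np.convolve(x, h)                 # linear convolution -> [-1  0  3 10  6]

N = max(len(x), len(h))                   # circular convolution length
y_circ = np.zeros(N, dtype=y_lin.dtype)
for n, v in enumerate(y_lin):             # wrap the extra samples back onto the first N positions
    y_circ[n % N] += v

print(y_lin)    # [-1  0  3 10  6]
print(y_circ)   # [ 9  6  3]
```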

Correlation
Correlation is a measure of similarity between two signals. The general formula for
correlation is

∫_{−∞}^{∞} x1(t) x2(t − τ) dt

There are two types of correlation:

Auto correlation

Cross correlation

Auto Correlation Function


It is defined as correlation of a signal with itself. Auto correlation function is a measure of
similarity between a signal & its time delayed version. It is represented with R(τ ).

Consider a signals x(t). The auto correlation function of x(t) with its time delayed version
is given by

R11(τ) = R(τ) = ∫_{−∞}^{∞} x(t) x(t − τ) dt   [+ve shift]

= ∫_{−∞}^{∞} x(t) x(t + τ) dt   [−ve shift]

where τ = searching or scanning or delay parameter.

If the signal is complex, then the auto correlation function is given by

R11(τ) = R(τ) = ∫_{−∞}^{∞} x(t) x*(t − τ) dt   [+ve shift]

= ∫_{−∞}^{∞} x(t + τ) x*(t) dt   [−ve shift]
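For sampled signals the autocorrelation is conveniently computed with np.correlate. A small sketch (illustrative; the random test signal is made up) also checks two of the properties listed in the next section, namely the value at the origin and the maximum at τ = 0:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(500)              # a real test signal

R = np.correlate(x, x, mode='full')       # R(τ) for lags τ = -(N-1) ... (N-1)
lags = np.arange(-len(x) + 1, len(x))

zero = np.where(lags == 0)[0][0]
print(np.isclose(R[zero], np.sum(x**2)))  # R(0) equals the total energy of the signal
print(np.argmax(R) == zero)               # the maximum is at τ = 0
print(np.allclose(R, R[::-1]))            # for a real signal, R(τ) = R(-τ)
```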


Properties of Auto-correlation Function of Energy Signal

Auto correlation exhibits conjugate symmetry, i.e. R(τ) = R*(−τ)

The auto correlation function of an energy signal at the origin, i.e. at τ = 0, is equal to the total
energy of that signal, which is given as:

R(0) = E = ∫_{−∞}^{∞} |x(t)|² dt

Auto correlation function ∝ 1/τ

The auto correlation function is maximum at τ = 0, i.e. |R(τ)| ≤ R(0) ∀ τ

The auto correlation function and the energy spectral density are Fourier transform pairs,
i.e.

F.T[R(τ)] = Ψ(ω)

Ψ(ω) = ∫_{−∞}^{∞} R(τ) e^{−jωτ} dτ

R(τ) = x(τ) ∗ x(−τ)

Auto Correlation Function of Power Signals

The auto correlation function of a periodic power signal with period T is given by

R(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x(t) x*(t − τ) dt

Properties

Auto correlation of a power signal exhibits conjugate symmetry, i.e. R(τ) = R*(−τ)

The auto correlation function of a power signal at τ = 0 (at the origin) is equal to the total power
of that signal, i.e. R(0) = ρ

Auto correlation function of a power signal ∝ 1/τ

The auto correlation function of a power signal is maximum at τ = 0, i.e.

|R(τ)| ≤ R(0) ∀ τ

The auto correlation function and the power spectral density are Fourier transform pairs,
i.e.

F.T[R(τ)] = S(ω)

S(ω) = ∫_{−∞}^{∞} R(τ) e^{−jωτ} dτ

R(τ) = x(τ) ∗ x(−τ)


Density Spectrum
Let us see density spectrums:

Energy Density Spectrum


The energy density spectrum can be calculated using the formula:

E = ∫_{−∞}^{∞} |X(f)|² df

Power Density Spectrum


The power density spectrum can be calculated using the formula:

P = Σ_{n=−∞}^{∞} |Cn|²

Cross Correlation Function


Cross correlation is the measure of similarity between two different signals.

Consider two signals x1(t) and x2(t). The cross correlation of these two signals R12 (τ ) is
given by

R12(τ) = ∫_{−∞}^{∞} x1(t) x2(t − τ) dt   [+ve shift]

= ∫_{−∞}^{∞} x1(t + τ) x2(t) dt   [−ve shift]

If the signals are complex, then

R12(τ) = ∫_{−∞}^{∞} x1(t) x2*(t − τ) dt   [+ve shift]

= ∫_{−∞}^{∞} x1(t + τ) x2*(t) dt   [−ve shift]

R21(τ) = ∫_{−∞}^{∞} x2(t) x1*(t − τ) dt   [+ve shift]

= ∫_{−∞}^{∞} x2(t + τ) x1*(t) dt   [−ve shift]

Properties of Cross Correlation Function of Energy and Power Signals


Cross correlation exhibits conjugate symmetry, i.e. R12(τ) = R21*(−τ).

Cross correlation is not commutative like convolution, i.e. R12(τ) ≠ R21(τ).

If R12(0) = 0, i.e. if ∫_{−∞}^{∞} x1(t) x2*(t) dt = 0, then the two signals are said to be
orthogonal.

For power signals, if lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x1(t) x2*(t) dt = 0, then the two signals are said to be
orthogonal.

The cross correlation function corresponds to the multiplication of the spectrum of one
signal by the complex conjugate of the spectrum of the other signal, i.e.

R12(τ) ⟷ X1(ω) X2*(ω)

This is also called the correlation theorem.

Parseval's Theorem
Parseval's theorem for energy signals states that the total energy in a signal can be
obtained by the spectrum of the signal as

E = (1/2π) ∫_{−∞}^{∞} |X(ω)|² dω

Note: If a signal has energy E then time scaled version of that signal x(at) has energy E/a.
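Parseval's relation has a direct discrete analogue that is easy to verify with the FFT. A sketch (illustrative, assuming NumPy): for a length-N sequence, Σ|x[n]|² = (1/N) Σ|X[k]|².

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(1024)

X = np.fft.fft(x)
energy_time = np.sum(np.abs(x)**2)
energy_freq = np.sum(np.abs(X)**2) / len(x)   # discrete counterpart of (1/2π)∫|X(ω)|²dω

print(np.isclose(energy_time, energy_freq))   # True
```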

Signals Sampling Theorem


Statement: A continuous time signal can be represented in its samples and can be
recovered back when sampling frequency fs is greater than or equal to the twice the
highest frequency component of message signal. i. e.

fs ≥ 2fm .

Proof: Consider a continuous time signal x(t). The spectrum of x(t) is a band limited to fm
Hz i.e. the spectrum of x(t) is zero for |ω|>ω m.

Sampling of input signal x(t) can be obtained by multiplying x(t) with an impulse train δ(t)
of period Ts. The output of multiplier is a discrete signal called sampled signal which is
represented with y(t) in the following diagrams:

https://w w w .tutorialspoint.com/signals_and_systems/signals_and_systems_quick_guide.htm 49/71


Page 50 of 71

Here, you can observe that the sampled signal takes the period of impulse. The process of
sampling can be explained by the following mathematical expression:

Sampled signal y(t) = x(t). δ(t) . . . . . . (1)

The trigonometric Fourier series representation of δ(t) is given by

δ(t) = a0 + Σ_{n=1}^{∞} (an cos nωst + bn sin nωst)    ...(2)

where

a0 = (1/Ts) ∫_{−T/2}^{T/2} δ(t) dt = (1/Ts) δ(0) = 1/Ts

an = (2/Ts) ∫_{−T/2}^{T/2} δ(t) cos nωst dt = (2/Ts) δ(0) cos(nωs·0) = 2/Ts

bn = (2/Ts) ∫_{−T/2}^{T/2} δ(t) sin nωst dt = (2/Ts) δ(0) sin(nωs·0) = 0

Substitute the above values in equation 2:

∴ δ(t) = 1/Ts + Σ_{n=1}^{∞} ((2/Ts) cos nωst + 0)

Substitute δ(t) in equation 1:

→ y(t) = x(t) · δ(t)

= x(t) [ 1/Ts + Σ_{n=1}^{∞} (2/Ts) cos nωst ]

= (1/Ts) [ x(t) + 2 Σ_{n=1}^{∞} (cos nωst) x(t) ]

y(t) = (1/Ts) [ x(t) + 2 cos ωst · x(t) + 2 cos 2ωst · x(t) + 2 cos 3ωst · x(t) + ... ]

Take the Fourier transform on both sides:

Y(ω) = (1/Ts) [ X(ω) + X(ω − ωs) + X(ω + ωs) + X(ω − 2ωs) + X(ω + 2ωs) + ... ]

∴ Y(ω) = (1/Ts) Σ_{n=−∞}^{∞} X(ω − nωs)   where n = 0, ±1, ±2, ...

To reconstruct x(t), you must recover the input signal spectrum X(ω) from the sampled signal
spectrum Y(ω), which is possible when there is no overlap between the cycles of Y(ω).

The possible sampled frequency spectra under different conditions are shown in the following
diagrams:

Aliasing Effect
The overlapped region in case of under sampling represents aliasing effect, which can be
removed by

considering fs > 2fm

by using anti-aliasing filters.
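The effect of the sampling rate can be demonstrated numerically. In the sketch below (illustrative, assuming NumPy; the tone frequencies and sampling rates are arbitrary choices), a 10 Hz sinusoid sampled at 12 Hz (below 2fm) is indistinguishable from a 2 Hz alias, while sampling above the Nyquist rate preserves it:

```python
import numpy as np

fm = 10.0                                           # message frequency (Hz)

fs_low = 12.0                                       # under-sampling: fs < 2 fm
n = np.arange(24)
t_low = n / fs_low
under = np.cos(2 * np.pi * fm * t_low)
alias = np.cos(2 * np.pi * (fs_low - fm) * t_low)   # a 2 Hz tone
print(np.allclose(under, alias))                    # True: the 10 Hz tone aliases to 2 Hz

fs_ok = 25.0                                        # fs > 2 fm: no aliasing
t_ok = np.arange(0, 1, 1 / fs_ok)
ok = np.cos(2 * np.pi * fm * t_ok)
k = np.argmax(np.abs(np.fft.rfft(ok)))
print(k * fs_ok / len(ok))                          # ≈ 10 Hz: spectral peak at the true frequency
```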


Signals Sampling Techniques


There are three types of sampling techniques:

Impulse sampling.

Natural sampling.

Flat Top sampling.

Impulse Sampling
Impulse sampling can be performed by multiplying the input signal x(t) with the impulse train
Σ_{n=−∞}^{∞} δ(t − nT) of period T. Here, the amplitude of the impulse changes with respect to the
amplitude of the input signal x(t). The output of the sampler is given by

y(t) = x(t) × impulse train

= x(t) × Σ_{n=−∞}^{∞} δ(t − nT)

y(t) = yδ(t) = Σ_{n=−∞}^{∞} x(nT) δ(t − nT)    ...(1)

To get the spectrum of the sampled signal, take the Fourier transform of equation 1 on both
sides:

Y(ω) = (1/T) Σ_{n=−∞}^{∞} X(ω − nωs)

This is called ideal sampling or impulse sampling. You cannot use this practically because
pulse width cannot be zero and the generation of impulse train is not possible practically.

Natural Sampling
Natural sampling is similar to impulse sampling, except that the impulse train is replaced by a
pulse train of period T, i.e. you multiply the input signal x(t) by the pulse train Σ_{n=−∞}^{∞} P(t − nT)
as shown below.

The output of the sampler is

y(t) = x(t) × pulse train

= x(t) × p(t)

= x(t) × Σ_{n=−∞}^{∞} P(t − nT)    ...(1)

The exponential Fourier series representation of p(t) can be given as

p(t) = Σ_{n=−∞}^{∞} Fn e^{jnωst}    ...(2)

= Σ_{n=−∞}^{∞} Fn e^{j2πnfst}

where Fn = (1/T) ∫_{−T/2}^{T/2} p(t) e^{−jnωst} dt

= (1/T) P(nωs)

Substitute the value of Fn in equation 2:

∴ p(t) = Σ_{n=−∞}^{∞} (1/T) P(nωs) e^{jnωst}

= (1/T) Σ_{n=−∞}^{∞} P(nωs) e^{jnωst}

Substitute p(t) in equation 1:

y(t) = x(t) × p(t)

= x(t) × (1/T) Σ_{n=−∞}^{∞} P(nωs) e^{jnωst}

y(t) = (1/T) Σ_{n=−∞}^{∞} P(nωs) x(t) e^{jnωst}

To get the spectrum of the sampled signal, take the Fourier transform on both sides:

F.T[y(t)] = F.T[ (1/T) Σ_{n=−∞}^{∞} P(nωs) x(t) e^{jnωst} ]

= (1/T) Σ_{n=−∞}^{∞} P(nωs) F.T[ x(t) e^{jnωst} ]

According to the frequency shifting property,

F.T[ x(t) e^{jnωst} ] = X(ω − nωs)

∴ Y(ω) = (1/T) Σ_{n=−∞}^{∞} P(nωs) X(ω − nωs)

Flat Top Sampling


During transmission, noise is introduced at top of the transmission pulse which can be
easily removed if the pulse is in the form of flat top. Here, the top of the samples are flat
i.e. they have constant amplitude. Hence, it is called as flat top sampling or practical
sampling. Flat top sampling makes use of sample and hold circuit.

Theoretically, the sampled signal can be obtained by convolving the rectangular pulse p(t) with the ideally sampled signal yδ(t), as shown in the diagram:

i.e. y(t) = p(t) ∗ yδ(t) . . . . . . (1)

To get the sampled spectrum, consider the Fourier transform on both sides of equation 1:

Y(ω) = F.T[p(t) ∗ yδ(t)]

By the convolution property,

Y(ω) = P(ω) Yδ(ω)

Here P(ω) = T Sa(ωT/2) = 2 sin(ωT/2)/ω
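
The factor P(ω) acts as a low-pass weighting on the replicated spectrum, which is the aperture effect of flat top sampling. The sketch below (a rough illustration assuming NumPy; the 1 ms hold time and the test frequencies are arbitrary values) evaluates the relative attenuation P(ω)/T at a few frequencies.

import numpy as np

T = 1e-3                                   # width of the flat-top (hold) pulse in seconds, illustrative

def P(omega):
    # P(w) = T * Sa(w*T/2) = 2*sin(w*T/2)/w ; note np.sinc(x) = sin(pi*x)/(pi*x)
    return T * np.sinc(omega * T / (2 * np.pi))

for f_hz in (0.0, 100.0, 250.0, 500.0):
    w = 2 * np.pi * f_hz
    print(f_hz, P(w) / T)                  # relative attenuation of that spectral component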

Nyquist Rate
It is the minimum sampling rate at which a signal can be converted into samples and recovered back without distortion.

Nyquist rate: fN = 2fm Hz

Nyquist interval = 1/fN = 1/(2fm) seconds.


Sampling of Band Pass Signals


In the case of band pass signals, the spectrum X[ω] = 0 for frequencies outside the range f1 ≤ f ≤ f2, where the frequency f1 is always greater than zero. Moreover, there is no aliasing effect when fs > 2f2. But this has two disadvantages:

The sampling rate is large in proportion with f2. This has practical limitations.

The sampled signal spectrum has spectral gaps.

To overcome this, the band pass theorem states that the input signal x(t) can be
converted into its samples and can be recovered back without distortion when sampling
frequency fs < 2f2.

Also,

fs = 1/T = 2f2/m

Where m is the largest integer not exceeding f2/B, and B is the bandwidth of the signal. If f2 = KB, then

fs = 1/T = 2KB/m

For band pass signals of bandwidth 2fm and the minimum sampling rate fs = 2B = 4fm, the spectrum of the sampled signal is given by

Y[ω] = (1/T) Σ_{n=−∞}^{∞} X[ω − 2nB]
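
As a quick numerical check of the band pass relation (the band edges below are hypothetical values chosen only for illustration), the required rate comes out far below the 2f2 needed by ordinary low-pass sampling.

import math

f1, f2 = 20e3, 25e3            # hypothetical band edges in Hz
B = f2 - f1                    # bandwidth = 5 kHz

m = math.floor(f2 / B)         # largest integer not exceeding f2/B -> 5
fs = 2 * f2 / m                # band pass sampling rate -> 10 kHz = 2*B

print(B, m, fs)                # 5000.0 5 10000.0
print(fs, "<", 2 * f2)         # 10 kHz versus the 50 kHz low-pass requirement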

Laplace Transforms (LT)


The complex Fourier transform is also called the bilateral Laplace transform. It is used to solve differential equations. Consider an LTI system excited by a complex exponential signal of the form x(t) = G e^{st},

Where s = any complex number = σ + jω,

σ = the real part of s, and

ω = the imaginary part of s.

The response of the LTI system can be obtained by the convolution of the input with its impulse response, i.e.

y(t) = x(t) ∗ h(t) = ∫_{−∞}^{∞} h(τ) x(t − τ) dτ

= ∫_{−∞}^{∞} h(τ) G e^{s(t−τ)} dτ


= G e^{st} · ∫_{−∞}^{∞} h(τ) e^{−sτ} dτ

y(t) = G e^{st} · H(S) = x(t) · H(S)

Where H(S) = Laplace transform of h(τ) = ∫_{−∞}^{∞} h(τ) e^{−sτ} dτ

Similarly, Laplace transform of x(t) = X(S) = ∫_{−∞}^{∞} x(t) e^{−st} dt . . . . . . (1)

Relation between Laplace and Fourier transforms



Laplace transform of x(t) = X(S) = ∫_{−∞}^{∞} x(t) e^{−st} dt

Substitute s = σ + jω in the above equation.

X(σ + jω) = ∫_{−∞}^{∞} x(t) e^{−(σ+jω)t} dt

          = ∫_{−∞}^{∞} [x(t) e^{−σt}] e^{−jωt} dt

∴ X(S) = F.T[x(t) e^{−σt}] . . . . . . (2)

X(S) = X(ω) for s = jω
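
This relation can be checked symbolically. The sketch below (assuming Python with SymPy, and using the hypothetical signal x(t) = e^{−at} u(t) with a > 0) computes X(s) and then substitutes s = jω to recover the Fourier transform.

import sympy as sp

t, s, omega = sp.symbols('t s omega', real=True)
a = sp.symbols('a', positive=True)

x = sp.exp(-a * t)                               # x(t) for t >= 0, i.e. e^{-a t} u(t)

# X(s) = integral from 0 to infinity of x(t) e^{-s t} dt
X_s = sp.laplace_transform(x, t, s, noconds=True)
print(X_s)                                       # 1/(a + s)

# Substituting s = j*omega gives the Fourier transform of e^{-a t} u(t)
print(sp.simplify(X_s.subs(s, sp.I * omega)))    # 1/(a + I*omega)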

Inverse Laplace Transform


You know that X(S) = F.T[x(t) e^{−σt}]

→ x(t) e^{−σt} = F.T^{−1}[X(S)] = F.T^{−1}[X(σ + jω)]

= (1/2π) ∫_{−∞}^{∞} X(σ + jω) e^{jωt} dω

x(t) = e^{σt} (1/2π) ∫_{−∞}^{∞} X(σ + jω) e^{jωt} dω

= (1/2π) ∫_{−∞}^{∞} X(σ + jω) e^{(σ+jω)t} dω . . . . . . (3)

Here, σ + jω = s

jdω = ds ⇒ dω = ds/j

∴ x(t) = (1/2πj) ∫_{σ−j∞}^{σ+j∞} X(s) e^{st} ds . . . . . . (4)

Equations 1 and 4 represent the Laplace and inverse Laplace transforms of a signal x(t).

Conditions for Existence of Laplace Transform


Dirichlet's conditions are used to define the existence of the Laplace transform, i.e.

The function f(t) must have a finite number of maxima and minima.


There must be a finite number of discontinuities in the signal f(t), in the given interval of time.

It must be absolutely integrable in the given interval of time, i.e.

∫_{−∞}^{∞} |f(t)| dt < ∞

Initial and Final Value Theorems


If the Laplace transform of an unknown function x(t) is known, then it is possible to determine the initial and the final values of that unknown signal, i.e. x(t) at t = 0+ and t = ∞.

Initial Value Theorem

Statement: if x(t) and its first derivative are Laplace transformable, then the initial value of x(t) is given by

x(0+) = lim_{s→∞} S X(S)

Final Value Theorem


Statement: if x(t) and its first derivative are Laplace transformable, then the final value of x(t) is given by

x(∞) = lim_{s→0} S X(S)
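
Both theorems can be checked symbolically. The sketch below (assuming SymPy, and using the hypothetical causal signal x(t) = e^{−2t} u(t), so X(s) = 1/(s + 2)) evaluates the two limits.

import sympy as sp

s = sp.symbols('s')
X = 1 / (s + 2)                    # X(s) of x(t) = e^{-2t} u(t), an illustrative choice

x0 = sp.limit(s * X, s, sp.oo)     # initial value theorem: x(0+) = lim_{s->oo} s X(s)
x_inf = sp.limit(s * X, s, 0)      # final value theorem:   x(inf) = lim_{s->0}  s X(s)

print(x0, x_inf)                   # 1 0, matching e^{-2t} u(t) at t = 0+ and as t -> infinity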

Laplace Transforms Properties


The properties of the Laplace transform are given below. In each relation, ⟷ denotes a Laplace transform pair.

Linearity Property

If x(t) ⟷ X(s) and y(t) ⟷ Y(s), then the linearity property states that

a x(t) + b y(t) ⟷ a X(s) + b Y(s)

Time Shifting Property

If x(t) ⟷ X(s), then the time shifting property states that


x(t − t0) ⟷ e^{−s t0} X(s)

Frequency Shifting Property

If x(t) ⟷ X(s), then the frequency shifting property states that

e^{s0 t} x(t) ⟷ X(s − s0)

Time Reversal Property

If x(t) ⟷ X(s), then the time reversal property states that

x(−t) ⟷ X(−s)

Time Scaling Property

If x(t) ⟷ X(s), then the time scaling property states that

x(at) ⟷ (1/|a|) X(s/a)

Differentiation and Integration Properties

If x(t) ⟷ X(s), then the differentiation property states that

dx(t)/dt ⟷ s X(s) − x(0)

d^n x(t)/dt^n ⟷ s^n X(s)   (when the initial conditions are zero)

The integration property states that

∫ x(t) dt ⟷ (1/s) X(s)

∭ . . . ∫ x(t) dt ⟷ (1/s^n) X(s)


Multiplication and Convolution Properties

If x(t) ⟷ X(s) and y(t) ⟷ Y(s), then the multiplication property states that

x(t) · y(t) ⟷ (1/2πj) X(s) ∗ Y(s)

The convolution property states that

x(t) ∗ y(t) ⟷ X(s) · Y(s)
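
The convolution property can be verified symbolically for a pair of simple causal signals. The sketch below (assuming SymPy; the exponentials e^{−2t}u(t) and e^{−3t}u(t) are illustrative choices) compares the transform of the time-domain convolution with the product of the individual transforms.

import sympy as sp

t, tau = sp.symbols('t tau', positive=True)
s = sp.symbols('s')

x = sp.exp(-2 * t)            # x(t) = e^{-2t} u(t), illustrative
y = sp.exp(-3 * t)            # y(t) = e^{-3t} u(t), illustrative

# Time-domain convolution of the two causal signals
conv = sp.integrate(x.subs(t, tau) * y.subs(t, t - tau), (tau, 0, t))

lhs = sp.laplace_transform(conv, t, s, noconds=True)          # transform of x(t) * y(t)
rhs = (sp.laplace_transform(x, t, s, noconds=True) *
       sp.laplace_transform(y, t, s, noconds=True))           # X(s) . Y(s)

print(sp.simplify(lhs - rhs))                                 # 0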

Region of Convergence (ROC)


The range of variation of σ for which the Laplace transform converges is called the region of convergence.

Properties of ROC of Laplace Transform

ROC consists of strips parallel to the jω axis in the s-plane.

If x(t) is absolutely integral and it is of finite duration, then ROC is entire s-plane.

If x(t) is a right sided sequence then ROC : Re{s} > σ o.

If x(t) is a left sided sequence then ROC : Re{s} < σ o.

If x(t) is a two sided sequence then ROC is the combination of two regions.

ROC can be explained by making use of examples given below:

Example 1: Find the Laplace transform and ROC of x(t) = e^{−at} u(t).

L.T[x(t)] = L.T[e^{−at} u(t)] = 1/(S + a)

Re{s} > −a


ROC: Re{s} > −a

Example 2: Find the Laplace transform and ROC of x(t) = e^{at} u(−t).

L.T[x(t)] = L.T[e^{at} u(−t)] = −1/(S − a)

Re{s} < a

ROC: Re{s} < a

Example 3: Find the Laplace transform and ROC of x(t) = e^{−at} u(t) + e^{at} u(−t).

L.T[x(t)] = L.T[e^{−at} u(t) + e^{at} u(−t)] = 1/(S + a) − 1/(S − a)

For 1/(S + a), the ROC is Re{s} > −a

For −1/(S − a), the ROC is Re{s} < a


Referring to the above diagram, the combined region (for a > 0) lies from −a to a. Hence,

ROC: −a < Re{s} < a

Causality and Stability

For a system to be causal, the ROC of its transfer function must be the region of the s-plane to the right of the rightmost pole.

A system is said to be stable when all poles of its transfer function lie in the left half of the s-plane.

A system is said to be unstable when at least one pole of its transfer function lies in the right half of the s-plane.


A system is said to be marginally stable when at least one pole of its transfer function lies on the jω axis of the s-plane (with no poles in the right half of the s-plane).
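
These pole-location conditions are easy to test numerically. The sketch below (assuming NumPy; the transfer function H(s) = 1/(s² + 3s + 2) is a hypothetical example) classifies a system from the real parts of its poles.

import numpy as np

den = [1, 3, 2]                       # denominator of H(s) = 1/(s^2 + 3s + 2), illustrative
poles = np.roots(den)                 # poles at s = -1 and s = -2

if np.all(poles.real < 0):
    print("stable: all poles in the left half of the s-plane", poles)
elif np.any(poles.real > 0):
    print("unstable: at least one pole in the right half of the s-plane", poles)
else:
    print("marginally stable: pole(s) on the j-omega axis", poles)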

ROC of Basic Functions

f(t)                         F(s)                          ROC

u(t)                         1/s                           Re{s} > 0

t u(t)                       1/s^2                         Re{s} > 0

t^n u(t)                     n!/s^(n+1)                    Re{s} > 0

e^{at} u(t)                  1/(s − a)                     Re{s} > a

e^{−at} u(t)                 1/(s + a)                     Re{s} > −a


e^{at} u(−t)                 −1/(s − a)                    Re{s} < a

e^{−at} u(−t)                −1/(s + a)                    Re{s} < −a

t e^{at} u(t)                1/(s − a)^2                   Re{s} > a

t^n e^{at} u(t)              n!/(s − a)^(n+1)              Re{s} > a

t e^{−at} u(t)               1/(s + a)^2                   Re{s} > −a

t^n e^{−at} u(t)             n!/(s + a)^(n+1)              Re{s} > −a

t e^{at} u(−t)               −1/(s − a)^2                  Re{s} < a

t^n e^{at} u(−t)             −n!/(s − a)^(n+1)             Re{s} < a

t e^{−at} u(−t)              −1/(s + a)^2                  Re{s} < −a

t^n e^{−at} u(−t)            −n!/(s + a)^(n+1)             Re{s} < −a

e^{−at} cos bt               (s + a)/((s + a)^2 + b^2)


e^{−at} sin bt               b/((s + a)^2 + b^2)

Z-Transforms (ZT)
Analysis of discrete time LTI systems can be done using z-transforms. It is a powerful mathematical tool that converts difference equations into algebraic equations.

The bilateral (two sided) z-transform of a discrete time signal x(n) is given as

Z.T[x(n)] = X(Z) = Σ_{n=−∞}^{∞} x(n) z^{−n}

The unilateral (one sided) z-transform of a discrete time signal x(n) is given as

Z.T[x(n)] = X(Z) = Σ_{n=0}^{∞} x(n) z^{−n}

Z-transform may exist for some signals for which Discrete Time Fourier Transform (DTFT)
does not exist.

Concept of Z-Transform and Inverse Z-Transform


Z-transform of a discrete time signal x(n) can be represented with X(Z), and it is defined as

X(Z) = Σ_{n=−∞}^{∞} x(n) z^{−n} . . . . . . (1)

If Z = r e^{jω}, then equation 1 becomes

X(r e^{jω}) = Σ_{n=−∞}^{∞} x(n)[r e^{jω}]^{−n}

= Σ_{n=−∞}^{∞} x(n) r^{−n} e^{−jωn}

X(r e^{jω}) = X(Z) = F.T[x(n) r^{−n}] . . . . . . (2)

The above equation represents the relation between the Fourier transform and the Z-transform.

X(Z)|_{z=e^{jω}} = F.T[x(n)]

Inverse Z-transform

X(r e^{jω}) = F.T[x(n) r^{−n}]

x(n) r^{−n} = F.T^{−1}[X(r e^{jω})]

x(n) = r^n F.T^{−1}[X(r e^{jω})]


= r^n (1/2π) ∫ X(r e^{jω}) e^{jωn} dω

= (1/2π) ∫ X(r e^{jω}) [r e^{jω}]^n dω . . . . . . (3)

Substitute r e^{jω} = z.

dz = j r e^{jω} dω = j z dω

dω = (1/j) z^{−1} dz

Substitute in equation 3.

3 → x(n) = (1/2π) ∫ X(z) z^n (1/j) z^{−1} dz = (1/2πj) ∫ X(z) z^{n−1} dz

X(Z) = Σ_{n=−∞}^{∞} x(n) z^{−n}

x(n) = (1/2πj) ∫ X(z) z^{n−1} dz
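
For causal signals the defining sum is often a geometric series that can be evaluated directly. The sketch below (assuming SymPy; x(n) = a^n u(n) is an illustrative choice) evaluates the sum symbolically.

import sympy as sp

n = sp.symbols('n', integer=True, nonnegative=True)
a, z = sp.symbols('a z')

# X(Z) = sum over n >= 0 of a^n z^{-n}, the defining sum for x(n) = a^n u(n)
X = sp.summation((a / z) ** n, (n, 0, sp.oo))
print(X)
# SymPy returns a Piecewise result; inside the ROC |z| > |a| the sum
# reduces to 1/(1 - a/z) = z/(z - a), matching the table entry for a^n u[n].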

Z-Transforms Properties
The Z-transform has the following properties. In each relation, ⟷ denotes a Z-transform pair.

Linearity Property

If x(n) ⟷ X(Z) and y(n) ⟷ Y(Z), then the linearity property states that

a x(n) + b y(n) ⟷ a X(Z) + b Y(Z)

Time Shifting Property

If x(n) ⟷ X(Z), then the time shifting property states that

x(n − m) ⟷ z^{−m} X(Z)

Multiplication by Exponential Sequence Property


If x(n) ⟷ X(Z), then the multiplication by an exponential sequence property states that

a^n · x(n) ⟷ X(Z/a)

Time Reversal Property

If x(n) ⟷ X(Z), then the time reversal property states that

x(−n) ⟷ X(1/Z)

Differentiation in Z-Domain OR Multiplication by n Property

If x(n) ⟷ X(Z), then the multiplication by n (differentiation in the z-domain) property states that

n^k x(n) ⟷ [−1]^k z^k d^k X(Z)/dZ^k

Convolution Property

If x(n) ⟷ X(Z) and y(n) ⟷ Y(Z), then the convolution property states that

x(n) ∗ y(n) ⟷ X(Z) · Y(Z)

Correlation Property

If x(n) ⟷ X(Z) and y(n) ⟷ Y(Z), then the correlation property states that

x(n) ⊗ y(n) ⟷ X(Z) · Y(Z^{−1})

Initial Value and Final Value Theorems



Initial value and final value theorems of the z-transform are defined for causal signals.

Initial Value Theorem


For a causal signal x(n), the initial value theorem states that

x(0) = lim_{z→∞} X(z)

This is used to find the initial value of the signal without taking the inverse z-transform.

Final Value Theorem


For a causal signal x(n), the final value theorem states that

x(∞) = lim_{z→1} [z − 1] X(z)

This is used to find the final value of the signal without taking the inverse z-transform.
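
Both theorems can be verified for a simple causal sequence. The sketch below (assuming SymPy, with the illustrative pair x(n) = (1/2)^n u(n) ⟷ X(z) = z/(z − 1/2)) evaluates the two limits.

import sympy as sp

z = sp.symbols('z')
a = sp.Rational(1, 2)                  # illustrative pole with |a| < 1, so x(n) = a^n u(n) decays

X = z / (z - a)                        # z-transform of a^n u(n)

x0 = sp.limit(X, z, sp.oo)             # initial value theorem: x(0) = lim_{z->oo} X(z)
x_inf = sp.limit((z - 1) * X, z, 1)    # final value theorem:  x(oo) = lim_{z->1} (z-1) X(z)

print(x0, x_inf)                       # 1 0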

Region of Convergence (ROC) of Z-Transform


The range of variation of z for which the z-transform converges is called the region of convergence of the z-transform.

Properties of ROC of Z-Transforms

ROC of the z-transform is indicated by a circular region in the z-plane.

ROC does not contain any poles.

If x(n) is a finite duration causal sequence or right sided sequence, then the ROC is the entire z-plane except at z = 0.

If x(n) is a finite duration anti-causal sequence or left sided sequence, then the ROC is the entire z-plane except at z = ∞.

If x(n) is an infinite duration causal sequence, the ROC is the exterior of the circle with radius a, i.e. |z| > a.

If x(n) is an infinite duration anti-causal sequence, the ROC is the interior of the circle with radius a, i.e. |z| < a.

If x(n) is a finite duration two sided sequence, then the ROC is the entire z-plane except at z = 0 and z = ∞.

The concept of ROC can be explained by the following example:

Example 1: Find the z-transform and ROC of x(n) = a^n u[n] + a^{−n} u[−n − 1].

Z.T[a^n u[n]] + Z.T[a^{−n} u[−n − 1]] = Z/(Z − a) − Z/(Z − 1/a)


ROC: |z| > a and ROC: |z| < 1/a

The plot of the ROC has two cases, a > 1 and a < 1, since the value of a is not known.

In the case a > 1, the two regions do not overlap, so there is no combined ROC.

For a < 1, the combined ROC is a < |z| < 1/a.

Hence, for this problem, the z-transform exists only when a < 1.

Causality and Stability

Causality condition for discrete time LTI systems is as follows:


A discrete time LTI system is causal when

ROC is outside the outermost pole.

In the transfer function H[Z], the order of the numerator cannot be greater than the order of the denominator.


Stability Condition for Discrete Time LTI Systems

A discrete time LTI system is stable when

the ROC of its system function H[Z] includes the unit circle |z| = 1, and

all poles of the transfer function lie inside the unit circle |z| = 1.
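
The unit-circle test is easy to run numerically. The sketch below (assuming NumPy; the denominator z² − 0.5z + 0.06 is a hypothetical example with poles at 0.2 and 0.3) checks whether every pole magnitude is below one.

import numpy as np

poles = np.roots([1, -0.5, 0.06])        # poles of an illustrative H(z): 0.3 and 0.2

print(poles)
print("stable" if np.all(np.abs(poles) < 1) else "not stable")   # all |pole| < 1 -> stable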

Z-Transform of Basic Signals

x(n)                         X[Z]

δ(n)                         1

u(n)                         Z/(Z − 1)

u(−n − 1)                    −Z/(Z − 1)

δ(n − m)                     z^{−m}

a^n u[n]                     Z/(Z − a)

a^n u[−n − 1]                −Z/(Z − a)

n a^n u[n]                   aZ/(Z − a)^2

n a^n u[−n − 1]              −aZ/(Z − a)^2

a^n cos ωn u[n]              (Z^2 − aZ cos ω)/(Z^2 − 2aZ cos ω + a^2)

a^n sin ωn u[n]              (aZ sin ω)/(Z^2 − 2aZ cos ω + a^2)
