Signals and Systems Quick Guide
What is a Signal?
A signal is a time-varying physical phenomenon which is intended to convey information.
OR
A signal is a function of one or more independent variables which contains some information.
Note: Noise is also a signal, but the information conveyed by noise is unwanted, hence it is considered undesirable.
What is a System?
A system is a device or combination of devices which can operate on signals and produce a corresponding response. The input to a system is called excitation and the output from it is called response.
For one or more inputs, the system can have one or more outputs.
Unit Impulse Signal
Impulse function is denoted by δ(t), and it is defined as δ(t) = 1 for t = 0, and 0 for t ≠ 0.

∫_{−∞}^{t} δ(τ) dτ = u(t)

δ(t) = du(t)/dt

where u(t) is the unit step signal: u(t) = 1 for t ⩾ 0, and 0 for t < 0.
Ramp Signal
Ramp signal is denoted by r(t), and it is defined as r(t) = t for t ⩾ 0, and 0 for t < 0.

∫ u(t) dt = ∫ 1 dt = t = r(t)

u(t) = dr(t)/dt
Parabolic Signal
Parabolic signal can be defined as x(t) = t²/2 for t ⩾ 0, and 0 for t < 0.

∬ u(t) dt = ∫ r(t) dt = ∫ t dt = t²/2 = parabolic signal

⇒ u(t) = d²x(t)/dt²
⇒ r(t) = dx(t)/dt
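The step–ramp–parabola chain above can be checked numerically. A minimal sketch (the grid size and time range are illustrative assumptions, not from the text) using cumulative trapezoidal integration:

```python
import numpy as np

# Verify r(t) = integral of u(t) and t²/2 = integral of r(t) on a grid t >= 0.
t = np.linspace(0.0, 5.0, 50001)
dt = t[1] - t[0]
u = np.ones_like(t)                                   # unit step for t >= 0

# Cumulative trapezoidal integration of the step gives the ramp.
r = np.concatenate(([0.0], np.cumsum((u[1:] + u[:-1]) / 2 * dt)))
# Integrating the ramp gives the parabolic signal t²/2.
p = np.concatenate(([0.0], np.cumsum((r[1:] + r[:-1]) / 2 * dt)))

print(np.max(np.abs(r - t)))          # ≈ 0: the ramp equals t
print(np.max(np.abs(p - t**2 / 2)))   # ≈ 0: the parabola equals t²/2
```

Differentiating in the other direction (np.gradient) recovers r(t) and u(t) from the parabola, mirroring the derivative relations above.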
Signum Function
Signum function is denoted by sgn(t), and it is defined as sgn(t) = 1 for t > 0, 0 for t = 0, and −1 for t < 0.

sgn(t) = 2u(t) − 1
Exponential Signal
Exponential signal is of the form x(t) = e^{αt}.
Case i: if α = 0 → x(t) = e⁰ = 1 (a constant signal)
Case ii: if α < 0 → x(t) = e^{αt} is a decaying exponential
Case iii: if α > 0 → x(t) = e^{αt} is a growing exponential
Rectangular Signal
Rectangular signal can be defined as x(t) = A rect[t/T].
Triangular Signal
Triangular signal can be defined as x(t) = A(1 − |t|/T) for |t| ⩽ T, and 0 otherwise.
Sinusoidal Signal
Sinusoidal signal is of the form x(t) = A sin(ω₀t ± ϕ), where T₀ = 2π/ω₀ is the period.
Sinc Function
Sinc function is denoted by sinc(t), and it is defined as sinc(t) = sin(πt)/(πt).
Sampling Function
Sampling function is denoted by sa(t), and it is defined as sa(t) = sin(t)/t.
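The two functions differ only by a scaling of the argument. A small sketch of both, noting the assumption that NumPy's np.sinc implements the normalized convention sin(πt)/(πt) used above:

```python
import numpy as np

def sa(t):
    # Sampling function sa(t) = sin(t)/t, written via NumPy's normalized
    # sinc (np.sinc(x) = sin(pi*x)/(pi*x)) so sa(0) = 1 is handled for free.
    return np.sinc(np.asarray(t) / np.pi)

print(np.sinc(0.0))            # 1.0: sinc(0) = 1 by the limit
print(sa(0.0))                 # 1.0
print(abs(float(sa(np.pi))))   # ≈ 0: sa vanishes at nonzero multiples of π
```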
Signals Classification
Signals are classified into the following categories:
A signal is said to be deterministic if there is no uncertainty with respect to its value at any
instant of time. Or, signals which can be defined exactly by a mathematical formula are
known as deterministic signals.
Example 1: Let x(t) = t². Then x(−t) = (−t)² = t² = x(t); ∴ t² is an even function.
Example 2: As shown in the following diagram, the rectangle function satisfies x(t) = x(−t), so it is also an even function.
Any function x(t) can be expressed as the sum of its even function xₑ(t) and odd function xₒ(t):

x(t) = xₑ(t) + xₒ(t)

where

xₑ(t) = ½[x(t) + x(−t)]
xₒ(t) = ½[x(t) − x(−t)]
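The decomposition is easy to check numerically. A sketch using x(t) = eᵗ as an illustrative choice (not from the text), whose even and odd parts are known to be cosh t and sinh t:

```python
import numpy as np

# Even/odd decomposition on a symmetric grid: x(t) = xe(t) + xo(t).
t = np.linspace(-3, 3, 601)
x = np.exp(t)                 # neither even nor odd

x_rev = x[::-1]               # x(-t) on this symmetric grid
xe = 0.5 * (x + x_rev)        # even part -> cosh(t)
xo = 0.5 * (x - x_rev)        # odd part  -> sinh(t)

print(np.allclose(xe, np.cosh(t)))   # True
print(np.allclose(xo, np.sinh(t)))   # True
print(np.allclose(xe + xo, x))       # True: the parts sum back to x(t)
```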
A signal is said to be periodic if it satisfies the condition x(t) = x(t + T) or x(n) = x(n + N),
where T = fundamental time period and 1/T = f = fundamental frequency.
The above signal will repeat for every time interval T₀; hence it is periodic with period T₀.
Energy E = ∫_{−∞}^{∞} x²(t) dt

Power P = lim_{T→∞} (1/2T) ∫_{−T}^{T} x²(t) dt

NOTE: A signal cannot be both an energy and a power signal simultaneously. Also, a signal may be neither an energy nor a power signal.
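The power formula can be sanity-checked on a sinusoid, whose average power is known to be A²/2. The amplitude, frequency and averaging window below are illustrative assumptions:

```python
import numpy as np

# Estimate P = lim (1/2T) * integral of x² for x(t) = A*sin(w0*t) by
# averaging x² over 100 full periods (A = 3, f = 1 Hz assumed here).
A, w0 = 3.0, 2 * np.pi
T = 100.0
t = np.linspace(-T, T, 200001)
x = A * np.sin(w0 * t)

P = np.mean(x**2)       # finite-window estimate of the average power
print(P)                # ≈ 4.5 = A²/2
```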
Note: For a real signal, the imaginary part should be zero. Similarly, for an imaginary signal, the real part should be zero.
Amplitude Scaling
Addition
Addition of two signals is nothing but addition of their corresponding amplitudes. This can
be best explained by using the following example:
Subtraction
Subtraction of two signals is nothing but subtraction of their corresponding amplitudes. This can be best explained by the following example:
Multiplication
Multiplication of two signals is nothing but multiplication of their corresponding amplitudes.
This can be best explained by the following example:
Time Shifting
x(t ± t0) is time shifted version of the signal x(t).
Time Scaling
x(At) is the time-scaled version of the signal x(t), where A is always positive. |A| > 1 compresses the signal and |A| < 1 expands it.
Note: u(at) = u(t); time scaling is not applicable for the unit step function.
Time Reversal
x(−t) is the time-reversed version of the signal x(t).
Systems Classification
Systems are classified into the following categories:
From the above expression, it is clear that the response of the overall system is equal to the sum of the responses to the individual inputs.
Example:
y(t) = x²(t)
Solution:
For the input a₁x₁(t) + a₂x₂(t), the output is [a₁x₁(t) + a₂x₂(t)]² = a₁²x₁²(t) + a₂²x₂²(t) + 2a₁a₂x₁(t)x₂(t), which is not equal to a₁y₁(t) + a₂y₂(t). Hence the system is said to be non-linear.
For a time invariant system, y(n, t) = y(n − t); for a time variant system, y(n, t) ≠ y(n − t), where y(n, t) is the output for an input delayed by t and y(n − t) is the delayed output.
Example:
y(n) = x(−n)
Here y(n, t) = x(−n − t) while y(n − t) = x(−n + t). Since y(n, t) ≠ y(n − t), the system is time variant.
Linear Time Variant (LTV) and Linear Time Invariant (LTI) Systems
If a system is both linear and time variant, then it is called a linear time variant (LTV) system.
If a system is both linear and time invariant, then it is called a linear time invariant (LTI) system.
For the present value t = 0, the system output is y(0) = 2x(0). Here, the output depends only upon the present input. Hence the system is memoryless (static).
For the present value t = 0, the system output is y(0) = 2x(0) + 3x(−3).
Here x(−3) is a past value of the input, for which the system requires memory to produce this output. Hence, the system is a dynamic system.
A system is said to be causal if its output depends upon present and past inputs, and does
not depend upon future input.
For non causal system, the output depends upon future inputs also.
For present value t=1, the system output is y(1) = 2x(1) + 3x(-2).
Here, the system output only depends upon present and past inputs. Hence, the system is
causal.
For the present value t = 1, the system output is y(1) = 2x(1) + 3x(−2) + 6x(4). Here, the system output depends upon a future input. Hence the system is a non-causal system.
A system is said to be invertible if the input of the system can be recovered from the output.
If a system H1(s) is cascaded with its inverse H2(s) = 1/H1(s), then
Y(s) = X(s) · H1(s) · (1/H1(s)) = X(s)
→ y(t) = x(t)
Let the input be u(t) (a bounded unit step input); then the output y(t) = u²(t) = u(t) is a bounded output. Hence the system is stable.
Let the input be u(t) (a bounded unit step input); then the output y(t) = ∫ u(t) dt = r(t), a ramp signal, which is unbounded because the amplitude of the ramp is not finite: it goes to infinity as t → ∞. Hence the system is unstable.
Signals Analysis
Vector
A vector has magnitude and direction. The name of a vector is denoted by boldface type and its magnitude is denoted by lightface type.
Example: V is a vector with magnitude V. Consider two vectors V1 and V2 as shown in the following diagram. Let the component of V1 along V2 be given by C₁₂V₂. The component of a vector V1 along the vector V2 can be obtained by taking a perpendicular from the end of V1 to the vector V2, as shown in the diagram:
V1= C 12V2 + Ve
But this is not the only way of expressing vector V1 in terms of V2. The alternate possibilities are:
V1 = C₁V₂ + Ve1
V1 = C₂V₂ + Ve2
The component C₁₂V₂ is chosen so that the error vector is minimized. If C₁₂ = 0, then the two signals are said to be orthogonal.
V1 · V2 = V1 V2 cos θ, where θ = angle between V1 and V2
V1 · V2 = V2 · V1
The component of V1 along V2 is V1 cos θ = (V1 · V2)/V2
From the diagram, the component of V1 along V2 is also C₁₂V₂:
(V1 · V2)/V2 = C₁₂V₂
⇒ C₁₂ = (V1 · V2)/V2²
Signal
The concept of orthogonality can be applied to signals. Let us consider two signals f1(t) and f2(t). Similar to vectors, you can approximate f1(t) in terms of f2(t) as
f1(t) ≅ C₁₂ f2(t) for (t1 < t < t2)
with error fe(t) = f1(t) − C₁₂ f2(t).
One possible way of minimizing the error is to integrate it over the interval t1 to t2:
(1/(t2 − t1)) ∫_{t1}^{t2} [fe(t)] dt

⇒ (1/(t2 − t1)) ∫_{t1}^{t2} [f1(t) − C₁₂ f2(t)] dt

However, this step also does not reduce the error to an appreciable extent, since positive and negative errors cancel. This can be corrected by taking the square of the error function.
ε = (1/(t2 − t1)) ∫_{t1}^{t2} [fe(t)]² dt

⇒ ε = (1/(t2 − t1)) ∫_{t1}^{t2} [f1(t) − C₁₂ f2(t)]² dt

where ε is the mean square value of the error signal. To find the value of C₁₂ which minimizes the error, set

dε/dC₁₂ = 0

⇒ (d/dC₁₂) [(1/(t2 − t1)) ∫_{t1}^{t2} [f1(t) − C₁₂ f2(t)]² dt] = 0

⇒ (1/(t2 − t1)) ∫_{t1}^{t2} [(d/dC₁₂) f1²(t) − (d/dC₁₂) 2f1(t) C₁₂ f2(t) + (d/dC₁₂) C₁₂² f2²(t)] dt = 0
The derivative of the terms which do not contain C₁₂ is zero.
⇒ ∫_{t1}^{t2} −2f1(t) f2(t) dt + 2C₁₂ ∫_{t1}^{t2} [f2²(t)] dt = 0

⇒ C₁₂ = [∫_{t1}^{t2} f1(t) f2(t) dt] / [∫_{t1}^{t2} f2²(t) dt]

If the component C₁₂ is zero, then the two signals are said to be orthogonal:

∫_{t1}^{t2} f1(t) f2(t) dt = 0
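The coefficient C₁₂ is straightforward to evaluate numerically. A sketch using sine and cosine over one period, an illustrative choice of signal pair (the interval and grid are also assumptions):

```python
import numpy as np

# C12 = integral(f1*f2) / integral(f2*f2) over [t1, t2]; C12 = 0 means
# f1 is orthogonal to f2 on that interval.
t = np.linspace(0, 2 * np.pi, 200001)
dt = t[1] - t[0]

def c12(f1, f2):
    return np.sum(f1 * f2) * dt / (np.sum(f2 * f2) * dt)

print(c12(np.sin(t), np.cos(t)))        # ≈ 0: sin and cos are orthogonal
print(c12(np.sin(2 * t), np.sin(3 * t)))  # ≈ 0: different harmonics too
print(c12(np.sin(t), np.sin(t)))        # 1.0: a signal "along" itself
```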
Consider a vector A at a point (X1, Y1, Z1). Consider three unit vectors (VX, VY, VZ) in the
direction of X, Y, Z axis respectively. Since these unit vectors are mutually orthogonal, it
satisfies that
VX . VX = VY . VY = VZ . VZ = 1
VX . VY = VY . VZ = VZ . VX = 0
Va · Vb = 1 for a = b, and 0 for a ≠ b
The vector A can be represented in terms of its components and unit vectors as
A = X1 VX + Y1 VY + Z1 VZ . . . . . . . . . . . . . . . . (1)
Any vectors in this three dimensional space can be represented in terms of these three unit
vectors only.
If you consider n-dimensional space, then any vector A in that space can be represented as
A = X1 VX + Y1 VY + Z1 VZ + ... + G1 VG + ... + N1 VN ..... (2)
The component of A along a unit vector VG is obtained by taking the dot product with VG:
A · VG = (X1 VX + Y1 VY + Z1 VZ + ... + G1 VG + ... + N1 VN) · VG
= X1 VX · VG + Y1 VY · VG + Z1 VZ · VG + ... + G1 VG · VG + ... + N1 VN · VG
= G1, since VG · VG = 1
If VG · VG ≠ 1, i.e. VG · VG = k, then
A · VG = G1 VG · VG = G1 k
⇒ G1 = (A · VG)/k
The same idea carries over to signals. Consider a set of n mutually orthogonal signals x1(t), x2(t), ..., xn(t) over the interval (t1, t2), and let
∫_{t1}^{t2} x_k²(t) dt = K_k
Let f(t) be a function; it can be approximated with this orthogonal signal space by adding the components along the mutually orthogonal signals, i.e.
f(t) ≅ C1 x1(t) + C2 x2(t) + ... + Cn xn(t) = Σ_{r=1}^{n} Cr xr(t)
The error is fe(t) = f(t) − Σ_{r=1}^{n} Cr xr(t)
Mean square error ε = (1/(t2 − t1)) ∫_{t1}^{t2} [fe(t)]² dt

= (1/(t2 − t1)) ∫_{t1}^{t2} [f(t) − Σ_{r=1}^{n} Cr xr(t)]² dt
The component which minimizes the mean square error can be found by
dε/dC1 = dε/dC2 = ... = dε/dCk = 0
Let us consider dε/dCk = 0:
(d/dCk) [(1/(t2 − t1)) ∫_{t1}^{t2} [f(t) − Σ_{r=1}^{n} Cr xr(t)]² dt] = 0
All terms that do not contain Ck are zero; i.e., in the summation, the r = k term remains and all other terms vanish.
⇒ ∫_{t1}^{t2} −2f(t) xk(t) dt + 2Ck ∫_{t1}^{t2} [xk²(t)] dt = 0

⇒ Ck = [∫_{t1}^{t2} f(t) xk(t) dt] / [∫_{t1}^{t2} xk²(t) dt]

⇒ ∫_{t1}^{t2} f(t) xk(t) dt = Ck K_k
Substituting the optimal Ck, the mean square error becomes
ε = (1/(t2 − t1)) ∫_{t1}^{t2} [fe(t)]² dt
= (1/(t2 − t1)) ∫_{t1}^{t2} [f(t) − Σ_{r=1}^{n} Cr xr(t)]² dt
= (1/(t2 − t1)) [∫_{t1}^{t2} f²(t) dt + Σ_{r=1}^{n} Cr² ∫_{t1}^{t2} xr²(t) dt − 2 Σ_{r=1}^{n} Cr ∫_{t1}^{t2} xr(t) f(t) dt]
You know that Cr² ∫_{t1}^{t2} xr²(t) dt = Cr ∫_{t1}^{t2} xr(t) f(t) dt = Cr² K_r
∴ ε = (1/(t2 − t1)) [∫_{t1}^{t2} f²(t) dt + Σ_{r=1}^{n} Cr² K_r − 2 Σ_{r=1}^{n} Cr² K_r]
= (1/(t2 − t1)) [∫_{t1}^{t2} f²(t) dt − Σ_{r=1}^{n} Cr² K_r]
∴ ε = (1/(t2 − t1)) [∫_{t1}^{t2} f²(t) dt − (C1² K_1 + C2² K_2 + ... + Cn² K_n)]
If the function f(t) satisfies the equation ∫_{t1}^{t2} f(t) xk(t) dt = 0 for k = 1, 2, ..., then f(t) is said to be orthogonal to each and every function of the orthogonal set. This set is incomplete without f(t); it becomes a closed and complete set when f(t) is included.
f(t) can then be approximated with this orthogonal set by adding the components along the mutually orthogonal signals.
If the infinite series C1 x1(t) + C2 x2(t) + ... + Cn xn(t) + ... converges to f(t), then the mean square error is zero.
Orthogonality in Complex Functions
If f1(t) and f2(t) are two complex functions, then f1(t) can be expressed in terms of f2(t) as f1(t) ≅ C₁₂ f2(t), with
C₁₂ = [∫_{t1}^{t2} f1(t) f2*(t) dt] / [∫_{t1}^{t2} |f2(t)|² dt]
where f2*(t) = complex conjugate of f2(t).
If f1(t) and f2(t) are orthogonal, then C₁₂ = 0:
[∫_{t1}^{t2} f1(t) f2*(t) dt] / [∫_{t1}^{t2} |f2(t)|² dt] = 0
⇒ ∫_{t1}^{t2} f1(t) f2*(t) dt = 0
The above equation represents the orthogonality condition for complex functions.
Fourier Series
Jean Baptiste Joseph Fourier, a French mathematician and physicist, was born in Auxerre, France. He pioneered Fourier series, Fourier transforms and their applications to problems of heat transfer and vibrations. The Fourier series, the Fourier transform and Fourier's law are named in his honour.
Fourier series
To represent any periodic signal x(t), Fourier developed an expression called the Fourier series. This is in terms of an infinite sum of sines and cosines or exponentials. The Fourier series uses the orthogonality condition.
Consider the complex exponential x(t) = e^{jω₀t}, which is periodic with period T = 2π/ω₀. The set of complex exponentials
ϕk(t) = {e^{jkω₀t}} = {e^{jk(2π/T)t}}, where k = 0, ±1, ±2, ..., n, ... ..... (1)
is orthogonal over one period. A periodic signal x(t) can therefore be expanded as
x(t) = Σ_{k=−∞}^{∞} ak e^{jkω₀t} ..... (2)
Multiply both sides by e^{−jnω₀t} and integrate over one period. Then
∫_0^T x(t) e^{−jnω₀t} dt = ∫_0^T Σ_{k=−∞}^{∞} ak e^{j(k−n)ω₀t} dt
= Σ_{k=−∞}^{∞} ak ∫_0^T e^{j(k−n)ω₀t} dt ..... (3)
By Euler's formula,
∫_0^T e^{j(k−n)ω₀t} dt = ∫_0^T cos((k−n)ω₀t) dt + j ∫_0^T sin((k−n)ω₀t) dt
∫_0^T e^{j(k−n)ω₀t} dt = T for k = n, and 0 for k ≠ n.
Hence in equation 3, the integral is zero for all values of k except at k = n. Put k = n in equation 3:
⇒ ∫_0^T x(t) e^{−jnω₀t} dt = an T
⇒ an = (1/T) ∫_0^T x(t) e^{−jnω₀t} dt
Replace n by k:
⇒ ak = (1/T) ∫_0^T x(t) e^{−jkω₀t} dt
∴ x(t) = Σ_{k=−∞}^{∞} ak e^{jkω₀t}
where ak = (1/T) ∫_0^T x(t) e^{−jkω₀t} dt
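The analysis integral for ak can be evaluated numerically. A sketch for a ±1 square wave (the period T = 1 and grid size are illustrative assumptions); the known coefficients for this wave are ak = 2/(jπk) for odd k and 0 for even k:

```python
import numpy as np

# Numerical check of a_k = (1/T) * integral of x(t)*exp(-j*k*w0*t) over one
# period, for a square wave that is +1 on the first half period, -1 on the
# second.
T = 1.0
w0 = 2 * np.pi / T
t = np.linspace(0.0, T, 100001)
dt = t[1] - t[0]
x = np.where(t < T / 2, 1.0, -1.0)

def ak(k):
    # Riemann-sum approximation of the analysis integral.
    return np.sum(x * np.exp(-1j * k * w0 * t)) * dt / T

print(ak(1))   # ≈ -2j/π ≈ -0.6366j (odd harmonic)
print(ak(2))   # ≈ 0 (even harmonics vanish)
```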
Linearity Property
If x(t) and y(t) have Fourier series coefficients fxn and fyn respectively, then
a x(t) + b y(t) has Fourier series coefficients a fxn + b fyn
Conjugate Symmetry Property
The conjugate symmetry property for a real-valued time signal states that
f*xn = f−xn
and the conjugate symmetry property for an imaginary-valued time signal states that
f*xn = −f−xn
Trigonometric Fourier Series
The sine functions are orthogonal over the interval (t0, t0 + 2π/ω₀). So sin ω₀t, sin 2ω₀t, ... forms an orthogonal set. This set is not complete without {cos nω₀t}, because this cosine set is also orthogonal to the sine set. So to complete this set we must include both cosine and sine terms. Now the complete orthogonal set contains all cosine and sine terms, i.e. {sin nω₀t, cos nω₀t} where n = 0, 1, 2, ...
∴ Any periodic signal x(t) with period T₀ (= 2π/ω₀) can be represented as
x(t) = a₀ + Σ_{n=1}^{∞} (an cos nω₀t + bn sin nω₀t)
where
a₀ = [∫_{t0}^{t0+T} x(t) · 1 dt] / [∫_{t0}^{t0+T} 1² dt] = (1/T) ∫_{t0}^{t0+T} x(t) dt

an = [∫_{t0}^{t0+T} x(t) cos nω₀t dt] / [∫_{t0}^{t0+T} cos² nω₀t dt]

bn = [∫_{t0}^{t0+T} x(t) sin nω₀t dt] / [∫_{t0}^{t0+T} sin² nω₀t dt]

Here ∫_{t0}^{t0+T} cos² nω₀t dt = ∫_{t0}^{t0+T} sin² nω₀t dt = T/2

∴ an = (2/T) ∫_{t0}^{t0+T} x(t) cos nω₀t dt

bn = (2/T) ∫_{t0}^{t0+T} x(t) sin nω₀t dt
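These coefficient formulas can be checked numerically as well. A sketch for the same ±1 square wave used above (period T = 1 is an illustrative assumption); its known trigonometric coefficients are bn = 4/(nπ) for odd n, with an = 0 and bn = 0 for even n:

```python
import numpy as np

# a_n = (2/T) * integral of x*cos(n*w0*t), b_n = (2/T) * integral of
# x*sin(n*w0*t), approximated by Riemann sums over one period.
T = 1.0
w0 = 2 * np.pi / T
t = np.linspace(0.0, T, 100001)
dt = t[1] - t[0]
x = np.where(t < T / 2, 1.0, -1.0)   # +1 first half period, -1 second half

def an(n):
    return 2 / T * np.sum(x * np.cos(n * w0 * t)) * dt

def bn(n):
    return 2 / T * np.sum(x * np.sin(n * w0 * t)) * dt

print(bn(1))          # ≈ 4/π ≈ 1.2732
print(an(1), bn(2))   # both ≈ 0
```

These values are consistent with the exponential coefficients via bn = j(Fn − F−n).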
Exponential Fourier Series
Using the orthogonal set {e^{jnω₀t}}, a signal f(t) can be expressed over the interval (t0, t0 + T) as
f(t) = F₀ + F₁ e^{jω₀t} + F₂ e^{j2ω₀t} + ... + Fn e^{jnω₀t} + ... + F₋₁ e^{−jω₀t} + F₋₂ e^{−j2ω₀t} + ...
∴ f(t) = Σ_{n=−∞}^{∞} Fn e^{jnω₀t} (t0 < t < t0 + T) ..... (1)
Equation 1 represents the exponential Fourier series representation of a signal f(t) over the interval (t0, t0 + T). The Fourier coefficient is given as
Fn = [∫_{t0}^{t0+T} f(t) (e^{jnω₀t})* dt] / [∫_{t0}^{t0+T} e^{jnω₀t} (e^{jnω₀t})* dt]
= [∫_{t0}^{t0+T} f(t) e^{−jnω₀t} dt] / [∫_{t0}^{t0+T} e^{−jnω₀t} e^{jnω₀t} dt]
= [∫_{t0}^{t0+T} f(t) e^{−jnω₀t} dt] / [∫_{t0}^{t0+T} 1 dt]
∴ Fn = (1/T) ∫_{t0}^{t0+T} f(t) e^{−jnω₀t} dt
Relation Between Trigonometric and Exponential Fourier Series
Consider a periodic signal x(t); its trigonometric and exponential Fourier series representations are
x(t) = a₀ + Σ_{n=1}^{∞} (an cos nω₀t + bn sin nω₀t) ..... (1)
x(t) = Σ_{n=−∞}^{∞} Fn e^{jnω₀t}
= F₀ + F₁ e^{jω₀t} + F₂ e^{j2ω₀t} + ... + Fn e^{jnω₀t} + ...
+ F₋₁ e^{−jω₀t} + F₋₂ e^{−j2ω₀t} + ... + F₋n e^{−jnω₀t} + ...
= F₀ + F₁(cos ω₀t + j sin ω₀t) + F₂(cos 2ω₀t + j sin 2ω₀t) + ...
+ F₋₁(cos ω₀t − j sin ω₀t) + F₋₂(cos 2ω₀t − j sin 2ω₀t) + ... + F₋n(cos nω₀t − j sin nω₀t) + ...
∴ x(t) = F₀ + Σ_{n=1}^{∞} ((Fn + F₋n) cos nω₀t + j(Fn − F₋n) sin nω₀t) ..... (2)
Comparing equations 1 and 2:
a₀ = F₀
an = Fn + F₋n
bn = j(Fn − F₋n)
Similarly,
Fn = ½(an − jbn)
F₋n = ½(an + jbn)
Fourier Transforms
The main drawback of Fourier series is, it is only applicable to periodic signals. There are
some naturally produced signals such as nonperiodic or aperiodic, which we cannot
represent using Fourier series. To overcome this shortcoming, Fourier developed a
mathematical model to transform signals between time (or spatial) domain to frequency
domain & vice versa, which is called 'Fourier transform'.
Fourier transform has many applications in physics and engineering such as analysis of LTI
systems, RADAR, astronomy, signal processing etc.
To see how the Fourier transform arises, consider a periodic signal f(t) with period T₀ and exponential Fourier series
f(t) = Σ_{k=−∞}^{∞} ak e^{jkω₀t} = Σ_{k=−∞}^{∞} ak e^{jk(2π/T₀)t} ..... (1)
Let 1/T₀ = Δf; then equation 1 becomes
f(t) = Σ_{k=−∞}^{∞} ak e^{j2πkΔft} ..... (2)
But you know that
ak = (1/T₀) ∫_{t0}^{t0+T} f(t) e^{−jkω₀t} dt
Substitute in equation 2:
(2) ⇒ f(t) = Σ_{k=−∞}^{∞} (1/T₀) [∫_{t0}^{t0+T} f(t) e^{−jkω₀t} dt] e^{j2πkΔft}
Let t0 = −T/2:
f(t) = Σ_{k=−∞}^{∞} [∫_{−T/2}^{T/2} f(t) e^{−j2πkΔft} dt] e^{j2πkΔft} · Δf
As T → ∞, Δf approaches the differential df, kΔf becomes a continuous variable f, and the summation becomes integration:
f(t) = lim_{T→∞} { Σ_{k=−∞}^{∞} [∫_{−T/2}^{T/2} f(t) e^{−j2πkΔft} dt] e^{j2πkΔft} · Δf }
= ∫_{−∞}^{∞} [∫_{−∞}^{∞} f(t) e^{−j2πft} dt] e^{j2πft} df
This is the Fourier transform pair. In terms of ω:
Fourier transform of f(t): F.T[f(t)] = F[ω] = ∫_{−∞}^{∞} f(t) e^{−jωt} dt
Inverse Fourier transform: f(t) = ∫_{−∞}^{∞} F[ω] e^{jωt} dω
FT of GATE Function
F[ω] = AT Sa(ωT/2)
FT of Impulse Function
F.T[δ(t)] = [∫_{−∞}^{∞} δ(t) e^{−jωt} dt]
= e^{−jωt} | t = 0
= e⁰ = 1
∴ F.T[δ(t)] = 1
FT of Exponentials
e^{−at} u(t) ⟷ 1/(a + jω)
e^{at} u(−t) ⟷ 1/(a − jω)
e^{−a|t|} ⟷ 2a/(a² + ω²)
e^{jω₀t} ⟷ δ(ω − ω₀)
FT of Signum Function
sgn(t) ⟷ 2/(jω)
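The first exponential pair above can be checked by direct numerical integration. The decay rate a = 2 and the truncation of the integral at t = 40 are illustrative assumptions (e^{−2·40} is far below double-precision noise):

```python
import numpy as np

# Numerical check of e^{-a*t}*u(t)  <->  1/(a + j*w).
a = 2.0
t = np.linspace(0.0, 40.0, 400001)
dt = t[1] - t[0]

def X(w):
    # Riemann-sum approximation of the integral from 0 to infinity of
    # exp(-a*t)*exp(-j*w*t) dt.
    return np.sum(np.exp(-a * t) * np.exp(-1j * w * t)) * dt

for w in (0.0, 1.0, 5.0):
    print(abs(X(w) - 1 / (a + 1j * w)))   # each ≈ 0
```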
There must be a finite number of discontinuities in the signal f(t) in the given interval of time.
Discrete Time Fourier Transform (DTFT)
The discrete-time Fourier transform of a sequence x(n) is given by
X(ω) = Σ_{n=−∞}^{∞} x(n) e^{−jωn} ..... (1)
Here, X(ω) is a complex function of the real frequency variable ω and it can be written as
X(ω) = Xre(ω) + j Ximg(ω)
where Xre(ω), Ximg(ω) are the real and imaginary parts of X(ω) respectively, and
|X(ω)|² = |Xre(ω)|² + |Xim(ω)|²
Convergence Condition:
The infinite series in equation 1 may or may not converge. X(ω) exists if x(n) is absolutely summable:
Σ_{n=−∞}^{∞} |x(n)| < ∞
An absolutely summable sequence always has finite energy, but a finite-energy sequence is not necessarily absolutely summable.
Linearity Property
If x(t) ⟷ X(ω) & y(t) ⟷ Y(ω)
then a x(t) + b y(t) ⟷ a X(ω) + b Y(ω)
Time Shifting Property
If x(t) ⟷ X(ω)
then x(t − t₀) ⟷ e^{−jωt₀} X(ω)
Frequency Shifting Property
If x(t) ⟷ X(ω)
then e^{jω₀t} · x(t) ⟷ X(ω − ω₀)
Time Reversal Property
If x(t) ⟷ X(ω)
then x(−t) ⟷ X(−ω)
Time Scaling Property
If x(t) ⟷ X(ω)
then x(at) ⟷ (1/|a|) X(ω/a)
Differentiation and Integration Properties
If x(t) ⟷ X(ω)
then dx(t)/dt ⟷ jω · X(ω)
dⁿx(t)/dtⁿ ⟷ (jω)ⁿ · X(ω)
∫ x(t) dt ⟷ (1/jω) X(ω)
∭ ... ∫ x(t) dt ⟷ (1/(jω)ⁿ) X(ω)
Multiplication and Convolution Properties
If x(t) ⟷ X(ω) & y(t) ⟷ Y(ω)
then x(t) · y(t) ⟷ (1/2π) X(ω) ∗ Y(ω)
and x(t) ∗ y(t) ⟷ X(ω) · Y(ω)
For distortionless transmission through a system, the output should be y(t) = K x(t − td), where K = constant and td = time delay. Taking the Fourier transform:
Y(ω) = F.T[K x(t − td)]
= K F.T[x(t − td)]
= K X(ω) e^{−jωtd}
∴ Y(ω) = K X(ω) e^{−jωtd}
Thus, distortionless transmission of a signal x(t) through a system with impulse response h(t) is achieved when the amplitude response |H(ω)| = K (a constant) and the phase response Φ(ω) = −ωtd (linear in ω).
A physical transmission system may have amplitude and phase responses as shown below:
Hilbert Transform
The Hilbert transform of a signal x(t) is defined as the transform in which the phase angle of all components of the signal is shifted by ±90°.
The Hilbert transform of x(t) is represented by x̂(t), and it is given by
x̂(t) = (1/π) ∫_{−∞}^{∞} x(k)/(t − k) dk
−∞
x(t), x̂(t) is called a Hilbert transform pair.
The energy spectral density is the same for both x(t) and x̂(t).
If the Fourier transform exists, then the Hilbert transform also exists for energy and power signals.
Convolution
Convolution is a mathematical operation used to express the relation between the input and output of an LTI system. It relates the input, output and impulse response of an LTI system as
y(t) = x(t) ∗ h(t)
where x(t) = input, y(t) = output and h(t) = impulse response. There are two types of convolution:
Continuous convolution
Discrete convolution
Continuous Convolution
y(t) = x(t) ∗ h(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ
(or)
= ∫_{−∞}^{∞} x(t − τ) h(τ) dτ
Discrete Convolution
y(n) = x(n) ∗ h(n) = Σ_{k=−∞}^{∞} x(k) h(n − k)
(or)
= Σ_{k=−∞}^{∞} x(n − k) h(k)
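The discrete sum above is exactly what np.convolve computes for finite sequences. A sketch using the sequences x[n] = {1, 2, 3} and h[n] = {−1, 2, 2} from the worked example later in this guide:

```python
import numpy as np

# Discrete convolution y[n] = sum over k of x[k]*h[n-k].
x = np.array([1, 2, 3])
h = np.array([-1, 2, 2])

y = np.convolve(x, h)
print(y)          # [-1  0  3 10  6], length m + n - 1 = 5
```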
Deconvolution
Deconvolution is reverse process to convolution widely used in signal and image
processing.
Properties of Convolution
Commutative Property
x1 (t) ∗ x2 (t) = x2 (t) ∗ x1 (t)
Distributive Property
x1 (t) ∗ [x2 (t) + x3 (t)] = [x1 (t) ∗ x2 (t)] + [x1 (t) ∗ x3 (t)]
Associative Property
x1 (t) ∗ [x2 (t) ∗ x3 (t)] = [x1 (t) ∗ x2 (t)] ∗ x3 (t)
Shifting Property
x1 (t) ∗ x2 (t) = y(t)
x1 (t) ∗ x2 (t − t 0 ) = y(t − t 0 )
x1 (t − t 0 ) ∗ x2 (t) = y(t − t 0 )
x1 (t − t 0 ) ∗ x2 (t − t 1 ) = y(t − t 0 − t 1 )
Scaling Property
If x(t) ∗ h(t) = y(t), then x(at) ∗ h(at) = (1/|a|) y(at)
Differentiation of Output
If y(t) = x(t) ∗ h(t), then
dy(t)/dt = (dx(t)/dt) ∗ h(t)
or
dy(t)/dt = x(t) ∗ (dh(t)/dt)
Note:
Convolution of two causal sequences is causal.
Convolution of two anti-causal sequences is anti-causal.
Here, we have two rectangles of unequal length to convolve, which results in a trapezium. The duration of the result runs from the sum of the lower limits to the sum of the upper limits:
−1 + (−2) < t < 2 + 2
−3 < t < 4
The area of the convolved signal equals the product of the areas of the individual signals.
Proof: y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ
Integrating both sides with respect to t:
∫ y(t) dt = ∫ ∫ x(τ) h(t − τ) dτ dt = ∫ x(τ) dτ · ∫ h(t − τ) dt
We know that the area of any signal is the integration of that signal itself.
∴ Ay = Ax Ah
DC Component
The DC component of any signal is given by
DC component = (area of the signal)/(period of the signal)
Ex: what is the DC component of the resultant convolved signal given below?
Area of the convolved signal = 3 × 4 = 12
Duration of the convolved signal = sum of lower limits < t < sum of upper limits = −3 < t < 4
Period = 7
∴ DC component = 12/7
Discrete Convolution
Let us see how to calculate discrete convolution:
Note: if any two sequences have m and n samples respectively, then the resulting convolved sequence will have [m + n − 1] samples.
Example: convolve two sequences x[n] = {1, 2, 3} & h[n] = {−1, 2, 2}:
y[n] = x[n] ∗ h[n] = [−1, 0, 3, 10, 6]
Here x[n] contains 3 samples and h[n] also has 3 samples, so the resulting sequence has 3 + 3 − 1 = 5 samples.
Periodic or Circular Convolution
Periodic convolution is valid for the discrete Fourier transform. To calculate periodic convolution all the samples must be real. Periodic or circular convolution is also called fast convolution.
If two sequences of length m and n respectively are convolved using circular convolution, then the resulting sequence has max[m, n] samples.
Ex: convolve two sequences x[n] = {1, 2, 3} & h[n] = {−1, 2, 2} using circular convolution.
Normal convolution gives y[n] = [−1, 0, 3, 10, 6].
Here x[n] contains 3 samples and h[n] also has 3 samples. Hence the resulting sequence obtained by circular convolution must have max[3, 3] = 3 samples.
Now to get the periodic convolution result, the first 3 samples [as the period is 3] of the normal convolution are kept, and the next two samples are added to the first samples as shown below:
∴ Circular convolution result: y[n] = [9, 6, 3]
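The wrap-around procedure just described can be sketched directly, using the same sequences as the example:

```python
import numpy as np

# Circular (periodic) convolution of two length-N sequences, computed two
# ways: by the defining modular-index sum, and by folding the tail of the
# linear convolution back onto its first N samples.
x = np.array([1, 2, 3])
h = np.array([-1, 2, 2])
N = len(x)

ycirc = np.array([sum(x[k] * h[(n - k) % N] for k in range(N))
                  for n in range(N)])
print(ycirc)                        # [9 6 3]

ylin = np.convolve(x, h)            # [-1, 0, 3, 10, 6]
folded = ylin[:N].copy()
folded[:len(ylin) - N] += ylin[N:]  # add the wrapped samples onto the front
print(folded)                       # [9 6 3], the same result
```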
Correlation
Correlation is a measure of similarity between two signals. The general formula for correlation is
∫_{−∞}^{∞} x1(t) x2(t − τ) dt
There are two types of correlation:
Auto correlation
Cross correlation
Auto Correlation Function
Consider a signal x(t). The auto correlation function of x(t) with its time-delayed version is given by
R(τ) = ∫_{−∞}^{∞} x(t) x*(t − τ) dt
Auto correlation function and energy spectral densities are Fourier transform pairs.
i.e.
F . T [R(τ )] = Ψ(ω)
∞ −jωτ
Ψ(ω) = ∫ R(τ ) e dτ
−∞
The auto correlation function of a periodic power signal with period T is given by
R(τ) = (1/T) ∫_{−T/2}^{T/2} x(t) x*(t + τ) dt
Properties
The auto correlation function of a power signal at τ = 0 (at the origin) is equal to the total power of that signal, i.e.
R(0) = ρ
The auto correlation function and the power spectral density are Fourier transform pairs, i.e.
F.T[R(τ)] = S(ω)
S(ω) = ∫_{−∞}^{∞} R(τ) e^{−jωτ} dτ
Density Spectrum
Let us see density spectrums:
The power of a periodic signal is distributed over its harmonics; in terms of the Fourier series coefficients Cn, the total power is
P = Σ_{n=−∞}^{∞} |Cn|²
Cross Correlation Function
Consider two signals x1(t) and x2(t). The cross correlation of these two signals, R₁₂(τ), is given by
R₁₂(τ) = ∫_{−∞}^{∞} x1(t) x2*(t − τ) dt [+ve shift]
= ∫_{−∞}^{∞} x1(t + τ) x2*(t) dt [−ve shift]
R₂₁(τ) = ∫_{−∞}^{∞} x2(t) x1*(t − τ) dt [+ve shift]
= ∫_{−∞}^{∞} x2(t + τ) x1*(t) dt [−ve shift]
If R₁₂(0) = 0, i.e. if ∫_{−∞}^{∞} x1(t) x2*(t) dt = 0, then the two signals are said to be orthogonal.
For power signals, if lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x1(t) x2*(t) dt = 0, then the two signals are said to be orthogonal.
Parseval's Theorem
Parseval's theorem for energy signals states that the total energy in a signal can be obtained from the spectrum of the signal as
E = (1/2π) ∫_{−∞}^{∞} |X(ω)|² dω
Note: If a signal has energy E, then the time-scaled version x(at) of that signal has energy E/|a|.
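The discrete analogue of Parseval's theorem is easy to verify with an FFT; the random test sequence and its length are illustrative choices:

```python
import numpy as np

# Discrete Parseval relation: sum |x[n]|^2 = (1/N) * sum |X[k]|^2,
# where X = FFT(x) with NumPy's default (unnormalized) convention.
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)

X = np.fft.fft(x)
E_time = np.sum(np.abs(x) ** 2)
E_freq = np.sum(np.abs(X) ** 2) / len(x)
print(np.isclose(E_time, E_freq))   # True
```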
Sampling Theorem
Statement: A continuous-time signal can be represented in its samples and recovered back when the sampling frequency fs is greater than or equal to twice the highest frequency component of the message signal, i.e.
fs ⩾ 2fm.
Proof: Consider a continuous-time signal x(t). The spectrum of x(t) is band-limited to fm Hz, i.e. the spectrum of x(t) is zero for |ω| > ωm.
Sampling of the input signal x(t) can be obtained by multiplying x(t) with an impulse train δ(t) of period Ts. The output of the multiplier is a discrete signal called the sampled signal, which is represented by y(t) in the following diagrams:
Here, you can observe that the sampled signal takes on the period of the impulse train. The process of sampling can be explained by the following mathematical expression:
Sampled signal y(t) = x(t) · δ(t) ..... (1)
The trigonometric Fourier series representation of the impulse train δ(t) is
δ(t) = a₀ + Σ_{n=1}^{∞} (an cos nωst + bn sin nωst) ..... (2)
where
a₀ = (1/Ts) ∫_{−Ts/2}^{Ts/2} δ(t) dt = 1/Ts
an = (2/Ts) ∫_{−Ts/2}^{Ts/2} δ(t) cos nωst dt = (2/Ts) cos(nωs · 0) = 2/Ts
bn = (2/Ts) ∫_{−Ts/2}^{Ts/2} δ(t) sin nωst dt = (2/Ts) sin(nωs · 0) = 0
∴ δ(t) = 1/Ts + Σ_{n=1}^{∞} ((2/Ts) cos nωst + 0)
Substitute δ(t) in equation 1:
y(t) = x(t) [1/Ts + Σ_{n=1}^{∞} ((2/Ts) cos nωst)]
= (1/Ts) [x(t) + 2 Σ_{n=1}^{∞} (cos nωst) x(t)]
y(t) = (1/Ts) [x(t) + 2 cos ωst · x(t) + 2 cos 2ωst · x(t) + 2 cos 3ωst · x(t) + ...]
Take the Fourier transform on both sides:
Y(ω) = (1/Ts) [X(ω) + X(ω − ωs) + X(ω + ωs) + X(ω − 2ωs) + X(ω + 2ωs) + ...]
∴ Y(ω) = (1/Ts) Σ_{n=−∞}^{∞} X(ω − nωs), where n = 0, ±1, ±2, ...
To reconstruct x(t), you must recover input signal spectrum X(ω) from sampled signal
spectrum Y(ω), which is possible when there is no overlapping between the cycles of Y(ω).
Possibility of sampled frequency spectrum with different conditions is given by the following
diagrams:
Aliasing Effect
The overlapped region in the case of under-sampling represents the aliasing effect, which can be removed by
considering fs > 2fm, and
using anti-aliasing filters.
Sampling Techniques
The following sampling techniques are discussed here:
Impulse sampling.
Natural sampling.
Impulse Sampling
Impulse sampling can be performed by multiplying the input signal x(t) with the impulse train Σ_{n=−∞}^{∞} δ(t − nT) of period T. Here, the amplitude of each impulse changes with respect to the amplitude of the input signal x(t). The output of the sampler is given by
y(t) = x(t) Σ_{n=−∞}^{∞} δ(t − nT)
y(t) = y_δ(t) = Σ_{n=−∞}^{∞} x(nT) δ(t − nT) ..... (1)
n=−∞
To get the spectrum of sampled signal, consider Fourier transform of equation 1 on both
sides
Y(ω) = (1/T) Σ_{n=−∞}^{∞} X(ω − nωs)
This is called ideal sampling or impulse sampling. You cannot use this practically because
pulse width cannot be zero and the generation of impulse train is not possible practically.
Natural Sampling
Natural sampling is similar to impulse sampling, except the impulse train is replaced by a pulse train of period T, i.e. you multiply the input signal x(t) by the pulse train Σ_{n=−∞}^{∞} P(t − nT) as shown below:
The output of the sampler is
y(t) = x(t) × p(t)
= x(t) × Σ_{n=−∞}^{∞} P(t − nT) ..... (1)
The exponential Fourier series representation of p(t) is
p(t) = Σ_{n=−∞}^{∞} Fn e^{jnωst} ..... (2)
= Σ_{n=−∞}^{∞} Fn e^{j2πnfst}
where Fn = (1/T) ∫_{−T/2}^{T/2} p(t) e^{−jnωst} dt
= (1/T) P(nωs)
Substitute Fn in equation 2:
∴ p(t) = Σ_{n=−∞}^{∞} (1/T) P(nωs) e^{jnωst}
= (1/T) Σ_{n=−∞}^{∞} P(nωs) e^{jnωst}
Substitute p(t) in equation 1:
y(t) = x(t) × p(t)
= x(t) × (1/T) Σ_{n=−∞}^{∞} P(nωs) e^{jnωst}
y(t) = (1/T) Σ_{n=−∞}^{∞} P(nωs) x(t) e^{jnωst}
To get the spectrum of the sampled signal, take the Fourier transform on both sides:
F.T[y(t)] = F.T[(1/T) Σ_{n=−∞}^{∞} P(nωs) x(t) e^{jnωst}]
= (1/T) Σ_{n=−∞}^{∞} P(nωs) F.T[x(t) e^{jnωst}]
According to the frequency shifting property,
F.T[x(t) e^{jnωst}] = X[ω − nωs]
∴ Y[ω] = (1/T) Σ_{n=−∞}^{∞} P(nωs) X[ω − nωs]
Flat Top Sampling
Theoretically, the sampled signal can be obtained by convolution of a rectangular pulse p(t) with the ideally sampled signal y_δ(t), as shown in the diagram:
y(t) = p(t) ∗ y_δ(t) ..... (1)
To get the sampled spectrum, take the Fourier transform on both sides of equation 1:
Y[ω] = P(ω) Y_δ(ω)
Here P(ω) = T Sa(ωT/2) = (2 sin(ωT/2))/ω
Nyquist Rate
It is the minimum sampling rate at which signal can be converted into samples and can be
recovered back without distortion.
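Sampling below this minimum rate makes distinct tones indistinguishable. A sketch of the effect (the 7 Hz tone and the 10 Hz sampling rate are illustrative choices): a 7 Hz cosine sampled at fs = 10 Hz < 2·7 Hz produces exactly the same samples as a 3 Hz cosine, since |7 − 10| = 3:

```python
import numpy as np

# Aliasing demo: undersampled 7 Hz tone vs. its 3 Hz alias.
fs = 10.0
n = np.arange(50)
t = n / fs

s7 = np.cos(2 * np.pi * 7 * t)
s3 = np.cos(2 * np.pi * 3 * t)
print(np.allclose(s7, s3))   # True: the two tones cannot be told apart
```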
Nyquist rate fN = 2fm Hz, where fm is the maximum frequency component of the signal.
The sampling rate 2f2 is large in proportion to f2. This has practical limitations.
To overcome this, the band pass theorem states that the input signal x(t) can be converted into its samples and can be recovered back without distortion even when the sampling frequency fs < 2f2.
Also,
fs = 1/T = 2f2/m
where m is the largest integer not exceeding f2/B (B is the bandwidth of the signal). If f2 = KB, then
fs = 1/T = 2KB/m
For band pass signals of bandwidth 2fm, the minimum sampling rate is fs = 2B = 4fm.
Laplace Transforms (LT)
Here s = σ + jω, where
σ = real part of s, and
ω = imaginary part of s.
The response of an LTI system can be obtained by the convolution of the input with its impulse response, i.e.
y(t) = x(t) ∗ h(t) = ∫_{−∞}^{∞} h(τ) x(t − τ) dτ
For an input of the form x(t) = Ge^{st},
y(t) = ∫_{−∞}^{∞} h(τ) G e^{s(t−τ)} dτ
= Ge^{st} · ∫_{−∞}^{∞} h(τ) e^{−sτ} dτ
y(t) = Ge^{st} · H(s) = x(t) · H(s)
where H(s) = Laplace transform of h(τ) = ∫_{−∞}^{∞} h(τ) e^{−sτ} dτ
Similarly, the Laplace transform of x(t) is X(s) = ∫_{−∞}^{∞} x(t) e^{−st} dt ..... (1)
Relation between the Laplace and Fourier transforms:
X(s) = X(σ + jω) = ∫_{−∞}^{∞} x(t) e^{−(σ+jω)t} dt
= ∫_{−∞}^{∞} [x(t) e^{−σt}] e^{−jωt} dt
∴ X(s) = F.T[x(t) e^{−σt}] ..... (2)
X(s) = X(ω) for s = jω.
Inverse Laplace transform: from equation 2,
x(t) e^{−σt} = F.T⁻¹[X(s)] = F.T⁻¹[X(σ + jω)]
= (1/2π) ∫_{−∞}^{∞} X(σ + jω) e^{jωt} dω
x(t) = e^{σt} (1/2π) ∫_{−∞}^{∞} X(σ + jω) e^{jωt} dω
= (1/2π) ∫_{−∞}^{∞} X(σ + jω) e^{(σ+jω)t} dω ..... (3)
Here, σ + jω = s, so j dω = ds → dω = ds/j.
Substitute in equation 3:
∴ x(t) = (1/2πj) ∫ X(s) e^{st} ds ..... (4)
Equations 1 and 4 represent the Laplace transform and the inverse Laplace transform of a signal x(t).
There must be a finite number of discontinuities in the signal f(t) in the given interval of time.
Initial Value Theorem
Statement: if x(t) and its first derivative are Laplace transformable, then the initial value of x(t) is given by
x(0⁺) = lim_{s→∞} s X(s)
Linearity Property
If x(t) ⟷ X(s) & y(t) ⟷ Y(s)
then a x(t) + b y(t) ⟷ a X(s) + b Y(s)
Time Shifting Property
If x(t) ⟷ X(s)
then x(t − t₀) ⟷ e^{−st₀} X(s)
Frequency Shifting Property
If x(t) ⟷ X(s)
then e^{s₀t} · x(t) ⟷ X(s − s₀)
Time Reversal Property
If x(t) ⟷ X(s)
then x(−t) ⟷ X(−s)
Time Scaling Property
If x(t) ⟷ X(s)
then x(at) ⟷ (1/|a|) X(s/a)
Differentiation and Integration Properties
If x(t) ⟷ X(s)
then dx(t)/dt ⟷ s · X(s) − x(0)
dⁿx(t)/dtⁿ ⟷ sⁿ · X(s) (for zero initial conditions)
∫ x(t) dt ⟷ (1/s) X(s)
∭ ... ∫ x(t) dt ⟷ (1/sⁿ) X(s)
Multiplication and Convolution Properties
If x(t) ⟷ X(s) & y(t) ⟷ Y(s)
then x(t) · y(t) ⟷ (1/2πj) X(s) ∗ Y(s)
and x(t) ∗ y(t) ⟷ X(s) · Y(s)
If x(t) is absolutely integral and it is of finite duration, then ROC is entire s-plane.
If x(t) is a two sided sequence then ROC is the combination of two regions.
Example 1: Find the Laplace transform and ROC of x(t) = e^{−at} u(t).
L.T[e^{−at} u(t)] = 1/(s + a), ROC: Re{s} > −a
Example 2: Find the Laplace transform and ROC of x(t) = e^{at} u(−t).
L.T[e^{at} u(−t)] = −1/(s − a), ROC: Re{s} < a
Example 3: Find the Laplace transform and ROC of x(t) = e^{−at} u(t) + e^{at} u(−t).
L.T[x(t)] = 1/(s + a) − 1/(s − a)
For 1/(s + a), ROC: Re{s} > −a
For −1/(s − a), ROC: Re{s} < a
The overall ROC is the strip −a < Re{s} < a.
For a system to be causal, the ROC of its transfer function must be a right-half plane, to the right of the rightmost pole.
A system is said to be stable when all poles of its transfer function lie in the left half of the s-plane.
A system is said to be unstable when at least one pole of its transfer function is shifted to the right half of the s-plane.
A system is said to be marginally stable when at least one pole of its transfer function lies on the jω axis of the s-plane.
Laplace transform pairs with ROC:

u(t) ⟷ 1/s, ROC: Re{s} > 0
t u(t) ⟷ 1/s², ROC: Re{s} > 0
tⁿ u(t) ⟷ n!/sⁿ⁺¹, ROC: Re{s} > 0
e^{at} u(t) ⟷ 1/(s − a), ROC: Re{s} > a
e^{−at} u(t) ⟷ 1/(s + a), ROC: Re{s} > −a
e^{at} u(−t) ⟷ −1/(s − a), ROC: Re{s} < a
e^{−at} u(−t) ⟷ −1/(s + a), ROC: Re{s} < −a
t e^{at} u(t) ⟷ 1/(s − a)², ROC: Re{s} > a
tⁿ e^{at} u(t) ⟷ n!/(s − a)ⁿ⁺¹, ROC: Re{s} > a
t e^{−at} u(t) ⟷ 1/(s + a)², ROC: Re{s} > −a
tⁿ e^{−at} u(t) ⟷ n!/(s + a)ⁿ⁺¹, ROC: Re{s} > −a
t e^{at} u(−t) ⟷ −1/(s − a)², ROC: Re{s} < a
tⁿ e^{at} u(−t) ⟷ −n!/(s − a)ⁿ⁺¹, ROC: Re{s} < a
t e^{−at} u(−t) ⟷ −1/(s + a)², ROC: Re{s} < −a
tⁿ e^{−at} u(−t) ⟷ −n!/(s + a)ⁿ⁺¹, ROC: Re{s} < −a
e^{−at} cos bt ⟷ (s + a)/((s + a)² + b²)
e^{−at} sin bt ⟷ b/((s + a)² + b²)
Z-Transforms (ZT)
Analysis of discrete-time LTI systems can be done using z-transforms. The z-transform is a powerful mathematical tool to convert difference equations into algebraic equations.
The bilateral (two sided) z-transform of a discrete time signal x(n) is given as
Z.T[x(n)] = X(z) = Σ_{n=−∞}^{∞} x(n) z^{−n}
The unilateral (one sided) z-transform of a discrete time signal x(n) is given as
Z.T[x(n)] = X(z) = Σ_{n=0}^{∞} x(n) z^{−n}
Z-transform may exist for some signals for which Discrete Time Fourier Transform (DTFT)
does not exist.
X(z) = Σ_{n=−∞}^{∞} x(n) z^{−n} ..... (1)
Substitute z = re^{jω}:
X(re^{jω}) = Σ_{n=−∞}^{∞} x(n) [re^{jω}]^{−n}
= Σ_{n=−∞}^{∞} x(n) r^{−n} e^{−jωn}
X(re^{jω}) = X(z) = F.T[x(n) r^{−n}] ..... (2)
The above equation represents the relation between the Fourier transform and the z-transform; on the unit circle (r = 1),
X(z)|_{z=e^{jω}} = F.T[x(n)].
Inverse Z-transform
X(re^{jω}) = F.T[x(n) r^{−n}]
x(n) r^{−n} = F.T⁻¹[X(re^{jω})]
x(n) = rⁿ F.T⁻¹[X(re^{jω})]
= rⁿ (1/2π) ∫ X(re^{jω}) e^{jωn} dω
= (1/2π) ∫ X(re^{jω}) [re^{jω}]ⁿ dω ..... (3)
Substitute re^{jω} = z:
dz = j re^{jω} dω = jz dω
dω = (1/j) z^{−1} dz
Substitute in equation 3:
(3) → x(n) = (1/2π) ∮ X(z) zⁿ (1/j) z^{−1} dz = (1/2πj) ∮ X(z) z^{n−1} dz
So the transform pair is
X(z) = Σ_{n=−∞}^{∞} x(n) z^{−n}
x(n) = (1/2πj) ∮ X(z) z^{n−1} dz
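A standard pair such as aⁿu[n] ⟷ z/(z − a), valid for |z| > a, can be verified by summing the defining series directly. The values of a and z below are illustrative choices:

```python
import numpy as np

# Z-transform of x[n] = a^n * u[n]: sum the series x[n]*z^(-n) to many
# terms and compare with the closed form z/(z - a), for z inside the ROC.
a = 0.5
z = 2.0 + 1.0j                 # |z| = sqrt(5) > a, inside the ROC
n = np.arange(200)             # geometric series converges fast here

X_series = np.sum(a**n * z**(-n))
X_closed = z / (z - a)
print(abs(X_series - X_closed))   # ≈ 0
```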
Z-Transforms Properties
Z-Transform has following properties:
Linearity Property
If x(n) ⟷ X(z) and y(n) ⟷ Y(z)
then a x(n) + b y(n) ⟷ a X(z) + b Y(z)
Time Shifting Property
If x(n) ⟷ X(z)
then x(n − m) ⟷ z^{−m} X(z)
Multiplication by an Exponential Sequence
If x(n) ⟷ X(z)
then aⁿ · x(n) ⟷ X(z/a)
Time Reversal Property
If x(n) ⟷ X(z)
then x(−n) ⟷ X(1/z)
Differentiation in the z-Domain
If x(n) ⟷ X(z)
then n^k x(n) ⟷ (−1)^k z^k dᵏX(z)/dzᵏ
Convolution Property
If x(n) ⟷ X(z) and y(n) ⟷ Y(z)
then x(n) ∗ y(n) ⟷ X(z) · Y(z)
Correlation Property
If x(n) ⟷ X(z) and y(n) ⟷ Y(z)
then x(n) ⊗ y(n) ⟷ X(z) · Y(z⁻¹)
Initial value and final value theorems of the z-transform are defined for causal signals.
Initial Value Theorem: used to find the initial value of the signal without taking the inverse z-transform: x(0) = lim_{z→∞} X(z).
Final Value Theorem: used to find the final value of the signal without taking the inverse z-transform: x(∞) = lim_{z→1} (z − 1) X(z).
If x(n) is a finite duration causal sequence or right sided sequence, then the ROC is
entire z-plane except at z = 0.
If x(n) is a finite duration anti-causal sequence or left sided sequence, then the ROC
is entire z-plane except at z = ∞.
If x(n) is an infinite duration causal sequence, the ROC is the exterior of the circle with radius a, i.e. |z| > a.
If x(n) is an infinite duration anti-causal sequence, the ROC is the interior of the circle with radius a, i.e. |z| < a.
If x(n) is a finite duration two sided sequence, then the ROC is entire z-plane
except at z = 0 & z = ∞.
Example: find the z-transform and ROC of x(n) = aⁿ u[n] + a^{−n} u[−n − 1].
Z.T[aⁿ u[n]] + Z.T[a^{−n} u[−n − 1]] = Z/(Z − a) − Z/(Z − 1/a)
ROC: |z| > a for the first term, and |z| < 1/a for the second.
The plot of the ROC has two conditions, a > 1 and a < 1, as you do not know a.
In the transfer function H[Z], the order of the numerator cannot be greater than the order of the denominator.
A discrete-time LTI system is stable when all poles of the transfer function lie inside the unit circle |z| = 1.
x(n) X[Z]
δ(n) ⟷ 1
u(n) ⟷ Z/(Z − 1)
u(−n − 1) ⟷ −Z/(Z − 1)
δ(n − m) ⟷ z^{−m}
aⁿ u[n] ⟷ Z/(Z − a)
aⁿ u[−n − 1] ⟷ −Z/(Z − a)
n aⁿ u[n] ⟷ aZ/(Z − a)²
n aⁿ u[−n − 1] ⟷ −aZ/(Z − a)²
aⁿ cos ωn u[n] ⟷ (Z² − aZ cos ω)/(Z² − 2aZ cos ω + a²)
aⁿ sin ωn u[n] ⟷ aZ sin ω/(Z² − 2aZ cos ω + a²)