SSP Lecture 3
Problem: What is signal estimation?
Answer
Signal Estimation refers to the process of extracting or reconstructing the original signal
from observed data that may be corrupted by noise or distortion. In practice, signals
received in real-world systems are often contaminated by random noise, and estimation
techniques help in recovering the underlying useful signal from such noisy observations.
Given the observed signal $y(t) = x(t) + n(t)$, where $x(t)$ is the true signal and $n(t)$ is the noise, the goal of signal estimation is to construct an estimator $\hat{x}(t)$ that approximates $x(t)$ as accurately as possible.
Key Difference:
• Signal detection deals with discrete decisions (e.g., signal present or not).
• Signal estimation deals with inferring continuous-valued quantities (e.g., the amplitude or delay of a signal).
Signal estimation plays a crucial role in the recovery of signals from noisy environments
in various applications such as:
• Medical Imaging: Estimating physiological signals like ECG or MRI from noisy
measurements.
• Radar and Sonar: Estimating range and velocity of targets using reflected signals.
Signal estimation methods such as the Maximum Likelihood Estimator (MLE), Minimum
Mean Square Error (MMSE), and Bayesian Estimation are widely used for accurate signal
recovery.
Problem: Explain maximum likelihood (ML) estimation.
Answer
For i.i.d. observations $x_1, \ldots, x_n$ with density $f(x;\theta)$, the likelihood function is
$$L(\theta;\, x_1, x_2, \ldots, x_n) = \prod_{i=1}^{n} f(x_i; \theta).$$
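As a concrete illustration (not from the lecture), the NumPy sketch below evaluates $\log L(\theta) = \sum_{i=1}^{n} \log f(x_i;\theta)$ on a parameter grid and picks the maximizer; the exponential density $f(x;\theta) = \theta e^{-\theta x}$, sample size, and grid are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: n i.i.d. samples from f(x; theta) = theta * exp(-theta * x).
theta_true, n = 2.0, 500
x = rng.exponential(scale=1.0 / theta_true, size=n)

# log L(theta) = sum_i log f(x_i; theta) = n * log(theta) - theta * sum_i x_i
thetas = np.linspace(0.1, 5.0, 1000)
loglik = n * np.log(thetas) - thetas * x.sum()

theta_ml = thetas[loglik.argmax()]  # grid maximizer of the log-likelihood
print(theta_ml, n / x.sum())        # should agree with the closed form n / sum(x_i)
```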
Advantages of ML Estimation
• Consistency: $\hat\theta_{\mathrm{ML}}$ converges to the true parameter as $n \to \infty$.
• Asymptotic efficiency: the MLE attains the Cramér–Rao lower bound asymptotically.
• Invariance: the MLE of $g(\theta)$ is $g(\hat\theta_{\mathrm{ML}})$.
Limitations of ML Estimation
• The parametric form of the density $f(x;\theta)$ must be known.
• ML estimates can be biased in finite samples (e.g., the variance estimate below).
• The likelihood equations often have no closed-form solution and require numerical optimization.
Problem: Let $x_1, \ldots, x_n$ be i.i.d. with $x_i \sim \mathcal{N}(\mu, \sigma^2)$, where both $\mu$ and $\sigma^2$ are unknown. Find their ML estimates.
Answer:
For i.i.d. $x_i \sim \mathcal{N}(\mu, \sigma^2)$, the likelihood is
$$L(\mu) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(x_i - \mu)^2}{2\sigma^2}\right).$$
Hence the log-likelihood is
$$\ell(\mu) = -\frac{n}{2}\log 2\pi\sigma^2 - \frac{1}{2\sigma^2}\left(\sum_{i=1}^{n} x_i^2 - 2\mu\sum_{i=1}^{n} x_i + n\mu^2\right).$$
Differentiate:
$$\frac{d\ell}{d\mu} = -\frac{1}{2\sigma^2}\left(-2\sum_{i=1}^{n} x_i + 2n\mu\right) = \frac{1}{\sigma^2}\left(\sum_{i=1}^{n} x_i - n\mu\right).$$
Setting this to zero gives $\hat\mu_{\mathrm{ML}} = \frac{1}{n}\sum_{i=1}^{n} x_i = \bar{x}$. Moreover,
$$\frac{d^2\ell}{d\mu^2} = \frac{1}{\sigma^2}(0 - n) = -\frac{n}{\sigma^2} < 0,$$
so ℓ(µ) is strictly concave and the stationary point is the (unique) maximizer.
Estimating the variance $\sigma^2$:
Differentiate w.r.t. $\sigma^2$:
$$\frac{\partial\ell}{\partial\sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2(\sigma^2)^2}\sum_{i=1}^{n}(x_i - \mu)^2.$$
Setting this to zero and substituting $\mu = \bar{x}$ gives
$$\widehat{\sigma^2}_{\mathrm{ML}} = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2.$$
Remarks: $\hat\mu_{\mathrm{ML}}$ is unbiased; $\widehat{\sigma^2}_{\mathrm{ML}}$ is biased (the unbiased estimator divides by $n-1$). Both are standard textbook results.
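As a quick numerical check of both estimators, a minimal NumPy sketch (the true parameters and sample size are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(1)
mu_true, sigma2_true, n = 1.5, 4.0, 10_000  # illustrative values
x = rng.normal(mu_true, np.sqrt(sigma2_true), size=n)

mu_ml = x.mean()                                      # MLE of mu: the sample mean
sigma2_ml = ((x - mu_ml) ** 2).mean()                 # biased MLE (divides by n)
sigma2_unbiased = ((x - mu_ml) ** 2).sum() / (n - 1)  # sample variance (n - 1)
print(mu_ml, sigma2_ml, sigma2_unbiased)
```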
Problem: Observe $y[n] = A\, s[n] + w[n]$, $n = 0, \ldots, N-1$, where the signal $s$ is known and $w \sim \mathcal{N}(0, \sigma^2 I)$. Find the ML estimate of the amplitude $A$ and its CRLB.
Solution:
$$p(y; A) = \frac{1}{(2\pi\sigma^2)^{N/2}} \exp\!\left(-\frac{1}{2\sigma^2}\|y - As\|^2\right).$$
The log-likelihood is
$$\ell(A) = -\frac{N}{2}\log 2\pi\sigma^2 - \frac{1}{2\sigma^2}\|y - As\|^2.$$
Up to an additive constant independent of $A$,
$$\ell(A) = -\frac{1}{2\sigma^2}\|y - As\|^2.$$
Thus
$$\ell(A) = -\frac{1}{2\sigma^2}\left(y^\top y - 2A\, s^\top y + A^2\, s^\top s\right).$$
Differentiate $\ell(A)$ with respect to $A$:
$$\frac{d\ell}{dA} = -\frac{1}{2\sigma^2}\left(-2\, s^\top y + 2A\, s^\top s\right) = \frac{1}{\sigma^2}\left(s^\top y - A\, s^\top s\right).$$
Setting the derivative to zero,
$$s^\top y - A\, s^\top s = 0 \;\Longrightarrow\; \hat{A}_{\mathrm{ML}} = \frac{s^\top y}{s^\top s} = \frac{\sum_{n=0}^{N-1} s[n]\, y[n]}{\sum_{n=0}^{N-1} s[n]^2}.$$
(Second-derivative check:)
$$\frac{d^2\ell}{dA^2} = -\frac{1}{\sigma^2}\, s^\top s \;(<0),$$
so the stationary point is a maximum.
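A minimal NumPy sketch of this estimator; the waveform, amplitude, and noise level below are illustrative assumptions, not values from the lecture:

```python
import numpy as np

rng = np.random.default_rng(2)
N, A_true, sigma = 64, 0.7, 0.5                 # illustrative assumptions
s = np.cos(2 * np.pi * 0.05 * np.arange(N))     # any known waveform s[n]
y = A_true * s + rng.normal(0.0, sigma, size=N)

A_ml = (s @ y) / (s @ s)   # closed-form MLE: s^T y / s^T s
print(A_ml)                # should be close to A_true
```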
For the CRLB, start again from the full log-likelihood
$$\ell(A) = -\frac{N}{2}\log 2\pi\sigma^2 - \frac{1}{2\sigma^2}\|y - As\|^2.$$
Score:
$$\frac{d\ell}{dA} = \frac{1}{\sigma^2}\, s^\top (y - As).$$
Fisher information:
$$I(A) = \mathbb{E}\!\left[\left(\frac{d\ell}{dA}\right)^{2}\right] = \frac{1}{\sigma^4}\,\mathbb{E}\!\left[\left(s^\top (y - As)\right)^{2}\right].$$
Since $y - As = w$ with
$$w \sim \mathcal{N}(0, \sigma^2 I) \;\Rightarrow\; \mathbb{E}[ww^\top] = \mathrm{Cov}(w) = \sigma^2 I,$$
$$I(A) = \frac{1}{\sigma^4}\, s^\top \mathbb{E}[ww^\top]\, s = \frac{1}{\sigma^4}\, s^\top (\sigma^2 I)\, s = \frac{1}{\sigma^2}\, s^\top s.$$
Therefore the CRLB for any unbiased estimator à of A is
$$\mathrm{Var}(\tilde{A}) \ge \frac{1}{I(A)} = \frac{\sigma^2}{s^\top s}.$$
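A Monte Carlo sketch (same illustrative parameters as above) suggesting that $\hat{A}_{\mathrm{ML}}$ attains this bound, as expected since it is unbiased and linear in the Gaussian data:

```python
import numpy as np

rng = np.random.default_rng(3)
N, A_true, sigma = 64, 0.7, 0.5
s = np.cos(2 * np.pi * 0.05 * np.arange(N))

trials = 50_000
W = rng.normal(0.0, sigma, size=(trials, N))   # independent noise realizations
A_hat = (A_true * s + W) @ s / (s @ s)         # one MLE per realization

print(A_hat.var())          # empirical variance of the MLE
print(sigma**2 / (s @ s))   # CRLB sigma^2 / (s^T s); the two should agree
```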
Motivation
In many signal processing problems, we observe incomplete or noisy data. For example:
• Mixture models, where the component label of each observation is hidden.
• Latent-signal models, where a clean signal $x_i$ is observed only through a noisy measurement $y_i$.
The EM algorithm handles such problems by alternating between inferring the hidden quantities given the current parameters (E-step) and re-estimating the parameters given those inferences (M-step).
For a Gaussian mixture with responsibilities $\gamma_{ik}^{(t)}$ from the E-step, the M-step updates are
$$\pi_k^{(t+1)} = \frac{N_k}{n}, \qquad \mu_k^{(t+1)} = \frac{1}{N_k}\sum_{i=1}^{n}\gamma_{ik}^{(t)}\, y_i,$$
where $N_k = \sum_{i=1}^{n} \gamma_{ik}^{(t)}$.
Problem: Let the hidden samples $x_i \sim \mathcal{N}(\mu, \tau^2)$ be observed as $y_i = x_i + w_i$ with $w_i \sim \mathcal{N}(0, \sigma^2)$ and $\sigma^2$ known. Derive the EM updates for $\theta = (\mu, \tau^2)$.
Answer:
1. Complete-data log-likelihood. For this model,
$$\log p(y, x \mid \theta) = -\frac{n}{2}\log 2\pi\sigma^2 - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(y_i - x_i)^2 - \frac{n}{2}\log 2\pi\tau^2 - \frac{1}{2\tau^2}\sum_{i=1}^{n}(x_i - \mu)^2.$$
2.1 Posterior $p(x_i \mid y_i, \theta^{(t)})$ (by completing the square). For each $i$,
$$\log p(x_i \mid y_i, \theta^{(t)}) = -\frac{1}{2}\left(\frac{1}{\sigma^2} + \frac{1}{\tau^{2(t)}}\right)x_i^2 + \left(\frac{y_i}{\sigma^2} + \frac{\mu^{(t)}}{\tau^{2(t)}}\right)x_i + (\text{terms free of } x_i),$$
so $x_i \mid y_i, \theta^{(t)} \sim \mathcal{N}(m_i^{(t)}, v^{(t)})$ and
$$\mathbb{E}[x_i \mid y_i, \theta^{(t)}] = m_i^{(t)}, \qquad \mathbb{E}[x_i^2 \mid y_i, \theta^{(t)}] = (m_i^{(t)})^2 + v^{(t)}.$$
2.2 Assemble $Q$. Only the prior part depends on $(\mu, \tau^2)$; the likelihood part $-\frac{1}{2\sigma^2}\sum_{i}(y_i - x_i)^2$ is constant in $(\mu, \tau^2)$ after taking the expectation, so it can be dropped for the M-step.
Therefore,
$$\begin{aligned}
Q(\mu, \tau^2 \mid \theta^{(t)}) &= -\frac{n}{2}\log\tau^2 - \frac{1}{2\tau^2}\sum_{i=1}^{n} \mathbb{E}\!\left[(x_i - \mu)^2 \mid y_i, \theta^{(t)}\right] + \text{const} \\
&= -\frac{n}{2}\log\tau^2 - \frac{1}{2\tau^2}\sum_{i=1}^{n}\left(\mathbb{E}[x_i^2] - 2\mu\,\mathbb{E}[x_i] + \mu^2\right) + \text{const} \\
&= -\frac{n}{2}\log\tau^2 - \frac{1}{2\tau^2}\sum_{i=1}^{n}\left((m_i^{(t)})^2 + v^{(t)} - 2\mu\, m_i^{(t)} + \mu^2\right) + \text{const}.
\end{aligned}$$
Set $\partial Q / \partial \mu = 0$:
$$-\sum_{i=1}^{n} m_i^{(t)} + n\mu = 0 \;\Longrightarrow\; \mu^{(t+1)} = \frac{1}{n}\sum_{i=1}^{n} m_i^{(t)}.$$
Then, writing $S^{(t)}(\mu) = \sum_{i=1}^{n}\left(v^{(t)} + (m_i^{(t)} - \mu)^2\right)$,
$$Q(\mu, \tau^2 \mid \theta^{(t)}) = -\frac{n}{2}\log\tau^2 - \frac{1}{2\tau^2}\, S^{(t)}(\mu) + \text{const}.$$
Differentiate w.r.t. $\tau^2$ and set to zero:
$$\frac{\partial Q}{\partial \tau^2} = -\frac{n}{2\tau^2} + \frac{S^{(t)}(\mu)}{2(\tau^2)^2} = 0 \;\Longrightarrow\; \tau^2 = \frac{S^{(t)}(\mu)}{n}.$$
With $\mu = \mu^{(t+1)}$,
$$\tau^{2(t+1)} = \frac{1}{n}\sum_{i=1}^{n}\left(v^{(t)} + \left(m_i^{(t)} - \mu^{(t+1)}\right)^2\right).$$
With
$$v^{(t)} = \left(\frac{1}{\sigma^2} + \frac{1}{\tau^{2(t)}}\right)^{-1}, \qquad m_i^{(t)} = v^{(t)}\left(\frac{y_i}{\sigma^2} + \frac{\mu^{(t)}}{\tau^{2(t)}}\right),$$
the M-step gives
$$\mu^{(t+1)} = \frac{1}{n}\sum_{i=1}^{n} m_i^{(t)}, \qquad \tau^{2(t+1)} = \frac{1}{n}\sum_{i=1}^{n}\left(v^{(t)} + \left(m_i^{(t)} - \mu^{(t+1)}\right)^2\right).$$
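Putting the E-step posterior moments and these M-step updates together, here is a self-contained NumPy sketch of the iteration; the ground-truth values and the known noise variance $\sigma^2 = 1$ are illustrative assumptions:

```python
import numpy as np

def em_step(y, mu, tau2, sigma2):
    """One EM update for y_i = x_i + w_i, x_i ~ N(mu, tau2), w_i ~ N(0, sigma2)."""
    v = 1.0 / (1.0 / sigma2 + 1.0 / tau2)      # posterior variance v^(t)
    m = v * (y / sigma2 + mu / tau2)           # posterior means m_i^(t)
    mu_new = m.mean()                          # mu^(t+1)
    tau2_new = (v + (m - mu_new) ** 2).mean()  # tau2^(t+1)
    return mu_new, tau2_new

rng = np.random.default_rng(4)
x = rng.normal(2.0, 1.5, size=5_000)           # hidden x_i, illustrative truth
y = x + rng.normal(0.0, 1.0, size=5_000)       # observed y_i, sigma2 = 1 known

mu, tau2 = 0.0, 1.0                            # arbitrary initialization
for _ in range(100):
    mu, tau2 = em_step(y, mu, tau2, sigma2=1.0)
print(mu, tau2)  # should approach roughly (2.0, 1.5**2) up to sampling error
```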
Problem: A two-component Gaussian mixture with known common variance $\sigma^2 = 0.01$ is fitted to the data $y = (0.2,\, 0.4,\, 2.2,\, 2.4)$, starting from $(\pi^{(0)}, \mu_1^{(0)}, \mu_2^{(0)}) = (0.5,\, 0.3,\, 2.3)$. Perform one EM iteration and report $\pi^{(1)}, \mu_1^{(1)}, \mu_2^{(1)}$.
Solution:
$i = 1$, $y_1 = 0.2$:
$$\frac{(y_1 - \mu_1^{(0)})^2}{2\sigma^2} = \frac{(0.2 - 0.3)^2}{0.02} = 0.5, \qquad \frac{(y_1 - \mu_2^{(0)})^2}{2\sigma^2} = \frac{(0.2 - 2.3)^2}{0.02} = 220.5.$$
Thus $\gamma_{11} \approx \dfrac{0.5\, e^{-0.5}}{0.5\, e^{-0.5} + 0.5\, e^{-220.5}} \approx 1.0000$.
$i = 2$, $y_2 = 0.4$: $\frac{(0.4 - 0.3)^2}{0.02} = 0.5$, $\frac{(0.4 - 2.3)^2}{0.02} = 180.5$, so $\gamma_{21} \approx 1.0000$.
$i = 3$, $y_3 = 2.2$: $\frac{(2.2 - 0.3)^2}{0.02} = 180.5$, $\frac{(2.2 - 2.3)^2}{0.02} = 0.5$, so $\gamma_{31} \approx 0.0000$.
$i = 4$, $y_4 = 2.4$: $\frac{(2.4 - 0.3)^2}{0.02} = 220.5$, $\frac{(2.4 - 2.3)^2}{0.02} = 0.5$, so $\gamma_{41} \approx 0.0000$.
M-step: $N_1 = 2$, $N_2 = 2$, hence $\pi^{(1)} = 0.5$, $\mu_1^{(1)} = \frac{0.2 + 0.4}{2} = 0.3$, $\mu_2^{(1)} = \frac{2.2 + 2.4}{2} = 2.3$.
Thus after one iteration the parameters remain (0.5, 0.3, 2.3) because the initialization
already matches the two tight clusters perfectly.
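This fixed point can be reproduced with a short NumPy sketch; the data and initialization are taken from this worked example, and the shared variance $\sigma^2 = 0.01$ is read off from the $2\sigma^2 = 0.02$ denominators above:

```python
import numpy as np

y = np.array([0.2, 0.4, 2.2, 2.4])
pi, mu1, mu2, sigma2 = 0.5, 0.3, 2.3, 0.01  # initialization; sigma2 known

# E-step: responsibilities for component 1 (equal variances, so the
# Gaussian normalizing constants cancel in the ratio).
e1 = np.exp(-(y - mu1) ** 2 / (2 * sigma2))
e2 = np.exp(-(y - mu2) ** 2 / (2 * sigma2))
g1 = pi * e1 / (pi * e1 + (1 - pi) * e2)

# M-step: weighted counts and means.
N1 = g1.sum()
pi_new = N1 / len(y)
mu1_new = (g1 * y).sum() / N1
mu2_new = ((1 - g1) * y).sum() / (len(y) - N1)
print(pi_new, mu1_new, mu2_new)  # -> 0.5, 0.3, 2.3 (the fixed point)
```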
Compute exponents:
• $i = 1$: $r_{11} \approx \dfrac{0.5\, e^{-0.5}}{0.5\, e^{-0.5} + 0.5\, e^{-50}} \approx 1.0$.
• $i = 2$: $r_{21} \approx \dfrac{0.5\, e^{-0.5}}{0.5\, e^{-0.5} + 0.5\, e^{-32}} \approx 1.0$.
• $i = 3$: $r_{31} \approx \dfrac{0.5\, e^{-36.125}}{0.5\, e^{-36.125} + 0.5\, e^{-0.125}} \approx 0.0$.
• $i = 4$: $r_{41} \approx \dfrac{0.5\, e^{-45.125}}{0.5\, e^{-45.125} + 0.5\, e^{-0.125}} \approx 0.0$.
So (within numerical precision) the first two points are assigned to component 1 and the last two to component 2.
M-step:
$$N_1 = \sum_{i} r_{i1} \approx 2, \qquad N_2 \approx 2.$$
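In floating point it is safer to compute such responsibilities in the log domain, since raw terms like $e^{-45.125}$ can underflow for larger exponents. A small NumPy sketch using the exponent pairs from the list above (equal component weights of $0.5$):

```python
import numpy as np

# Exponent pairs -(y_i - mu_k)^2 / (2 sigma^2) from the E-step above.
exps = np.array([[-0.5, -50.0],
                 [-0.5, -32.0],
                 [-36.125, -0.125],
                 [-45.125, -0.125]])

a = np.log(0.5) + exps                  # unnormalized log posteriors
a -= a.max(axis=1, keepdims=True)       # stabilize before exponentiating
r = np.exp(a)
r /= r.sum(axis=1, keepdims=True)
print(r[:, 0])                          # -> approx [1., 1., 0., 0.]
```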