Chapter 3: Common Univariate Random Variables
Objective
Master the definitions, properties, and financial uses of the most common discrete and continuous random variables, understand their interconnections, and be able to choose the right model for specific risk management problems.
Key Topics & Intuition
Discrete Random Variables
1. Bernoulli Distribution
• Use: Binary outcomes (0 or 1) like default/no default, win/loss, or
yes/no.
• Intuition: Think of flipping a biased coin. The probability p represents
the chance of a "1" (e.g., default).
• Formula: f(y) = p^y (1 − p)^(1−y)
• Mean: p, Variance: p(1 − p)
• Relationship: Basis for Binomial Distribution.
• Risk application: Serves as an indicator of extreme events, e.g., a VaR breach (1) vs. no breach (0).
2. Binomial Distribution
• Use: Number of successes in n Bernoulli trials.
• Intuition: Flip a biased coin n times; the Binomial counts the "heads" (i.e., successes).
• Formula: f(y) = C(n, y) p^y (1 − p)^(n−y), where C(n, y) = n!/(y!(n − y)!)
• Mean: np, Variance: np(1 − p)
• Approximation: Use the Normal if np ≥ 10 and n(1 − p) ≥ 10 (see the sketch below)
• Link: Binomial → Normal via CLT
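A minimal sketch of this approximation rule using scipy.stats (the values n = 100, p = 0.2 and the cut-off of 25 are assumed purely for illustration):

```python
# Compare an exact Binomial tail probability with its Normal approximation
# once np >= 10 and n(1 - p) >= 10 both hold (here np = 20, n(1 - p) = 80).
from scipy.stats import binom, norm

n, p = 100, 0.2
mu, sigma = n * p, (n * p * (1 - p)) ** 0.5

exact = binom.sf(25, n, p)            # Pr(Y > 25), exact Binomial tail
approx = norm.sf(25.5, mu, sigma)     # Normal approximation with continuity correction

print(f"exact = {exact:.4f}, normal approx = {approx:.4f}")
```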
3. Poisson Distribution
• Use: Models the count of events in a time period (e.g., defaults per
quarter).
• Intuition: Imagine rare events (like defaults) that occur randomly but
at a fixed average rate λ.
• Formula: f(y) = λ^y e^(−λ) / y!
• Mean = Variance = λ
• Relationship: As the number of Bernoulli trials n → ∞ and p → 0 with np = λ held fixed, Binomial → Poisson
• Link to Exponential: Time between Poisson events is Exponential
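As a small illustration (the rate λ = 2 defaults per quarter is an assumed value, not from the notes), scipy.stats.poisson gives these probabilities directly:

```python
# Probabilities of default counts in one quarter when defaults arrive
# at an average rate of lambda = 2 per quarter.
from scipy.stats import poisson

lam = 2.0
print(poisson.pmf(0, lam))    # Pr(no defaults)        ~ 0.1353
print(poisson.pmf(3, lam))    # Pr(exactly 3 defaults) ~ 0.1804
print(poisson.sf(4, lam))     # Pr(more than 4)        ~ 0.0527
print(poisson.mean(lam), poisson.var(lam))   # mean = variance = lambda
```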
Continuous Random Variables
4. Uniform Distribution
• Use: All outcomes equally likely; basis for simulation.
• Intuition: Picking a number at random from an interval (e.g., [0,1]).
• Formula (PDF): f(y) = 1/(b − a), for y ∈ [a, b]
• Mean: (a + b)/2, Variance: (b − a)²/12
• Use in modeling: Generates random samples from any distribution (via
transformation)
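A minimal sketch of that transformation idea (inverse-transform sampling); the Exponential target and β = 2.0 are assumed purely for illustration:

```python
# Uniform(0, 1) draws pushed through a target inverse CDF yield samples
# from that target distribution (here: Exponential with mean beta).
import numpy as np

rng = np.random.default_rng(42)
u = rng.uniform(size=100_000)          # U ~ Uniform(0, 1)

beta = 2.0
x = -beta * np.log(1.0 - u)            # inverse CDF of the Exponential applied to U

print(x.mean(), x.var())               # close to beta and beta**2
```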
5. Normal Distribution
• Use: Modeling returns, noise, measurement error.
• Intuition: Most real-world phenomena "cluster" around a mean.
• Formula: f(y) = (1/(σ√(2π))) e^(−(y − µ)²/(2σ²))
• Properties: Symmetric, completely defined by mean and variance.
• Central Limit Theorem (CLT): The standardized sum of iid RVs with finite variance → Normal as n → ∞
• Links:
– Underlies Lognormal, t, Chi-square, F
– Closed under linear combinations
– Most used in financial risk modeling
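A rough CLT illustration (the Uniform(0, 1) summands and n = 50 are assumed choices): standardized sums of iid draws behave like a standard Normal.

```python
# Standardize sums of n iid Uniform(0, 1) draws (mean 1/2, variance 1/12 each)
# and compare an empirical tail probability with the standard Normal tail.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, trials = 50, 100_000
u = rng.uniform(size=(trials, n))

z = (u.sum(axis=1) - n * 0.5) / np.sqrt(n / 12.0)

print((z > 1.96).mean(), norm.sf(1.96))   # both close to ~0.025
```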
6. Lognormal Distribution
• Use: Modeling prices (which can’t be negative)
• Intuition: The log of prices is normally distributed; prices grow multiplicatively.
• Formula: If X ∼ N(µ, σ²), then Y = e^X ∼ Lognormal
• Mean: e^(µ + σ²/2)
• Right-skewed, strictly positive
• Use: The Black-Scholes model assumes lognormally distributed asset prices (equivalently, normally distributed log returns)
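A brief simulation check of the mean formula above (µ = 0.05 and σ = 0.20 are arbitrary illustrative values):

```python
# If X ~ N(mu, sigma^2) then Y = e^X is Lognormal with E[Y] = exp(mu + sigma^2 / 2).
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 0.05, 0.20

x = rng.normal(mu, sigma, size=1_000_000)   # e.g. a normally distributed log price change
y = np.exp(x)                               # strictly positive, right-skewed

print(y.mean(), np.exp(mu + sigma**2 / 2))  # simulated vs theoretical mean
```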
7. Exponential Distribution
• Use: Time between Poisson events (e.g., time to next default)
• Intuition: Memoryless "waiting time" model
• PDF: f(y) = (1/β) e^(−y/β), y ≥ 0
• Mean: β, Variance: β²
• Link: Poisson → Exponential (event count ↔ event time)
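A minimal sketch of this count ↔ time duality (the mean waiting time β = 0.5 is an assumed value): Exponential inter-arrival times with mean β produce Poisson counts with λ = 1/β per unit of time.

```python
# Simulate Exponential waiting times, build arrival times, then count
# events per unit interval; counts should look Poisson with lambda = 1 / beta.
import numpy as np

rng = np.random.default_rng(7)
beta = 0.5
waits = rng.exponential(beta, size=200_000)   # inter-arrival times
arrivals = np.cumsum(waits)                   # event times

horizon = int(arrivals[-1])
counts = np.histogram(arrivals, bins=np.arange(horizon + 1))[0]

print(counts.mean(), counts.var(), 1 / beta)  # mean ≈ variance ≈ lambda = 2
```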
8. Chi-Square Distribution
• Use: Variance testing, volatility modeling
• Intuition: Sum of squared standard normals
• Formula: χ²_n = Z_1² + Z_2² + ... + Z_n², where Z_i ∼ N(0, 1)
• Mean: n, Variance: 2n
• Link: t and F distributions are built from Chi-square
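A quick simulation of that construction (n = 5 degrees of freedom is an assumed choice):

```python
# Sums of n squared standard Normals should match the chi-square(n) distribution.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)
n = 5
z = rng.standard_normal(size=(100_000, n))
y = (z ** 2).sum(axis=1)

print(y.mean(), y.var())                            # close to n and 2n
print(np.quantile(y, 0.95), chi2.ppf(0.95, df=n))   # empirical vs theoretical 95% quantile
```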
9. Student’s t Distribution
• Use: Small sample means, unknown variance
• Intuition: Like Normal, but with fat tails to capture rare, extreme
events
• Formula: t = Z / √(W/n), where Z ∼ N(0, 1), W ∼ χ²(n)
• Heavier tails than normal ⇒ more robust to outliers
• Use in finance: Short-horizon return modeling, VaR estimation
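A short sketch of why the fat tails matter for risk estimates (df = 4 is an assumed degrees-of-freedom value): the probability of a move beyond 3 standard units is far larger under Student's t than under the Normal.

```python
# Compare upper-tail probabilities beyond 3 under the Normal and t(4) distributions.
from scipy.stats import norm, t

print(norm.sf(3))       # ~0.00135
print(t.sf(3, df=4))    # ~0.0200, roughly 15x larger
```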
10. F-Distribution
• Use: Ratio of two sample variances (e.g., ANOVA, regression)
• Intuition: Compare two variances — is one riskier?
• Formula: F = (X_1/n_1) / (X_2/n_2), with X_i ∼ χ²(n_i)
• Link: t² ∼ F(1, n)
• Important: Always positive, right-skewed
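A tiny numerical check of the t-to-F link above (n = 10 is an assumed choice):

```python
# The square of a t(n) critical value equals the corresponding F(1, n) critical value.
from scipy.stats import t, f

n = 10
t_crit = t.ppf(0.975, df=n)          # two-sided 5% critical value of t(10)
f_crit = f.ppf(0.95, dfn=1, dfd=n)   # upper 5% critical value of F(1, 10)

print(t_crit ** 2, f_crit)           # both ~4.96
```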
11. Mixture Distributions
• Use: Model skewness and kurtosis found in real financial data
• Intuition: Combine simple distributions to mimic complex behavior
• Example: Mixing two Normals can produce heavy tails and/or a bimodal shape, depending on the component means and variances
• Use in risk management: Stress testing, non-Gaussian return modeling
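A minimal sketch of a two-component Normal mixture (the regime probabilities and volatilities are assumed values): a mostly calm regime plus an occasional high-volatility regime yields clear excess kurtosis, i.e. fat tails.

```python
# Mix N(0, 0.01^2) and N(0, 0.05^2) returns with weights 95% / 5% and
# measure the excess kurtosis of the result (a Normal would give ~0).
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(5)
n = 500_000
calm = rng.uniform(size=n) < 0.95           # True in the calm regime
sigma = np.where(calm, 0.01, 0.05)          # daily volatility: 1% vs 5%
returns = rng.normal(0.0, sigma)

print(kurtosis(returns))                    # well above 0 -> heavy tails
```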
Distribution Relationships & Modeling Links
| From | To | How |
| Bernoulli | Binomial | Sum of n Bernoulli trials |
| Binomial | Poisson | As n → ∞, p → 0, np = λ |
| Poisson | Exponential | Time between Poisson events |
| Normal | Lognormal | Log transformation |
| Normal | t-distribution | Add uncertainty in variance (small samples) |
| Normal | Chi-Square | Sum of squared standard Normals |
| Chi-Square | F-distribution | Ratio of two Chi-square variables |
| t-distribution | F-distribution | t² ∼ F(1, n) |
Table 1: Distribution Properties Summary

| Distribution | Mean | Variance | Key Formula |
| Bernoulli(p) | p | p(1 − p) | f_Y(y) = p^y (1 − p)^(1−y) for y ∈ {0, 1} |
| Binomial(n, p) | np | np(1 − p) | f_Y(y) = C(n, y) p^y (1 − p)^(n−y) for y = 0, 1, ..., n |
| Poisson(λ) | λ | λ | f_Y(y) = λ^y e^(−λ) / y! for y = 0, 1, 2, ... |
| Uniform(a, b) | (a + b)/2 | (b − a)²/12 | f_Y(y) = 1/(b − a) · I_[a,b](y) |
| Normal(µ, σ²) | µ | σ² | f_Y(y) = (1/√(2πσ²)) e^(−(y − µ)²/(2σ²)) |
| Lognormal(µ, σ²) | e^(µ + σ²/2) | (e^(σ²) − 1) e^(2µ + σ²) | f_Y(y) = (1/(y√(2πσ²))) e^(−(ln y − µ)²/(2σ²)) |
| Chi-Square(n) | n | 2n | Y = Σ_{i=1}^{n} Z_i² where Z_i ∼ N(0, 1) |
| Student's t(n) | 0 (n > 1) | n/(n − 2) (n > 2) | Y = Z/√(W/n) where Z ∼ N(0, 1), W ∼ χ²(n) |
| F(n_1, n_2) | n_2/(n_2 − 2) (n_2 > 2) | Complex | F = (X_1/n_1)/(X_2/n_2) where X_i ∼ χ²(n_i) |
| Exponential(β) | β | β² | f_Y(y) = (1/β) e^(−y/β) for y ≥ 0 |
Mnemonics
• BEEP-NFLX → Bernoulli, Exponential, Poisson — Normal, F, Lognormal, Chi-square, t
• BU-PN → Discrete-to-continuous path: Bernoulli → Uniform → Poisson
→ Normal
• BINS for Binomial Assumptions:
– Binary outcomes
– Independent trials
– N = number of trials
– Success probability constant
Top 5 Exam Takeaways
1. Understand how the Normal connects to t, χ², F, and Lognormal — central for modeling and inference.
2. Memorize formulas: Expectation, Variance, PDF/CDF for all key distributions.
3. Use Poisson/Exponential duality: One for count, the other for time.
4. Fat tails = Student’s t or Mixtures: Better than Normal for risk events.
5. CLT explains why the Normal works so often, but only under specific conditions!
General Framework: Calculating Probabilities under the Normal Distribution
Step 1: Identify the Distribution
• If given Z ∼ N (0, 1) → skip to Step 3.
• If given W ∼ N(µ, σ²), then:
– Mean: µ
– Standard Deviation: σ = √(σ²)
Step 2: Standardize the Variable
Convert W to a standard normal variable Z:
Z = (W − µ) / σ
This transforms your variable into the standard normal distribution N (0, 1), so
you can use standard Z-tables or a calculator.
Step 3: Use Z-Table or CDF
| Problem Type | Convert to this Probability |
| Pr(W < x) | Pr(Z < z) |
| Pr(W > x) | Pr(Z > z) = 1 − Pr(Z < z) |
| Pr(a < W < b) | Pr(z_1 < Z < z_2) = Pr(Z < z_2) − Pr(Z < z_1) |
| Find x such that Pr(W < x) = p | Find z = Φ^(−1)(p), then x = µ + zσ |
Step 4: Look Up or Compute
Use:
• Z-tables
• Calculator/inverse CDF functions (e.g., norm.ppf() in Python or Excel’s
NORM.S.INV)
Example: Find Pr(W > 12), where W ∼ N(3, 9).
Solution:
– µ = 3, σ = √9 = 3
– Z = (12 − 3)/3 = 3
– Pr(W > 12) = Pr(Z > 3) ≈ 1 − 0.9987 = 0.0013
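The same example can be checked with scipy (a quick sketch; norm.sf is the upper-tail probability and norm.ppf the inverse CDF mentioned in Step 4):

```python
from scipy.stats import norm

# Pr(W > 12) for W ~ N(3, 9), i.e. mu = 3, sigma = 3
print(norm.sf(12, loc=3, scale=3))   # ~0.0013, matching the hand calculation

# Inverse problem from Step 3: find x with Pr(W < x) = 0.99
z = norm.ppf(0.99)                   # ~2.326
print(3 + z * 3)                     # x = mu + z * sigma ~ 9.98
```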
Mnemonic for Exam
"D-S-Z-T" = Define → Standardize → Z-table → Transform back (if needed)
Common FRM Exam Pitfalls
• Using variance instead of standard deviation when standardizing.
• Forgetting to subtract from 1 when calculating Pr(Z > z).
• Using the wrong direction for inequalities (e.g., mixing up Pr(Z > z) and
Pr(Z < z)).
Common Exam Traps
• Misclassifying Lognormal as Normal: Lognormal is strictly positive; Nor-
mal is not.
• Forgetting CLT conditions: The Normal approximation is not always valid; use it only with large n and finite variance.
• Using Exponential instead of Poisson (and vice versa): One is for time
between events, the other for count.
• Overlooking fat tails: Real-world returns are rarely Normal; think Student's t or Mixtures.
• Incorrect mean for Lognormal: Must include the σ 2 /2 term.