9/3/2025
Chapter 2. Probability and Statistics:
What’s the difference?
Probability
To predict the results of experiments based on the
characteristics of the population.
Statistics
The science of collecting, analyzing and
interpreting data.
Goal: to draw conclusions or make inferences about the
population from experimental results (samples).
Probability: reasoning from the population to samples.
Statistics: reasoning from samples back to the population.
24
Chapter 2. Probability and Statistics: Example
Example: specification for electronic parts.
25
Chapter 2. Probability and Statistics:
Another example: running on indoor
track in Rec
• Lane 4: 1217 ft/lap (1 mile = 5280 ft)
• Lane 3: 1194 ft/lap
• Mile marks in both directions
• What if I want to run more than one mile?
• 5280/1217 = 4.3385, so 1 mile = 4 laps + a bit more than 1/3 lap
• Measuring the distance between the mile marks: > 1/3 lap
• Conclusion: the mile marks or the 1217 ft/lap figure is not accurate
26
Chapter 2. Probability and Statistics:
Why are they important?
• They are everywhere.
• Experiment: Measure a 10K resistor by each student.
• Is your result equal to 10K?
• Are the results the same for every student?
• Now measure 10 resistors (10K). What would happen?
• What if you repeat the measurement of the same
resistor?
• Statistics can be used to analyze something that
would otherwise be very difficult to analyze.
27
Chapter 2. Probability and Statistics:
Why are they relevant to ET?
• Collecting data (Testing) is an important part of
engineering tasks.
• If possible, multiple test data should be collected.
• Design parameters will not be perfect when
implemented (10K is not really 10K). So, what is the
impact of the variation in the design parameters on
your design?
• Your design needs to be robust against parameter
variation.
28
Chapter 2. Probability and Statistics:
Example 2.1
• Design of LPF with cut-off frequency of 159.15 Hz.
• 𝑓c = 1/(2πRC) = 1/(2π × 10³ × 10⁻⁶) = 159.15 Hz
• Vishay parts:
• R: CRCW08051K00JNEA, 5% tolerance
• C: VJ0805Y105MXQTW1BC, 20% tolerance
• Worst-case scenarios:
𝑓c1 = 1/(2πRC) = 1/(2π × (1 + 0.05) × 10³ × (1 + 0.20) × 10⁻⁶) = 126.31 Hz
𝑓c2 = 1/(2πRC) = 1/(2π × (1 − 0.05) × 10³ × (1 − 0.20) × 10⁻⁶) = 209.41 Hz
Do we really need to design for the worst-case scenario
caused by parameter variations?
29
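The nominal and worst-case cutoff frequencies above can be checked with a short Python sketch (the course uses Excel elsewhere; Python is used here purely for illustration, and the function name `cutoff_hz` is ours):

```python
import math

def cutoff_hz(r_ohms, c_farads):
    """Cutoff frequency of a first-order RC low-pass filter."""
    return 1.0 / (2 * math.pi * r_ohms * c_farads)

R, C = 1e3, 1e-6           # nominal values: 1 kOhm, 1 uF
tol_r, tol_c = 0.05, 0.20  # 5% resistor tolerance, 20% capacitor tolerance

f_nominal = cutoff_hz(R, C)                           # about 159.15 Hz
f_low = cutoff_hz(R * (1 + tol_r), C * (1 + tol_c))   # about 126.31 Hz
f_high = cutoff_hz(R * (1 - tol_r), C * (1 - tol_c))  # about 209.41 Hz
print(f"{f_nominal:.2f} {f_low:.2f} {f_high:.2f}")
```

The spread from roughly 126 Hz to 209 Hz around the 159.15 Hz target is exactly the design-robustness question the slide poses.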
Chapter 2. Probability and Statistics: Basic
definitions
Probability: P(A) = (actual occurrences of event A) / (total possible occurrences), a relative
possibility that an event will occur.
Population: a well-defined collection of objects that are of
interest in a specific case.
If all the information for all objects in the population is
available, we have a census.
Sample: a subset of the population.
Reasonable to assume: the sample carries information that is
representative of the population. (Provided that the sample is
randomly selected and the sample size is large enough.)
30
Chapter 2. Probability and Statistics: more
definitions
Experiment: any activity having uncertain outcomes.
Sample space of an experiment, denoted by S: the set of
all possible outcomes of that experiment.
Event: any collection (subset) of outcomes contained in the
sample space S.
Simple event: an event that consists of a single
outcome.
Compound event: an event that contains more than one outcome.
When an experiment is performed, a particular event A is said
to occur if the resulting experimental outcome is contained in
A.
Probability: a function P(•) that gives a precise measure P(A)
of the chance that an event A will occur, with the sample space
as the domain and the set of real numbers [0, 1] as the
range.
31
Chapter 2. Probability and Statistics:
Example 2.2
Experiment: to inspect 20 samples out of the 100 products.
Each sample is either defective or not defective.
S = {0 defects, 1 defect, 2 defects, 3 defects, …, 20 defects}
What are the simple events?
Compound events: {less than 5 defects}, {more than 10
defects}, …
32
Chapter 2. Probability and Statistics: Sampling
Simple random sampling: assigns any particular subset of a
given size the same chance of being selected.
Example 2.3 Polling for a local mayoral race.
Stratified sampling: first divides the entire sample space, S,
into non-overlapping subsets, followed by simple random
sampling within each subset.
Example 2.4 Polling for the U.S. presidential election.
Total electors: 538, 55 (or 55/538 = 10.22%) for CA, 38 (or
38/538 = 7.06%) for TX, …, and 3 (or 3/538 = 0.56%) for DC.
If the total sample is 20,000, then 2045 should be from
California, 1413 from Texas, and 111 from DC.
33
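The proportional allocation in Example 2.4 is simple arithmetic; a small Python sketch (illustrative, not part of the slides) reproduces it:

```python
electors = {"CA": 55, "TX": 38, "DC": 3}  # subset of the 538 total electors
total_electors = 538
total_sample = 20_000

# Allocate the poll sample proportionally to each stratum's elector share
allocation = {state: round(total_sample * n / total_electors)
              for state, n in electors.items()}
print(allocation)  # {'CA': 2045, 'TX': 1413, 'DC': 112}
```

DC works out to about 111.5 here, so the slide's figure of 111 simply rounds the other way.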
Chapter 2. Probability and Statistics: rv
Random variable: a function with the sample space as its
domain, and the real numbers R, or a subset of R, as its range.
Discrete random variable: a random variable that takes finitely
many or countably infinitely many values.
Continuous random variable: a random variable that takes uncountably
many values, where the probability of the random variable being equal
to any given value is 0.
Example 2.5
The number of people voting for a particular presidential candidate
A digital signal
An analog voltage signal in an electronic circuit
If a digital multimeter is used to measure the analog voltage, the
measurement would be …
34
Chapter 2. Probability and Statistics:
relationship with set theory
An event is just a set, so relationships and results from
elementary set theory can be used to study events.
Assume A, B ⊂ S
Union (A ∪ B) - all elements in A or B
Intersection (A ∩ B) - all elements in A and B
Complement (A′) - all elements not in A (the empty set Φ := S′)
Disjoint (mutually exclusive) - two events are mutually
exclusive or disjoint if they have no outcomes in common,
i.e., A ∩ B = Φ
35
Chapter 2. Probability and Statistics:
Venn diagrams
Mutually exclusive events
36
Chapter 2. Probability and Statistics:
Set Theory
For any 3 sets A, B, C, we have
– Commutative: A∪B=B∪A, B∩C=C∩B
– Associative: A∪(B∪C)=(A∪B)∪C=A∪B∪C
A∩(B∩C)=(A ∩ B)∩C=A ∩B∩C
– Distributive: A∪(B∩C)=(A∪B)∩(A∪C)
A∩(B∪C)=(A∩B)∪(A∩C)
– De Morgan’s laws:
(A∩B)’=A’ ∪ B’ (A∪B)’=A’ ∩ B’
37
Chapter 2. Probability and Statistics:
Example 2.6
The experiment: rolling a die twice
S =
{(1,1),(1,2),(1,3),(1,4),(1,5),(1,6),(2,1),(2,2),(2,3),(2,4),(2,5),(2,6),
(3,1),(3,2),(3,3),(3,4),(3,5),(3,6),(4,1),(4,2),(4,3),(4,4),(4,5),(4,6),
(5,1),(5,2),(5,3),(5,4),(5,5),(5,6),(6,1),(6,2),(6,3),(6,4),(6,5),(6,6)}
• A = {sum of the two numbers is less than 6}
A = {(1,1),(1,2),(1,3),(1,4),(2,1),(2,2),(2,3),(3,1),(3,2),(4,1)}
• B = {both numbers are even}
B = {(2,2),(2,4),(2,6),(4,2),(4,4),(4,6),(6,2),(6,4),(6,6)}
• C = {at least one of the numbers is 4}
C = {(1,4),(2,4),(3,4),(4,1),(4,2),(4,3),(4,4),(4,5), (4,6),(5,4),(6,4)}
• A ∩ B ={(2,2)}, A ∩ C ={(1,4),(4,1)}, B ∩ C ={(2,4),(4,2),
(4,4),(4,6),(6,4)}, A ∩ B ∩ C =Φ
38
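The events of Example 2.6 can be enumerated programmatically, which is a convenient way to double-check the listings above; a Python sketch (illustrative only):

```python
from itertools import product

S = list(product(range(1, 7), repeat=2))  # sample space: 36 ordered pairs

A = {p for p in S if sum(p) < 6}                       # sum less than 6
B = {p for p in S if p[0] % 2 == 0 and p[1] % 2 == 0}  # both numbers even
C = {p for p in S if 4 in p}                           # at least one 4

print(len(S), len(A), len(B), len(C))  # 36 10 9 11
print(sorted(A & B))      # [(2, 2)]
print(sorted(A & C))      # [(1, 4), (4, 1)]
print(sorted(A & B & C))  # []
```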
Chapter 2. Probability and Statistics:
Calculus of Probabilities
• P(Ø) = 0, P(S) = 1
• P(A) ≤ 1
• P(A′) = 1 − P(A)
• P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
• A ⊂ B implies P(A) ≤ P(B)
• If A1, A2, A3, … is an infinite collection of disjoint events, then
P(A1 ∪ A2 ∪ A3 ∪ …) = Σ_{i=1}^{∞} P(Ai)
• A and B are independent if and only if
P(A ∩ B) = P(A) P(B)
• Events A1, . . . , An are mutually independent if for every k (k =
2, 3, . . . , n) and every subset of indices i1, i2, . . . , ik,
P(Ai1 ∩ Ai2 ∩ ⋯ ∩ Aik) = P(Ai1)·P(Ai2)·⋯·P(Aik)
39
Chapter 2. Probability and Statistics:
Example
Dependent and independent events
• Lottery
• Coin tosses
• Drawing cards (putting back or not)
40
Chapter 2. Probability and Statistics:
Example
Using a Venn diagram to prove that
P(A ∪ B) = P(A) + P(B) − P(A ∩ B):
P(A) + P(B) = (area1 + area2) + (area2 + area3)
= P(A ∪ B) + area2
P(A ∩ B) = area2
41
Chapter 2. Probability and Statistics:
Exercise:
(Example 2.7) Using a Venn diagram, prove that
P(A∪B∪C) = P(A) + P(B) + P(C) − P(A∩B) − P(A∩C) − P(B∩C) + P(A∩B∩C)
42
Chapter 2. Probability and Statistics:
Conditional Probability
• How the information “event B has occurred” affects the
probability of event A.
• Notation: P(A | B) for the conditional probability of A given
that the event B has occurred. B: “conditioning event.”
• For any two events A and B with P(B) > 0,
P(A|B) = P(A ∩ B) / P(B)
• A and B are independent if and only if, P(A|B) = P(A).
43
Chapter 2. Probability and Statistics
Independent vs. Disjoint
• Independent: P(B ∩ A) = P(B) P(A)
• Disjoint: B ∩ A = Ø
• If two events have nonzero probabilities, then
Disjoint ⇒ dependent
Independent ⇒ not disjoint
Example 2.8 Assume P(A) ≠ 0, P(B) ≠ 0. If A and B are
independent events, prove that A and B are not disjoint.
Since P(A) ≠ 0 and P(B) ≠ 0, we have P(A) P(B) ≠ 0.
A and B are independent ⇒ P(A ∩ B) = P(A) P(B) ≠ 0, so
A and B are not disjoint.
44
Chapter 2. Probability and Statistics:
System and subsystem reliability
System Reliability as a Function of
Subsystem Reliability
• The subsystems are mutually independent
• The success probabilities of the subsystems are known
• How do we find the probability that the system works?
45
Chapter 2. Probability and Statistics: Example
Let’s consider various configurations of solar
photovoltaic arrays consisting of solar cells.
Consider a particular lifetime value t0, and suppose
we want to determine the probability that the system
lifetime exceeds t0.
Let Ai denote the event that the lifetime of cell i
exceeds t0 (i = 1, 2, . . . , 6).
Assume that the Ai ‘s are independent events and that
P(Ai) = .9 for every i since the cells are identical.
Option 1 and Option 2 (two array configurations, shown in the diagrams)
46
Chapter 2. Probability and Statistics:
basic configurations
• Series configuration
P(S) = P(S1 ∩ S2) = P(S1)P(S2)
Generalized to n subsystems:
P(S) = P(S1)P(S2)⋯P(Sn)
• Parallel configuration
P(S) = P(S1 ∪ S2) = P(S1) + P(S2) − P(S1)P(S2)
• Alternatively, P(Sᶜ) = P(S1ᶜ ∩ S2ᶜ) = P(S1ᶜ)P(S2ᶜ) = [1 − P(S1)][1 − P(S2)], so
P(S) = 1 − P(Sᶜ) = 1 − [1 − P(S1)][1 − P(S2)]
• For n parallel-connected subsystems:
P(S) = 1 − [1 − P(S1)][1 − P(S2)]⋯[1 − P(Sn)]
47
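The two basic formulas can be written as small helper functions; a Python sketch (the names `series` and `parallel` are ours), assuming mutually independent subsystems as stated above:

```python
from functools import reduce

def series(probs):
    """All subsystems must work: P(S) = P(S1) P(S2) ... P(Sn)."""
    return reduce(lambda a, b: a * b, probs, 1.0)

def parallel(probs):
    """At least one must work: P(S) = 1 - [1-P(S1)] ... [1-P(Sn)]."""
    return 1.0 - reduce(lambda a, b: a * b, (1.0 - p for p in probs), 1.0)

s2 = series([0.9, 0.9])    # two 0.9 subsystems in series
p2 = parallel([0.9, 0.9])  # two 0.9 subsystems in parallel
print(round(s2, 4), round(p2, 4))  # 0.81 0.99
```

Series connection lowers reliability below the weakest part; parallel redundancy raises it.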
Chapter 2. Probability and Statistics:
Example 2.9
1. First, break down the system into subsystems in series
or parallel;
2. Write the probability of the system (being a success) as
a function of probabilities of the subsystems;
3. Continue this process until you reach a point where you
can calculate all the probabilities.
48
Chapter 2. Probability and Statistics:
Example 2.9
P(A) = P(A124 ∩ A35678) = P(A124)P(A35678)
P(A124) = P(A1 ∪ A24) = P(A1) + P(A24) − P(A1)P(A24)
P(A35678) = P(A38 ∪ A567) = P(A38) + P(A567) − P(A38)P(A567)
P(A24) = P(A2)P(A4), P(A38) = P(A3)P(A8), P(A567) = P(A5) + P(A67) − P(A5)P(A67)
P(A67) = P(A6)P(A7)
P(A) = [P(A1) + P(A24) − P(A1)P(A24)] [P(A38) + P(A567) − P(A38)P(A567)]
= [P(A1) + P(A2)P(A4) − P(A1)P(A2)P(A4)] ×
[P(A3)P(A8) + {P(A5) + P(A6)P(A7) − P(A5)P(A6)P(A7)} −
P(A3)P(A8){P(A5) + P(A6)P(A7) − P(A5)P(A6)P(A7)}]
49
Chapter 2. Probability and Statistics:
Option 1
Now let’s consider Option 1 for photovoltaic arrays
Let A123 denote the event of the top branch being a success
and A456 the event of the bottom branch being a success.
P(A) = P(A123 ∪ A456) = P(A123) + P(A456) − P(A123)P(A456)
P(A123) = P(A1) P(A2) P(A3)
P(A456) = P(A4) P(A5) P(A6)
P(A) = P(A1)P(A2)P(A3) + P(A4)P(A5)P(A6) −
P(A1)P(A2)P(A3)P(A4)P(A5)P(A6)
Create an Excel program
50
Chapter 2. Probability and Statistics:
option 2
P(A) = P(A14) P(A25) P(A36)
P(A14) = P(A1 ∪ A4) = P(A1) + P(A4) − P(A1)P(A4)
P(A25) = P(A2 ∪ A5) = P(A2) + P(A5) − P(A2)P(A5)
P(A36) = P(A3 ∪ A6) = P(A3) + P(A6) − P(A3)P(A6)
P(A) = [P(A1) + P(A4) − P(A1)P(A4)][P(A2) + P(A5) − P(A2)P(A5)]
[P(A3) + P(A6) − P(A3)P(A6)]
51
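With P(Ai) = 0.9 for every cell, the two array configurations can be compared numerically; a Python sketch of the formulas derived on the two slides above:

```python
p = 0.9  # P(Ai): probability that an individual cell's lifetime exceeds t0

# Option 1: two series branches of three cells, connected in parallel
branch = p ** 3
option1 = 1 - (1 - branch) ** 2

# Option 2: three cross-connected parallel pairs, connected in series
pair = 1 - (1 - p) ** 2
option2 = pair ** 3

print(round(option1, 6), round(option2, 6))  # 0.926559 0.970299
```

Option 2 comes out ahead: cross-connecting the redundant cells gives the higher system reliability.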
Chapter 2. Probability and Statistics:
The Law of Total Probability
Events A1, . . . , Ak are mutually exclusive if no two have any
common outcomes. The events are exhaustive if one Ai must
occur, so that A1 ∪ … ∪ Ak = S.
Let A1, . . . , Ak be mutually exclusive and exhaustive events.
Then for any event B,
P(B) = P(B|A1)P(A1) + ⋯ + P(B|Ak)P(Ak) = Σ_{i=1}^{k} P(B|Ai)P(Ai)
Example 2.10: For any event B, B and B′ are mutually
exclusive and exhaustive; therefore, P(A) = P(A|B)P(B) +
P(A|B′)P(B′), or
P(A) = P(A ∩ B) + P(A ∩ B′)
52
Chapter 2. Probability and Statistics:
Bayes’ Theorem
Let A1, A2, . . . , Ak be a collection of k mutually exclusive
and exhaustive events.
For any other event B for which P(B) > 0, the conditional
probability of Ai (i = 1, …, k) given that B has occurred is
P(Ai|B) = P(Ai ∩ B)/P(B) = P(B|Ai)P(Ai)/P(B)
= P(B|Ai)P(Ai) / Σ_{j=1}^{k} P(B|Aj)P(Aj)
53
Chapter 2. Probability and Statistics:
Example 2.11
A company has 3 different plants making the same product.
• 65% of the products are from plant #1
• 20% from plant #2
• 15% from plant #3.
The defect rates in plants 1, 2, and 3 are 1%, 2%, and 3%
respectively.
Q1: What is the probability that a product purchased by a
customer is defective?
Q2: If a consumer purchases a defective product, what is the
probability that it was made in plant 2?
54
Chapter 2. Probability and Statistics:
Example 2.11
Ai = {product purchased by a customer is from plant i} for i =
1, 2, 3,
B = {product purchased by a customer is defective}
Then the given percentages imply that
P(A1) = .65, P(A2) = .20, P(A3) = .15
P(B|A1)=.01, P(B|A2)=.02, P(B|A3)=.03.
Q1: What is the probability that a product purchased by a
customer is defective?
Substituting into the equation for the law of total probability:
P(B) = (.01)(.65) + (.02)(.20) + (.03)(.15) = .015
1.5% of the products will be defective.
55
Chapter 2. Probability and Statistics
Example 2.11
Q2: If a consumer purchases a defective product, what is
the probability that it was made in plant 2?
Using Bayes' Theorem,
P(A2|B) = P(B|A2)P(A2)/P(B) = (0.02 × 0.2)/0.015 = 0.267
56
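Both answers of Example 2.11 follow from a few lines of arithmetic; a Python sketch (illustrative only) mirroring the law of total probability and Bayes' theorem:

```python
priors = {1: 0.65, 2: 0.20, 3: 0.15}       # P(Ai): share of each plant
defect_rate = {1: 0.01, 2: 0.02, 3: 0.03}  # P(B|Ai): defect rate per plant

# Q1, law of total probability: P(B) = sum of P(B|Ai) P(Ai)
p_b = sum(defect_rate[i] * priors[i] for i in priors)
print(round(p_b, 4))  # 0.015

# Q2, Bayes' theorem: P(A2|B) = P(B|A2) P(A2) / P(B)
p_a2_given_b = defect_rate[2] * priors[2] / p_b
print(round(p_a2_given_b, 3))  # 0.267
```

Note how conditioning on "the product is defective" raises plant 2's share from 20% to about 26.7%.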
Chapter 2. Probability and Statistics:
Discrete Random Variable
The probability distribution or probability mass function
(pmf) of a discrete rv:
p(x) := P(X = x) = P(all s ∈ S: X(s) = x).
• The pmf specifies the probability of observing value x when
the experiment is performed.
• p(x) ≥ 0, and Σ over all possible x of p(x) = 1.
The cumulative distribution function (cdf) F(x):
F(x) := P(X ≤ x) = Σ_{y: y ≤ x} p(y)
• For any two numbers a and b (a ≤ b),
P(a ≤ X ≤ b) = F(b) − F(a−)
57
Chapter 2. Probability and Statistics:
Continuous Random Variables
Probability distribution or probability density function (pdf)
of X
P(a ≤ X ≤ b) = ∫_a^b f(x) dx
• f(x) ≥ 0 for all x
• Area under the entire graph of f(x) = 1.
The cumulative distribution function (cdf) F(x):
F(x) = P(X ≤ x) = ∫_{−∞}^{x} f(y) dy
• F(x) is the area under the density curve to the left of x.
• If X is a continuous rv with pdf f(x) and cdf F(x), then at every
x at which the derivative F′(x) exists, F′(x) = f(x). 58
Chapter 2. Probability and Statistics:
F(x)
• P(X > x) = 1 − P(X ≤ x) = 1 − F(x)
• P(a ≤ X ≤ b) = F(b) − F(a)
59
Chapter 2. Probability and Statistics:
Percentiles of a Continuous Distribution
Let p be a number between 0 and 1. The (100p)th
percentile of the distribution of a continuous rv X, denoted
by η(p), is defined by
p = F(η(p)) = ∫_{−∞}^{η(p)} f(y) dy
η(p) is that value on the measurement axis such that
100p% of the area under the graph of f(x) lies to the left of
η(p) and 100(1 − p)% lies to the right.
60
Chapter 2. Probability and Statistics
Expected Values
Discrete rv:
• E(X), μ_X or μ:
E(X) = μ_X = Σ_{x∈D} x·p(x)
• Expected value of a function h(X), μ_{h(X)}:
E[h(X)] = Σ_{x∈D} h(x)·p(x)
• E(aX + b) = aE(X) + b
Continuous rv:
• E(X), μ_X or μ:
μ_X = E(X) = ∫_{−∞}^{∞} x f(x) dx
• Expected value of a function h(X), μ_{h(X)}:
E[h(X)] = ∫_{−∞}^{∞} h(x) f(x) dx
• E(aX + b) = aE(X) + b
61
Chapter 2. Probability and Statistics:
Variance and Standard Deviation
• Variance of a random variable X (discrete or continuous):
σ² = V(X) = E[(X − μ)²]
• Standard deviation of X: σ = √V(X)
V(aX + b) = a²V(X)
σ_{aX+b} = |a|σ_X
62
Chapter 2. Probability and Statistics:
Example
Consider a university having 15,000 students and let X = #
of courses for which a randomly selected student is
registered. The pmf of X follows:
x: 1, 2, 3, 4, 5, 6, 7
p(x): .01, .03, .13, .25, .39, .17, .02
μ = 1·p(1) + 2·p(2) + … + 7·p(7)
= (1)(.01) + (2)(.03) + … + (7)(.02)
= .01 + .06 + .39 + 1.00 + 1.95 + 1.02 + .14
= 4.57
63
Chapter 2. Probability and Statistics:
Example
A computer store has purchased three computers of a certain
type at $500 apiece. It will sell them for $1000 apiece.
The manufacturer has agreed to repurchase any computers still
unsold after a specified period at $200 apiece.
Let X denote the number of computers sold, and suppose that
p(0) = .1, p(1) = .2, p(2) = .3 and p(3) = .4.
With h (X) denoting the profit associated with selling X units, the
given information implies that
h (X) = revenue – cost = 1000X + 200(3 – X) – 1500
= 800X – 900
The expected profit is then
E [h (X)] = h(0) p(0) + h(1) p(1) + h(2) p(2) + h(3) p(3)
= (–900)(.1) + (– 100)(.2) + (700)(.3) + (1500)(.4) = $700
64
Chapter 2. Probability and Statistics:
Example (continued)
What if the computers are purchased at $450
apiece with a $150 buy-back?
h(X) = 1000X + 150(3 − X) − 450×3 = 850X − 900
h(0) = −900, h(1) = −50, h(2) = 800, h(3) = 1650
E[h(X)] = (−900)(.1) + (−50)(.2) + (800)(.3) + (1650)(.4) = $800
65
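Both expected-profit calculations can be sketched in Python (illustrative; the helper `expected` is ours):

```python
pmf = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}  # p(x): probability of selling x units

def expected(h, pmf):
    """E[h(X)] = sum of h(x) p(x) over all x."""
    return sum(h(x) * p for x, p in pmf.items())

def h1(x):  # original terms: buy at $500, sell at $1000, buy-back at $200
    return 1000 * x + 200 * (3 - x) - 1500

def h2(x):  # revised terms: buy at $450, sell at $1000, buy-back at $150
    return 1000 * x + 150 * (3 - x) - 450 * 3

print(round(expected(h1, pmf), 2), round(expected(h2, pmf), 2))  # 700.0 800.0
```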
Chapter 2. Probability and Statistics:
The Normal Distribution
The normal distribution is the most important one in all of
probability and statistics.
Many numerical populations have distributions that can be fit
very closely by an appropriate normal curve.
• heights, weights, and other physical characteristics
• measurement errors in scientific experiments,
• reaction times in psychological experiments,
• measurements of intelligence and aptitude,
• scores on various tests,
• numerous economic measures and indicators.
66
Chapter 2. Probability and Statistics:
Normal Distribution density function
• A continuous rv X is said to have a normal distribution
(X ~ N(μ, σ²)) if the pdf of X is
f(x; μ, σ) = (1/(√(2π)σ)) e^{−(x−μ)²/(2σ²)},  −∞ < x < ∞
• It can be shown that E(X) = μ and V(X) = σ². 67
Chapter 2. Probability and Statistics:
Standard normal rv
• It is impossible to obtain a closed-form expression for
P(a ≤ X ≤ b) = ∫_a^b (1/(√(2π)σ)) e^{−(x−μ)²/(2σ²)} dx
• For μ = 0 and σ = 1, which define the standard normal
rv (Z), the above integral has been calculated using
numerical techniques and tabulated for certain values of a
and b.
• The standardized variable Z = (X − μ)/σ is a normal rv
with μ = 0 and σ = 1.
68
Chapter 2. Probability and Statistics:
From Normal to Standard Normal
P(a ≤ X ≤ b) = P((a − μ)/σ ≤ Z ≤ (b − μ)/σ)
69
Chapter 2. Probability and Statistics:
standard normal curve
70
Chapter 2. Probability and Statistics:
Z table
Z 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09
0.0 0.5000 0.5040 0.5080 0.5120 0.5160 0.5199 0.5239 0.5279 0.5319 0.5359
0.1 0.5398 0.5438 0.5478 0.5517 0.5557 0.5596 0.5636 0.5675 0.5714 0.5753
0.2 0.5793 0.5832 0.5871 0.5910 0.5948 0.5987 0.6026 0.6064 0.6103 0.6141
0.3 0.6179 0.6217 0.6255 0.6293 0.6331 0.6368 0.6406 0.6443 0.6480 0.6517
0.4 0.6554 0.6591 0.6628 0.6664 0.6700 0.6736 0.6772 0.6808 0.6844 0.6879
0.5 0.6915 0.6950 0.6985 0.7019 0.7054 0.7088 0.7123 0.7157 0.7190 0.7224
0.6 0.7257 0.7291 0.7324 0.7357 0.7389 0.7422 0.7454 0.7486 0.7517 0.7549
0.7 0.7580 0.7611 0.7642 0.7673 0.7704 0.7734 0.7764 0.7794 0.7823 0.7852
0.8 0.7881 0.7910 0.7939 0.7967 0.7995 0.8023 0.8051 0.8078 0.8106 0.8133
0.9 0.8159 0.8186 0.8212 0.8238 0.8264 0.8289 0.8315 0.8340 0.8365 0.8389
1.0 0.8413 0.8438 0.8461 0.8485 0.8508 0.8531 0.8554 0.8577 0.8599 0.8621
1.1 0.8643 0.8665 0.8686 0.8708 0.8729 0.8749 0.8770 0.8790 0.8810 0.8830
1.2 0.8849 0.8869 0.8888 0.8907 0.8925 0.8944 0.8962 0.8980 0.8997 0.9015
1.3 0.9032 0.9049 0.9066 0.9082 0.9099 0.9115 0.9131 0.9147 0.9162 0.9177
1.4 0.9192 0.9207 0.9222 0.9236 0.9251 0.9265 0.9279 0.9292 0.9306 0.9319
1.5 0.9332 0.9345 0.9357 0.9370 0.9382 0.9394 0.9406 0.9418 0.9429 0.9441
1.6 0.9452 0.9463 0.9474 0.9484 0.9495 0.9505 0.9515 0.9525 0.9535 0.9545
1.7 0.9554 0.9564 0.9573 0.9582 0.9591 0.9599 0.9608 0.9616 0.9625 0.9633
1.8 0.9641 0.9649 0.9656 0.9664 0.9671 0.9678 0.9686 0.9693 0.9699 0.9706
1.9 0.9713 0.9719 0.9726 0.9732 0.9738 0.9744 0.9750 0.9756 0.9761 0.9767
2.0 0.9772 0.9778 0.9783 0.9788 0.9793 0.9798 0.9803 0.9808 0.9812 0.9817
2.1 0.9821 0.9826 0.9830 0.9834 0.9838 0.9842 0.9846 0.9850 0.9854 0.9857
2.2 0.9861 0.9864 0.9868 0.9871 0.9875 0.9878 0.9881 0.9884 0.9887 0.9890
2.3 0.9893 0.9896 0.9898 0.9901 0.9904 0.9906 0.9909 0.9911 0.9913 0.9916
2.4 0.9918 0.9920 0.9922 0.9925 0.9927 0.9929 0.9931 0.9932 0.9934 0.9936
2.5 0.9938 0.9940 0.9941 0.9943 0.9945 0.9946 0.9948 0.9949 0.9951 0.9952
2.6 0.9953 0.9955 0.9956 0.9957 0.9959 0.9960 0.9961 0.9962 0.9963 0.9964
2.7 0.9965 0.9966 0.9967 0.9968 0.9969 0.9970 0.9971 0.9972 0.9973 0.9974
2.8 0.9974 0.9975 0.9976 0.9977 0.9977 0.9978 0.9979 0.9979 0.9980 0.9981
2.9 0.9981 0.9982 0.9982 0.9983 0.9984 0.9984 0.9985 0.9985 0.9986 0.9986
3.0 0.9987 0.9987 0.9987 0.9988 0.9988 0.9989 0.9989 0.9989 0.9990 0.9990
Examples 2.12: Find P(Z < 0.84), P(Z > 0.84)
Example 2.14: Find P(0.15 < Z < 0.63)
71
Chapter 2. Probability and Statistics:
negative values
Why do some Z tables not include negative values?
Φ (−z) = 1 − Φ (z)
72
Chapter 2. Probability and Statistics:
Examples 2.13, 2.15
Example 2.13 Find P (Z< −0.84)
P (Z < −0.84) = Φ (−0.84) = 1 − Φ (0.84) = 1 − 0.7995 = 0.2005.
Example 2.15 X ~ N(2, 4²): find the probability that X < −8.0.
• Normal distribution with μ = 2 and σ = 4.
• Converting to the standard normal rv Z: Z = (X − 2)/4
• P(X < −8.0) = P(Z < −2.5) = Φ(−2.5) = 1 − Φ(2.5) = 1 − 0.9938
= 0.0062.
73
Chapter 2. Probability and Statistics
Percentiles of the Standard Normal Distribution
• A different way of using the Z table:
Example 2.16 Let X be a standard normal random variable.
Given P( X < a) = 0.7088, find a.
• For any p between 0 and 1, the Z table can be used to obtain
the (100p)th percentile of the standard normal distribution.
• The (100p)th percentile is identified by the row and column of
the Z table in which the entry p is found.
74
Chapter 2. Probability and Statistics:
Create Z table in Excel
What if you don’t have access to the table?
Example 2.17
Reproduce the results in
Excel: the Z table and
Examples 2.12−2.16.
• Find P (Z < 0.84) and P (Z ≥ 0.84), P (Z < −0.84), P(0.15 < Z <
0.63), P(0.15<Z<0.63 or Z>0.84), and a such that P(Z<a) = 0.51.
• X ~ N(2, 16), find P(X < −8.0)
75
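Besides Excel, the standard normal cdf can be computed from the error function in Python's standard library, since Φ(z) = (1 + erf(z/√2))/2; a sketch reproducing several of the example values:

```python
import math

def phi(z):
    """Standard normal cdf, computed from the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(round(phi(0.84), 4))              # 0.7995 -> P(Z < 0.84)
print(round(1 - phi(0.84), 4))          # 0.2005 -> P(Z > 0.84)
print(round(phi(-0.84), 4))             # 0.2005 -> P(Z < -0.84)
print(round(phi(0.63) - phi(0.15), 4))  # 0.176 (table lookup gives 0.1761)
print(round(phi((-8 - 2) / 4), 4))      # 0.0062 -> P(X < -8) for X ~ N(2, 16)
```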
Chapter 2. Probability and Statistics:
Student’s t Distribution
f(x) = [Γ((v+1)/2) / (√(πv) Γ(v/2))] (1 + x²/v)^{−(v+1)/2},  −∞ < x < ∞
where v is the degrees of freedom and Γ is the gamma function, defined
as
Γ(a) = ∫_0^∞ t^{a−1} e^{−t} dt
• Assuming X is a random variable and n samples of its values are
taken, the probability that t = (X̄ − μ)√n / s is
between a and b is equal to the area under the
Student’s t density function between a and b.
76
Chapter 2. Probability and Statistics:
Example 2.18
Ten measurements of a random variable are as follows:
x1 = 10.0, x2 = 11.1, x3 = 9.9, x4 = 10.1, x5 = 11.2, x6 = 9.7, x7 =
11.5, x8 = 9.8, x9 = 10.1, x10 = 11.2.
Find the probability that the random variable has a population
mean greater than 10.0.
• The average of the samples is 10.46, and the sample
standard deviation is 0.6979.
• P(μ > 10) = P(−μ < −10) = P(X̄ − μ < X̄ − 10)
= P((X̄ − μ)√n/s < (X̄ − 10)√n/s) = P(t < 2.084)
• =T.DIST(2.084, 9, TRUE) → 96.66%
77
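The sample statistics quoted in Example 2.18 can be checked with Python's `statistics` module (illustrative alternative to the slide's Excel calculation):

```python
import math
import statistics

data = [10.0, 11.1, 9.9, 10.1, 11.2, 9.7, 11.5, 9.8, 10.1, 11.2]

xbar = statistics.mean(data)  # sample mean
s = statistics.stdev(data)    # sample standard deviation (n - 1 divisor)
t = (xbar - 10.0) * math.sqrt(len(data)) / s

print(round(xbar, 2), round(s, 4), round(t, 3))  # 10.46 0.6979 2.084
```

The t value of 2.084 with 9 degrees of freedom is what feeds the cumulative t-distribution lookup above.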
Chapter 2. Probability and Statistics:
F-distribution
f(x) = { [Γ((v1+v2)/2) (v1/v2)^{v1/2} x^{v1/2−1}] / [Γ(v1/2) Γ(v2/2) (1 + v1x/v2)^{(v1+v2)/2}],  if x > 0
       { 0,  if x ≤ 0
where v1 and v2 are the two degrees of freedom
78
Chapter 2. Probability and Statistics:
Chi-square (or χ2) distribution
f(x) = { x^{v/2−1} e^{−x/2} / (2^{v/2} Γ(v/2)),  if x > 0
       { 0,  if x ≤ 0
where v is the degree of freedom
Example 2.19 Let X be a χ2 random variable with 8 degrees of
freedom. Use Excel to find the x value such that P(X < x) = 95%.
=CHISQ.INV(0.95, 8)
79
Chapter 2. Probability and Statistics:
Binomial Distribution
p(x) = { [n!/(x!(n−x)!)] q^x (1 − q)^{n−x},  x = 0, 1, 2, …, n
       { 0,  otherwise
where q is a number between 0 and 1.
The binomial distribution can be used to model the probability
of exactly x successes in n trials. Each trial has two possible
outcomes, success or failure, with q as the probability of one
trial being a success.
80
Chapter 2. Probability and Statistics:
Example 2.20
The defect rate of a certain product is 2 percent. If 20 random
samples of the product are tested, what is the probability of
having less than or equal to 3 defective products?
• Let X be the random variable of the number of non-defective
products among the 20 samples. Then X is a binomial
random variable with n = 20. With a defect rate of 2 percent,
q = 0.98. We need to calculate P(X ≥ 17).
• P(X ≥ 17) = p(17) + p(18) + p(19) + p(20)
• = 1 − BINOM.DIST(16, 20, 0.98, 1) = 0.9994
81
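The same probability can be computed exactly by summing the binomial pmf; a Python sketch (illustrative alternative to the slide's Excel formula):

```python
from math import comb

n, q = 20, 0.98   # 20 samples; P(a sample is non-defective) = 0.98

def binom_pmf(x, n, q):
    """P(X = x) for X ~ Binomial(n, q)."""
    return comb(n, x) * q**x * (1 - q)**(n - x)

# "At most 3 defective" is the same event as "at least 17 non-defective"
p = sum(binom_pmf(x, n, q) for x in range(17, n + 1))
print(round(p, 4))  # 0.9994
```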
Chapter 2. Probability and Statistics:
Lognormal Distribution
f(x; μ, σ) = { (1/(√(2π)σx)) e^{−[ln(x)−μ]²/(2σ²)},  x > 0
             { 0,  x ≤ 0
• ln(X) is normally distributed
• μ = E(ln(X)), σ: the standard deviation of ln(X).
82
Chapter 2. Probability and Statistics:
Lognormal to Normal
83
Chapter 2. Probability and Statistics:
Lognormal Example
X: a lognormal distribution with μ = .353 and σ = .754.
Find P(1 ≤ X ≤ 2).
P(1 ≤ X ≤ 2) = P(ln(1) ≤ ln(X) ≤ ln(2)) = P(0 ≤ ln(X) ≤ .693)
= P((0 − 0.353)/0.754 ≤ Z ≤ (0.693 − 0.353)/0.754)
= Φ(.45) − Φ(−.47) = 0.354
84
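The lognormal example reduces to a standard normal calculation after taking logs; a Python sketch using the erf-based cdf (illustrative only):

```python
import math

def phi(z):
    """Standard normal cdf via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

mu, sigma = 0.353, 0.754   # mean and std dev of ln(X)

# P(1 <= X <= 2) = P((ln 1 - mu)/sigma <= Z <= (ln 2 - mu)/sigma)
lo = (math.log(1) - mu) / sigma
hi = (math.log(2) - mu) / sigma
p = phi(hi) - phi(lo)
print(round(p, 3))  # 0.354
```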
Chapter 2. Probability and Statistics:
CLT
Theorem 2.3 (Central Limit Theorem) If a random variable X has a
mean μ and a finite variance σ², then the average of the n samples,
X̄ = (1/n) Σ_{i=1}^{n} x_i, has a distribution that can be approximated by a normal
distribution with mean μ and variance σ²/n if n is sufficiently large.
Example 2.21 A sensor output has a mean value of 40 mV and a
standard deviation of 5 mV. If 35 values are measured and the
average is calculated for these measurements, what is the probability
that the average value is between 39 and 41 mV?
According to the CLT, the average value approximately follows a normal
distribution with a mean of 40 mV and a standard deviation of 5/√35 =
0.8452.
• P(39 < X̄ < 41) = 0.7633
• = NORM.DIST(41, 40, 0.8452, 1) − NORM.DIST(39, 40, 0.8452, 1)
85
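Example 2.21 can be reproduced without Excel; a Python sketch using the erf-based standard normal cdf (illustrative only):

```python
import math

def phi(z):
    """Standard normal cdf via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

mu, sigma, n = 40.0, 5.0, 35   # sensor mean (mV), std dev (mV), sample count
se = sigma / math.sqrt(n)      # standard error of the mean, about 0.8452

p = phi((41 - mu) / se) - phi((39 - mu) / se)
print(round(p, 4))  # 0.7633
```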
Chapter 2. Probability and Statistics:
Statistical Data Analysis
The measure of center position of the data:
• Mean or average: x̄ = (1/n) Σ_{i=1}^{n} x_i
• Mode: the value(s) that appears most frequently.
• Median: The median of a data set is determined by first
sorting the data in ascending order followed by repeatedly
eliminating the smallest and largest numbers until there are
one or two data points remaining. The median is either the
remaining data point or the average of the two remaining
data points.
• Trimmed mean: a certain percentage of the sorted data is
eliminated from each end, and the average is then
calculated for the remaining data.
86
Chapter 2. Probability and Statistics:
Range, standard deviation and others
The measure of variability
• Range: the difference between the largest value and the
smallest value of the data set.
• Standard deviation: s = √[ Σ_{i=1}^{n} (x_i − x̄)² / (n − 1) ]
Other measurements of data
• Quartiles: The median divides the data set into two
subsets: the smaller half and the larger half. The median of
each subset can be found; these are known as the lower and
upper fourths. The lower fourth (Q1), the median (Q2), and
the upper fourth (Q3), also known as the first, second, and
third quartiles, divide the data into four quarters.
87
Chapter 2. Probability and Statistics:
Graphical Methods: Stem-and-Leaf Plots
Example 2.22 Construct the stem-and-leaf plot for the
following set of data.
15.2 9.2 12.1 8.8 9.5 18.9 19.2 30.9 45.8 25.6
24.5 15.2 10.0 49.9 41.7 23.4 24.8 31.2 26.8 27.3
88
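One way to build the plot of Example 2.22 is to group each value by its tens digit; a Python sketch (illustrative; the leaf here is the remaining ones-and-tenths part of each value):

```python
from collections import defaultdict

data = [15.2, 9.2, 12.1, 8.8, 9.5, 18.9, 19.2, 30.9, 45.8, 25.6,
        24.5, 15.2, 10.0, 49.9, 41.7, 23.4, 24.8, 31.2, 26.8, 27.3]

# Stem = tens digit; leaf = the remaining ones-and-tenths part
stems = defaultdict(list)
for x in sorted(data):
    stem = int(x) // 10
    stems[stem].append(x - 10 * stem)

for stem in sorted(stems):
    leaves = " ".join(f"{leaf:.1f}" for leaf in stems[stem])
    print(f"{stem} | {leaves}")
```

The row lengths (3, 6, 6, 2, 3) give a quick picture of where the data cluster.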
Chapter 2. Probability and Statistics:
Graphical Methods: Histogram
• Rule of thumb: the number of bins should be close to the square root of
the total number of data points in the data set.
• Usually, the number of bins should be between 5 and 20.
• If software is used, one can try different numbers of bins until a
satisfactory result is achieved.
89
Chapter 2. Probability and Statistics:
Box-and-Whisker Plot
Example 2.23 Display the following data set with a box-and-
whisker plot:
10 20 23 34 45 46 46 46 48 51 52 52 56 68 70 90 100 110 120 130
• xmin = 10, Q1 = 45.5, Q2 = 51.5, Q3 = 80, xmax = 130.
90
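The five-number summary of Example 2.23 follows from the lower-half/upper-half quartile method described above; a Python sketch (illustrative; the helper `median` is ours):

```python
def median(sorted_vals):
    """Median of an already-sorted list."""
    n = len(sorted_vals)
    mid = n // 2
    if n % 2:
        return sorted_vals[mid]
    return (sorted_vals[mid - 1] + sorted_vals[mid]) / 2

data = sorted([10, 20, 23, 34, 45, 46, 46, 46, 48, 51,
               52, 52, 56, 68, 70, 90, 100, 110, 120, 130])
n = len(data)

q2 = median(data)              # median of the whole set
q1 = median(data[:n // 2])     # lower fourth
q3 = median(data[-(n // 2):])  # upper fourth
fs = q3 - q1                   # fourth spread
mild = [x for x in data if x > q3 + 1.5 * fs or x < q1 - 1.5 * fs]
print(q1, q2, q3, fs, mild)    # 45.5 51.5 80.0 34.5 []
```

Since 130 is below Q3 + 1.5fs = 131.75, this data set has no mild outliers under the rule on the next slide.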
Chapter 2. Probability and Statistics:
Box-and-Whisker Plot with outliers
Mild Outliers:
xi > Q3 + 1.5 fs or xi < Q1 − 1.5 fs
Extreme Outliers:
xi > Q3 + 3 fs or xi < Q1 − 3 fs
fs = Q3 – Q1 (fourth spread)
91