Introduction to Probability
1. Introduction
Probability quantifies uncertainty and is applied in engineering, science, finance, and many other fields.
History:
- Originated in the 17th century from gambling studies (Pascal and Fermat).
- Axioms by Kolmogorov (1933).
Scope:
- Random experiments, risk assessment, statistics, machine learning.
2. Definitions of Probability
Classical: P(E) = (Number of favorable outcomes) / (Total number of equally likely outcomes)
Example: P(Head) when tossing a coin = 1/2
Relative Frequency: P(E) approximately (Number of times E occurs) / (Total trials)
Example: 520 heads in 1000 tosses -> P approx 0.52
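A quick simulation illustrates the relative-frequency definition: as the number of trials grows, the observed frequency of heads approaches the classical value 1/2. This is a minimal sketch, assuming a fair coin modeled with Python's standard random module.

```python
import random

def relative_frequency(trials, seed=42):
    """Estimate P(Head) by tossing a simulated fair coin `trials` times."""
    random.seed(seed)
    heads = sum(random.random() < 0.5 for _ in range(trials))
    return heads / trials

for n in (10, 100, 1000, 100000):
    print(f"{n:>6} tosses -> P(Head) approx {relative_frequency(n):.3f}")
```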
3. Limitations
- Classical requires equally likely outcomes.
- Relative frequency depends on a large number of trials.
4. Basic Set Theory
Key notions: sets and fields (collections of subsets closed under complement and union), the sample space S (the set of all possible outcomes), and events (subsets of S).
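These set operations can be made concrete with Python's built-in set type; the sample space and events below (one die roll) are chosen only for illustration.

```python
# Sample space for one roll of a die, and two events as subsets of S.
S = {1, 2, 3, 4, 5, 6}
even = {2, 4, 6}          # event: the outcome is even
at_most_3 = {1, 2, 3}     # event: the outcome is at most 3

print(even | at_most_3)   # union: {1, 2, 3, 4, 6}
print(even & at_most_3)   # intersection: {2}
print(S - even)           # complement of 'even' within S: {1, 3, 5}
```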
5. Axiomatic Definition
P satisfies: (1) 0 <= P(E) <= 1, (2) P(S) = 1, (3) Additivity: P(E1 or E2) = P(E1) + P(E2) for disjoint (mutually exclusive) events.
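For a finite sample space the axioms are easy to check numerically. The helper below is a sketch (not part of the notes): it verifies non-negativity and total probability 1, and computes event probabilities by additivity.

```python
def satisfies_axioms(prob, tol=1e-9):
    """Check Kolmogorov's axioms for a dict mapping outcomes to probabilities."""
    nonneg = all(p >= 0 for p in prob.values())          # axiom 1
    total_one = abs(sum(prob.values()) - 1.0) < tol      # axiom 2
    return nonneg and total_one

def P(prob, event):
    """P(E) for an event given as a set of outcomes (additivity, axiom 3)."""
    return sum(prob[o] for o in event)

die = {o: 1/6 for o in range(1, 7)}
print(satisfies_axioms(die))                  # True
print(P(die, {2, 4, 6}) + P(die, {1, 3, 5}))  # disjoint events sum to P(S) = 1 (up to rounding)
```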
6. Combinatorics
Factorial: n! = n * (n-1) * ... * 2 * 1
Combination: nCr = n!/(r!(n-r)!)
Permutation: nPr = n!/(n-r)!
Example: 5C2 = 10
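Python's math module (3.8+) provides these counts directly; a minimal check of the formulas above:

```python
import math

print(math.factorial(5))   # 5! = 120
print(math.comb(5, 2))     # 5C2 = 10
print(math.perm(5, 2))     # 5P2 = 20
```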
7. Probability on Finite Sample Spaces
P(E) = |E|/|S|
Example: Rolling a die, P(even number) = 3/6 = 0.5
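Counting favorable outcomes over a finite sample space translates directly to code; this sketch enumerates the die outcomes and computes P(even number).

```python
S = range(1, 7)                      # sample space for one die roll
E = [x for x in S if x % 2 == 0]     # event: even number
print(len(E) / len(S))               # |E| / |S| = 3/6 = 0.5
```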
8. Joint and Conditional Probability
P(A and B), the joint probability, is the probability that A and B occur together.
P(A given B) = P(A and B)/P(B), provided P(B) > 0
Example: Tossing two coins, P(HH) = 1/4
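Joint and conditional probabilities can be checked by enumerating the sample space of two coin tosses; the particular events below are chosen for illustration.

```python
from itertools import product

S = list(product("HT", repeat=2))              # {HH, HT, TH, TT}
A = {s for s in S if s == ("H", "H")}          # both tosses are heads
B = {s for s in S if s[0] == "H"}              # first toss is a head

P = lambda E: len(E) / len(S)
print(P(A))              # P(HH) = 1/4
print(P(A & B) / P(B))   # P(HH given first is H) = (1/4) / (1/2) = 1/2
```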
9. Independence
Events A and B are independent if P(A and B) = P(A) * P(B).
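Independence can be verified the same way by enumeration; here the two events concern different tosses, so the product rule holds (an illustrative check, not a proof).

```python
from itertools import product

S = list(product("HT", repeat=2))
A = {s for s in S if s[0] == "H"}   # first toss is a head
B = {s for s in S if s[1] == "H"}   # second toss is a head

P = lambda E: len(E) / len(S)
print(P(A & B))        # 0.25
print(P(A) * P(B))     # 0.25, so A and B are independent
```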
10. Total Probability Theorem
P(A) = sum over i of P(A given Bi) * P(Bi), where B1, B2, ... partition the sample space
Example: Machine defect rate problem.
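The notes only name the machine defect problem; the figures below are assumed for illustration (two machines, each producing half the output, with defect rates of 3% and 5%). The sketch applies the total probability theorem to get the overall defect rate.

```python
# Assumed data (not from the notes): production shares and defect rates.
prior = {"A": 0.5, "B": 0.5}            # P(Bi): share of output per machine
defect_rate = {"A": 0.03, "B": 0.05}    # P(defect given Bi)

# Total probability: P(defect) = sum over machines of P(defect given Bi) * P(Bi)
p_defect = sum(defect_rate[m] * prior[m] for m in prior)
print(p_defect)   # approx 0.04
```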
11. Bayes' Theorem
P(B given A) = (P(A given B) * P(B)) / P(A)
Example: In the machine defect problem above, P(Machine A given defect) = 0.375
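Continuing with the assumed figures from the previous sketch (which are hypothetical but happen to yield the stated answer), Bayes' theorem gives the probability that a defective item came from machine A.

```python
prior = {"A": 0.5, "B": 0.5}            # assumed production shares
defect_rate = {"A": 0.03, "B": 0.05}    # assumed defect rates

p_defect = sum(defect_rate[m] * prior[m] for m in prior)

# Bayes' theorem: P(A given defect) = P(defect given A) * P(A) / P(defect)
p_A_given_defect = defect_rate["A"] * prior["A"] / p_defect
print(p_A_given_defect)   # approx 0.375
```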