S2 – Markov Models
Introduction
• A Markov Model is a stochastic model for systems that change state
randomly over time.
• The future state depends only on the current state (Markov property).
• Useful for modeling systems in many fields such as weather
forecasting, speech recognition, etc.
Key Terminology
• State: A possible condition or status of the system.
• Transition: Movement from one state to another.
• Transition Probability: Likelihood of moving between states.
• State Space: Set of all possible states.
• Markov Property: Future state depends only on the present, not on
past states.
Types of Markov Models
• Discrete-Time Markov Chain (DTMC): System evolves in discrete time
steps, represented by a transition matrix.
• Continuous-Time Markov Chain (CTMC): Changes occur continuously
over time, governed by a rate matrix.
• Hidden Markov Model (HMM): System with hidden states observed
indirectly via outputs.
Discrete-Time Markov Chain (DTMC) Example
• Transition matrix example:
• P = \begin{bmatrix} 0.6 & 0.4 \\ 0.3 & 0.7 \end{bmatrix}
• Each entry P_{ij} gives the probability of transitioning from state
i to state j.
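The DTMC above can be simulated directly: sample each next state from the row of the transition matrix for the current state, using only that state (the Markov property). A minimal sketch; the function names `step` and `simulate` are illustrative, not from the slides.

```python
import random

# Transition matrix from the example: P[i][j] = probability of
# moving from state i to state j (states are 0 and 1).
P = [[0.6, 0.4],
     [0.3, 0.7]]

def step(state, rng):
    """Sample the next state; it depends only on the current state."""
    return 0 if rng.random() < P[state][0] else 1

def simulate(start, n_steps, seed=0):
    """Generate a sample path of the chain of length n_steps + 1."""
    rng = random.Random(seed)
    path = [start]
    for _ in range(n_steps):
        path.append(step(path[-1], rng))
    return path

print(simulate(start=0, n_steps=10))
```

Note that each row of P must sum to 1, since from any state the chain must go somewhere.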
Properties of Markov Chains
• Irreducibility: Any state can be reached from any other state.
• Aperiodicity: Returns to a state are not locked to a fixed cycle
length (each state has period 1).
• Stationary Distribution: A probability distribution \pi over states
that is unchanged by a transition step (\pi = \pi P).
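For an irreducible, aperiodic chain, the stationary distribution can be found by power iteration: repeatedly apply \pi \leftarrow \pi P until it stops changing. A small sketch using the DTMC matrix from the earlier example; the function name `stationary` is illustrative.

```python
def stationary(P, tol=1e-12, max_iter=10_000):
    """Power iteration: iterate pi <- pi P until convergence."""
    n = len(P)
    pi = [1.0 / n] * n          # start from the uniform distribution
    for _ in range(max_iter):
        new = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(pi, new)) < tol:
            return new
        pi = new
    return pi

P = [[0.6, 0.4],
     [0.3, 0.7]]
print(stationary(P))  # converges to [3/7, 4/7] ≈ [0.4286, 0.5714]
```

You can check the result by hand: with \pi = (3/7, 4/7), the first component of \pi P is 0.6 \cdot 3/7 + 0.3 \cdot 4/7 = 3/7, so the distribution is indeed unchanged.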
Example: Weather Model
• States: Sunny (S), Rainy (R)
• Transition Matrix:
• P = \begin{bmatrix} 0.8 & 0.2 \\ 0.4 & 0.6 \end{bmatrix}
• Question: Probability it will be Rainy in two days if today is Sunny?
• Solution: Compute P^2 = P \times P; its Sunny-to-Rainy entry is
0.8 \times 0.2 + 0.2 \times 0.6 = 0.28, so the answer is 28%.
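The two-day calculation can be checked with a few lines of code: square the transition matrix and read off the Sunny-to-Rainy entry. A minimal sketch; the helper name `matmul` is illustrative.

```python
def matmul(A, B):
    """Multiply two square matrices (row i of A times column j of B)."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[0.8, 0.2],   # row 0: Sunny -> Sunny, Sunny -> Rainy
     [0.4, 0.6]]   # row 1: Rainy -> Sunny, Rainy -> Rainy

P2 = matmul(P, P)          # two-step transition probabilities
print(P2[0][1])            # Sunny today -> Rainy in two days: ≈ 0.28
```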
Applications
• Natural language processing (speech recognition)
• Weather forecasting
• Google's PageRank algorithm
• Stock market modeling
• Bioinformatics (gene prediction)
Common Algorithms
• Forward Algorithm: Calculate probability of observed sequences (for
HMMs).
• Viterbi Algorithm: Find the most likely sequence of hidden states.
• Baum-Welch Algorithm: Train HMMs using expectation-maximization.
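The Viterbi algorithm can be sketched on a toy HMM: hidden weather states emit observable activities, and dynamic programming tracks, for each state, the most likely path ending there. The states, observations, and probabilities below are hypothetical illustration values, not from the slides.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return (probability, path) of the most likely hidden-state sequence.

    V[t][s] holds the probability of the best path ending in state s
    after seeing obs[:t+1], together with that path.
    """
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for t in range(1, len(obs)):
        layer = {}
        for s in states:
            prob, path = max(
                (V[t - 1][prev][0] * trans_p[prev][s] * emit_p[s][obs[t]],
                 V[t - 1][prev][1] + [s])
                for prev in states)
            layer[s] = (prob, path)
        V.append(layer)
    return max(V[-1].values())

# Hypothetical toy HMM: hidden weather, observed activities.
states = ("Sunny", "Rainy")
start_p = {"Sunny": 0.6, "Rainy": 0.4}
trans_p = {"Sunny": {"Sunny": 0.8, "Rainy": 0.2},
           "Rainy": {"Sunny": 0.4, "Rainy": 0.6}}
emit_p = {"Sunny": {"walk": 0.7, "umbrella": 0.3},
          "Rainy": {"walk": 0.2, "umbrella": 0.8}}

prob, path = viterbi(("walk", "umbrella", "umbrella"),
                     states, start_p, trans_p, emit_p)
print(path)  # ['Sunny', 'Rainy', 'Rainy']
```

The Forward algorithm has the same recursion shape but replaces the `max` over previous states with a sum, yielding the total probability of the observations rather than the single best path.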