
Introduction to Queueing Theory and

Stochastic Teletraffic Models

Moshe Zukerman

EE Department
City University of Hong Kong
email: moshezu@[Link]

Copyright © M. Zukerman 2000–2022.


This book can be used for educational and research purposes under the condition that it
(including this first page) is not modified in any way.

Preface
The aim of this textbook is to provide students with basic knowledge of stochastic models with
a special focus on queueing models that may apply to telecommunications topics, such as traffic
modelling, performance evaluation, resource provisioning and traffic management. These topics
are included in a field called teletraffic. This book assumes prior knowledge of a programming
language and mathematics normally taught in an electrical engineering bachelor program.
The book aims to enhance intuitive and physical understanding of the theoretical concepts it
introduces. The famous mathematician Pierre-Simon Laplace is quoted as saying that “Probability
is common sense reduced to calculation” [18]; as the content of this book falls under the field
of applied probability, Laplace’s quote very much applies. Accordingly, the book aims to link
intuition and common sense to the mathematical models and techniques it uses. It mainly
focuses on steady-state analyses and avoids discussions of time-dependent analyses.
A unique feature of this book is the considerable attention given to guided homework assignments involving computer simulations and analyses. By successfully completing these assign-
ments, students learn to simulate and analyze stochastic models, such as queueing systems
and networks, and by interpreting the results, they gain insight into the queueing performance
effects and principles of telecommunications systems modelling. Although the book, at times,
provides intuitive explanations, it still presents the important concepts and ideas required for
the understanding of teletraffic, queueing theory fundamentals and related queueing behavior
of telecommunications networks and systems. These concepts and ideas form a strong base
for the more mathematically inclined students who can follow up with the extensive literature
on probability models and queueing theory. A small sample of it is listed at the end of this
book.
Queueing Theory and Stochastic Teletraffic Models
© Moshe Zukerman

The first two chapters provide background on probability and stochastic processes topics rele-
vant to the queueing and teletraffic models of this book. These two chapters provide a summary
of the key topics with relevant homework assignments that are especially tailored for under-
standing the queueing and teletraffic models discussed in later chapters. The content of these
chapters is mainly based on [18, 38, 95, 102, 100, 101, 103]. Students are encouraged to also
study the original textbooks for more explanations, illustrations, discussions, examples and
homework assignments.
Chapter 3 discusses general queueing notation and concepts. Chapter 4 aims to assist the
student to perform simulations of queueing systems. Simulations are useful and important
in the many cases where exact analytical results are not available. An important learning
objective of this book is to train students to perform queueing simulations. Chapter 5 provides
analyses of deterministic queues. Many queueing theory books tend to exclude deterministic
queues; however, the study of such queues is useful for beginners in that it helps them better
understand non-deterministic queueing models. Chapters 6 – 14 provide analyses of a wide
range of queueing and teletraffic models most of which fall under the category of continuous-
time Markov-chain processes. Chapter 15 provides an example of a discrete-time queue that
is modelled as a discrete-time Markov chain. In Chapter 16, various aspects of a single server
queue with Poisson arrivals and general service times are studied, mainly focusing on mean value results as in [17]. Then, in Chapter 17, some selected results of a single server queue with a general arrival process and general service times are provided. Chapter 18 focuses on multi-access applications, and in Chapter 19, we extend our discussion to queueing networks.
Finally, in Chapter 20, stochastic processes that have been used as traffic models are discussed
with special focus on their characteristics that affect queueing performance.
Throughout the book there is an emphasis on linking the theory with telecommunications ap-
plications as demonstrated by the following examples. Section 1.19 describes how properties
of the Gaussian distribution can be applied to link dimensioning. Section 6.11 shows, in the context of an M/M/1 queueing model, how to optimally set a link service rate such that delay
requirements are met and how the level of multiplexing affects the spare capacity required to
meet such delay requirements. An application of the M/M/∞ queueing model to a multiple-access performance problem [17] is discussed in Section 7.5. Then, later in Chapter 18, more multi-access models are presented. In Sections 8.8 and 9.5, discussions on dimensioning and related
utilization issues of a multi-channel system are presented. Especially important is the emphasis on the insensitivity property of models such as M/M/∞, M/M/k/k, processor sharing and multi-service, which leads to practical and robust approximations as described in Chapters 7, 8,
13, and 14. Section 19.3 guides the reader to simulate a mobile cellular network. Section 20.6
describes a traffic model applicable to the Internet.
Last but not least, the author wishes to thank all the students and colleagues who provided comments and questions that helped develop and edit the manuscript over the years.

Contents
1 Background on Relevant Probability Topics 8
1.1 Events, Sample Space, and Random Variables . . . . . . . . . . . . . . . . . . . 8
1.2 Probability, Conditional Probability and Independence . . . . . . . . . . . . . . 9
1.3 Probability and Distribution Functions . . . . . . . . . . . . . . . . . . . . . . . 10
1.4 Joint Distribution Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.5 Conditional Probability for Random Variables . . . . . . . . . . . . . . . . . . . 12
1.6 Independence between Random Variables . . . . . . . . . . . . . . . . . . . . . . 13
1.7 Convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.8 Selected Discrete Random Variables . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.8.1 Non-parametric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.8.2 Bernoulli . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.8.3 Geometric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.8.4 Binomial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.8.5 Poisson . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.8.6 Pascal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.8.7 Discrete Uniform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.9 Continuous Random Variables and their Distributions . . . . . . . . . . . . . . . 25
1.10 Selected Continuous Random Variables . . . . . . . . . . . . . . . . . . . . . . . 29
1.10.1 Uniform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
1.10.2 Exponential . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
1.10.3 Relationship between Exponential and Geometric Random Variables . . 33
1.10.4 Hyper-Exponential . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
1.10.5 Erlang . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
1.10.6 Hypo-Exponential . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
1.10.7 Gaussian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
1.10.8 Pareto . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
1.11 Moments and Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
1.11.1 Mean (or Expectation) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
1.11.2 Moments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
1.11.3 Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
1.11.4 Conditional Mean and the Law of Iterated Expectation . . . . . . . . . . 43
1.11.5 Conditional Variance and the Law of Total Variance . . . . . . . . . . . . 44
1.12 Mean and Variance of Specific Random Variables . . . . . . . . . . . . . . . . . 51
1.13 Sample Mean and Sample Variance . . . . . . . . . . . . . . . . . . . . . . . . . 55
1.14 Covariance and Correlation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
1.15 Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
1.15.1 Z-transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
1.15.2 Laplace Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
1.16 Multivariate Random Variables and Transform . . . . . . . . . . . . . . . . . . . 66
1.17 Probability Inequalities and Their Dimensioning Applications . . . . . . . . . . 66
1.18 Limit Theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
1.19 Link Dimensioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
1.19.1 Case 1: Homogeneous Individual Sources . . . . . . . . . . . . . . . . . . 70
1.19.2 Case 2: Non-homogeneous Individual Sources . . . . . . . . . . . . . . . 71
1.19.3 Case 3: Capacity Dimensioning for a Community . . . . . . . . . . . . . 72

2 Relevant Background on Stochastic Processes 73


2.1 General Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
2.2 Two Orderly and Memoryless Point Processes . . . . . . . . . . . . . . . . . . . 76
2.2.1 Bernoulli Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
2.2.2 Poisson Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
2.3 Markov Modulated Poisson Process . . . . . . . . . . . . . . . . . . . . . . . . . 87
2.4 Discrete-time Markov chains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
2.4.1 Definitions and Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . 88
2.4.2 Transition Probability Matrix . . . . . . . . . . . . . . . . . . . . . . . . 89
2.4.3 Chapman-Kolmogorov Equation . . . . . . . . . . . . . . . . . . . . . . . 89
2.4.4 Marginal Probabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
2.4.5 Properties and Classification of States . . . . . . . . . . . . . . . . . . . 90
2.4.6 Steady-state Probabilities . . . . . . . . . . . . . . . . . . . . . . . . . . 92
2.4.7 Birth and Death Process . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
2.4.8 Reversibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
2.4.9 Multi-dimensional Markov Chains . . . . . . . . . . . . . . . . . . . . . . 96
2.5 Continuous Time Markov chains . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
2.5.1 Definitions and Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . 97
2.5.2 Birth and Death Process . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
2.5.3 First Passage Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
2.5.4 Transition Probability Function . . . . . . . . . . . . . . . . . . . . . . . 100
2.5.5 Steady-State Probabilities . . . . . . . . . . . . . . . . . . . . . . . . . . 100
2.5.6 Multi-Dimensional Continuous-time Markov Chains . . . . . . . . . . . . 102
2.5.7 Solutions by Successive Substitutions . . . . . . . . . . . . . . . . . . . . 102
2.5.8 The Curse of Dimensionality . . . . . . . . . . . . . . . . . . . . . . . . . 103
2.5.9 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
2.5.10 Reversibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
2.6 Renewal Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105

3 General Queueing and Teletraffic Concepts 109


3.1 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
3.2 Utilization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
3.3 Little’s Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
3.4 Delay and Loss Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
3.5 Traffic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
3.6 Offered and Carried Traffic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
3.7 Work Conservation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
3.8 PASTA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
3.9 Bit-rate Versus Service Rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
3.10 Queueing Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120

4 Simulations 122
4.1 Confidence Intervals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
4.2 Simulation of a G/G/1 Queue . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125

5 Deterministic Queues 128


5.1 D/D/1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128

5.2 D/D/k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129


5.3 D/D/k/k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
5.3.1 The D/D/k/k Process and Its Cycles . . . . . . . . . . . . . . . . . . . . 131
5.3.2 Blocking Probability, Mean Queue-size and Utilization . . . . . . . . . . 132
5.3.3 Proportion of Time Spent in Each State . . . . . . . . . . . . . . . . . . 132
5.4 Summary of Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134

6 M/M/1 135
6.1 Steady-State Queue Size Probabilities . . . . . . . . . . . . . . . . . . . . . . . . 135
6.2 State Transition Diagram of M/M/1 . . . . . . . . . . . . . . . . . . . . . . . . 137
6.3 Queue Performance Measures: E[Q], E[NQ ], E[D], and E[WQ ] . . . . . . . . . . 138
6.4 Using Z-Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
6.5 Mean Delay of Delayed Customers . . . . . . . . . . . . . . . . . . . . . . . . . . 139
6.6 Delay Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
6.7 The Departure Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
6.8 Mean Busy Period and First Passage Time . . . . . . . . . . . . . . . . . . . . . 144
6.9 Dimensioning Based on Meeting Required Mean Delay . . . . . . . . . . . . . . 145
6.10 Effect of Rising Internet Bit-rate on Link Efficiency and QoS . . . . . . . . . . . 146
6.11 Multiplexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
6.12 Dimensioning Based on Delay Distribution . . . . . . . . . . . . . . . . . . . . . 150
6.13 A Markov-chain Simulation of M/M/1 . . . . . . . . . . . . . . . . . . . . . . . 151

7 M/M/∞ 154
7.1 Steady-State Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
7.2 Solving the Steady-State Equations . . . . . . . . . . . . . . . . . . . . . . . . . 155
7.3 State Transition Diagram of M/M/∞ . . . . . . . . . . . . . . . . . . . . . . . . 156
7.4 Insensitivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
7.5 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
7.5.1 A Multi-access Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
7.5.2 Birth Rate Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158

8 M/M/k/k and Extensions 159


8.1 M/M/k/k: Offered, Carried and Overflow Traffic . . . . . . . . . . . . . . . . . 159
8.2 The Steady-State Equations and Their Solution . . . . . . . . . . . . . . . . . . 159
8.3 Recursion and Jagerman Formula . . . . . . . . . . . . . . . . . . . . . . . . . . 162
8.4 The Special Case: M/M/1/1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
8.5 Lower bound of Ek (A) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
8.6 Monotonicity of Ek (A) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
8.7 The Limit k → ∞ with A/k Fixed . . . . . . . . . . . . . . . . . . . . . . . . . 165
8.8 M/M/k/k: Dimensioning and Utilization . . . . . . . . . . . . . . . . . . . . . . 166
8.9 M/M/k/k under Critical Loading . . . . . . . . . . . . . . . . . . . . . . . . . . 167
8.10 Insensitivity and Many Classes of Customers . . . . . . . . . . . . . . . . . . . . 168
8.11 First Passage Time and Time Between Events in M/M/k/k . . . . . . . . . . . 173
8.12 A Markov-chain Simulation of M/M/k/k . . . . . . . . . . . . . . . . . . . . . . 175
8.13 Preemptive Priorities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
8.14 Overflow Traffic of M/M/k/k . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
8.15 Multi-server Loss Systems with Non-Poisson Input . . . . . . . . . . . . . . . . . 179

9 M/M/k 184
9.1 Steady-State Equations and Their Solution . . . . . . . . . . . . . . . . . . . . . 184
9.2 Erlang C Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
9.3 Mean Queue Size, Delay, Waiting Time and Delay Factor . . . . . . . . . . . . . 186
9.4 Mean Delay of Delayed Customers . . . . . . . . . . . . . . . . . . . . . . . . . . 188
9.5 Dimensioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
9.6 Utilization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189

10 Engset Loss Formula 191


10.1 Steady-State Equations and Their Solution . . . . . . . . . . . . . . . . . . . . . 191
10.2 Blocking Probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
10.3 Obtaining the Blocking Probability by a Recursion . . . . . . . . . . . . . . . . 193
10.4 Insensitivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
10.5 Load Classifications and Definitions . . . . . . . . . . . . . . . . . . . . . . . . . 194
10.6 The Small Idle Period Limit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
10.7 The Many Sources Limit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
10.8 Obtaining the Blocking Probability by Successive Iterations . . . . . . . . . . . 196

11 State Dependent SSQ 198

12 Queueing Models with Finite Buffers 201


12.1 D/D/1/N . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
12.2 SLB/D/1/N . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
12.3 M/M/1/N . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
12.4 M/M/k/N . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
12.5 MMPP(2)/M/1/N . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
12.6 M/Em/1/N . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
12.7 Saturated Queues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219

13 Processor Sharing 221


13.1 The M/M/1-PS queue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
13.2 Insensitivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224

14 Multi-service Loss Model 231


14.1 Model Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
14.2 Attributes of the Multi-service Loss Model . . . . . . . . . . . . . . . . . . . . . 232
14.3 A Simple Example with I = 2 and k = 2 . . . . . . . . . . . . . . . . . . . . . . 233
14.4 Other Reversibility Criteria for Markov chains . . . . . . . . . . . . . . . . . . . 235
14.5 Computation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
14.6 A General Treatment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
14.6.1 Infinite Number of Servers . . . . . . . . . . . . . . . . . . . . . . . . . . 240
14.6.2 Finite Number of Servers . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
14.7 Critical Loading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243

15 Discrete-Time Queue 246

16 M/G/1 and Extensions 249


16.1 Pollaczek-Khinchine Formula: Residual Service Approach [17] . . . . . . . . . . 249

16.2 Pollaczek-Khinchine Formula: by Kendall’s Recursion [66] . . . . . . . . . . . . 252


16.3 Special Cases: M/M/1 and M/D/1 . . . . . . . . . . . . . . . . . . . . . . . . . 253
16.4 Busy Period . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
16.5 M/G/1-LIFO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
16.6 M/G/1 with m Priority Classes . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
16.7 Nonpreemptive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
16.8 Preemptive Resume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261

17 Queues with General Input 264


17.1 Reich’s Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
17.2 Queue Size Versus Virtual Waiting Time . . . . . . . . . . . . . . . . . . . . . . 265
17.3 Is the Queue Overflow Probability a Good QoS Measure? . . . . . . . . . . . . . 265
17.4 G/GI/1 Queue and Its G/GI/1/k Equivalent . . . . . . . . . . . . . . . . . . . 266

18 Multi-access Modeling and Analyses 269


18.1 ALOHA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
18.1.1 Throughput Analysis of Slotted ALOHA . . . . . . . . . . . . . . . . . . 270
18.1.2 Throughput Analysis of Pure ALOHA . . . . . . . . . . . . . . . . . . . 271
18.2 Random-access Channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
18.3 Binary Exponential Backoff (BEB) . . . . . . . . . . . . . . . . . . . . . . . . . 275
18.3.1 A Discrete-time Markov Chain Model of BEB [24] . . . . . . . . . . . . . 276
18.3.2 A Continuous-time Markov Chain Model of BEB . . . . . . . . . . . . . 277

19 Network Models and Applications 278


19.1 Jackson Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
19.2 Computation of Blocking Probability in Circuit Switched Networks by the Erlang
Fixed-Point Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
19.3 A Markov-chain Simulation of a Mobile Cellular Network . . . . . . . . . . . . . 287

20 Stochastic Processes as Traffic Models 290


20.1 Parameter Fitting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
20.2 Poisson Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
20.3 Markov Modulated Poisson Process (MMPP) . . . . . . . . . . . . . . . . . . . 291
20.4 Autoregressive Gaussian Process . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
20.5 Exponential Autoregressive (1) Process . . . . . . . . . . . . . . . . . . . . . . . 292
20.6 Poisson Pareto Burst Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293

1 Background on Relevant Probability Topics


Probability theory provides the foundation for queueing theory and stochastic teletraffic models; therefore, it is important that the student masters the probability concepts required for the material that follows. Although the coverage here is comprehensive in the sense that it discusses all the probability concepts and techniques used in later chapters, and it includes many examples and exercises, readers may still benefit from additional probability texts, such as [18] and [101], to complement their study.

1.1 Events, Sample Space, and Random Variables


Consider an experiment with an uncertain outcome. The term “experiment” refers to any
uncertain scenario, such as tomorrow’s weather, tomorrow’s share price of a certain company,
or the result of flipping a coin. The sample space is a set of all possible outcomes of an
experiment. An event is a subset of the sample space. Consider, for example, an experiment
which consists of tossing a die. The sample space is {1, 2, 3, 4, 5, 6}, and an event could be
the set {2, 3}, or {6}, or the empty set {} or even the entire sample space {1, 2, 3, 4, 5, 6}.
Events are called mutually exclusive if their intersection is the empty set. A set of events is
said to be exhaustive if its union is equal to the sample space.
A random variable is a real valued function defined on the sample space. This definition appears
somewhat contradictory to the wording “random variable” as a random variable is not at all
random, because it is actually a deterministic function which assigns a real valued number to
each possible outcome of an experiment. It is the outcome of the experiment that is random
and therefore the name: random variable. If we consider the coin-flipping experiment, the possible outcomes are Head (H) and Tail (T); hence the sample space is {H, T}, and a random variable X could assign X = 1 for the outcome H, and X = 0 for the outcome T.
Remark (Moran): While not every subset is an event and not every function is a random
variable, for all practical purposes we can, and do, assume that they are.
If X is a random variable, then Y = g(X) for some function g(·) is also a random variable. In particular, some functions of interest are Y = cX for some constant c and Y = X^n for some integer n.
If X1, X2, X3, . . . , Xn is a sequence of random variables, then Y = Σ_{i=1}^n Xi is also a random variable.
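As a minimal sketch of these definitions, the following Python fragment (assuming a fair coin, chosen only for illustration) treats a random variable as a fixed mapping from outcomes to real numbers; the randomness lives entirely in the outcome of the experiment:

```python
import random

# A random variable is a deterministic map from outcomes to real numbers.
# For a single coin flip: X(H) = 1, X(T) = 0.
X = {"H": 1, "T": 0}

outcome = random.choice(["H", "T"])   # the randomness is in the outcome
x = X[outcome]                        # X itself is a fixed function

# Functions of a random variable are again random variables:
y1 = 3 * x        # Y = cX with c = 3
y2 = x ** 2       # Y = X^n with n = 2

# A sum of random variables (here, n independent coin flips) is also
# a random variable:
n = 10
y3 = sum(X[random.choice(["H", "T"])] for _ in range(n))
print(x, y1, y2, y3)
```
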

Homework 1.1

Consider the experiment to be tossing a coin. What is the Sample Space? What are the events
associated with this Sample Space?

Guide

Notice that although the sample space includes only the outcomes of the experiment, which are Head (H) and Tail (T), the events associated with this sample space also include the empty set, which in this case is the event H ∩ T, and the entire sample space, which in this case is the event H ∪ T. 
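For a finite sample space, the collection of all events is simply the power set of the sample space. A small sketch enumerating it for the coin-toss experiment:

```python
from itertools import combinations

sample_space = {"H", "T"}

# Every subset of the sample space is an event; for a finite sample
# space, the collection of all events is the power set.
events = [set(c)
          for r in range(len(sample_space) + 1)
          for c in combinations(sorted(sample_space), r)]

# 4 events: the empty set, {H}, {T}, and {H, T}.
print(len(events))
```
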

1.2 Probability, Conditional Probability and Independence


Consider a sample space S. Let A be a subset of S. The probability of A, denoted P(A), is a function defined on S and all its subsets that satisfies the following three axioms:
1. 0 ≤ P(A) ≤ 1
2. P(S) = 1
3. The probability of the union of mutually exclusive events is equal to the sum of the
probabilities of these events.
Normally, higher probability signifies higher likelihood of occurrence. In particular, if we con-
duct a very large number of experiments, and we generate the histogram by measuring how
many times each of the possible occurrences actually occurred, then we normalize the his-
togram by dividing all its values by the total number of experiments to obtain the relative
frequencies. These measurable relative frequencies can provide an intuitive interpretation of the theoretical concept of probability. Accordingly, the term limiting relative frequency is often used as an interpretation of probability.
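This limiting-relative-frequency interpretation can be illustrated numerically. The following sketch tosses a simulated fair die many times and normalizes the resulting histogram:

```python
import random
from collections import Counter

# Toss a fair die many times and normalize the histogram; the relative
# frequencies approach the theoretical probabilities of 1/6.
trials = 100_000
counts = Counter(random.randint(1, 6) for _ in range(trials))
rel_freq = {face: counts[face] / trials for face in range(1, 7)}

for face in range(1, 7):
    print(face, round(rel_freq[face], 3))   # each value is close to 1/6
```
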
We use the notation P(A | B) for the conditional probability of A given B, which is the probability of the event A given that we know that event B has occurred. If we know that B has occurred, then it is our new sample space, and for A to occur, the relevant experiment outcomes must be in A ∩ B. Hence, the new probability of A, namely the probability P(A | B), is the ratio between the probability of A ∩ B and the probability of B. Accordingly,

P(A | B) = P(A ∩ B) / P(B).        (1)

Since the event A ∩ B is equal to the event B ∩ A, we have that

P(A ∩ B) = P(B ∩ A) = P(B | A)P(A),

so by (1) we obtain

P(A | B) = P(B | A)P(A) / P(B).        (2)
Eq. (2) is useful for obtaining the conditional probability of one event (A) given another (B) when P(B | A) is known or is easier to obtain than P(A | B).
Remark: The intersection of A and B is also denoted by A, B or AB in addition to A ∩ B.
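As a quick numerical check of Eqs. (1) and (2), the following sketch uses exact fractions and a single die toss, with A = “even outcome” and B = “outcome greater than 3” (both chosen only for illustration):

```python
from fractions import Fraction

# Equally likely outcomes of one die toss.
S = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}        # "even outcome"
B = {4, 5, 6}        # "outcome greater than 3"

def P(event):
    return Fraction(len(event), len(S))

# Eq. (1): P(A | B) = P(A ∩ B) / P(B)
p_A_given_B = P(A & B) / P(B)
print(p_A_given_B)   # 2/3

# Eq. (2): the same value via P(B | A) P(A) / P(B)
p_B_given_A = P(A & B) / P(A)
print(p_B_given_A * P(A) / P(B))   # 2/3
```
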
If events A and B are independent, which means that if one of them occurs, then the probability that the other occurs is not affected, then

P(A | B) = P(A)        (3)

and hence, by Eq. (1), if A and B are independent, then

P (A ∩ B) = P (A)P (B). (4)
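To illustrate Eq. (4), the following sketch checks the product rule on the 36 equally likely outcomes of two die tosses, for one independent pair of events and, for contrast, one dependent pair (all events chosen only for illustration):

```python
from fractions import Fraction
from itertools import product

# Sample space of two die tosses: 36 equally likely ordered pairs.
S = list(product(range(1, 7), repeat=2))

def P(pred):
    return Fraction(sum(1 for w in S if pred(w)), len(S))

A = lambda w: w[0] % 2 == 0          # first die is even
B = lambda w: w[1] > 4               # second die shows 5 or 6
AB = lambda w: A(w) and B(w)

# Eq. (4): A and B are independent, so P(A ∩ B) = P(A) P(B).
print(P(AB) == P(A) * P(B))          # True

# A dependent pair for contrast: C = "sum is 12" is affected by A.
C = lambda w: w[0] + w[1] == 12
print(P(lambda w: A(w) and C(w)) == P(A) * P(C))   # False
```
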

Let B1, B2, B3, . . . , Bn be a sequence of mutually exclusive and exhaustive events in S, and let A be another event in S. Then,

A = ∪_{i=1}^n (A ∩ Bi)        (5)

and since the Bi's are mutually exclusive, the events A ∩ Bi are also mutually exclusive. Hence,

P(A) = Σ_{i=1}^n P(A ∩ Bi).        (6)

Thus, by Eq. (1),

P(A) = Σ_{i=1}^n P(A | Bi) × P(Bi).        (7)

The latter is a very useful formula for deriving the probability of a given event by conditioning and unconditioning on a set of mutually exclusive and exhaustive events. It is called the Law of Total Probability. Therefore, by Eqs. (7) and (1) (again), we obtain the following formula for the conditional probability between two events:

P(B1 | A) = P(A | B1)P(B1) / [Σ_{i=1}^n P(A | Bi) × P(Bi)].        (8)

The latter is known as Bayes’ theorem (or Bayes’ law or Bayes’ rule).
Note that Eq. (2) is also referred to as Bayes’ theorem.
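A small numerical sketch of Eqs. (7) and (8), using made-up teletraffic-flavoured numbers: a packet is routed over one of two links (the mutually exclusive and exhaustive events B1 and B2), and A is the event that the packet is lost; all probabilities below are illustrative assumptions:

```python
# Illustrative (made-up) numbers: the packet uses link 1 with
# probability 0.7 and link 2 with probability 0.3, and the loss
# probability differs per link.
P_B = {"B1": 0.7, "B2": 0.3}
P_A_given_B = {"B1": 0.01, "B2": 0.05}

# Law of Total Probability, Eq. (7): overall loss probability.
P_A = sum(P_A_given_B[b] * P_B[b] for b in P_B)
print(round(P_A, 4))           # 0.022

# Bayes' theorem, Eq. (8): probability the packet used link 2,
# given that it was lost.
P_B2_given_A = P_A_given_B["B2"] * P_B["B2"] / P_A
print(round(P_B2_given_A, 4))  # 0.6818
```
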
Set A is said to be a subset of set B, namely, A ⊂ B, if A ≠ B and every element of A is an element of B. If we do not require A ≠ B, but still require that every element of A is an element of B, then we say that A is a subset of or equal to set B, which is denoted by A ⊆ B. If the occurrence of event A implies that event B occurs, then we have that A ⊆ B. In such a case (and of course under the stronger condition A ⊂ B), A = A ∩ B, so

P(A) = P(A ∩ B) = P(A | B)P(B).        (9)

1.3 Probability and Distribution Functions


Random variables are related to events. When we say that random variable X takes value x, this means that x represents a certain outcome of an experiment, which is an event, so {X = x} is an event. Therefore, we may assign probabilities to all possible values of the random variable. This function, denoted PX(x) = P(X = x), will henceforth be called the probability function. Other names used in the literature for a probability function include probability distribution function, probability distribution, or simply distribution. Because probability theory is used in many applications, there are often several alternative terms to describe the same thing. It is important that the student is familiar with these alternative terms, so we will use them alternately in this book.
The cumulative distribution function of random variable X, defined for all x ∈ R (R being the set of all real numbers), is

FX(x) = P(X ≤ x).        (10)

Accordingly, the complementary distribution function F̄X(x) is defined by

F̄X(x) = P(X > x).        (11)

Consequently, for any random variable and for every x ∈ R, F(x) + F̄(x) = 1. As the
complementary and the cumulative distribution functions, as well as the probability function, can be
obtained from each other, we will use the term distribution function when we refer to any of
these functions without being specific.
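As a small sanity check of Eqs. (10) and (11), the following sketch evaluates both distribution functions for a fair six-sided die; the die is a hypothetical example, not taken from the text.

```python
# Distribution functions of a fair six-sided die (a hypothetical example).
# Eq. (10): F_X(x) = P(X <= x); Eq. (11): the complementary F̄_X(x) = P(X > x).
p = {k: 1 / 6 for k in range(1, 7)}  # probability function P_X

def F(x):
    """Cumulative distribution function P(X <= x)."""
    return sum(pk for k, pk in p.items() if k <= x)

def F_bar(x):
    """Complementary distribution function P(X > x)."""
    return sum(pk for k, pk in p.items() if k > x)

# For every real x (not only the die's values), F(x) + F̄(x) = 1
for x in [-1, 2.5, 3, 6, 10]:
    assert abs(F(x) + F_bar(x) - 1.0) < 1e-12
print(F(3))  # ≈ 0.5
```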
We have already mentioned that if X is a random variable, then Y = g(X) is also a random
variable. In this case, if PX (x) is the probability function of X, then the probability function
of Y is

P_Y(y) = \sum_{x:\, g(x) = y} P_X(x).   (12)
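Eq. (12) can be sketched as follows; the probability function of X and the mapping g below are hypothetical choices for illustration.

```python
from collections import defaultdict

# Eq. (12): P_Y(y) is the sum of P_X(x) over all x with g(x) = y.
# Hypothetical example: X uniform on {-2, -1, 0, 1, 2} and Y = g(X) = X**2.
P_X = {-2: 0.2, -1: 0.2, 0: 0.2, 1: 0.2, 2: 0.2}

def g(x):
    return x * x

P_Y = defaultdict(float)
for x, px in P_X.items():
    P_Y[g(x)] += px  # x values sharing the same g(x) accumulate into one y

print(dict(P_Y))  # {4: 0.4, 1: 0.4, 0: 0.2}
```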

1.4 Joint Distribution Functions


In some cases, we are interested in the probability that two or more random variables are
within a certain range. For this purpose, we define the joint distribution function for n random
variables X_1, X_2, \ldots, X_n, as follows:

FX1 , X2 , ...,Xn (x1 , x2 , ..., xn ) = P (X1 ≤ x1 , X2 ≤ x2 , . . . , Xn ≤ xn ). (13)

Having the joint distribution function, we can obtain the distribution function of a single
random variable, say, X1 , as

FX1 (x1 ) = FX1 , X2 , ..., Xn (x1 , ∞, . . . , ∞). (14)

A random variable is called discrete if it takes at most a countable number of possible values.
On the other hand, a continuous random variable takes an uncountable number of possible
values.
When the random variables X1 , X2 , ..., Xn are discrete, we can use their joint probability
function which is defined by

PX1 , X2 , ..., Xn (x1 , x2 , . . . , xn ) = P (X1 = x1 , X2 = x2 , . . . , Xn = xn ). (15)

The probability function of a single random variable can then be obtained by


P_{X_1}(x_1) = \sum_{x_2} \cdots \sum_{x_n} P_{X_1, X_2, \ldots, X_n}(x_1, x_2, \ldots, x_n).   (16)
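The marginalization in Eq. (16) amounts to summing a joint probability table over the unwanted variables. The following is a sketch with a hypothetical two-variable table; dyadic probabilities are used so the floating-point sums are exact.

```python
# Marginalization as in Eq. (16): sum the joint probability function over
# all values of the unwanted variables. The joint table below is hypothetical.
P_joint = {
    (0, 0): 0.125, (0, 1): 0.25,   # P_{X1,X2}(x1, x2)
    (1, 0): 0.375, (1, 1): 0.25,
}

def marginal_x1(x1):
    """P_{X1}(x1) obtained by summing out x2, as in Eq. (16)."""
    return sum(p for (a, _), p in P_joint.items() if a == x1)

print(marginal_x1(0))  # 0.375
print(marginal_x1(1))  # 0.625
```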

In this section and in sections 1.5, 1.6, 1.7, when we mention random variables or their distri-
bution functions, we consider them all to be discrete. Then, in Section 1.9, we will introduce
the analogous definitions and notation relevant to their continuous counterparts.
We have already mentioned the terms probability function, distribution, probability distribution
function and probability distribution. These terms apply to discrete as well as to continuous
random variables. There are, however, additional terms that describe the probability function
only of discrete random variables: probability mass function and probability mass. The
equivalent terms used only for continuous random variables are probability density function,
density function, and simply density.

1.5 Conditional Probability for Random Variables


The conditional probability concept, which we defined for events, also applies to random
variables. Because {X = x}, namely the event that the random variable X takes the value x,
is an event, by the definition of conditional probability (1) we have

P(X = x | Y = y) = \frac{P(X = x, Y = y)}{P(Y = y)}.   (17)

Letting P_{X|Y}(x | y) = P(X = x | Y = y) denote the conditional probability function of a
discrete random variable X given Y, we obtain by (17)

P_{X|Y}(x | y) = \frac{P_{X,Y}(x, y)}{P_Y(y)}.   (18)

Noticing that

P_Y(y) = \sum_{x} P_{X,Y}(x, y),   (19)

we obtain by (18)

\sum_{x} P_{X|Y}(x | y) = 1.   (20)

This means that if we condition on the event {Y = y} for a specific y, then the probability
function of X given {Y = y} is a legitimate probability function. This is consistent with
our discussion above. The event {Y = y} is the new sample space and X has a legitimate
distribution function there. By (18)

PX,Y (x, y) = PX|Y (x | y)PY (y) (21)

and by symmetry
PX,Y (x, y) = PY |X (y | x)PX (x) (22)
so the latter and (19) give

P_Y(y) = \sum_{x} P_{X,Y}(x, y) = \sum_{x} P_{Y|X}(y | x) P_X(x),   (23)

which is another version of the Law of Total Probability (7).
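The relations (20) and (23) can be verified numerically. The sketch below uses a hypothetical conditional probability table; dyadic values keep the floating-point sums exact.

```python
# A numerical check of Eqs. (20) and (23) on a hypothetical model:
# X in {0, 1} with probability function P_X, and a conditional table P_{Y|X}.
P_X = {0: 0.5, 1: 0.5}
P_Y_given_X = {
    0: {0: 0.75, 1: 0.25},        # P_{Y|X}(y | 0)
    1: {0: 0.25, 1: 0.75},        # P_{Y|X}(y | 1)
}

# Eq. (20): each conditional probability function sums to 1
for x in P_X:
    assert sum(P_Y_given_X[x].values()) == 1.0

# Eq. (23): P_Y(y) = sum over x of P_{Y|X}(y | x) P_X(x)
P_Y = {y: sum(P_Y_given_X[x][y] * P_X[x] for x in P_X) for y in (0, 1)}
print(P_Y)  # {0: 0.5, 1: 0.5}
```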



1.6 Independence between Random Variables


The definition of independence between random variables is closely related to the definition
of independence between events: saying that random variables U and V are independent is
equivalent to saying that the events {U = u} and {V = v} are independent for every u and v.
Accordingly, random variables U and V are said to be independent if

P_{U,V}(u, v) = P_U(u) P_V(v) for all u, v.   (24)

Notice that by (21) and (24), we obtain an equivalent definition of independence of random
variables U and V, namely,

P_{U|V}(u | v) = P_U(u),   (25)

which is analogous to the condition P(A | B) = P(A) that we used to define independent
events A and B.
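Independence per Eq. (24) can be checked by comparing the joint table with the product of its marginals. The joint table below is a hypothetical example constructed to be independent.

```python
from itertools import product

# Checking independence via Eq. (24): U and V are independent if
# P_{U,V}(u, v) = P_U(u) P_V(v) for every pair (u, v).
P_UV = {
    (0, 0): 0.125, (0, 1): 0.375,
    (1, 0): 0.125, (1, 1): 0.375,
}
# Marginals obtained by summing the joint table, as in Eq. (19)
P_U = {u: sum(p for (a, _), p in P_UV.items() if a == u) for u in (0, 1)}
P_V = {v: sum(p for (_, b), p in P_UV.items() if b == v) for v in (0, 1)}

independent = all(
    abs(P_UV[(u, v)] - P_U[u] * P_V[v]) < 1e-12
    for u, v in product((0, 1), repeat=2)
)
print(independent)  # True
```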

1.7 Convolution
Consider independent random variables V_1 and V_2 with probability functions P_{V_1}(v_1) and
P_{V_2}(v_2), respectively, and let V = V_1 + V_2 denote their sum, which is another random
variable. Let us now derive the probability function P_V(v) of V:

P_V(v) = P(V_1 + V_2 = v) = \sum_{v_1} P(V_1 = v_1, V_2 = v - v_1) = \sum_{v_1} P_{V_1}(v_1) P_{V_2}(v - v_1),

where the last equality follows from the independence of V_1 and V_2.

The latter is called the convolution of the probability functions PV1 (v1 ) and PV2 (v2 ).
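The convolution formula can be sketched directly. The example below takes V1 and V2 to be independent fair dice, a hypothetical choice for illustration.

```python
from collections import defaultdict

# Convolution of two probability functions:
#   P_V(v) = sum over v1 of P_{V1}(v1) P_{V2}(v - v1).
# Hypothetical example: V1 and V2 are independent fair dice, V = V1 + V2.
def convolve(P1, P2):
    PV = defaultdict(float)
    for v1, p1 in P1.items():
        for v2, p2 in P2.items():
            PV[v1 + v2] += p1 * p2
    return dict(PV)

die = {k: 1 / 6 for k in range(1, 7)}
P_sum = convolve(die, die)
print(P_sum[7])  # ≈ 6/36, the most likely total
print(P_sum[2])  # ≈ 1/36
```

Applying `convolve` repeatedly, e.g. `convolve(convolve(die, die), die)`, yields the probability function of the sum of three variables, matching the iterative construction for general k.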
Let us now extend the result from two to k random variables. Consider k independent random
variables X_i, i = 1, 2, 3, \ldots, k. Let P_{X_i}(x_i) be the probability function of X_i, for
i = 1, 2, 3, \ldots, k, and let Y = \sum_{i=1}^{k} X_i. If k = 3, then we first compute the
convolution of X_1 and X_2 to obtain the probability function of V = X_1 + X_2 using the above
convolution formula, and then we use the formula again to obtain the probability function of
Y = V + X_3 = X_1 + X_2 + X_3. Therefore, for an arbitrary k, we obtain