Harrell, Ghosh, Bowden: Simulation Using ProModel, Second Edition

I. Study Chapters

1. Introduction to Simulation

© The McGraw-Hill Companies

INTRODUCTION TO SIMULATION

Man is a tool-using animal. . . . Without tools he is nothing, with tools he is all.
Thomas Carlyle

1.1 Introduction
On March 19, 1999, the following story appeared in The Wall Street Journal:
Captain Chet Rivers knew that his 747-400 was loaded to the limit. The giant plane, weighing almost 450,000 pounds by itself, was carrying a full load of passengers and baggage, plus 400,000 pounds of fuel for the long flight from San Francisco to Australia. As he revved his four engines for takeoff, Capt. Rivers noticed that San Francisco's famous fog was creeping in, obscuring the hills to the north and west of the airport.
At full throttle the plane began to roll ponderously down the runway, slowly at first but building up to flight speed well within normal limits. Capt. Rivers pulled the throttle back and the airplane took to the air, heading northwest across the San Francisco peninsula towards the ocean. It looked like the start of another routine flight. Suddenly the plane began to shudder violently. Several loud explosions shook the craft and smoke and flames, easily visible in the midnight sky, illuminated the right wing. Although the plane was shaking so violently that it was hard to read the instruments, Capt. Rivers was able to tell that the right inboard engine was malfunctioning, backfiring violently. He immediately shut down the engine, stopping the explosions and shaking.
However, this introduced a new problem. With two engines on the left wing at full power and only one on the right, the plane was pushed into a right turn, bringing it directly towards San Bruno Mountain, located a few miles northwest of the airport. Capt. Rivers instinctively turned his control wheel to the left to bring the plane back on course. That action extended the ailerons (control surfaces on the trailing edges of the wings) to tilt the plane back to the left. However, it also extended the spoilers (panels on the tops of the wings), increasing drag and lowering lift. With the nose still pointed up, the heavy jet began to slow. As the plane neared stall speed, the control stick began to shake to warn the pilot to bring the nose down to gain air speed. Capt. Rivers immediately did so, removing that danger, but now San Bruno Mountain was directly ahead. Capt. Rivers was unable to see the mountain due to the thick fog that had rolled in, but the plane's ground proximity sensor sounded an automatic warning, calling "terrain, terrain, pull up, pull up." Rivers frantically pulled back on the stick to clear the peak, but with the spoilers up and the plane still in a skidding right turn, it was too late. The plane and its full load of 100 tons of fuel crashed with a sickening explosion into the hillside just above a densely populated housing area.
"Hey Chet, that could ruin your whole day," said Capt. Rivers's supervisor, who was sitting beside him watching the whole thing. "Let's rewind the tape and see what you did wrong." "Sure, Mel," replied Chet as the two men stood up and stepped outside the 747 cockpit simulator. "I think I know my mistake already. I should have used my rudder, not my wheel, to bring the plane back on course. Say, I need a breather after that experience. I'm just glad that this wasn't the real thing."
The incident above was never reported in the nation's newspapers, even though it would have been one of the most tragic disasters in aviation history, because it never really happened. It took place in a cockpit simulator, a device which uses computer technology to predict and recreate an airplane's behavior with gut-wrenching realism.

The relief you undoubtedly felt to discover that this disastrous incident was just a simulation gives you a sense of the impact that simulation can have in averting real-world catastrophes. This story illustrates just one of the many ways simulation is being used to help minimize the risk of making costly and sometimes fatal mistakes in real life. Simulation technology is finding its way into an increasing number of applications ranging from training for aircraft pilots to the testing of new product prototypes. The one thing that these applications have in common is that they all provide a virtual environment that helps prepare for real-life situations, resulting in significant savings in time, money, and even lives.
One area where simulation is finding increased application is in manufacturing and service system design and improvement. Its unique ability to accurately predict the performance of complex systems makes it ideally suited for systems planning. Just as a flight simulator reduces the risk of making costly errors in actual flight, system simulation reduces the risk of having systems that operate inefficiently or that fail to meet minimum performance requirements. While this may not be life-threatening to an individual, it certainly places a company (not to mention careers) in jeopardy.
In this chapter we introduce the topic of simulation and answer the
following questions:

What is simulation?
Why is simulation used?
How is simulation performed?
When and where should simulation be used?


What are the qualifications for doing simulation?
How is simulation economically justified?
The purpose of this chapter is to create an awareness of how simulation is
used to visualize, analyze, and improve the performance of manufacturing and
service systems.

1.2 What Is Simulation?


The Oxford American Dictionary (1980) defines simulation as a way "to reproduce the conditions of a situation, as by means of a model, for study or testing or training, etc." For our purposes, we are interested in reproducing the operational behavior of dynamic systems. The model that we will be using is a computer model. Simulation in this context can be defined as the imitation of a dynamic system using a computer model in order to evaluate and improve system performance. According to Schriber (1987), simulation is "the modeling of a process or system in such a way that the model mimics the response of the actual system to events that take place over time." By studying the behavior of the model, we can gain insights about the behavior of the actual system.

Simulation is the imitation of a dynamic system using a computer model in order to evaluate and improve system performance.

In practice, simulation is usually performed using commercial simulation software like ProModel that has modeling constructs specifically designed for capturing the dynamic behavior of systems. Performance statistics are gathered during the simulation and automatically summarized for analysis. Modern simulation software provides a realistic, graphical animation of the system being modeled (see Figure 1.1). During the simulation, the user can interactively adjust the animation speed and change model parameter values to do "what if" analysis on the fly. State-of-the-art simulation technology even provides optimization capability: not that simulation itself optimizes, but scenarios that satisfy defined feasibility constraints can be automatically run and analyzed using special goal-seeking algorithms.
This book focuses primarily on discrete-event simulation, which models the effects of the events in a system as they occur over time. Discrete-event simulation employs statistical methods for generating random behavior and estimating model performance. These methods are sometimes referred to as Monte Carlo methods because of their similarity to the probabilistic outcomes found in games of chance, and because Monte Carlo, a tourist resort in Monaco, was such a popular center for gambling.

FIGURE 1.1
Simulation provides animation capability.
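ProModel implements this event-driven mechanism internally, but the core of a discrete-event simulation is compact enough to sketch. The following Python fragment is our own illustration (the function and variable names are not from any simulation product): future events are held in a time-ordered list, and the clock jumps from one event to the next for a single-server queue with exponential interarrival and service times.

```python
import heapq
import random

def simulate_single_server(arrival_rate, service_rate, sim_time, seed=42):
    """Discrete-event simulation of a single-server queue.

    Future events are held in a min-heap keyed by event time; the
    simulation clock advances directly to the next scheduled event."""
    rng = random.Random(seed)
    events = [(rng.expovariate(arrival_rate), "arrival")]
    in_system = 0      # customers currently at the server or waiting
    served = 0         # customers who completed service
    busy_time = 0.0    # accumulated time the server was busy
    now = 0.0
    while events:
        t, kind = heapq.heappop(events)
        if t > sim_time:
            break
        if in_system > 0:              # server was busy over [now, t)
            busy_time += t - now
        now = t
        if kind == "arrival":
            in_system += 1
            if in_system == 1:         # server was idle: start service
                heapq.heappush(events, (now + rng.expovariate(service_rate), "departure"))
            heapq.heappush(events, (now + rng.expovariate(arrival_rate), "arrival"))
        else:                          # departure: service completed
            in_system -= 1
            served += 1
            if in_system > 0:          # next waiting customer starts
                heapq.heappush(events, (now + rng.expovariate(service_rate), "departure"))
    return served, busy_time / sim_time

served, utilization = simulate_single_server(arrival_rate=0.8, service_rate=1.0, sim_time=10_000)
print(served, round(utilization, 2))   # utilization estimate should land near 0.8
```

Because the event times are random draws, each run is a statistical sample (the Monte Carlo aspect), which is why simulation results are reported as estimates rather than exact answers.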

1.3 Why Simulate?


Rather than leave design decisions to chance, simulation provides a way to
validate whether or not the best decisions are being made. Simulation avoids the
expensive, time-consuming, and disruptive nature of traditional trial-and-error
techniques.

Trial-and-error approaches are expensive, time-consuming, and disruptive.

With the emphasis today on time-based competition, traditional trial-and-error methods of decision making are no longer adequate. Regarding the shortcoming of trial-and-error approaches in designing manufacturing systems, Solberg (1988) notes,
The ability to apply trial-and-error learning to tune the performance of manufacturing systems becomes almost useless in an environment in which changes occur faster than the lessons can be learned. There is now a greater need for formal predictive methodology based on understanding of cause and effect.

The power of simulation lies in the fact that it provides a method of analysis that is not only formal and predictive, but is capable of accurately predicting the performance of even the most complex systems. Deming (1989) states, "Management of a system is action based on prediction. Rational prediction requires systematic learning and comparisons of predictions of short-term and long-term results from possible alternative courses of action." The key to sound management decisions lies in the ability to accurately predict the outcomes of alternative courses of action. Simulation provides precisely that kind of foresight. By simulating alternative production schedules, operating policies, staffing levels, job priorities, decision rules, and the like, a manager can more accurately predict outcomes and therefore make more informed and effective management decisions. With the importance in today's competitive market of getting it right the first time, the lesson is becoming clear: if at first you don't succeed, you probably should have simulated it.
By using a computer to model a system before it is built or to test operating policies before they are actually implemented, many of the pitfalls that are often encountered in the start-up of a new system or the modification of an existing system can be avoided. Improvements that traditionally took months and even years of fine-tuning to achieve can be attained in a matter of days or even hours. Because simulation runs in compressed time, weeks of system operation can be simulated in only a few minutes or even seconds. The characteristics of simulation that make it such a powerful planning and decision-making tool can be summarized as follows:

Captures system interdependencies.


Accounts for variability in the system.
Is versatile enough to model any system.
Shows behavior over time.
Is less costly, time consuming, and disruptive than experimenting on the
actual system.
Provides information on multiple performance measures.
Is visually appealing and engages people's interest.
Provides results that are easy to understand and communicate.
Runs in compressed, real, or even delayed time.
Forces attention to detail in a design.

Because simulation accounts for interdependencies and variation, it provides insights into the complex dynamics of a system that cannot be obtained using other analysis techniques. Simulation gives systems planners unlimited freedom to try out different ideas for improvement, risk free, with virtually no cost, no waste of time, and no disruption to the current system. Furthermore, the results are both visual and quantitative with performance statistics automatically reported on all measures of interest.
Even if no problems are found when analyzing the output of simulation, the exercise of developing a model is, in itself, beneficial in that it forces one to think through the operational details of the process. Simulation can work with inaccurate information, but it can't work with incomplete information. Often solutions present themselves as the model is built, before any simulation run is made. It is a human tendency to ignore the operational details of a design or plan until the implementation phase, when it is too late for decisions to have a significant impact. As the philosopher Alfred North Whitehead observed, "We think in generalities; we live in detail" (Audon 1964). System planners often gloss over the details of how a system will operate and then get tripped up during implementation by all of the loose ends. The expression "the devil is in the details" has definite application to systems planning. Simulation forces decisions on critical details so they are not left to chance or to the last minute, when it may be too late.
Simulation promotes a try-it-and-see attitude that stimulates innovation and encourages thinking "outside the box." It helps one get into the system with sticks and beat the bushes to flush out problems and find solutions. It also puts an end to fruitless debates over what solution will work best and by how much. Simulation takes the emotion out of the decision-making process by providing objective evidence that is difficult to refute.

1.4 Doing Simulation


Simulation is nearly always performed as part of a larger process of system design or process improvement. A design problem presents itself or a need for improvement exists. Alternative solutions are generated and evaluated, and the best solution is selected and implemented. Simulation comes into play during the evaluation phase. First, a model is developed for an alternative solution. As the model is run, it is put into operation for the period of interest. Performance statistics (utilization, processing time, and so on) are gathered and reported at the end of the run. Usually several replications (independent runs) of the simulation are made. Averages and variances across the replications are calculated to provide statistical estimates of model performance. Through an iterative process of modeling, simulation, and analysis, alternative configurations and operating policies can be tested to determine which solution works the best.
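The replication arithmetic described above is easy to make concrete. In this Python sketch, run_replication is a hypothetical stand-in for one independent simulation run (a real study would call the simulation model with a different random seed each time); the mean, sample variance, and an approximate 95 percent confidence interval are then computed across the replications.

```python
import math
import random

def run_replication(seed):
    """Hypothetical stand-in for one independent simulation run;
    returns a single performance measure (say, average minutes in system)."""
    rng = random.Random(seed)
    return 25.0 + rng.gauss(0.0, 2.0)

n = 10                                       # number of replications
results = [run_replication(seed) for seed in range(n)]

mean = sum(results) / n
variance = sum((x - mean) ** 2 for x in results) / (n - 1)   # sample variance
std_error = math.sqrt(variance / n)

t_95 = 2.262                                 # Student's t, 95%, 9 degrees of freedom
half_width = t_95 * std_error
print(f"mean = {mean:.2f}, 95% CI = {mean - half_width:.2f} to {mean + half_width:.2f}")
```

The interval narrows as more replications are added, which is the usual lever for tightening a performance estimate.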
Simulation is essentially an experimentation tool in which a computer model of a new or existing system is created for the purpose of conducting experiments. The model acts as a surrogate for the actual or real-world system. Knowledge gained from experimenting on the model can be transferred to the real system (see Figure 1.2). When we speak of "doing simulation," we are talking about the process of designing a model of a real system and conducting experiments with this model (Shannon 1998). Conducting experiments on a model reduces the time, cost, and disruption of experimenting on the actual system. In this respect, simulation can be thought of as a virtual prototyping tool for demonstrating proof of concept.

FIGURE 1.2
Simulation provides a virtual method for doing system experimentation. (Diagram labels: System, Concept, Model.)
The procedure for doing simulation follows the scientific method of (1) formulating a hypothesis, (2) setting up an experiment, (3) testing the hypothesis through experimentation, and (4) drawing conclusions about the validity of the hypothesis. In simulation, we formulate a hypothesis about what design or operating policies work best. We then set up an experiment in the form of a simulation model to test the hypothesis. With the model, we conduct multiple replications of the experiment or simulation. Finally, we analyze the simulation results and draw conclusions about our hypothesis. If our hypothesis was correct, we can confidently move ahead in making the design or operational changes (assuming time and other implementation constraints are satisfied). As shown in Figure 1.3, this process is repeated until we are satisfied with the results.
By now it should be obvious that simulation itself is not a solution tool but rather an evaluation tool. It describes how a defined system will behave; it does not prescribe how it should be designed. Simulation doesn't compensate for one's ignorance of how a system is supposed to operate. Neither does it excuse one from being careful and responsible in the handling of input data and the interpretation of output results. Rather than being perceived as a substitute for thinking, simulation should be viewed as an extension of the mind that enables one to understand the complex dynamics of a system.


FIGURE 1.3
The process of simulation experimentation. (Flowchart: Start → Formulate a hypothesis → Develop a simulation model → Run simulation experiment → Hypothesis correct? No: return to formulate a hypothesis. Yes: End.)

1.5 Use of Simulation


Simulation began to be used in commercial applications in the 1960s. Initial models were usually programmed in FORTRAN and often consisted of thousands of lines of code. Not only was model building an arduous task, but extensive debugging was required before models ran correctly. Models frequently took a year or more to build and debug so that, unfortunately, useful results were not obtained until after a decision and monetary commitment had already been made. Lengthy simulations were run in batch mode on expensive mainframe computers where CPU time was at a premium. Long development cycles prohibited major changes from being made once a model was built.
Only in the last couple of decades has simulation gained popularity as a decision-making tool in manufacturing and service industries. For many companies, simulation has become a standard practice when a new facility is being planned or a process change is being evaluated. It is fast becoming to systems planners what spreadsheet software has become to financial planners.


The surge in popularity of computer simulation can be attributed to the following:

Increased awareness and understanding of simulation technology.


Increased availability, capability, and ease of use of simulation software.
Increased computer memory and processing speeds, especially of PCs.
Declining computer hardware and software costs.

Simulation is no longer considered a method of last resort, nor is it a technique reserved only for simulation experts. The availability of easy-to-use simulation software and the ubiquity of powerful desktop computers have made simulation not only more accessible, but also more appealing to planners and managers who tend to avoid any kind of solution that appears too complicated. A solution tool is not of much use if it is more complicated than the problem that it is intended to solve. With simple data entry tables and automatic output reporting and graphing, simulation is becoming much easier to use and the reluctance to use it is disappearing.
The primary use of simulation continues to be in the area of manufacturing. Manufacturing systems, which include warehousing and distribution systems, tend to have clearly defined relationships and formalized procedures that are well suited to simulation modeling. They are also the systems that stand to benefit the most from such an analysis tool since capital investments are so high and changes are so disruptive. Recent trends to standardize and systematize other business processes such as order processing, invoicing, and customer support are boosting the application of simulation in these areas as well. It has been observed that 80 percent of all business processes are repetitive and can benefit from the same analysis techniques used to improve manufacturing systems (Harrington 1991). With this being the case, the use of simulation in designing and improving business processes of every kind will likely continue to grow.
While the primary use of simulation is in decision support, it is by no means limited to applications requiring a decision. An increasing use of simulation is in the area of communication and visualization. Modern simulation software incorporates visual animation that stimulates interest in the model and effectively communicates complex system dynamics. A proposal for a new system design can be sold much more easily if it can actually be shown how it will operate.
On a smaller scale, simulation is being used to provide interactive, computer-based training in which a management trainee is given the opportunity to practice decision-making skills by interacting with the model during the simulation. It is also being used in real-time control applications where the model interacts with the real system to monitor progress and provide master control. The power of simulation to capture system dynamics both visually and functionally opens up numerous opportunities for its use in an integrated environment.
Since the primary use of simulation is in decision support, most of our discussion will focus on the use of simulation to make system design and operational decisions. As a decision support tool, simulation has been used to help plan and make improvements in many areas of both manufacturing and service industries.


Typical applications of simulation include

Work-flow planning.
Capacity planning.
Cycle time reduction.
Staff and resource planning.
Work prioritization.
Bottleneck analysis.
Quality improvement.
Cost reduction.
Inventory reduction.

Throughput analysis.
Productivity improvement.
Layout analysis.
Line balancing.
Batch size optimization.
Production scheduling.
Resource scheduling.
Maintenance scheduling.
Control system design.

1.6 When Simulation Is Appropriate


Not all system problems that could be solved with the aid of simulation should be solved using simulation. It is important to select the right tool for the task. For some problems, simulation may be overkill, like using a shotgun to kill a fly. Simulation has certain limitations of which one should be aware before making a decision to apply it to a given situation. It is not a panacea for all system-related problems and should be used only "if the shoe fits." As a general guideline, simulation is appropriate if the following criteria hold true:

An operational (logical or quantitative) decision is being made.
The process being analyzed is well defined and repetitive.
Activities and events are interdependent and variable.
The cost impact of the decision is greater than the cost of doing the simulation.
The cost to experiment on the actual system is greater than the cost of simulation.
Decisions should be of an operational nature. Perhaps the most significant limitation of simulation is its restriction to the operational issues associated with systems planning in which a logical or quantitative solution is being sought. It is not very useful in solving qualitative problems such as those involving technical or sociological issues. For example, it can't tell you how to improve machine reliability or how to motivate workers to do a better job (although it can assess the impact that a given level of reliability or personal performance can have on overall system performance). Qualitative issues such as these are better addressed using other engineering and behavioral science techniques.
Processes should be well defined and repetitive. Simulation is useful only if the process being modeled is well structured and repetitive. If the process doesn't follow a logical sequence and adhere to defined rules, it may be difficult to model. Simulation applies only if you can describe how the process operates. This does not mean that there can be no uncertainty in the system. If random behavior can be described using probability expressions and distributions, they can be simulated. It is only when it isn't even possible to make reasonable assumptions of how a system operates (because either no information is available or behavior is totally erratic) that simulation (or any other analysis tool for that matter) becomes useless. Likewise, one-time projects or processes that are never repeated the same way twice are poor candidates for simulation. If the scenario you are modeling is likely never going to happen again, it is of little benefit to do a simulation.
Activities and events should be interdependent and variable. A system may have lots of activities, but if they never interfere with each other or are deterministic (that is, they have no variation), then using simulation is probably unnecessary. It isn't the number of activities that makes a system difficult to analyze. It is the number of interdependent, random activities. The effect of simple interdependencies is easy to predict if there is no variability in the activities. Determining the flow rate for a system consisting of 10 processing activities is very straightforward if all activity times are constant and activities are never interrupted. Likewise, random activities that operate independently of each other are usually easy to analyze. For example, 10 machines operating in isolation from each other can be expected to produce at a rate that is based on the average cycle time of each machine less any anticipated downtime. It is the combination of interdependencies and random behavior that really produces the unpredictable results. Simpler analytical methods such as mathematical calculations and spreadsheet software become less adequate as the number of activities that are both interdependent and random increases. For this reason, simulation is primarily suited to systems involving both interdependencies and variability.
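This effect can be demonstrated numerically. In the hypothetical two-station line below (plain Python, unlimited buffer between the stations), a part is released every 1.25 minutes and each station averages 1.0 minute of work. With constant times the average flow time is exactly 2.0 minutes and no queue ever forms; giving the same mean times an exponential distribution inflates the flow time several-fold, purely because variability now interacts with the station-to-station dependency.

```python
import random

def average_flow_time(n_parts, interarrival, service_1, service_2):
    """Two stations in series with unlimited intermediate buffer.
    Departure times follow the standard tandem-queue recursion."""
    d1 = d2 = 0.0          # last departure time from each station
    total_flow = 0.0
    for i in range(n_parts):
        arrival = i * interarrival
        d1 = max(arrival, d1) + service_1[i]   # leave station 1
        d2 = max(d1, d2) + service_2[i]        # leave station 2
        total_flow += d2 - arrival             # time this part spent in system
    return total_flow / n_parts

rng = random.Random(7)
n = 50_000

# Case 1: constant 1.0-minute service times -- fully predictable
const = [1.0] * n
const_flow = average_flow_time(n, 1.25, const, const)

# Case 2: same mean service times, but exponentially distributed
s1 = [rng.expovariate(1.0) for _ in range(n)]
s2 = [rng.expovariate(1.0) for _ in range(n)]
random_flow = average_flow_time(n, 1.25, s1, s2)

print(round(const_flow, 2), round(random_flow, 1))   # 2.0 versus a much larger value
```

Nothing about either station in isolation predicts the combined queueing delay in the second case, which is exactly why spreadsheet-style averages fall short for interdependent, variable systems.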
The cost impact of the decision should be greater than the cost of doing the simulation. Sometimes the impact of the decision itself is so insignificant that it doesn't warrant the time and effort to conduct a simulation. Suppose, for example, you are trying to decide whether a worker should repair rejects as they occur or wait until four or five accumulate before making repairs. If you are certain that the next downstream activity is relatively insensitive to whether repairs are done sooner rather than later, the decision becomes inconsequential and simulation is a wasted effort.
The cost to experiment on the actual system should be greater than the cost of simulation. While simulation avoids the time delay and cost associated with experimenting on the real system, in some situations it may actually be quicker and more economical to experiment on the real system. For example, the decision in a customer mailing process of whether to seal envelopes before or after they are addressed can easily be made by simply trying each method and comparing the results. The rule of thumb here is that if a question can be answered through direct experimentation quickly, inexpensively, and with minimal impact to the current operation, then don't use simulation. Experimenting on the actual system also eliminates some of the drawbacks associated with simulation, such as proving model validity.
There may be other situations where simulation is appropriate independent of the criteria just listed (see Banks and Gibson 1997). This is certainly true in the case of models built purely for visualization purposes. If you are trying to sell a system design or simply communicate how a system works, a realistic animation created using simulation can be very useful, even though nonbeneficial from an analysis point of view.

1.7 Qualifications for Doing Simulation


Many individuals are reluctant to use simulation because they feel unqualified. Certainly some training is required to use simulation, but it doesn't mean that only statisticians or operations research specialists can learn how to use it. Decision support tools are always more effective when they involve the decision maker, especially when the decision maker is also the domain expert or person who is most familiar with the design and operation of the system. The process owner or manager, for example, is usually intimately familiar with the intricacies and idiosyncrasies of the system and is in the best position to know what elements to include in the model and be able to recommend alternative design solutions. When performing a simulation, often improvements suggest themselves in the very activity of building the model that the decision maker might never discover if someone else is doing the modeling. This reinforces the argument that the decision maker should be heavily involved in, if not actually conducting, the simulation project.
To make simulation more accessible to nonsimulation experts, products have been developed that can be used at a basic level with very little training. Unfortunately, there is always a potential danger that a tool will be used in a way that exceeds one's skill level. While simulation continues to become more user-friendly, this does not absolve the user from acquiring the needed skills to make intelligent use of it. Many aspects of simulation will continue to require some training. Hoover and Perry (1989) note, "The subtleties and nuances of model validation and output analysis have not yet been reduced to such a level of rote that they can be completely embodied in simulation software."
Modelers should be aware of their own inabilities in dealing with the statistical issues associated with simulation. Such awareness, however, should not prevent one from using simulation within the realm of one's expertise. There are both a basic as well as an advanced level at which simulation can be beneficially used. Rough-cut modeling to gain fundamental insights, for example, can be achieved with only a rudimentary understanding of simulation. One need not have extensive simulation training to go after the low-hanging fruit. Simulation follows the 80-20 rule, where 80 percent of the benefit can be obtained from knowing only 20 percent of the science involved (just make sure you know the right 20 percent). It isn't until more precise analysis is required that additional statistical training and knowledge of experimental design are needed.
To reap the greatest benefits from simulation, a certain degree of knowledge and skill in the following areas is useful:
Project management.
Communication.
Systems engineering.
Statistical analysis and design of experiments.
Modeling principles and concepts.
Basic programming and computer skills.
Training on one or more simulation products.
Familiarity with the system being investigated.

Experience has shown that some people learn simulation more rapidly and become more adept at it than others. People who are good abstract thinkers yet also pay close attention to detail seem to be the best suited for doing simulation. Such individuals are able to see the forest while still keeping an eye on the trees (these are people who tend to be good at putting together 1,000-piece puzzles). They are able to quickly scope a project, gather the pertinent data, and get a useful model up and running without lots of starts and stops. A good modeler is somewhat of a sleuth, eager yet methodical and discriminating in piecing together all of the evidence that will help put the model pieces together.
If short on time, talent, resources, or interest, the decision maker need not
despair. Plenty of consultants who are professionally trained and experienced
can provide simulation services. A competitive bid will help get the best price,
but one should be sure that the individual assigned to the project has good
credentials. If the use of simulation is only occasional, relying on a consultant
may be the preferred approach.

1.8 Economic Justification of Simulation


Cost is always an important issue when considering the use of any software tool,
and simulation is no exception. Simulation should not be used if the cost
exceeds the expected benefits. This means that both the costs and the benefits
should be carefully assessed. The use of simulation is often prematurely
dismissed due to the failure to recognize the potential benefits and savings it can
produce. Much of the reluctance in using simulation stems from the mistaken
notion that simulation is costly and very time consuming. This perception is
shortsighted and ignores the fact that in the long run simulation usually saves
much more time and cost than it consumes. It is true that the initial
investment, including training and start-up costs, may be between $10,000
and $30,000 (simulation products themselves generally range between $1,000
and $20,000). However, this cost is often recovered after the first one or two
projects. The ongoing expense of using simulation for individual projects is
estimated to be between 1 and 3 percent of the total project cost (Glenney and
Mackulak 1985). With respect to the time commitment involved in doing
simulation, much of the effort that goes into building the model is in arriving at
a clear definition of how the system operates, which needs to be done anyway.
With the advanced modeling tools that are now available, the actual model
development and running of simulations take only a small fraction (often less
than 5 percent) of the overall system design time.


Savings from simulation are realized by identifying and eliminating
problems and inefficiencies that would have gone unnoticed until system
implementation. Cost is also reduced by eliminating overdesign and removing
excessive safety factors that are added when performance projections are
uncertain. By identifying and eliminating unnecessary capital investments, and
discovering and correcting operating inefficiencies, it is not uncommon for
companies to report hundreds of thousands of dollars in savings on a single
project through the use of simulation. The return on investment (ROI) for
simulation often exceeds 1,000 percent, with payback periods frequently being
only a few months or the time it takes to complete a simulation project.
One of the difficulties in developing an economic justification for simulation
is the fact that it is usually not known in advance how much savings will be
realized until it is actually used. Most applications in which simulation has been
used have resulted in savings that, had the savings been known in advance,
would have looked very good in an ROI or payback analysis.
One way to assess in advance the economic benefit of simulation is to
assess the risk of making poor design and operational decisions. One need only
ask what the potential cost would be if a misjudgment in systems planning
were to occur. Suppose, for example, that a decision is made to add another
machine to solve a capacity problem in a production or service system. What are
the cost and probability associated with this being the wrong decision? If the
cost associated with a wrong decision is $100,000 and the decision maker is
only 70 percent confident that the decision is correct, then there is a 30 percent
chance of incurring a cost of $100,000. This results in a probable cost of
$30,000 (0.3 x $100,000). Using this approach, many decision makers recognize
that they can't afford not to use simulation because the risk associated with
making the wrong decision is too high.
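The risk calculation above can be written out as a one-line function (a sketch in Python; the function name is our own, the figures are those from the example):

```python
# Probable (expected) cost of a wrong decision, per the example above:
# probability of being wrong times the cost of being wrong.

def expected_loss(cost_if_wrong: float, confidence: float) -> float:
    """Return the probable cost given the decision maker's confidence level."""
    return (1.0 - confidence) * cost_if_wrong

# 70 percent confident, $100,000 at stake: $30,000 probable cost
print(f"${expected_loss(100_000, 0.70):,.0f}")  # prints $30,000
```

Framed this way, simulation is justified whenever its cost is less than the probable cost of the decisions it would prevent.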
Tying the benefits of simulation to management and organizational goals
also provides justification for its use. For example, a company committed to
continuous improvement or, more specifically, to lead time or cost
reduction can be sold on simulation if it can be shown to be historically
effective in these areas. Simulation has gained the reputation as a best
practice for helping companies achieve organizational goals. Companies that
profess to be serious about performance improvement will invest in
simulation if they believe it can help them achieve their goals.
The real savings from simulation come from allowing designers to make
mistakes and work out design errors on the model rather than on the actual
system. The concept of reducing costs through working out problems in the design
phase rather than after a system has been implemented is best illustrated by the
rule of tens. This principle states that the cost to correct a problem increases by a
factor of 10 for every design stage through which it passes without being
detected (see Figure 1.4).
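The escalation is easy to tabulate (a sketch; the stage names follow Figure 1.4, while the $100 base cost is an illustrative assumption, not a figure from the text):

```python
# Rule of tens: the cost to correct a problem grows tenfold for every
# design stage it passes through undetected. The $100 base cost is
# hypothetical; the stages follow Figure 1.4.

BASE_COST = 100
stages = ["design", "installation", "operation"]

for i, stage in enumerate(stages):
    print(f"caught during {stage}: ${BASE_COST * 10**i:,}")
# caught during design: $100
# caught during installation: $1,000
# caught during operation: $10,000
```

A $100 fix on paper becomes a $10,000 fix on the shop floor, which is why catching problems in the model pays off so disproportionately.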
Simulation helps avoid many of the downstream costs associated with poor
decisions that are made up front. Figure 1.5 illustrates how the cumulative cost
resulting from systems designed using simulation can compare with the cost of
designing and operating systems without the use of simulation. Note that while

[FIGURE 1.4: Cost of making changes at subsequent stages of system development. Cost rises steeply across the design, installation, and operation stages.]

[FIGURE 1.5: Comparison of cumulative system costs with and without simulation. Over the design, implementation, and operation phases, the cost-with-simulation curve starts slightly higher but ends well below the cost-without-simulation curve.]

the short-term cost may be slightly higher due to the added labor and software
costs associated with simulation, the long-term costs associated with capital
investments and system operation are considerably lower due to better
efficiencies realized through simulation. Dismissing the use of simulation on
the basis of sticker price is myopic and shows a lack of understanding of the
long-term savings that come from having well-designed, efficiently operating
systems.
Many examples can be cited to show how simulation has been used to avoid
costly errors in the start-up of a new system. Simulation prevented an
unnecessary expenditure when a Fortune 500 company was designing a facility
for producing and storing subassemblies and needed to determine the number of
containers required for holding the subassemblies. It was initially felt that
3,000 containers


were needed until a simulation study showed that throughput did not improve
significantly when the number of containers was increased from 2,250 to
3,000. By purchasing 2,250 containers instead of 3,000, a savings of $528,375
was expected in the first year, with annual savings thereafter of over $200,000
due to the savings in floor space and storage resulting from having 750 fewer
containers (Law and McComas 1988).
Even if dramatic savings are not realized each time a model is built,
simulation at least inspires confidence that a particular system design is capable
of meeting required performance objectives and thus minimizes the risk often
associated with new start-ups. The economic benefit associated with instilling
confidence was evidenced when an entrepreneur, who was attempting to secure
bank financing to start a blanket factory, used a simulation model to show the
feasibility of the proposed factory. Based on the processing times and
equipment lists supplied by industry experts, the model showed that the
output projections in the business plan were well within the capability of the
proposed facility. Although unfamiliar with the blanket business, bank officials
felt more secure in agreeing to support the venture (Bateman et al. 1997).
Often simulation can help improve productivity by exposing ways of
making better use of existing assets. By looking at a system holistically,
long-standing problems such as bottlenecks, redundancies, and inefficiencies that
previously went unnoticed start to become more apparent and can be eliminated.
The trick is to find waste, or muda, advises Shingo; after all, the most
damaging kind of waste is "the waste we don't recognize" (Shingo 1992).
Consider the following actual examples where simulation helped uncover and
eliminate wasteful practices:
GE Nuclear Energy was seeking ways to improve productivity without
investing large amounts of capital. Through the use of simulation, the
company was able to increase the output of highly specialized reactor
parts by 80 percent. The cycle time required for production of each part
was reduced by an average of 50 percent. These results were obtained
by running a series of models, each one solving production problems
highlighted by the previous model (Bateman et al. 1997).
A large manufacturing company with stamping plants located throughout
the world produced stamped aluminum and brass parts on order according
to customer specications. Each plant had from 20 to 50 stamping presses
that were utilized anywhere from 20 to 85 percent. A simulation study
was conducted to experiment with possible ways of increasing capacity
utilization. As a result of the study, machine utilization improved from an
average of 37 to 60 percent (Hancock, Dissen, and Merten 1977).
A diagnostic radiology department in a community hospital was
modeled to evaluate patient and staff scheduling, and to assist in
expansion planning over the next five years. Analysis using the
simulation model enabled improvements to be discovered in operating
procedures that precluded the necessity for any major expansions in
department size (Perry and Baum 1976).


In each of these examples, significant productivity improvements were
realized without the need for making major investments. The improvements came
through finding ways to operate more efficiently and utilize existing resources
more effectively. These capacity improvement opportunities were brought to
light through the use of simulation.

1.9 Sources of Information on Simulation


Simulation is a rapidly growing technology. While the basic science and theory
remain the same, new and better software is continually being developed to
make simulation more powerful and easier to use. It will require ongoing
education for those using simulation to stay abreast of these new developments.
There are many sources of information to which one can turn to learn the latest
developments in simulation technology. Some of the sources that are available
include
Conferences and workshops sponsored by vendors and
professional societies (such as the Winter Simulation Conference and
the IIE Conference).
Professional magazines and journals (IIE Solutions, International
Journal of Modeling and Simulation, and the like).
Websites of vendors and professional societies ([Link],
[Link], and so on).
Demonstrations and tutorials provided by vendors (like those on the
ProModel CD).
Textbooks (like this one).

1.10 How to Use This Book


This book is divided into three parts. Part I contains chapters describing the
science and practice of simulation. The emphasis is deliberately oriented more
toward the practice than the science. Simulation is a powerful decision support
tool that has a broad range of applications. While a fundamental understanding
of how simulation works is presented, the aim has been to focus more on how to
use simulation to solve real-world problems.

Part II contains ProModel lab exercises that help develop simulation skills.
ProModel is a simulation package designed specifically for ease of use, yet it
provides the flexibility to model any discrete event or continuous flow
process. It is similar to other simulation products in that it provides a set of
basic modeling constructs and a language for defining the logical decisions that
are made in a system. Basic modeling objects in ProModel include entities
(the objects being processed), locations (the places where processing occurs),
resources (the agents used to process the entities), and paths (the course of
travel for entities and resources in moving between locations such as aisles
or conveyors). Logical


behavior such as the way entities arrive and their routings can be defined with
little, if any, programming using the data entry tables that are provided.
ProModel is used by thousands of professionals in manufacturing and
service-related industries and is taught in hundreds of institutions of higher learning.
Part III contains case study assignments that can be used for student
projects to apply the theory they have learned from Part I and to try out the skills
they have acquired from doing the lab exercises (Part II). It is recommended
that students be assigned at least one simulation project during the course.
Preferably this is a project performed for a nearby company or institution so it
will be meaningful. If such a project cannot be found, or as an additional
practice exercise, the case studies provided should be useful. Student projects
should be selected early in the course so that data gathering can get started and
the project completed within the allotted time. The chapters in Part I are
sequenced to parallel an actual simulation project.

1.11 Summary
Businesses today face the challenge of quickly designing and implementing
complex production and service systems that are capable of meeting
growing demands for quality, delivery, affordability, and service. With recent
advances in computing and software technology, simulation tools are now
available to help meet this challenge. Simulation is a powerful technology that
is being used with increasing frequency to improve system performance by
providing a way to make better design and management decisions. When used
properly, simulation can reduce the risks associated with starting up a new
operation or making improvements to existing operations.
Because simulation accounts for interdependencies and variability, it
provides insights that cannot be obtained any other way. Where important system
decisions are being made of an operational nature, simulation is an invaluable
decision-making tool. Its usefulness increases as variability and interdependency
increase and the importance of the decision becomes greater.
Lastly, simulation actually makes designing systems fun! Not only can a
designer try out new design concepts to see what works best, but the visualization
makes it take on a realism that is like watching an actual system in operation.
Through simulation, decision makers can play what-if games with a new system
or modified process before it actually gets implemented. This engaging process
stimulates creative thinking and results in good design decisions.

1.12 Review Questions
1. Define simulation.
2. What reasons are there for the increased popularity of computer
simulation?


3. What are two specific questions that simulation might help answer in a
bank? In a manufacturing facility? In a dental office?
4. What are three advantages that simulation has over alternative
approaches to systems design?
5. Does simulation itself optimize a system design? Explain.
6. How does simulation follow the scientific method?
7. A restaurant gets extremely busy during lunch (11:00 A.M. to 2:00 P.M.)
and is trying to decide whether it should increase the number of waitresses
from two to three. What considerations would you look at to determine
whether simulation should be used to make this decision?
8. How would you develop an economic justification for using simulation?
9. Is a simulation exercise wasted if it exposes no problems in a system
design? Explain.
10. A simulation run was made showing that a modeled factory could produce
130 parts per hour. What information would you want to know about the
simulation study before placing any confidence in the results?
11. A PC board manufacturer has high work-in-process (WIP) inventories, yet
machines and equipment seem underutilized. How could simulation help
solve this problem?
12. How important is a statistical background for doing simulation?
13. How can a programming background be useful in doing simulation?
14. Why are good project management and communication skills important in
simulation?
15. Why should the process owner be heavily involved in a simulation
project?
16. For which of the following problems would simulation likely be useful?
a. Increasing the throughput of a production line.
b. Increasing the pace of a worker on an assembly line.
c. Decreasing the time that patrons at an amusement park spend
waiting in line.
d. Determining the percentage defective from a particular machine.
e. Determining where to place inspection points in a process.
f. Finding the most efficient way to fill out an order form.

References

Auden, Wystan Hugh, and L. Kronenberger. The Faber Book of Aphorisms. London: Faber
and Faber, 1964.
Banks, J., and R. Gibson. "10 Rules for Determining When Simulation Is Not
Appropriate." IIE Solutions, September 1997, pp. 30-32.
Bateman, Robert E.; Royce O. Bowden; Thomas J. Gogg; Charles R. Harrell; and Jack
R. A. Mott. System Improvement Using Simulation. Utah: PROMODEL Corp., 1997.
Deming, W. E. "Foundation for Management of Quality in the Western World." Paper read
at a meeting of the Institute of Management Sciences, Osaka, Japan, 24 July 1989.
Glenney, Neil E., and Gerald T. Mackulak. "Modeling & Simulation Provide Key to CIM
Implementation Philosophy." Industrial Engineering, May 1985.
Hancock, Walton; R. Dissen; and A. Merten. "An Example of Simulation to Improve
Plant Productivity." AIIE Transactions, March 1977, pp. 2-10.
Harrell, Charles R., and Donald Hicks. "Simulation Software Component Architecture
for Simulation-Based Enterprise Applications." In Proceedings of the 1998 Winter
Simulation Conference, ed. D. J. Medeiros, E. F. Watson, J. S. Carson, and
M. S. Manivannan, pp. 1717-21. Piscataway, NJ: Institute of Electrical and
Electronics Engineers.
Harrington, H. James. Business Process Improvement. New York: McGraw-Hill, 1991.
Hoover, Stewart V., and Ronald F. Perry. Simulation: A Problem-Solving Approach.
Reading, MA: Addison-Wesley, 1989.
Law, A. M., and M. G. McComas. "How Simulation Pays Off." Manufacturing
Engineering, February 1988, pp. 37-39.
Mott, Jack, and Kerim Tumay. "Developing a Strategy for Justifying Simulation."
Industrial Engineering, July 1992, pp. 38-42.
Oxford American Dictionary. New York: Oxford University Press, 1980. Compiled by
Eugene Ehrlich et al.
Perry, R. F., and R. F. Baum. "Resource Allocation and Scheduling for a Radiology
Department." In Cost Control in Hospitals. Ann Arbor, MI: Health Administration
Press, 1976.
Rohrer, Matt, and Jerry Banks. "Required Skills of a Simulation Analyst." IIE Solutions,
May 1998, pp. 7-23.
Schriber, T. J. "The Nature and Role of Simulation in the Design of Manufacturing
Systems." In Simulation in CIM and Artificial Intelligence Techniques, ed. J. Retti and
K. E. Wichmann. San Diego, CA: Society for Computer Simulation, 1987, pp. 5-8.
Shannon, Robert E. "Introduction to the Art and Science of Simulation." In
Proceedings of the 1998 Winter Simulation Conference, ed. D. J. Medeiros,
E. F. Watson, J. S. Carson, and M. S. Manivannan, pp. 7-14. Piscataway, NJ:
Institute of Electrical and Electronics Engineers.
Shingo, Shigeo. The Shingo Production Management System: Improving Process
Functions. Trans. Andrew P. Dillon. Cambridge, MA: Productivity Press, 1992.
Solberg, James. In Design and Analysis of Integrated Manufacturing Systems, ed.
W. Dale Compton. Washington, D.C.: National Academy Press, 1988, p. 4.
The Wall Street Journal. "United 747's Near Miss Sparks a Widespread Review of Pilot
Skills." March 19, 1999, p. A1.


SYSTEM DYNAMICS

A fool with a tool is still a fool.


Unknown

2.1 Introduction
Knowing how to do simulation doesn't make someone a good systems designer
any more than knowing how to use a CAD system makes one a good product
designer. Simulation is a tool that is useful only if one understands the nature
of the problem to be solved. It is designed to help solve systemic problems that
are operational in nature. Simulation exercises fail to produce useful results
more often because of a lack of understanding of system dynamics than a lack
of knowing how to use the simulation software. The challenge is in
understanding how the system operates, knowing what you want to achieve
with the system, and being able to identify key leverage points for best
achieving desired objectives. To illustrate the nature of this challenge, consider
the following actual scenario:
The pipe mill for the XYZ Steel Corporation was an important profit center, turning
steel slabs selling for under $200/ton into a product with virtually unlimited demand
selling for well over $450/ton. The mill took coils of steel of the proper thickness
and width through a series of machines that trimmed the edges, bent the steel
into a cylinder, welded the seam, and cut the resulting pipe into appropriate lengths,
all on a continuously running line. The line was even designed to weld the end of
one coil to the beginning of the next one on the fly, allowing the line to run
continually for days on end.
Unfortunately the mill was able to run only about 50 percent of its theoretical
capacity over the long term, costing the company tens of millions of dollars a year in
lost revenue. In an effort to improve the mill's productivity, management studied
each step in the process. It was fairly easy to find the slowest step in the line, but
additional study showed that only a small percentage of lost production was due to
problems at this bottleneck operation. Sometimes a step upstream from the
bottleneck would


have a problem, causing the bottleneck to run out of work, or a downstream step
would go down temporarily, causing work to back up and stop the bottleneck.
Sometimes the bottleneck would get so far behind that there was no place to put
incoming, newly made pipe. In this case the workers would stop the entire
pipe-making process until the bottleneck was able to catch up. Often the bottleneck
would then be idle waiting until the newly started line was functioning properly
again and the new pipe had a chance to reach it. Sometimes problems at the
bottleneck were actually caused by improper work at a previous location.
In short, there was no single cause for the poor productivity seen at this plant.
Rather, several separate causes all contributed to the problem in complex ways.
Management was at a loss to know which of several possible improvements
(additional or faster capacity at the bottleneck operation, additional storage space
between stations, better rules for when to shut down and start up the pipe-forming
section of the mill, better quality control, or better training at certain critical
locations) would have the most impact for the least cost. Yet the poor performance
of the mill was costing enormous amounts of money. Management was under
pressure to do something, but what should it be?

This example illustrates the nature and difficulty of the decisions that an
operations manager faces. Managers need to make decisions that are the "best"
in some sense. To do so, however, requires that they have clearly defined goals
and understand the system well enough to identify cause-and-effect
relationships.

While every system is different, just as every product design is different,
the basic elements and types of relationships are the same. Knowing how the
elements of a system interact and how overall performance can be improved are
essential to the effective use of simulation. This chapter reviews basic system
dynamics and answers the following questions:

What is a system?
What are the elements of a system?
What makes systems so complex?
What are useful system metrics?
What is a systems approach to systems planning?
How do traditional systems analysis techniques compare with simulation?

2.2 System Definition
We live in a society that is composed of complex, human-made systems that
we depend on for our safety, convenience, and livelihood. Routinely we rely on
transportation, health care, production, and distribution systems to provide
needed goods and services. Furthermore, we place high demands on the quality,
convenience, timeliness, and cost of the goods and services that are provided
by these systems. Remember the last time you were caught in a traffic jam, or
waited for what seemed like an eternity in a restaurant or doctor's office?
Contrast that experience with the satisfaction that comes when you find a
store that sells quality merchandise at discount prices or when you locate a
health care organization that


provides prompt and professional service. The difference is between a system
that has been well designed and operates smoothly, and one that is poorly
planned and managed.
A system, as used here, is defined as a collection of elements that function
together to achieve a desired goal (Blanchard 1991). Key points in this definition
include the fact that (1) a system consists of multiple elements, (2) these
elements are interrelated and work in cooperation, and (3) a system exists for
the purpose of achieving specific objectives. Examples of systems are traffic
systems, political systems, economic systems, manufacturing systems, and
service systems. Our main focus will be on manufacturing and service systems
that process materials, information, and people.
Manufacturing systems can be small job shops and machining cells or large
production facilities and assembly lines. Warehousing and distribution as well as
entire supply chain systems will be included in our discussions of manufacturing
systems. Service systems cover a wide variety of systems including health care
facilities, call centers, amusement parks, public transportation systems,
restaurants, banks, and so forth.
Both manufacturing and service systems may be termed processing
systems because they process items through a series of activities. In a
manufacturing system, raw materials are transformed into finished products.
For example, a bicycle manufacturer starts with tube stock that is then cut,
welded, and painted to produce bicycle frames. In service systems, customers
enter with some service need and depart as serviced (and, we hope, satisfied)
customers. In a hospital emergency room, for example, nurses, doctors, and
other staff personnel admit and treat incoming patients who may undergo tests
and possibly even surgical procedures before finally being released.

Processing systems are artificial (they are human-made), dynamic (elements
interact over time), and usually stochastic (they exhibit random behavior).

2.3 System Elements
From a simulation perspective, a system can be said to consist of entities,
activities, resources, and controls (see Figure 2.1). These elements define the
who, what, where, when, and how of entity processing. This model for
describing a

[FIGURE 2.1: Elements of a system. Incoming entities enter the system, where activities, resources, and controls govern their processing, and leave as outgoing entities.]


system corresponds closely to the well-established ICAM definition (IDEF)
process model developed by the defense industry (ICAM stands for an early Air
Force program in integrated computer-aided manufacturing). The IDEF
modeling paradigm views a system as consisting of inputs and outputs (that is,
entities), activities, mechanisms (that is, resources), and controls.

2.3.1 Entities
Entities are the items processed through the system such as products, customers,
and documents. Different entities may have unique characteristics such as cost,
shape, priority, quality, or condition. Entities may be further subdivided into the
following types:
Human or animate (customers, patients, etc.).
Inanimate (parts, documents, bins, etc.).
Intangible (calls, electronic mail, etc.).
For most manufacturing and service systems, the entities are discrete items.
This is the case for discrete part manufacturing and is certainly the case for
nearly all service systems that process customers, documents, and others. For
some production systems, called continuous systems, a nondiscrete substance
is processed rather than discrete entities. Examples of continuous systems are oil
refineries and paper mills.

2.3.2 Activities
Activities are the tasks performed in the system that are either directly or
indirectly involved in the processing of entities. Examples of activities include
servicing a customer, cutting a part on a machine, or repairing a piece of
equipment. Activities usually consume time and often involve the use of resources.
Activities may be classified as
Entity processing (check-in, treatment, inspection, fabrication, etc.).
Entity and resource movement (forklift travel, riding in an elevator, etc.).
Resource adjustments, maintenance, and repairs (machine setups, copy
machine repair, etc.).

2.3.3 Resources
Resources are the means by which activities are performed. They provide the
supporting facilities, equipment, and personnel for carrying out activities. While
resources facilitate entity processing, inadequate resources can constrain
processing by limiting the rate at which processing can take place.
Resources have characteristics such as capacity, speed, cycle time, and
reliability. Like entities, resources can be categorized as
Human or animate (operators, doctors, maintenance personnel, etc.).
Inanimate (equipment, tooling, floor space, etc.).
Intangible (information, electrical power, etc.).


Resources can also be classified as being dedicated or shared, permanent or
consumable, and mobile or stationary.

2.3.4 Controls
Controls dictate how, when, and where activities are performed. Controls impose
order on the system. At the highest level, controls consist of schedules, plans,
and policies. At the lowest level, controls take the form of written procedures
and machine control logic. At all levels, controls provide the information and
decision logic for how the system should operate. Examples of controls include
Routing sequences.
Production plans.
Work schedules.
Task prioritization.
Control software.
Instruction sheets.
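The four elements can be made concrete with a small sketch (our own Python illustration, not ProModel constructs; the class and field names are assumptions), using the bicycle-frame example from earlier in the chapter:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Entity:        # the items processed: products, customers, documents
    name: str
    attributes: dict = field(default_factory=dict)

@dataclass
class Resource:      # the means by which activities are performed
    name: str
    capacity: int = 1

@dataclass
class Activity:      # a task that consumes time and may use a resource
    name: str
    duration: float  # minutes
    resource: Optional[Resource] = None

# A routing sequence is the simplest example of a control: it dictates
# where and in what order activities are performed on an entity.
routing = [
    Activity("cut", 2.0, Resource("saw")),
    Activity("weld", 5.0, Resource("welder")),
    Activity("paint", 3.0, Resource("paint booth")),
]

frame = Entity("bicycle frame")
total = sum(a.duration for a in routing)
print(f"{frame.name}: {total} minutes of processing")  # 10.0 minutes
```

Note that the control (the routing list) is data, not code: changing how the system operates means changing the routing, not the element definitions, which mirrors how simulation packages separate model logic from model structure.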

2.4 System Complexity


Elements of a system operate in concert with one another in ways that often
result in complex interactions. The word complex comes from the Latin
complexus, meaning entwined or connected together. Unfortunately, unaided
human intuition is not very good at analyzing and understanding complex
systems. Economist Herbert Simon called this inability of the human mind to
grasp real-world complexity the principle of bounded rationality. This
principle states that "the capacity of the human mind for formulating and
solving complex problems is very small compared with the size of the problem
whose solution is required for objectively rational behavior in the real world, or
even for a reasonable approximation to such objective rationality" (Simon
1957).

Bounded rationality: our limited ability to grasp real-world complexity.

While the sheer number of elements in a system can stagger the mind (the
number of different entities, activities, resources, and controls can easily exceed
100), the interactions of these elements are what make systems so complex and


difficult to analyze. System complexity is primarily a function of the following
two factors:
1. Interdependencies between elements so that each element affects other
elements.
2. Variability in element behavior that produces uncertainty.

[Diagram: Interdependencies and variability together produce complexity.]

These two factors characterize virtually all human-made systems and make
system behavior difficult to analyze and predict. As shown in Figure 2.2, the
degree of analytical difficulty increases exponentially as the number of
interdependencies and random variables increases.
2.4.1 Interdependencies

[FIGURE 2.2: Analytical difficulty as a function of the number of interdependencies and random variables. The degree of analytical difficulty rises sharply as the number of interdependencies and random variables increases.]
Interdependencies cause the behavior of one element to affect other elements in
the system. For example, if a machine breaks down, repair personnel are put into
action while downstream operations become idle for lack of parts. Upstream
operations may even be forced to shut down due to a logjam in the entity flow
causing a blockage of activities. Another place where this chain reaction or
domino effect manifests itself is in situations where resources are shared
between


Chapter 2 System Dynamics

29

two or more activities. A doctor treating one patient, for example, may be
unable to immediately respond to another patient needing his or her attention.
This delay in response may also set other forces in motion.

It should be clear that the complexity of a system has less to do with the
number of elements in the system than with the number of interdependent
relationships. Even interdependent relationships can vary in degree, causing
more or less impact on overall system behavior. System interdependency may
be either tight or loose depending on how closely elements are linked.
Elements that are tightly coupled have a greater impact on system operation
and performance than elements that are only loosely connected. When an
element such as a worker or machine is delayed in a tightly coupled system, the
impact is immediately felt by other elements in the system and the entire
process may be brought to a screeching halt.
In a loosely coupled system, activities have only a minor, and often delayed,
impact on other elements in the system. Systems guru Peter Senge (1990) notes
that for many systems, "Cause and effect are not closely related in time and
space." Sometimes the distance in time and space between cause-and-effect
relationships becomes quite sizable. If enough reserve inventory has been
stockpiled, a trucker's strike cutting off the delivery of raw materials to a
transmission plant in one part of the world may not affect automobile assembly
in another part of the world for weeks. Cause-and-effect relationships are like a
ripple of water that diminishes in impact as the distance in time and space
increases.
Obviously, the preferred approach to dealing with interdependencies is to
eliminate them altogether. Unfortunately, this is not entirely possible for most
situations and actually defeats the purpose of having systems in the first place.
The whole idea of a system is to achieve a synergy that otherwise would be
unattainable if every component were to function in complete isolation. Several
methods are used to decouple system elements or at least isolate their influence
so disruptions are not felt so easily. These include providing buffer inventories,
implementing redundant or backup measures, and dedicating resources to
single tasks. The downside to these mitigating techniques is that they often lead
to excessive inventories and underutilized resources. The point to be made here
is that interdependencies, though they may be minimized somewhat, are simply
a fact of life and are best dealt with through effective coordination and
management.

2.4.2 Variability
Variability is a characteristic inherent in any system involving humans and
machinery. Uncertainty in supplier deliveries, random equipment failures,
unpredictable absenteeism, and fluctuating demand all combine to create havoc
in planning system operations. Variability compounds the already unpredictable
effect of interdependencies, making systems even more complex and
unpredictable. Variability propagates in a system so that highly variable
outputs from one workstation become highly variable inputs to another
(Hopp and Spearman 2000).


TABLE 2.1 Examples of System Variability

Type of Variability    Examples

Activity times         Operation times, repair times, setup times, move times
Decisions              To accept or reject a part, where to direct a particular customer, which task to perform next
Quantities             Lot sizes, arrival quantities, number of workers absent
Event intervals        Time between arrivals, time between equipment failures
Attributes             Customer preference, part size, skill level

Table 2.1 identifies the types of random variability that are typical of most
manufacturing and service systems.

The tendency in systems planning is to ignore variability and calculate system
capacity and performance based on average values. Many commercial
scheduling packages such as MRP (material requirements planning) software
work this way. Ignoring variability distorts the true picture and leads to
inaccurate performance predictions. Designing systems based on average
requirements is like deciding whether to wear a coat based on the average
annual temperature or prescribing the same eyeglasses for everyone based on
average eyesight. Adults have been known to drown in water that was only
four feet deep, on the average! Wherever variability occurs, an attempt should
be made to describe the nature or pattern of the variability and assess the range
of the impact that variability might have on system performance.
Perhaps the most illustrative example of the impact that variability can have
on system behavior is the simple situation where entities enter into a single
queue to wait for a single server. An example of this might be customers
lining up in front of an ATM. Suppose that the time between customer arrivals
is exponentially distributed with an average time of one minute and that they
take an average time of one minute, exponentially distributed, to transact their
business. In queuing theory, this is called an M/M/1 queuing system. If we
calculate system performance based solely on average time, there will never
be any customers waiting in the queue. Every minute that a customer arrives the
previous customer finishes his or her transaction. Now if we calculate the
number of customers waiting in line, taking into account the variation, we will
discover that the waiting line grows to infinity (the technical term is that the
system "explodes"). Who would guess that in a situation involving only one
interdependent relationship that variation alone would make the difference
between zero items waiting in a queue and an infinite number in the queue?
By all means, variability should be reduced and even eliminated wherever
possible. System planning is much easier if you don't have to contend with it.
Where it is inevitable, however, simulation can help predict the impact it will
have on system performance. Likewise, simulation can help identify the
degree of improvement that can be realized if variability is reduced or even
eliminated. For example, it can tell you how much reduction in overall flow
time and flow time variation can be achieved if operation time variation can be
reduced by, say, 20 percent.

2.5 System Performance Metrics


Metrics are measures used to assess the performance of a system. At the
highest level of an organization or business, metrics measure overall
performance in terms of profits, revenues, costs relative to budget, return on
assets, and so on. These metrics are typically financial in nature and show
bottom-line performance. Unfortunately, such metrics are inherently lagging,
disguise low-level operational performance, and are reported only periodically.
From an operational standpoint, it is more beneficial to track such factors as
time, quality, quantity, efficiency, and utilization. These operational metrics
reflect immediate activity and are directly controllable. They also drive the
higher financially related metrics. Key operational metrics that describe the
effectiveness and efficiency of manufacturing and service systems include the
following:
Flow time: the average time it takes for an item or customer to be
processed through the system. Synonyms include cycle time, throughput
time, and manufacturing lead time. For order fulfillment systems, flow
time may also be viewed as customer response time or turnaround time.
A closely related term in manufacturing is makespan, which is the time
to process a given set of jobs. Flow time can be shortened by reducing
activity times that contribute to flow time such as setup, move, operation,
and inspection time. It can also be reduced by decreasing work-in-process
or average number of entities in the system. Since over 80 percent of
cycle time is often spent waiting in storage or queues, elimination of
buffers tends to produce the greatest reduction in cycle time. Another
solution is to add more resources, but this can be costly.
Utilization: the percentage of scheduled time that personnel, equipment,
and other resources are in productive use. If a resource is not being
utilized, it may be because it is idle, blocked, or down. To increase
productive utilization, you can increase the demand on the resource or
reduce resource count or capacity. It also helps to balance workloads. In a
system with high variability in activity times, it is difficult to achieve high
utilization of resources. Job shops, for example, tend to have low machine
utilization. Increasing utilization for the sake of utilization is not a good
objective. Increasing the utilization of nonbottleneck resources, for
example, often only creates excessive inventories without creating
additional throughput.
Value-added time: the amount of time material, customers, and so forth
spend actually receiving value, where value is defined as anything for
which the customer is willing to pay. From an operational standpoint,
value-added time is considered the same as processing time or time spent
actually undergoing some physical transformation or servicing. Inspection
time and waiting time are considered non-value-added time.
Waiting time: the amount of time that material, customers, and so on
spend waiting to be processed. Waiting time is by far the greatest
component of non-value-added time. Waiting time can be decreased by
reducing the number of items (such as customers or inventory levels) in
the system. Reducing variation and interdependencies in the system can
also reduce waiting times. Additional resources can always be added,
but the trade-off between the cost of adding the resources and the
savings of reduced waiting time needs to be evaluated.
Flow rate: the number of items produced or customers serviced per unit
of time (such as parts or customers per hour). Synonyms include
production rate, processing rate, or throughput rate. Flow rate can be
increased by better management and utilization of resources, especially
the limiting or bottleneck resource. This is done by ensuring that the
bottleneck operation or resource is never starved or blocked. Once system
throughput matches the bottleneck throughput, additional processing or
throughput capacity can be achieved by speeding up the bottleneck
operation, reducing downtimes and setup times at the bottleneck
operation, adding more resources to the bottleneck operation, or
off-loading work from the bottleneck operation.
Inventory or queue levels: the number of items or customers in storage
or waiting areas. It is desirable to keep queue levels to a minimum while
still achieving target throughput and response time requirements. Where
queue levels fluctuate, it is sometimes desirable to control the minimum
or maximum queue level. Queuing occurs when resources are unavailable
when needed. Inventory or queue levels can be controlled either by
balancing flow or by restricting production at nonbottleneck operations.
JIT (just-in-time) production is one way to control inventory levels.
Yield: from a production standpoint, the percentage of products
completed that conform to product specifications as a percentage of the
total number of products that entered the system as raw materials. If 95
out of 100 items are nondefective, the yield is 95 percent. Yield can also
be measured by its complement: reject or scrap rate.
Customer responsiveness: the ability of the system to deliver products in
a timely fashion to minimize customer waiting time. It might be measured
as fill rate, which is the number of customer orders that can be filled
immediately from inventory. In minimizing job lateness, it may be
desirable to minimize the overall late time, minimize the number or
percentage of jobs that are late, or minimize the maximum tardiness of
jobs. In make-to-stock operations, customer responsiveness can be
assured by maintaining adequate inventory levels. In make-to-order,
customer responsiveness is improved by lowering inventory levels so that
cycle times can be reduced.


Variance: the degree of fluctuation that can and often does occur in any
of the preceding metrics. Variance introduces uncertainty, and therefore
risk, in achieving desired performance goals. Manufacturers and service
providers are often interested in reducing variance in delivery and service
times. For example, cycle times and throughput rates are going to have
some variance associated with them. Variance is reduced by controlling
activity times, improving resource reliability, and adhering to schedules.
These metrics can be given for the entire system, or they can be broken down by
individual resource, entity type, or some other characteristic. By relating these
metrics to other factors, additional meaningful metrics can be derived that are
useful for benchmarking or other comparative analysis. Typical relational
metrics include minimum theoretical flow time divided by actual flow time
(flow time efficiency), cost per unit produced (unit cost), annual inventory sold
divided by average inventory (inventory turns or turnover ratio), or units
produced per cost or labor input (productivity).
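These relational metrics are simple ratios, so they can be computed directly once the underlying measurements are available. In the sketch below, every input value is hypothetical and chosen only to illustrate the four ratios named in the text:

```python
# Hypothetical measurements, for illustration only.
actual_flow_time = 40.0            # hours from system entry to exit
theoretical_flow_time = 6.0        # hours of actual processing content
total_cost = 125_000.0             # cost to produce the batch
units_produced = 5_000
annual_inventory_sold = 1_200_000.0
average_inventory = 200_000.0      # same monetary units as inventory sold

flow_time_efficiency = theoretical_flow_time / actual_flow_time
unit_cost = total_cost / units_produced
inventory_turns = annual_inventory_sold / average_inventory
productivity = units_produced / total_cost    # units per unit of cost input

print(flow_time_efficiency, unit_cost, inventory_turns, productivity)
```

A flow time efficiency well below 1.0, as here, reflects the text's point that most cycle time is typically spent waiting rather than being processed.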

2.6 System Variables


Designing a new system or improving an existing system requires more than
simply identifying the elements and performance goals of the system. It
requires an understanding of how system elements affect each other and overall
performance objectives. To comprehend these relationships, you must
understand three types of system variables:
1. Decision variables
2. Response variables
3. State variables

2.6.1 Decision Variables


Decision variables (also called input factors in SimRunner) are sometimes
referred to as the independent variables in an experiment. Changing the values
of a system's independent variables affects the behavior of the system.
Independent variables may be either controllable or uncontrollable depending
on whether the experimenter is able to manipulate them. An example of a
controllable variable is the number of operators to assign to a production line or
whether to work one or two shifts. Controllable variables are called decision
variables because the decision maker (experimenter) controls the values of the
variables. An uncontrollable variable might be the time to service a customer or
the reject rate of an operation. When defining the system, controllable variables
are the information about the system that is more prescriptive than descriptive
(see section 2.9.3).
Obviously, all independent variables in an experiment are ultimately
controllable, but at a cost. The important point here is that some variables are
easier to change than others. When conducting experiments, the final solution is
often based on whether the cost to implement a change produces a higher return
in performance.

2.6.2 Response Variables


Response variables (sometimes called performance or output variables) measure
the performance of the system in response to particular decision variable
settings. A response variable might be the number of entities processed for a
given period, the average utilization of a resource, or any of the other system
performance metrics described in section 2.5.

In an experiment, the response variable is the dependent variable, which
depends on the particular value settings of the independent variables. The
experimenter doesn't manipulate dependent variables, only independent or
decision variables. Obviously, the goal in systems planning is to find the right
values or settings of the decision variables that give the desired response values.

2.6.3 State Variables


State variables indicate the status of the system at any specific point in time.
Examples of state variables are the current number of entities waiting to be
processed or the current status (busy, idle, down) of a particular resource.
Response variables are often summaries of state variable changes over time.
For example, the individual times that a machine is in a busy state can be
summed over a particular period and divided by the total available time to
report the machine utilization for that period.
State variables are dependent variables like response variables in that they
depend on the setting of the independent variables. State variables are often
ignored in experiments since they are not directly controlled like decision
variables and are not of as much interest as the summary behavior reported by
response variables.
Sometimes reference is made to the state of a system as though a system itself
can be in a particular state such as busy or idle. The state of a system actually
consists of "that collection of variables necessary to describe a system at a
particular time, relative to the objectives of the study" (Law and Kelton 2000).
If we study the flow of customers in a bank, for example, the state of the bank
for a given point in time would include the current number of customers in the
bank, the current status of each teller (busy, idle, or whatever), and perhaps the
time that each customer has been in the system thus far.

2.7 System Optimization


Finding the right setting for decision variables that best meets performance
objectives is called optimization. Specifically, optimization seeks the best
combination of decision variable values that either minimizes or maximizes
some objective function such as costs or profits. An objective function is simply
a response variable of the system. A typical objective in an optimization
problem for a manufacturing or service system might be minimizing costs or
maximizing flow rate. For example, we might be interested in finding the
optimum number of personnel for staffing a customer support activity that
minimizes costs yet handles the call volume. In a manufacturing concern, we
might be interested in maximizing the throughput that can be achieved for a
given system configuration. Optimization problems often include constraints,
limits to the values that the decision variables can take on. For example, in
finding the optimum speed of a conveyor such that production cost is
minimized, there would undoubtedly be physical limits to how slow or fast the
conveyor can operate. Constraints can also apply to response variables. An
example of this might be an objective to maximize throughput but subject to the
constraint that average waiting time cannot exceed 15 minutes.
In some instances, we may find ourselves trying to achieve conflicting
objectives. For example, minimizing production or service costs often conflicts
with minimizing waiting costs. In system optimization, one must be careful to
weigh priorities and make sure the right objective function is driving the
decisions. If, for example, the goal is to minimize production or service costs,
the obvious solution is to maintain only a sufficient workforce to meet
processing requirements. Unfortunately, in manufacturing systems this builds
up work-in-process and results in high inventory carrying costs. In service
systems, long queues result in long waiting times, hence dissatisfied customers.
At the other extreme, one might feel that reducing inventory or waiting costs
should be the overriding goal and, therefore, decide to employ more than an
adequate number of resources so that work-in-process or customer waiting time
is virtually eliminated. It should be obvious that there is a point at which the
cost of adding another resource can no longer be justified by the diminishing
incremental savings in waiting costs that are realized. For this reason, it is
generally conceded that a better strategy is to find the right trade-off or balance
between the number of resources and waiting times so that the total cost is
minimized (see Figure 2.3).
FIGURE 2.3 Cost curves showing optimum number of resources to minimize total cost. (Resource costs rise and waiting costs fall as the number of resources increases; total cost is their sum, with its minimum at the optimum number of resources.)


As shown in Figure 2.3, the number of resources at which the sum of the
resource costs and waiting costs is at a minimum is the optimum number of
resources to have. It also becomes the optimum acceptable waiting time.

In systems design, arriving at an optimum system design is not always
realistic, given the almost endless configurations that are sometimes possible
and limited time that is available. From a practical standpoint, the best that can
be expected is a near optimum solution that gets us close enough to our
objective, given the time constraints for making the decision.
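The trade-off in Figure 2.3 can be sketched numerically. The example below uses the standard Erlang C formula for expected waiting time in a multi-server queue; the arrival rate, service rate, and cost figures are hypothetical, and the search simply evaluates total cost at each candidate resource count:

```python
from math import factorial

def erlang_c(c, a):
    """Probability an arriving customer must wait in an M/M/c queue (a = lambda/mu, a < c)."""
    series = sum(a**k / factorial(k) for k in range(c))
    last = a**c / factorial(c) * c / (c - a)
    return last / (series + last)

def total_cost(c, lam, mu, resource_cost, waiting_cost):
    """Hourly resource cost plus the cost of customer-hours spent waiting."""
    wq = erlang_c(c, lam / mu) / (c * mu - lam)   # expected wait per customer
    return c * resource_cost + lam * wq * waiting_cost

lam, mu = 8.0, 3.0          # hypothetical arrival and service rates per hour
candidates = range(3, 12)   # need c * mu > lam for a stable system
best = min(candidates, key=lambda c: total_cost(c, lam, mu, 25.0, 40.0))
print(best)                 # resource count where total cost bottoms out
```

With these numbers, adding a server past the optimum buys almost no further reduction in waiting cost, mirroring the diminishing incremental savings described above.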

2.8 The Systems Approach


Due to departmentalization and specialization, decisions in the real world are
often made without regard to overall system performance. With everyone busy
minding his or her own area of responsibility, often no one is paying attention
to the big picture. One manager in a large manufacturing corporation noted that
as many as 99 percent of the system improvement recommendations made in
his company failed to look at the system holistically. He further estimated that
nearly 80 percent of the suggested changes resulted in no improvement at all,
and many of the suggestions actually hurt overall performance. When
attempting to make system improvements, it is often discovered that localized
changes fail to produce the overall improvement that is desired. Put in technical
language: Achieving a local optimum often results in a global suboptimum. In
simpler terms: It's okay to act locally as long as one is thinking globally. The
elimination of a problem in one area may only uncover, and sometimes even
exacerbate, problems in other areas.
Approaching system design with overall objectives in mind and considering
how each element relates to each other and to the whole is called a systems
or holistic approach to systems design. Because systems are composed of
interdependent elements, it is not possible to accurately predict how a
system will perform simply by examining each system element in isolation
from the whole. To presume otherwise is to take a reductionist approach to
systems design, which focuses on the parts rather than the whole. While
structurally a system may be divisible, functionally it is indivisible and
therefore requires a holistic approach to systems thinking. Kofman and Senge
(1995) observe

The defining characteristic of a system is that it cannot be understood as a function
of its isolated components. First, the behavior of the system doesn't depend on
what each part is doing but on how each part is interacting with the rest. . . . Second,
to understand a system we need to understand how it fits into the larger system of
which it is a part. . . . Third, and most important, what we call the parts need not be
taken as primary. In fact, how we define the parts is fundamentally a matter of
perspective and purpose, not intrinsic in the nature of the real thing we are
looking at.

Whether designing a new system or improving an existing system, it is
important to follow sound design principles that take into account all relevant
variables. The activity of systems design and process improvement, also
called systems


FIGURE 2.4 Four-step iterative approach to systems improvement: identify problems and opportunities; develop alternative solutions; evaluate the solutions; select and implement the best solution.

engineering, has been defined as

The effective application of scientific and engineering efforts to transform an
operational need into a defined system configuration through the top-down
iterative process of requirements definition, functional analysis, synthesis,
optimization, design, test and evaluation (Blanchard 1991).

To state it simply, systems engineering is the process of identifying problems or
other opportunities for improvement, developing alternative solutions,
evaluating the solutions, and selecting and implementing the best solutions (see
Figure 2.4). All of this should be done from a systems point of view.

2.8.1 Identifying Problems and Opportunities


The importance of identifying the most significant problem areas and
recognizing opportunities for improvement cannot be overstated. Performance
standards should be set high in order to look for the greatest improvement
opportunities. Companies making the greatest strides are setting goals of 100
to 500 percent improvement in many areas such as inventory reduction or
customer lead time reduction. Setting high standards pushes people to think
creatively and often results in breakthrough improvements that would otherwise
never be considered. Contrast this way of thinking with one hospital whose
standard for whether a patient had a quality experience was whether the patient
left alive! Such lack of vision will never inspire the level of improvement
needed to meet ever-increasing customer expectations.

2.8.2 Developing Alternative Solutions


We usually begin developing a solution to a problem by understanding the
problem, identifying key variables, and describing important relationships. This
helps identify possible areas of focus and leverage points for applying a
solution. Techniques such as cause-and-effect analysis and Pareto analysis are
useful here. Once a problem or opportunity has been identified and key decision
variables isolated, alternative solutions can be explored. This is where most of
the design and engineering expertise comes into play. Knowledge of best
practices for common types of processes can also be helpful. The designer
should be open to all possible alternative feasible solutions so that the best
possible solutions don't get overlooked.
Generating alternative solutions requires creativity as well as organizational
and engineering skills. Brainstorming sessions, in which designers exhaust every
conceivably possible solution idea, are particularly useful. Designers should use
every stretch of the imagination and not be stifled by conventional solutions
alone. The best ideas come when system planners begin to think innovatively
and break from traditional ways of doing things. Simulation is particularly
helpful in this process in that it encourages thinking in radical new ways.

2.8.3 Evaluating the Solutions


Alternative solutions should be evaluated based on their ability to meet the
criteria established for the evaluation. These criteria often include performance
goals, cost of implementation, impact on the sociotechnical infrastructure, and
consistency with organizational strategies. Many of these criteria are difficult
to measure in absolute terms, although most design options can be easily
assessed in terms of relative merit.

After narrowing the list to two or three of the most promising solutions
using common sense and rough-cut analysis, more precise evaluation
techniques may need to be used. This is where simulation and other formal
analysis tools come into play.

2.8.4 Selecting and Implementing the Best Solution


Often the final selection of what solution to implement is not left to the analyst,
but rather is a management decision. The analyst's role is to present his or her
evaluation in the clearest way possible so that an informed decision can be
made.

Even after a solution is selected, additional modeling and analysis are often
needed for fine-tuning the solution. Implementers should then be careful to
make sure that the system is implemented as designed, documenting reasons for
any modifications.

2.9 Systems Analysis Techniques


While simulation is perhaps the most versatile and powerful systems analysis
tool, other available techniques also can be useful in systems planning. These
alternative techniques are usually computational methods that work well for
simple systems with little interdependency and variability. For more complex
systems,


FIGURE 2.5 Simulation improves performance predictability. (With simulation, system predictability stays high at all levels of complexity; without it, predictability drops as complexity rises from low (call centers, doctor's offices, machining cells) through medium (banks, emergency rooms, production lines) to high (airports, hospitals, factories).)

these techniques still can provide rough estimates but fall short in producing the
insights and accurate answers that simulation provides. Systems implemented
using these techniques usually require some adjustments after implementation
to compensate for inaccurate calculations. For example, if after implementing a
system it is discovered that the number of resources initially calculated is
insufficient to meet processing requirements, additional resources are added.
This adjustment can create extensive delays and costly modifications if special
personnel training or custom equipment is involved. As a precautionary
measure, a safety factor is sometimes added to resource and space calculations
to ensure they are adequate. Overdesigning a system, however, also can be
costly and wasteful.

As system interdependency and variability increase, not only does system
performance decrease, but the ability to accurately predict system performance
decreases as well (Lloyd and Melton 1997). Simulation enables a planner to
accurately predict the expected performance of a system design and ultimately
make better design decisions.
Systems analysis tools, in addition to simulation, include simple calculations,
spreadsheets, operations research techniques (such as linear programming and
queuing theory), and special computerized tools for scheduling, layout, and so
forth. While these tools can provide quick and approximate solutions, they tend
to make oversimplifying assumptions, perform only static calculations, and are
limited to narrow classes of problems. Additionally, they fail to fully account
for interdependencies and variability of complex systems and therefore are not
as accurate as simulation in predicting complex system performance (see
Figure 2.5). They all lack the power, versatility, and visual appeal of simulation.
They do provide quick solutions, however, and for certain situations produce
adequate results. They are important to cover here, not only because they
sometimes provide a good alternative to simulation, but also because they can
complement simulation by providing initial design estimates for input to the
simulation model. They also



can be useful to help validate the results of a simulation by comparing them with
results obtained using an analytic model.

2.9.1 Hand Calculations


Quick-and-dirty, pencil-and-paper sketches and calculations can be remarkably
helpful in understanding basic requirements for a system. Many important
decisions have been made as the result of sketches drawn and calculations
performed on a napkin or the back of an envelope. Some decisions may be so
basic that a quick mental calculation yields the needed results. Most of these
calculations involve simple algebra, such as finding the number of resource
units (such as machines or service agents) to process a particular workload
knowing the capacity per resource unit. For example, if a requirement exists to
process 200 items per hour and the processing capacity of a single resource unit
is 75 work items per hour, three units of the resource, most likely, are going to
be needed.
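The resource-count calculation in the example above is just a ceiling division, using the figures from the text:

```python
from math import ceil

demand = 200              # items per hour to be processed (from the example)
capacity_per_unit = 75    # items per hour one resource unit can handle
units_needed = ceil(demand / capacity_per_unit)
print(units_needed)       # 3 resource units
```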
The obvious drawback to hand calculations is the inability to manually perform
complex calculations or to take into account tens or potentially even hundreds
of complex relationships simultaneously.

2.9.2 Spreadsheets
Spreadsheet software comes in handy when calculations, sometimes involving
hundreds of values, need to be made. Manipulating rows and columns of
numbers on a computer is much easier than doing it on paper, even with a
calculator handy. Spreadsheets can be used to perform rough-cut analysis
such as calculating average throughput or estimating machine requirements.
The drawback to spreadsheet software is the inability (or, at least, limited
ability) to include variability in activity times, arrival rates, and so on, and to
account for the effects of interdependencies.

What-if experiments can be run on spreadsheets based on expected values
(average customer arrivals, average activity times, mean time between
equipment failures) and simple interactions (activity A must be performed
before activity B). This type of spreadsheet simulation can be very useful for
getting rough performance estimates. For some applications with little
variability and component interaction, a spreadsheet simulation may be
adequate. However, calculations based on only average values and
oversimplified interdependencies potentially can be misleading and result in
poor decisions. As one ProModel user reported, "We just completed our final
presentation of a simulation project and successfully saved approximately
$600,000. Our management was prepared to purchase an additional overhead
crane based on spreadsheet analysis. We subsequently built a ProModel
simulation that demonstrated an additional crane will not be necessary. The
simulation also illustrated some potential problems that were not readily
apparent with spreadsheet analysis."
Another weakness of spreadsheet modeling is the fact that all behavior is
assumed to be period-driven rather than event-driven. Perhaps you have tried to
figure out how your bank account balance fluctuated during a particular period
when all you had to go on was your monthly statements. Using ending balances
does not reflect changes as they occurred during the period. You can know the
current state of the system at any point in time only by updating the state
variables of the system each time an event or transaction occurs. When it comes
to dynamic models, spreadsheet simulation suffers from the curse of
dimensionality because the size of the model becomes unmanageable.
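The bank-account analogy can be made concrete: an event-driven model updates the state variable at every transaction, exposing intermediate states that a period-end statement hides. A small sketch with made-up transactions:

```python
# (day, amount) transactions during one statement period; starting balance 100
events = [(1, +500), (10, -700), (20, +400)]

balance = 100
history = []
for day, amount in events:
    balance += amount          # state variable updated at each event
    history.append((day, balance))

print(history)                 # [(1, 600), (10, -100), (20, 300)]
# The month-end statement shows only the ending 300; the event view
# reveals the day-10 overdraft of -100 that period-driven analysis misses.
```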

2.9.3 Operations Research Techniques


Traditional operations research (OR) techniques utilize mathematical models to
solve problems involving simple to moderately complex relationships. These
mathematical models include both deterministic models such as mathematical
programming, routing, or network ows and probabilistic models such as
queuing and decision trees. These OR techniques provide quick, quantitative
answers without going through the guesswork process of trial and error. OR
techniques can be divided into two general classes: prescriptive and descriptive.
Prescriptive Techniques
Prescriptive OR techniques provide an optimum solution to a problem, such as
the optimum amount of resource capacity to minimize costs, or the optimum
product mix that will maximize profits. Examples of prescriptive OR optimization techniques include linear programming and dynamic programming. These
techniques are generally applicable when only a single goal is desired for minimizing or maximizing some objective function, such as maximizing profits or
minimizing costs.
Because optimization techniques are generally limited to optimizing for a
single goal, secondary goals that may also be important get sacrificed. Additionally, these techniques do not allow random variables to be defined as input data,
thereby forcing the analyst to use average process times and arrival rates that
can produce misleading results. They also usually assume that conditions are
constant over the period of study. In contrast, simulation is capable of
analyzing much more complex relationships and time-varying circumstances.
With optimization capabilities now provided in simulation, simulation software
has even taken on a prescriptive role.
Descriptive Techniques
Descriptive techniques such as queuing theory are static analysis techniques that
provide good estimates for basic problems such as determining the expected
average number of entities in a queue or the average waiting times for entities in
a queuing system. Queuing theory is of particular interest from a simulation
perspective because it looks at many of the same system characteristics and
issues that are addressed in simulation.
Queuing theory is essentially the science of waiting lines (in the United
Kingdom, people wait in queues rather than lines). A queuing system consists of
one or more queues and one or more servers (see Figure 2.6). Entities, referred
to in queuing theory as the calling population, enter the queuing system and
either are immediately served if a server is available or wait in a queue until a
server becomes available. Entities may be serviced using one of several
queuing disciplines: first-in, first-out (FIFO); last-in, first-out (LIFO); priority;
and others. The system capacity, or number of entities allowed in the system at
any one time, may be either finite or, as is often the case, infinite. Several
different entity queuing behaviors can be analyzed, such as balking (rejecting
entry), reneging (abandoning the queue), or jockeying (switching queues).
Different interarrival time distributions (such as constant or exponential) may
also be analyzed, coming from either a finite or infinite population. Service
times may also follow one of several distributions such as exponential or
constant.

FIGURE 2.6 Queuing system configuration: arriving entities enter a queue, are processed by a server, and depart.
Kendall (1953) devised a simple system for classifying queuing systems in
the form A/B/s, where A is the type of interarrival distribution, B is the type of
service time distribution, and s is the number of servers. Typical distribution
types for A and B are

M   for a Markovian or exponential distribution
G   for a general distribution
D   for a deterministic or constant value

An M/D/1 queuing system, for example, is a system in which interarrival times
are exponentially distributed, service times are constant, and there is a single
server.
The arrival rate in a queuing system is usually represented by the Greek
letter lambda (λ) and the service rate by the Greek letter mu (μ). The mean
interarrival time then becomes 1/λ and the mean service time is 1/μ. A traffic
intensity factor λ/μ is a parameter used in many of the queuing equations and
is represented by the Greek letter rho (ρ).
Common performance measures of interest in a queuing system are based
on steady-state or long-term expected values and include

L = expected number of entities in the system (number in the queue and in service)
Lq = expected number of entities in the queue (queue length)
W = expected time in the system (flow time)
Wq = expected time in the queue (waiting time)
Pn = probability of exactly n customers in the system (n = 0, 1, . . .)
The M/M/1 system with infinite capacity and a FIFO queue discipline is perhaps
the most basic queuing problem and sufficiently conveys the procedure for analyzing queuing systems and understanding how the analysis is performed. The
equations for calculating the common performance measures in an M/M/1 system are

L = ρ/(1 - ρ) = λ/(μ - λ)

Lq = L - ρ = ρ^2/(1 - ρ)

W = 1/(μ - λ)

Wq = ρ/(μ - λ)

Pn = (1 - ρ)ρ^n    for n = 0, 1, . . .

If either the expected number of entities in the system or the expected waiting
time is known, the other can be calculated easily using Little's law (1961):

L = λW

Little's law can also be applied to the queue length and waiting time:

Lq = λWq
Example: Suppose customers arrive to use an automatic teller machine (ATM) with
an interarrival time of 3 minutes exponentially distributed and spend an average of
2.4 minutes, exponentially distributed, at the machine. What is the expected number
of customers in the system and in the queue? What is the expected waiting time for
customers in the system and in the queue?

λ = 20 per hour
μ = 25 per hour
ρ = λ/μ = 20/25 = .8

Solving for L:

L = λ/(μ - λ) = 20/(25 - 20) = 20/5 = 4

Solving for Lq:

Lq = ρ^2/(1 - ρ) = (.8)^2/(1 - .8) = .64/.2 = 3.2

Solving for W using Little's formula:

W = L/λ = 4/20 = .20 hrs = 12 minutes

Solving for Wq using Little's formula:

Wq = Lq/λ = 3.2/20 = .16 hrs = 9.6 minutes
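The calculations in this example are easy to package as a small function; a sketch of an M/M/1 calculator (rates per hour, function name illustrative):

```python
def mm1_metrics(lam: float, mu: float) -> dict:
    """Steady-state M/M/1 measures; requires utilization rho = lam/mu < 1."""
    rho = lam / mu
    if rho >= 1:
        raise ValueError("utilization must be below 1 for a steady state")
    L = rho / (1 - rho)          # expected number in system
    Lq = rho ** 2 / (1 - rho)    # expected number in queue
    W = L / lam                  # time in system, by Little's law
    Wq = Lq / lam                # time in queue, by Little's law
    return {"rho": rho, "L": L, "Lq": Lq, "W": W, "Wq": Wq}

# ATM example: lam = 20 per hour, mu = 25 per hour
m = mm1_metrics(20, 25)
print(round(m["L"], 2), round(m["Lq"], 2),
      round(m["W"] * 60, 1), round(m["Wq"] * 60, 1))  # 4.0 3.2 12.0 9.6
```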

Descriptive OR techniques such as queuing theory are useful for the most
basic problems, but as systems become even moderately complex, the problems
get very complicated and quickly become mathematically intractable. In
contrast, simulation provides close estimates for even the most complex
systems (assuming the model is valid). In addition, the statistical output of
simulation is not limited to only one or two metrics but instead provides
information on all performance measures. Furthermore, while OR techniques
give only average performance measures, simulation can generate detailed
time-series data and histograms providing a complete picture of performance
over time.

2.9.4 Special Computerized Tools


Many special computerized tools have been developed for forecasting, scheduling, layout, staffing, and so on. These tools are designed to be used for narrowly
focused problems and are extremely effective for the kinds of problems they are
intended to solve. They are usually based on constant input values and rely on
static calculations. The main benefit of special-purpose decision
tools is that they are usually easy to use because they are designed to solve a
specific type of problem.

Chapter 2 System Dynamics


2.10 Summary
An understanding of system dynamics is essential to using any tool for planning
system operations. Manufacturing and service systems consist of interrelated
elements (personnel, equipment, and so forth) that interactively function to produce a specified outcome (an end product, a satisfied customer, and so on).
Systems are made up of entities (the objects being processed), resources (the
personnel, equipment, and facilities used to process the entities), activities
(the process steps), and controls (the rules specifying the who, what, where,
when, and how of entity processing).
The two characteristics of systems that make them so difficult to analyze are
interdependencies and variability. Interdependencies cause the behavior of one
element to affect other elements in the system either directly or indirectly. Variability compounds the effect of interdependencies in the system, making system
behavior nearly impossible to predict without the use of simulation.
The variables of interest in systems analysis are decision, response, and
state variables. Decision variables define how a system works; response
variables indicate how a system performs; and state variables indicate system
conditions at specific points in time. System performance metrics, or response
variables, are generally time, utilization, inventory, quality, or cost related.
Improving system performance requires the correct manipulation of decision
variables. System optimization seeks to find the best overall setting of
decision variable values that maximizes or minimizes a particular response
variable value.
Given the complex nature of system elements and the requirement to make
good design decisions in the shortest time possible, it is evident that simulation
can play a vital role in systems planning. Traditional systems analysis techniques
are effective in providing quick but often rough solutions to dynamic systems
problems. They generally fall short in their ability to deal with the complexity
and dynamically changing conditions in manufacturing and service systems. Simulation is capable of imitating complex systems of nearly any size and to
nearly any level of detail. It gives accurate estimates of multiple performance
metrics and leads designers toward good design decisions.

2.11 Review Questions


1. Why is an understanding of system dynamics important to the use of
simulation?
2. What is a system?
3. What are the elements of a system from a simulation perspective? Give
an example of each.
4. What are two characteristics of systems that make them so complex?
5. What is the difference between a decision variable and a response variable?
6. Identify five decision variables of a manufacturing or service system
that tend to be random.
7. Give two examples of state variables.
8. List three performance metrics that you feel would be important
for a computer assembly line.
9. List three performance metrics you feel would be useful for a
hospital emergency room.
10. Define optimization in terms of decision variables and response
variables.
11. Is maximizing resource utilization a good overriding
performance objective for a manufacturing system? Explain.
12. What is a systems approach to problem solving?
13. How does simulation fit into the overall approach of systems
engineering?
14. In what situations would you use analytical techniques
(like hand calculations or spreadsheet modeling) over
simulation?
15. Assuming you decided to use simulation to determine how
many lift trucks were needed in a distribution center, how might
analytical models be used to complement the simulation study
both before and after?
16. What advantages does simulation have over traditional OR
techniques used in systems analysis?
17. Students come to a professor's office to receive help on a
homework assignment every 10 minutes, exponentially distributed.
The time to help a student is exponentially distributed with a mean
of 7 minutes. What are the expected number of students waiting to
be helped and the average time waiting before being helped? For
what percentage of time is it expected there will be more than two
students waiting to be helped?

References
Blanchard, Benjamin S. System Engineering Management. New York: John Wiley &
Sons, 1991.
Hopp, Wallace J., and M. Spearman. Factory Physics. New York: Irwin/McGraw-Hill,
2000, p. 282.
Kendall, D. G. "Stochastic Processes Occurring in the Theory of Queues and Their
Analysis by the Method of Imbedded Markov Chains." Annals of
Mathematical Statistics 24 (1953), pp. 338–54.
Kofman, Fred, and P. Senge. "Communities of Commitment: The Heart of Learning
Organizations." Sarita Chawla and John Renesch (eds.). Portland, OR: Productivity
Press, 1995.
Law, Averill M., and W. David Kelton. Simulation Modeling and Analysis. New York:
McGraw-Hill, 2000.
Little, J. D. C. "A Proof for the Queuing Formula: L = λW." Operations Research 9, no.
3 (1961), pp. 383–87.
Lloyd, S., and K. Melton. "Using Statistical Process Control to Obtain More Precise
Distribution Fitting Using Distribution Fitting Software." Simulators
International XIV 29, no. 3 (April 1997), pp. 193–98.
Senge, Peter. The Fifth Discipline. New York: Doubleday, 1990.
Simon, Herbert A. Models of Man. New York: John Wiley & Sons, 1957, p. 198.


SIMULATION BASICS

Zeal without knowledge is fire without light.


Dr. Thomas Fuller

3.1 Introduction
Simulation is much more meaningful when we understand what it is actually
doing. Understanding how simulation works helps us to know whether we are
applying it correctly and what the output results mean. Many books have been
written that give thorough and detailed discussions of the science of simulation
(see Banks et al. 2001; Hoover and Perry 1989; Law and Kelton 2000; Pooch
and Wall 1993; Ross 1990; Shannon 1975; Thesen and Travis 1992; and
Widman, Loparo, and Nielsen 1989). This chapter attempts to summarize the
basic technical issues related to simulation that are essential to understand in
order to get the greatest benefit from the tool. The chapter discusses the different
types of simulation and how random behavior is simulated. A spreadsheet
simulation example is given in this chapter to illustrate how various techniques
are combined to simulate the behavior of a common system.

3.2 Types of Simulation


The way simulation works is based largely on the type of simulation used. There
are many ways to categorize simulation. Some of the most common include

• Static or dynamic.
• Stochastic or deterministic.
• Discrete event or continuous.

The type of simulation we focus on in this book can be classified as dynamic,
stochastic, discrete-event simulation. To better understand this classification, we
will look at what the first two characteristics mean in this chapter and focus on
what the third characteristic means in Chapter 4.

3.2.1 Static versus Dynamic Simulation


A static simulation is one that is not based on time. It often involves drawing
random samples to generate a statistical outcome, so it is sometimes called
Monte Carlo simulation. In finance, Monte Carlo simulation is used to select a
portfolio of stocks and bonds. Given a portfolio with different probabilistic
payouts, it is possible to generate an expected yield. One material handling
system supplier developed a static simulation model to calculate the expected
time to travel from one rack location in a storage system to any other rack
location. A random sample of 100 from-to relationships was used to estimate
an average travel time. Had every from-to trip been calculated, a 1,000-location rack would have involved 1,000! calculations.
Dynamic simulation includes the passage of time. It looks at state changes
as they occur over time. A clock mechanism moves forward in time and state
variables are updated as time advances. Dynamic simulation is well suited for
analyzing manufacturing and service systems since they operate over time.
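The rack-travel estimate above is a classic static Monte Carlo calculation: sample random from-to pairs rather than enumerating them all. A minimal sketch (the rack layout and travel-time rule here are invented purely for illustration):

```python
import random

random.seed(42)  # reproducible sample

# Hypothetical rack: 10 aisles x 100 bays; rectilinear travel in seconds
def travel_time(loc1, loc2):
    (a1, b1), (a2, b2) = loc1, loc2
    return 10 * abs(a1 - a2) + abs(b1 - b2)  # each aisle change costs 10 s

def random_location():
    return (random.randrange(10), random.randrange(100))

# Estimate the average travel time from 100 sampled from-to trips
sample = [travel_time(random_location(), random_location()) for _ in range(100)]
print(sum(sample) / len(sample))  # Monte Carlo estimate of mean travel time
```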

3.2.2 Stochastic versus Deterministic Simulation


Simulations in which one or more input variables are random are referred to as
stochastic or probabilistic simulations. A stochastic simulation produces output
that is itself random and therefore gives only one data point of how the system
might behave.
Simulations having no input components that are random are said to be
deterministic. Deterministic simulation models are built the same way as
stochastic models except that they contain no randomness. In a deterministic
simulation, all future states are determined once the input data and initial
state have been defined.
As shown in Figure 3.1, deterministic simulations have constant inputs and
produce constant outputs. Stochastic simulations have random inputs and
produce random outputs. Inputs might include activity times, arrival intervals,
and routing sequences. Outputs include metrics such as average flow time, flow
rate, and resource utilization. Any output impacted by a random input
variable is going to also be a random variable. That is why the random inputs
and random outputs of Figure 3.1(b) are shown as statistical distributions.
A deterministic simulation will always produce the exact same outcome no
matter how many times it is run. In stochastic simulation, several randomized
runs or replications must be made to get an accurate performance estimate
because each run varies statistically. Performance estimates for stochastic
simulations are obtained by calculating the average value of the performance
metric across all of the replications. In contrast, deterministic simulations need
to be run only once to get precise results because the results are always the
same.

FIGURE 3.1 Examples of (a) a deterministic simulation, in which constant inputs produce constant outputs, and (b) a stochastic simulation, in which random inputs produce random outputs.

3.3 Random Behavior


Stochastic systems frequently have time or quantity values that vary within a
given range and according to a specified density, as defined by a probability distribution. Probability distributions are useful for predicting the next time, distance,
quantity, and so forth when these values are random variables. For example, if an
operation time varies between 2.2 minutes and 4.5 minutes, it would be defined
in the model as a probability distribution. Probability distributions are defined
by specifying the type of distribution (normal, exponential, or another type) and
the parameters that describe the shape or density and range of the distribution.
For example, we might describe the time for a check-in operation to be
normally distributed with a mean of 5.2 minutes and a standard deviation of
0.4 minute. During the simulation, values are obtained from this distribution
for successive operation times. The shape and range of time values generated
for this activity will correspond to the parameters used to define the distribution.
When we generate a value from a distribution, we call that value a random
variate.
Probability distributions from which we obtain random variates may be
either discrete (they describe the likelihood of specific values occurring) or
continuous (they describe the likelihood of a value being within a given range).
Figure 3.2 shows graphical examples of a discrete distribution and a continuous
distribution.
A discrete distribution represents a finite or countable number of possible
values. An example of a discrete distribution is the number of items in a lot or
individuals in a group of people. A continuous distribution represents a
continuum of values. An example of a continuous distribution is a machine with
a cycle time that is uniformly distributed between 1.2 minutes and 1.8
minutes. An infinite number of possible values exist within this range. Discrete
and continuous distributions are further defined in Chapter 6. Appendix A
describes many of the distributions used in simulation.
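Generating random variates from the two example distributions above takes one line each in most languages; a sketch using Python's standard library:

```python
import random

random.seed(7)  # reproducible stream of random variates

# Check-in operation: normally distributed, mean 5.2 min, st. dev. 0.4 min
check_in = [random.normalvariate(5.2, 0.4) for _ in range(5)]

# Machine cycle time: continuous uniform between 1.2 and 1.8 minutes
cycle = [random.uniform(1.2, 1.8) for _ in range(5)]

print([round(t, 2) for t in check_in])  # five successive operation times
print([round(t, 2) for t in cycle])
```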

FIGURE 3.2 Examples of (a) a discrete probability distribution p(x) and (b) a continuous probability distribution f(x).

3.4 Simulating Random Behavior


One of the most powerful features of simulation is its ability to mimic random
behavior or variation that is characteristic of stochastic systems. Simulating
random behavior requires that a method be provided to generate random
numbers as well as routines for generating random variates based on a given
probability distribution. Random numbers and random variates are defined in
the next sections along with the routines that are commonly used to generate
them.

3.4.1 Generating Random Numbers


Random behavior is imitated in simulation by using a random number
generator. The random number generator operates deep within the heart of a
simulation model, pumping out a stream of random numbers. It provides the
foundation for simulating random events occurring in the simulated system such as the
arrival time of cars to a restaurant's drive-through window; the time it takes the
driver to place an order; the number of hamburgers, drinks, and fries ordered;
and the time it takes the restaurant to prepare the order. The input to the
procedures used to generate these types of events is a stream of numbers that
are uniformly distributed between zero and one (0 ≤ x ≤ 1). The random number
generator is responsible for producing this stream of independent and uniformly
distributed numbers (Figure 3.3).

FIGURE 3.3 The uniform(0,1) distribution of a random number generator: f(x) = 1 for 0 < x < 1 and 0 elsewhere, with mean μ = 1/2 and variance σ² = 1/12.
Before continuing, it should be pointed out that the numbers produced by a
random number generator are not random in the truest sense. For example, the
generator can reproduce the same sequence of numbers again and again, which
is not indicative of random behavior. Therefore, they are often referred to as
pseudo-random number generators (pseudo comes from Greek and means false
or fake). Practically speaking, however, good pseudo-random number
generators can pump out long sequences of numbers that pass statistical tests
for randomness (the numbers are independent and uniformly distributed). Thus
the numbers approximate real-world randomness for purposes of simulation,
and the fact that they are reproducible helps us in two ways. It would be
difficult to debug a simulation program if we could not regenerate the same
sequence of random numbers to reproduce the conditions that exposed an error
in our program. We will also learn in Chapter 10 how reproducing the same
sequence of random numbers is useful when comparing different simulation
models. For brevity, we will drop the pseudo prefix as we discuss how to
design and keep our random number generator healthy.
Linear Congruential Generators
There are many types of established random number generators, and researchers
are actively pursuing the development of new ones (L'Ecuyer 1998). However,
most simulation software is based on linear congruential generators (LCG). The
LCG is efficient in that it quickly produces a sequence of random numbers without requiring a great deal of computational resources. Using the LCG, a
sequence of integers Z1, Z2, Z3, . . . is defined by the recursive formula

Zi = (aZi-1 + c) mod(m)


where the constant a is called the multiplier, the constant c the increment, and
the constant m the modulus (Law and Kelton 2000). The user must provide a
seed or starting value, denoted Z0, to begin generating the sequence of integer
values. Z0, a, c, and m are all nonnegative integers. The value of Zi is computed
by dividing (aZi-1 + c) by m and setting Zi equal to the remainder part of the division, which
is the result returned by the mod function. Therefore, the Zi values are bounded
by 0 ≤ Zi ≤ m - 1 and are uniformly distributed in the discrete case.
However, we desire the continuous version of the uniform distribution with
values ranging between a low of zero and a high of one, which we will denote
as Ui for i = 1, 2, 3, . . . . Accordingly, the value of Ui is computed by dividing
Zi by m.
In a moment, we will consider some requirements for selecting the values
for a, c, and m to ensure that the random number generator produces a long
sequence of numbers before it begins to repeat them. For now, however, let's
assign the following values a = 21, c = 3, and m = 16 and generate a few
pseudo-random numbers. Table 3.1 contains a sequence of 20 random numbers
generated from the recursive formula

Zi = (21Zi-1 + 3) mod(16)

An integer value of 13 was somewhat arbitrarily selected between 0 and
m - 1 = 16 - 1 = 15 as the seed (Z0 = 13) to begin generating the sequence of
TABLE 3.1 Example LCG Zi = (21Zi-1 + 3) mod(16), with Z0 = 13

 i    21Zi-1 + 3    Zi    Ui = Zi/16
 0                  13
 1       276         4    0.2500
 2        87         7    0.4375
 3       150         6    0.3750
 4       129         1    0.0625
 5        24         8    0.5000
 6       171        11    0.6875
 7       234        10    0.6250
 8       213         5    0.3125
 9       108        12    0.7500
10       255        15    0.9375
11       318        14    0.8750
12       297         9    0.5625
13       192         0    0.0000
14         3         3    0.1875
15        66         2    0.1250
16        45        13    0.8125
17       276         4    0.2500
18        87         7    0.4375
19       150         6    0.3750
20       129         1    0.0625

numbers in Table 3.1. The value of Z1 is obtained as

Z1 = (aZ0 + c) mod(m) = (21(13) + 3) mod(16) = (276) mod(16) = 4

Note that 4 is the remainder term from dividing 16 into 276. The value of U1 is
computed as

U1 = Z1/16 = 4/16 = 0.2500

The process continues using the value of Z1 to compute Z2 and then U2.
Changing the value of Z0 produces a different sequence of uniform(0,1)
numbers. However, the original sequence can always be reproduced by setting
the seed value back to 13 (Z0 = 13). The ability to repeat the simulation
experiment under the exact same random conditions is very useful, as will be
demonstrated in Chapter 10 with a technique called common random numbers.
Note that in ProModel the sequence of random numbers is not generated in
advance and then read from a table as we have done. Instead, the only value
saved is the last Zi value that was generated. When the next random number in
the sequence is needed, the saved value is fed back to the generator to produce
the next random number in the sequence. In this way, the random number
generator is called each time a new random event is scheduled for the
simulation.
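The recursion behind Table 3.1, and the feed-the-last-value-back scheme just described, can be sketched in a few lines (same toy parameters a = 21, c = 3, m = 16):

```python
def lcg(z, a=21, c=3, m=16):
    """One step of the LCG: returns the next Z value from the previous one."""
    return (a * z + c) % m

z = 13                    # seed Z0
sequence = []
for _ in range(5):        # reproduce the first rows of Table 3.1
    z = lcg(z)            # only the last Z value is fed back in
    sequence.append((z, z / 16))

print(sequence)  # [(4, 0.25), (7, 0.4375), (6, 0.375), (1, 0.0625), (8, 0.5)]
```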
Due to computational constraints, the random number generator cannot go
on indefinitely before it begins to repeat the same sequence of numbers. The
LCG in Table 3.1 will generate a sequence of 16 numbers before it begins to
repeat itself. You can see that it began repeating itself starting in the 17th
position. The value of 16 is referred to as the cycle length of the random number
generator, which is disturbingly short in this case. A long cycle length is
desirable so that each replication of a simulation is based on a different segment
of random numbers. This is how we collect independent observations of the
model's output.
Let's say, for example, that to run a certain simulation model for one
replication requires that the random number generator be called 1,000 times
during the simulation and we wish to execute five replications of the
simulation. The first replication would use the first 1,000 random numbers in the
sequence, the second replication would use the next 1,000 numbers in the
sequence (the numbers that would appear in positions 1,001 to 2,000 if a table
were generated in advance), and so on. In all, the random number generator
would be called 5,000 times. Thus we would need a random number generator
with a cycle length of at least 5,000.
The maximum cycle length that an LCG can achieve is m. To realize the
maximum cycle length, the values of a, c, and m have to be carefully selected.
A guideline for the selection is to assign (Pritsker 1995)
1. m = 2^b, where b is determined based on the number of bits per word on
the computer being used. Many computers use 32 bits per word, making
31 a good choice for b.
2. c and m such that their greatest common factor is 1 (the only
positive integer that exactly divides both m and c is 1).
3. a = 1 + 4k, where k is an integer.

Following this guideline, the LCG can achieve a full cycle length of over 2.1
billion (2^31 to be exact) random numbers.
Frequently, the long sequence of random numbers is subdivided into smaller
segments. These subsegments are referred to as streams. For example, Stream 1
could begin with the random number in the first position of the sequence and
continue down to the random number in the 200,000th position of the sequence.
Stream 2, then, would start with the random number in the 200,001st position of
the sequence and end at the 400,000th position, and so on. Using this approach,
each type of random event in the simulation model can be controlled by a unique
stream of random numbers. For example, Stream 1 could be used to generate the
arrival pattern of cars to a restaurant's drive-through window and Stream 2 could
be used to generate the time required for the driver of the car to place an order.
This assumes that no more than 200,000 random numbers are needed to simulate
each type of event. The practical and statistical advantages of assigning unique
streams to each type of event in the model are described in Chapter 10.
To subdivide the generator's sequence of random numbers into streams, you
first need to decide how many random numbers to place in each stream. Next,
you begin generating the entire sequence of random numbers (cycle length)
produced by the generator and recording the Zi values that mark the beginning of
each stream. Therefore, each stream has its own starting or seed value. When
using the random number generator to drive different events in a simulation
model, the previously generated random number from a particular stream is
used as input to the generator to generate the next random number from that
stream. For convenience, you may want to think of each stream as a separate
random number generator to be used in different places in the model. For
example, see Figure 10.5 in Chapter 10.
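The stream-seeding procedure just described can be sketched with the toy generator from Table 3.1, shortening the stream length to 4 numbers so that the full 16-number cycle yields four streams:

```python
def lcg(z, a=21, c=3, m=16):
    return (a * z + c) % m

stream_length = 4
seeds = []
z = 13                          # seed of the full sequence
for i in range(16):             # walk the entire cycle once
    if i % stream_length == 0:
        seeds.append(z)         # record the Z value that starts each stream
    z = lcg(z)

print(seeds)  # [13, 1, 5, 9] -- seed values for streams 1 through 4
```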
There are two types of linear congruential generators: the mixed congruential generator and the multiplicative congruential generator. Mixed congruential generators are designed by assigning c > 0. Multiplicative congruential generators are designed by assigning c = 0. The multiplicative generator is more efficient than the mixed generator because it does not require the addition of c. The maximum cycle length for a multiplicative generator can be set within one unit of the maximum cycle length of the mixed generator by carefully selecting values for a and m. From a practical standpoint, the difference in cycle length is insignificant considering that both types of generators can boast cycle lengths of more than 2.1 billion.
ProModel uses the following multiplicative generator:

Zi = (630,360,016 Zi-1) mod (2^31 - 1)

Specifically, it is a prime modulus multiplicative linear congruential generator (PMMLCG) with a = 630,360,016, c = 0, and m = 2^31 - 1. It has been extensively tested and is known to be a reliable random number generator for simulation (Law and Kelton 2000). The ProModel implementation of this generator divides the cycle length of 2^31 - 1 = 2,147,483,647 into 100 unique streams.
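The recurrence itself is one line of arithmetic. A minimal Python sketch follows; the function name and the seed are our own choices, and this says nothing about ProModel's internal implementation or its 100 stream seeds, which the text does not publish.

```python
# Sketch of the PMMLCG with a = 630,360,016, c = 0, m = 2^31 - 1.
A = 630_360_016
M = 2**31 - 1  # 2,147,483,647, a prime modulus

def pmmlcg(z):
    """One step of the generator: returns (next Z, the uniform(0,1) value U)."""
    z = (A * z) % M
    return z, z / M

# Generate a few uniforms from an arbitrary seed.
z = 42
us = []
for _ in range(5):
    z, u = pmmlcg(z)
    us.append(u)
```

Feeding each output Z back in as the next input is exactly the recurrence written above.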
Testing Random Number Generators
When faced with using a random number generator about which you know very little, it is wise to verify that the numbers emanating from it satisfy the two important properties defined at the beginning of this section. The numbers produced by the random number generator must be (1) independent and (2) uniformly distributed between zero and one (uniform(0,1)). To verify that the generator satisfies these properties, you first generate a sequence of random numbers U1, U2, U3, . . . and then subject them to an appropriate test of hypothesis.
The hypotheses for testing the independence property are

H0: Ui values from the generator are independent
H1: Ui values from the generator are not independent

Several statistical methods have been developed for testing these hypotheses at a specified significance level α. One of the most commonly used methods is the runs test. Banks et al. (2001) review three different versions of the runs test for conducting this independence test. Additionally, two runs tests are implemented in Stat::Fit: the Runs Above and Below the Median Test and the Runs Up and Runs Down Test. Chapter 6 contains additional material on tests for independence.
The hypotheses for testing the uniformity property are

H0: Ui values are uniform(0,1)
H1: Ui values are not uniform(0,1)

Several statistical methods have also been developed for testing these hypotheses at a specified significance level α. The Kolmogorov-Smirnov test and the chi-square test are perhaps the most frequently used tests. (See Chapter 6 for a description of the chi-square test.) The objective is to determine if the uniform(0,1) distribution fits or describes the sequence of random numbers produced by the random number generator. These tests are included in the Stat::Fit software and are further described in many introductory textbooks on probability and statistics (see, for example, Johnson 1994).
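As a sketch of the chi-square side of this testing (the test proper is described in Chapter 6), the idea is to bin a batch of U values and compare the observed bin counts against the equal counts expected under uniformity. With 10 bins there are 9 degrees of freedom, and 16.92 is the standard chi-square critical value for α = 0.05; the function name and seed below are our own.

```python
def chi_square_uniformity(us, k=10):
    """Chi-square statistic comparing observed bin counts of U values to n/k."""
    n = len(us)
    observed = [0] * k
    for u in us:
        observed[min(int(u * k), k - 1)] += 1  # clamp guards against u == 1.0
    expected = n / k
    return sum((o - expected) ** 2 / expected for o in observed)

# Drive the test with the PMMLCG described earlier in this section.
M = 2**31 - 1
z, us = 12345, []
for _ in range(1000):
    z = (630_360_016 * z) % M
    us.append(z / M)

stat = chi_square_uniformity(us)
# Fail to reject H0 (uniformity) at alpha = 0.05 if stat < 16.92 (df = 9).
```

A large statistic means the observed counts stray too far from n/k to be explained by chance, and H0 is rejected.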

3.4.2 Generating Random Variates


This section introduces common methods for generating observations (random
variates) from probability distributions other than the uniform(0,1) distribution.
For example, the time between arrivals of cars to a restaurant's drive-through window may be exponentially distributed, and the time required for drivers to place orders at the window might follow the lognormal distribution. Observations from these and other commonly used distributions are obtained by transforming the observations generated by the random number generator to the desired distribution. The transformed values are referred to as variates from the specified distribution.
There are several methods for generating random variates from a desired distribution, and the selection of a particular method is somewhat dependent on the characteristics of the desired distribution. Methods include the inverse transformation method, the acceptance/rejection method, the composition method, the convolution method, and methods employing special properties. The inverse transformation method, which is commonly used to generate variates from both discrete and continuous distributions, is described below, starting with the continuous case. For a review of the other methods, see Law and Kelton (2000).
Continuous Distributions
The application of the inverse transformation method to generate random variates from continuous distributions is straightforward and efficient for many continuous distributions. For a given probability density function f(x), find the cumulative distribution function of X. That is, F(x) = P(X ≤ x). Next, set U = F(x), where U is uniform(0,1), and solve for x. Solving for x yields x = F^-1(U). The equation x = F^-1(U) transforms U into a value for x that conforms to the given distribution f(x).
As an example, suppose that we need to generate variates from the exponential distribution with mean β. The probability density function f(x) and corresponding cumulative distribution function F(x) are

f(x) = (1/β) e^(-x/β)   for x > 0
       0                elsewhere

F(x) = 1 - e^(-x/β)   for x > 0
       0              elsewhere

Setting U = F(x) and solving for x yields

U = 1 - e^(-x/β)
e^(-x/β) = 1 - U
ln(e^(-x/β)) = ln(1 - U)     where ln is the natural logarithm
-x/β = ln(1 - U)
x = -β ln(1 - U)

The random variate x in the above equation is exponentially distributed with mean β, provided U is uniform(0,1).
Suppose three observations of an exponentially distributed random variable with mean β = 2 are desired. The next three numbers generated by the random number generator are U1 = 0.27, U2 = 0.89, and U3 = 0.13. The three numbers are transformed into variates x1, x2, and x3 from the exponential distribution with mean β = 2 as follows:

x1 = -2 ln(1 - U1) = -2 ln(1 - 0.27) = 0.63
x2 = -2 ln(1 - U2) = -2 ln(1 - 0.89) = 4.41
x3 = -2 ln(1 - U3) = -2 ln(1 - 0.13) = 0.28
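The worked values can be checked with a few lines of Python. The helper name is our own, and any uniform(0,1) source could supply the U values:

```python
import math

def exp_variate(u, mean):
    """Inverse transformation for the exponential: x = -mean * ln(1 - u)."""
    return -mean * math.log(1.0 - u)

# The three worked observations above, with mean beta = 2:
for u in (0.27, 0.89, 0.13):
    print(round(exp_variate(u, 2.0), 2))   # 0.63, 4.41, 0.28
```

Note that small U values map to small x values and U values near 1 map far out into the exponential tail, as the shape of F(x) dictates.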
Figure 3.4 provides a graphical representation of the inverse transformation method in the context of this example. The first step is to generate U, where U is uniform(0,1). Next, locate U on the y axis and draw a horizontal line from that point to the cumulative distribution function [F(x) = 1 - e^(-x/2)]. From this point

FIGURE 3.4  Graphical explanation of inverse transformation method for continuous variates. [Plot of F(x) = 1 - e^(-x/2): U1 = 1 - e^(-x1/2) = 0.27 gives x1 = -2 ln(1 - 0.27) = 0.63, and U2 = 1 - e^(-x2/2) = 0.89 gives x2 = -2 ln(1 - 0.89) = 4.41.]

of intersection with F(x), a vertical line is dropped down to the x axis to obtain
the corresponding value of the variate. This process is illustrated in Figure
3.4 for generating variates x1 and x2 given U1 = 0.27 and U2 = 0.89.
Application of the inverse transformation method is straightforward as long as there is a closed-form formula for the cumulative distribution function, which is the case for many continuous distributions. However, the normal distribution is one exception: its cumulative distribution function cannot be inverted in closed form, so it is not possible to solve for a simple equation to generate normally distributed variates. For these cases, there are other methods that can be used to generate the random variates. See, for example, Law and Kelton (2000) for a description of additional methods for generating random variates from continuous distributions.
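One of those other methods, not described in this chapter but sketched here for illustration, is the classic Box-Muller transform, which converts a pair of uniform(0,1) numbers into a pair of independent standard normal variates:

```python
import math
import random

def box_muller(u1, u2):
    """Transform two uniform(0,1) values into two independent N(0,1) variates."""
    r = math.sqrt(-2.0 * math.log(u1))
    return r * math.cos(2 * math.pi * u2), r * math.sin(2 * math.pi * u2)

random.seed(7)  # arbitrary seed, for reproducibility only
zs = []
for _ in range(5000):
    z1, z2 = box_muller(random.random(), random.random())
    zs.extend([z1, z2])

mean = sum(zs) / len(zs)            # should be near 0
var = sum(z * z for z in zs) / len(zs)  # should be near 1
```

The sample mean and variance of the generated variates should hover near 0 and 1, the parameters of the standard normal distribution.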
Discrete Distributions
The application of the inverse transformation method to generate variates from discrete distributions is basically the same as for the continuous case. The difference is in how it is implemented. For example, consider the following probability mass function:

p(x) = P(X = x) = 0.10   for x = 1
                  0.30   for x = 2
                  0.60   for x = 3
The random variate x has three possible values. The probability that x is equal to
1 is 0.10, P(X = 1) = 0.10; P(X = 2) = 0.30; and P(X = 3) = 0.60. The
cumulative distribution function F(x) is shown in Figure 3.5. The random variable x
could be used in a simulation to represent the number of defective components
on a circuit board or the number of drinks ordered from a drive-through window,
for example.
Suppose that an observation from the above discrete distribution is desired. The first step is to generate U, where U is uniform(0,1). Using Figure 3.5, the

FIGURE 3.5  Graphical explanation of inverse transformation method for discrete variates. [Step plot of F(x) rising from 0.10 to 0.40 to 1.00: U3 = 0.05 gives x3 = 1; U1 = 0.27 gives x1 = 2; U2 = 0.89 gives x2 = 3.]

value of the random variate is determined by locating U on the y axis, drawing a horizontal line from that point until it intersects with a step in the cumulative function, and then dropping a vertical line from that point to the x axis to read the value of the variate. This process is illustrated in Figure 3.5 for x1, x2, and x3 given U1 = 0.27, U2 = 0.89, and U3 = 0.05. Equivalently, if 0 ≤ Ui ≤ 0.10, then xi = 1; if 0.10 < Ui ≤ 0.40, then xi = 2; if 0.40 < Ui ≤ 1.00, then xi = 3. Note that should we generate 100 variates using the above process, we would expect a value of 3 to be returned 60 times, a value of 2 to be returned 30 times, and a value of 1 to be returned 10 times.
The inverse transformation method can be applied to any discrete distribution by dividing the y axis into subintervals as defined by the cumulative distribution function. In our example case, the subintervals were [0, 0.10], (0.10, 0.40], and (0.40, 1.00]. For each subinterval, record the appropriate value for the random variate. Next, develop an algorithm that calls the random number generator to receive a specific value for U, searches for and locates the subinterval that contains U, and returns the appropriate value for the random variate.
Locating the subinterval that contains a specific U value is relatively straightforward when there are only a few possible values for the variate. However, the number of possible values for a variate can be quite large for some discrete distributions. For example, a random variable having 50 possible values could require that 50 subintervals be searched before locating the subinterval that contains U. In such cases, sophisticated search algorithms have been developed that quickly locate the appropriate subinterval by exploiting certain characteristics of the discrete distribution for which the search algorithm was designed. A good source of information on the subject can be found in Law and Kelton (2000).
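The subinterval search maps naturally onto a binary search over the cumulative probabilities. This sketch uses Python's bisect as a simple stand-in for the specialized algorithms mentioned, applied to the example distribution above:

```python
from bisect import bisect_left

# Cumulative breakpoints and values for p(1) = 0.10, p(2) = 0.30, p(3) = 0.60.
cdf = [0.10, 0.40, 1.00]
values = [1, 2, 3]

def discrete_variate(u):
    """Return the value whose subinterval (cdf[i-1], cdf[i]] contains u."""
    return values[bisect_left(cdf, u)]

print([discrete_variate(u) for u in (0.27, 0.89, 0.05)])  # [2, 3, 1]
```

bisect_left finds the insertion point in O(log n) comparisons, so even a variate with thousands of possible values is located in a handful of steps rather than a linear scan.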


3.5 Simple Spreadsheet Simulation


This chapter has covered some really useful information, and it will be constructive to pull it all together for our first dynamic, stochastic simulation model. The simulation will be implemented as a spreadsheet model because the example system is not so complex as to require the use of a commercial simulation software product. Furthermore, a spreadsheet will provide a nice tabulation of the random events that produce the output from the simulation, thereby making it ideal for demonstrating how simulations of dynamic, stochastic systems are often accomplished. The system to be simulated follows.
Customers arrive to use an automatic teller machine (ATM) at a mean interarrival time of 3.0 minutes, exponentially distributed. When customers arrive to the system they join a queue to wait for their turn on the ATM. The queue has the capacity to hold an infinite number of customers. Customers spend an average of 2.4 minutes, exponentially distributed, at the ATM to complete their transactions, which is called the service time at the ATM. Simulate the system for the arrival and processing of 25 customers and estimate the expected waiting time for customers in the queue (the average time customers wait in line for the ATM) and the expected time in the system (the average time customers wait in the queue plus the average time it takes them to complete their transaction at the ATM).
Using the systems terminology introduced in Chapter 2 to describe the ATM system, the entities are the customers that arrive to the ATM for processing. The resource is the ATM that serves the customers, which has the capacity to serve one customer at a time. The system controls that dictate how, when, and where activities are performed for this ATM system consist of the queuing discipline, which is first-in, first-out (FIFO). Under the FIFO queuing discipline the entities (customers) are processed by the resource (ATM) in the order that they arrive to the queue. Figure 3.6 illustrates the relationships of the elements of the system with the customers appearing as dark circles.
FIGURE 3.6  Descriptive drawing of the automatic teller machine (ATM) system. [Arriving customers (entities) join the ATM queue (FIFO) and are served one at a time by the ATM server (resource) before departing; in the snapshot, the 6th customer arrives at 16.2 min. and the 7th at 21.0 min., an interarrival time of 4.8 minutes.]


Our objective in building a spreadsheet simulation model of the ATM system is to estimate the average amount of time the 25 customers will spend waiting in the queue and the average amount of time the customers will spend in the system. To accomplish our objective, the spreadsheet will need to generate random numbers and random variates, to contain the logic that controls how customers are processed by the system, and to compute an estimate of system performance measures. The spreadsheet simulation is shown in Table 3.2 and is divided into three main sections: Arrivals to ATM, ATM Processing Time, and ATM Simulation Logic. The Arrivals to ATM and ATM Processing Time sections provide the foundation for the simulation, while the ATM Simulation Logic section contains the spreadsheet programming that mimics customers flowing through the ATM system. The last row of the Time in Queue column and the Time in System column under the ATM Simulation Logic section contains one observation of the average time customers wait in the queue, 1.94 minutes, and one observation of their average time in the system, 4.26 minutes. The values were computed by averaging the 25 customer time values in the respective columns.
Do you think customer number 17 (see the Customer Number column) became upset over having to wait in the queue for 6.11 minutes to conduct a 0.43-minute transaction at the ATM (see the Time in Queue column and the Service Time column under ATM Simulation Logic)? Has something like this ever happened to you in real life? Simulation can realistically mimic the behavior of a system. More time will be spent interpreting the results of our spreadsheet simulation after we understand how it was put together.

3.5.1 Simulating Random Variates


The interarrival time is the elapsed time between customer arrivals to the ATM. This time changes according to the exponential distribution with a mean of 3.0 minutes. That is, the time that elapses between one customer arrival and the next is not the same from one customer to the next but averages out to be 3.0 minutes. This is illustrated in Figure 3.6 in that the interarrival time between customers 7 and 8 is much smaller than the interarrival time of 4.8 minutes between customers 6 and 7. The service time at the ATM is also a random variable following the exponential distribution and averages 2.4 minutes. To simulate this stochastic system, a random number generator is needed to produce observations (random variates) from the exponential distribution. The inverse transformation method was used in Section 3.4.2 just for this purpose.
The transformation equation is

Xi = -β ln(1 - Ui)    for i = 1, 2, 3, . . . , 25

where Xi represents the ith value realized from the exponential distribution with mean β, and Ui is the ith random number drawn from a uniform(0,1) distribution. The i = 1, 2, 3, . . . , 25 indicates that we will compute 25 values from the transformation equation. However, we need to have two different versions of this equation to generate the two sets of 25 exponentially distributed random variates needed to simulate 25 customers, because the mean interarrival time of β = 3.0 minutes is different than the mean service time of β = 2.4 minutes. Let X1i denote the interarrival

TABLE 3.2  Spreadsheet Simulation of Automatic Teller Machine (ATM)

Arrivals to ATM and ATM Processing Time (row i = 0 holds the stream seed values)

  i    Stream 1   Random No.   Interarrival     Stream 2   Random No.   Service
       (Z1i)      (U1i)        Time (X1i)       (Z2i)      (U2i)        Time (X2i)
  0      3                                        122
  1     66        0.516         2.18               5        0.039        0.10
  2    109        0.852         5.73             108        0.844        4.46
  3    116        0.906         7.09              95        0.742        3.25
  4      7        0.055         0.17              78        0.609        2.25
  5     22        0.172         0.57             105        0.820        4.12
  6     81        0.633         3.01              32        0.250        0.69
  7     40        0.313         1.13              35        0.273        0.77
  8     75        0.586         2.65              98        0.766        3.49
  9     42        0.328         1.19              13        0.102        0.26
 10    117        0.914         7.36              20        0.156        0.41
 11     28        0.219         0.74              39        0.305        0.87
 12     79        0.617         2.88              54        0.422        1.32
 13    126        0.984        12.41             113        0.883        5.15
 14     89        0.695         3.56              72        0.563        1.99
 15     80        0.625         2.94             107        0.836        4.34
 16     19        0.148         0.48              74        0.578        2.07
 17     18        0.141         0.46              21        0.164        0.43
 18    125        0.977        11.32              60        0.469        1.52
 19     68        0.531         2.27             111        0.867        4.84
 20     23        0.180         0.60              30        0.234        0.64
 21    102        0.797         4.78             121        0.945        6.96
 22     97        0.758         4.26             112        0.875        4.99
 23    120        0.938         8.34              51        0.398        1.22
 24     91        0.711         3.72              50        0.391        1.19
 25    122        0.953         9.17              29        0.227        0.62

ATM Simulation Logic [(5) = (3) + (4); (6) = (3) - (2); (7) = (5) - (2)]

 Customer    Arrival    Begin Service   Service    Departure   Time in     Time in
 Number (1)  Time (2)   Time (3)        Time (4)   Time (5)    Queue (6)   System (7)
  1           2.18        2.18           0.10        2.28        0.00        0.10
  2           7.91        7.91           4.46       12.37        0.00        4.46
  3          15.00       15.00           3.25       18.25        0.00        3.25
  4          15.17       18.25           2.25       20.50        3.08        5.33
  5          15.74       20.50           4.12       24.62        4.76        8.88
  6          18.75       24.62           0.69       25.31        5.87        6.56
  7          19.88       25.31           0.77       26.08        5.43        6.20
  8          22.53       26.08           3.49       29.57        3.55        7.04
  9          23.72       29.57           0.26       29.83        5.85        6.11
 10          31.08       31.08           0.41       31.49        0.00        0.41
 11          31.82       31.82           0.87       32.69        0.00        0.87
 12          34.70       34.70           1.32       36.02        0.00        1.32
 13          47.11       47.11           5.15       52.26        0.00        5.15
 14          50.67       52.26           1.99       54.25        1.59        3.58
 15          53.61       54.25           4.34       58.59        0.64        4.98
 16          54.09       58.59           2.07       60.66        4.50        6.57
 17          54.55       60.66           0.43       61.09        6.11        6.54
 18          65.87       65.87           1.52       67.39        0.00        1.52
 19          68.14       68.14           4.84       72.98        0.00        4.84
 20          68.74       72.98           0.64       73.62        4.24        4.88
 21          73.52       73.62           6.96       80.58        0.10        7.06
 22          77.78       80.58           4.99       85.57        2.80        7.79
 23          86.12       86.12           1.22       87.34        0.00        1.22
 24          89.84       89.84           1.19       91.03        0.00        1.19
 25          99.01       99.01           0.62       99.63        0.00        0.62
 Average                                                         1.94        4.26


time and X2i denote the service time generated for the ith customer simulated in the system. The equation for transforming a random number into an interarrival time observation from the exponential distribution with mean β = 3.0 minutes becomes

X1i = -3.0 ln(1 - U1i)    for i = 1, 2, 3, . . . , 25

where U1i denotes the ith value drawn from the random number generator using Stream 1. This equation is used in the Arrivals to ATM section of Table 3.2 under the Interarrival Time (X1i) column.
The equation for transforming a random number into an ATM service time observation from the exponential distribution with mean β = 2.4 minutes becomes

X2i = -2.4 ln(1 - U2i)    for i = 1, 2, 3, . . . , 25

where U2i denotes the ith value drawn from the random number generator using Stream 2. This equation is used in the ATM Processing Time section of Table 3.2 under the Service Time (X2i) column.
Let's produce the sequence of U1i values that feeds the transformation equation (X1i) for interarrival times using a linear congruential generator (LCG) similar to the one used in Table 3.1. The equations are

Z1i = (21 Z1i-1 + 3) mod(128)
U1i = Z1i / 128    for i = 1, 2, 3, . . . , 25

The authors defined Stream 1's starting or seed value to be 3. So we will use Z10 = 3 to kick off this stream of 25 random numbers. These equations are used in the Arrivals to ATM section of Table 3.2 under the Stream 1 (Z1i) and Random Number (U1i) columns.
Likewise, we will produce the sequence of U2i values that feeds the transformation equation (X2i) for service times using

Z2i = (21 Z2i-1 + 3) mod(128)
U2i = Z2i / 128    for i = 1, 2, 3, . . . , 25

and will specify a starting seed value of Z20 = 122, Stream 2's seed value, to kick off the second stream of 25 random numbers. These equations are used in the ATM Processing Time section of Table 3.2 under the Stream 2 (Z2i) and Random Number (U2i) columns.
The spreadsheet presented in Table 3.2 illustrates 25 random variates for both the interarrival time, column (X1i), and service time, column (X2i). All time values are given in minutes in Table 3.2. To be sure we pull this together correctly, let's compute a couple of interarrival times with mean β = 3.0 minutes and compare them to the values given in Table 3.2.

Given Z10 = 3:
Z11 = (21 Z10 + 3) mod(128) = (21(3) + 3) mod(128) = (66) mod(128) = 66
U11 = Z11/128 = 66/128 = 0.516
X11 = -β ln(1 - U11) = -3.0 ln(1 - 0.516) = 2.18 minutes


FIGURE 3.7  Microsoft Excel snapshot of the ATM spreadsheet illustrating the equations for the Arrivals to ATM section.

The value of 2.18 minutes is the first value appearing under the column Interarrival Time (X1i). To compute the next interarrival time value X12, we start by using the value of Z11 to compute Z12.

Given Z11 = 66:
Z12 = (21 Z11 + 3) mod(128) = (21(66) + 3) mod(128) = 109
U12 = Z12/128 = 109/128 = 0.852
X12 = -3.0 ln(1 - U12) = -3.0 ln(1 - 0.852) = 5.73 minutes

Figure 3.7 illustrates how the equations were programmed in Microsoft Excel for the Arrivals to ATM section of the spreadsheet. Note that the U1i and X1i values in Table 3.2 are rounded to three and two places to the right of the decimal, respectively. The same rounding rule is used for U2i and X2i.
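Those hand computations generalize directly. The sketch below regenerates the first three Stream 1 rows of Table 3.2; because the table's values round halves upward as a spreadsheet does (40/128 = 0.3125 appears in the table as 0.313), a small helper reproduces that rule rather than Python's built-in round, which rounds halves to even:

```python
import math

def round_half_up(x, nd):
    """Spreadsheet-style rounding for positive x: halves round up."""
    f = 10 ** nd
    return math.floor(x * f + 0.5) / f

def lcg128(z):
    # Z_i = (21 * Z_{i-1} + 3) mod 128, the hand-computation LCG from the text.
    return (21 * z + 3) % 128

z, rows = 3, []                                    # Z10 = 3 seeds Stream 1
for i in range(3):
    z = lcg128(z)
    u = round_half_up(z / 128, 3)                  # Random Number (U1i), 3 decimals
    x = round_half_up(-3.0 * math.log(1 - u), 2)   # Interarrival Time (X1i), 2 decimals
    rows.append((z, u, x))

print(rows)  # [(66, 0.516, 2.18), (109, 0.852, 5.73), (116, 0.906, 7.09)]
```

The same helper with mean 2.4 and seed 122 reproduces the Service Time column.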
It would be useful for you to verify a few of the service time values with mean β = 2.4 minutes appearing in Table 3.2 using

Given Z20 = 122:
Z2i = (21 Z2i-1 + 3) mod(128)
U2i = Z2i / 128
X2i = -2.4 ln(1 - U2i)    for i = 1, 2, 3, . . .
The equations started out looking a little difficult to manipulate but turned out not to be so bad when we put some numbers in them and organized them in a spreadsheet, though it was a bit tedious. The important thing to note here is that although it is transparent to the user, ProModel uses a very similar method to produce exponentially distributed random variates, and you now understand how it is done.


The LCG just given has a maximum cycle length of 128 random numbers (you may want to verify this), which is more than enough to generate 25 interarrival time values and 25 service time values for this simulation. However, it is a poor random number generator compared to the one used by ProModel. It was chosen because it is easy to program into a spreadsheet and to compute by hand to facilitate our understanding. The biggest difference between it and the random number generator in ProModel is that the ProModel random number generator manipulates much larger numbers to pump out a much longer stream of numbers that pass all statistical tests for randomness.
Before moving on, let's take a look at why we chose Z10 = 3 and Z20 = 122. Our goal was to make sure that we did not use the same uniform(0,1) random number to generate both an interarrival time and a service time. If you look carefully at Table 3.2, you will notice that the seed value Z20 = 122 is the Z125 value from random number Stream 1. Stream 2 was merely defined to start where Stream 1 ended. Thus our spreadsheet used a unique random number to generate each interarrival and service time. Now let's add the necessary logic to our spreadsheet to conduct the simulation of the ATM system.

3.5.2 Simulating Dynamic, Stochastic Systems


The heart of the simulation is the generation of the random variates that drive
the stochastic events in the simulation. However, the random variates are simply
a list of numbers at this point. The spreadsheet section labeled ATM Simulation
Logic in Table 3.2 is programmed to coordinate the execution of the events to
mimic the processing of customers through the ATM system. The simulation
program must keep up with the passage of time to coordinate the events. The
word dynamic appearing in the title for this section refers to the fact that the
simulation tracks the passage of time.
The first column under the ATM Simulation Logic section of Table 3.2, labeled Customer Number, is simply to keep a record of the order in which the 25 customers are processed, which is FIFO. The numbers appearing in parentheses under each column heading are used to illustrate how different columns are added or subtracted to compute the values appearing in the simulation.
The Arrival Time column denotes the moment in time at which each customer arrives to the ATM system. The first customer arrives to the system at time 2.18 minutes. This is the first interarrival time value (X11 = 2.18) appearing in the Arrivals to ATM section of the spreadsheet table. The second customer arrives to the system at time 7.91 minutes. This is computed by taking the arrival time of the first customer of 2.18 minutes and adding to it the next interarrival time of X12 = 5.73 minutes that was generated in the Arrivals to ATM section of the spreadsheet table. The arrival time of the second customer is 2.18 + 5.73 = 7.91 minutes. The process continues with the third customer arriving at 7.91 + 7.09 = 15.00 minutes.
The trickiest piece to program into the spreadsheet is the part that determines
the moment in time at which customers begin service at the ATM after waiting in
the queue. Therefore, we will skip over the Begin Service Time column for now
and come back to it after we understand how the other columns are computed.


The Service Time column simply records the simulated amount of time
required for the customer to complete their transaction at the ATM. These values
are copies of the service time X2i values generated in the ATM Processing Time
section of the spreadsheet.
The Departure Time column records the moment in time at which a customer departs the system after completing their transaction at the ATM. To compute the time at which a customer departs the system, we take the time at which the customer gained access to the ATM to begin service, column (3), and add to that the length of time the service required, column (4). For example, the first customer gained access to the ATM to begin service at 2.18 minutes, column (3). The service time for the customer was determined to be 0.10 minutes in column (4). So the customer departs 0.10 minutes later, at time 2.18 + 0.10 = 2.28 minutes. This customer's short service time must be because they forgot their PIN number and could not conduct their transaction.
The Time in Queue column records how long a customer waits in the queue before gaining access to the ATM. To compute the time spent in the queue, we take the time at which the ATM began serving the customer, column (3), and subtract from that the time at which the customer arrived to the system, column (2). The fourth customer arrives to the system at time 15.17 and begins getting service from the ATM at 18.25 minutes; thus, the fourth customer's time in the queue is 18.25 - 15.17 = 3.08 minutes.
The Time in System column records how long a customer was in the system. To compute the time spent in the system, we subtract the customer's arrival time, column (2), from the customer's departure time, column (5). The fifth customer arrives to the system at 15.74 minutes and departs the system at 24.62 minutes. Therefore, this customer spent 24.62 - 15.74 = 8.88 minutes in the system.
Now let's go back to the Begin Service Time column, which records the time at which a customer begins to be served by the ATM. The very first customer to arrive to the system when it opens for service advances directly to the ATM. There is no waiting time in the queue; thus the value recorded for the time that the first customer begins service at the ATM is the customer's arrival time. With the exception of the first customer to arrive to the system, we have to capture the logic that a customer cannot begin service at the ATM until the previous customer using the ATM completes his or her transaction. One way to do this is with an IF statement as follows:

IF (Current Customer's Arrival Time < Previous Customer's Departure Time)
THEN (Current Customer's Begin Service Time = Previous Customer's Departure Time)
ELSE (Current Customer's Begin Service Time = Current Customer's Arrival Time)

Figure 3.8 illustrates how the IF statement was programmed in Microsoft Excel. The format of the Excel IF statement is

IF(logical test, use this value if test is true, else use this value if test is false)


FIGURE 3.8
Microsoft Excel
snapshot of the
ATM spreadsheet
illustrating the IF
statement for the
Begin Service Time
column.

The Excel spreadsheet cell L10 (column L, row 10) in Figure 3.8 is the Begin Service Time for the second customer to arrive to the system and is programmed with IF(K10<N9,N9,K10). Since the second customer's arrival time (Excel cell K10) is not less than the first customer's departure time (Excel cell N9), the logical test evaluates to false and the second customer's time to begin service is set to his or her arrival time (Excel cell K10). The fourth customer shown in Figure 3.8 provides an example of when the logical test evaluates to true, which results in the fourth customer beginning service when the third customer departs the ATM.
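Pulling the generators and the IF logic together, this Python sketch (our own stand-in for the spreadsheet, assuming the same mod-128 generators, seeds, and spreadsheet-style display rounding as Table 3.2) reproduces the two averages:

```python
import math

def round_half_up(x, nd):
    """Spreadsheet-style rounding for positive x: halves round up."""
    f = 10 ** nd
    return math.floor(x * f + 0.5) / f

def variates(seed, mean, n=25):
    """n exponential variates from the mod-128 LCG, rounded as in Table 3.2."""
    z, xs = seed, []
    for _ in range(n):
        z = (21 * z + 3) % 128
        u = round_half_up(z / 128, 3)
        xs.append(round_half_up(-mean * math.log(1 - u), 2))
    return xs

inter = variates(3, 3.0)      # Z10 = 3, mean 3.0: interarrival times
service = variates(122, 2.4)  # Z20 = 122, mean 2.4: service times

clock, prev_depart, in_queue, in_system = 0.0, 0.0, [], []
for x1, x2 in zip(inter, service):
    arrive = clock + x1                # column (2): previous arrival + interarrival
    begin = max(arrive, prev_depart)   # column (3): the IF statement's logic
    depart = begin + x2                # column (5) = (3) + (4)
    in_queue.append(begin - arrive)    # column (6) = (3) - (2)
    in_system.append(depart - arrive)  # column (7) = (5) - (2)
    clock, prev_depart = arrive, depart

print(round(sum(in_queue) / 25, 2), round(sum(in_system) / 25, 2))  # 1.94 4.26
```

Changing the two seed arguments passed to variates is exactly the replication mechanism discussed in the next subsection.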

3.5.3 Simulation Replications and Output Analysis

The spreadsheet model makes a nice simulation of the first 25 customers to arrive to the ATM system. And we have a simulation output observation of 1.94 minutes that could be used as an estimate for the average time that the 25 customers waited in the queue (see the last value under the Time in Queue column, which represents the average of the 25 individual customer queue times). We also have a simulation output observation of 4.26 minutes that could be used as an estimate for the average time the 25 customers were in the system. These results represent only one possible value of each performance measure. Why? If at least one input variable to the simulation model is random, then the output of the simulation model will also be random. The interarrival times of customers to the ATM system and their service times are random variables. Thus the output of the simulation model of the ATM system is also a random variable. Before we bet that the average time customers spend in the system each day is 4.26 minutes, we may want to run the simulation model again with a new sample of interarrival times and service times to see what happens. This would be analogous to going to the real ATM on a second day to replicate our observing the first 25 customers processed to compute a second observation of the average time that the 25 customers spend in the system. In simulation, we can accomplish this by changing the values of the seed numbers used for our random number generator.


TABLE 3.3  Summary of ATM System Simulation Output

Replication   Average Time in Queue   Average Time in System
1             1.94 minutes            4.26 minutes
2             0.84 minutes            2.36 minutes
Average       1.39 minutes            3.31 minutes

Changing the seed values Z1,0 and Z2,0 causes the spreadsheet program to recompute all values in the spreadsheet. When we change the seed values Z1,0 and Z2,0 appropriately, we produce another replication of the simulation. When we run replications of the simulation, we are driving the simulation with a set of random numbers never used by the simulation before. This is analogous to the fact that the arrival pattern of customers to the real ATM and their transaction times at the ATM will not likely be the same from one day to the next. If Z1,0 = 29 and Z2,0 = 92 are used to start the random number generator for the ATM simulation model, a new replication of the simulation will be computed that produces an average time in queue of 0.84 minutes and an average time in system of 2.36 minutes. Review question number eight at the end of the chapter asks you to verify this second replication. Table 3.3 contains the results from the two replications. Obviously, the results from this second replication are very different from those produced by the first replication. Good thing we did not make bets on how much time customers spend in the system based on the output of the simulation for the first replication only. Statistically speaking, we should get a better estimate of the average time customers spend in queue and the average time they are in the system if we combine the results from the two replications into an overall average. Doing so yields an estimate of 1.39 minutes for the average time in queue and an estimate of 3.31 minutes for the average time in system (see Table 3.3). However, the large variation in the output of the two simulation replications indicates that more replications are needed to get reasonably accurate estimates. How many replications are enough? You will know how to answer that question upon completing Chapter 9.
While spreadsheet technology is effective for conducting Monte Carlo simulations and simulations of simple dynamic systems, it is ineffective and inefficient as a simulation tool for complex systems. Discrete-event simulation software technology was designed especially for mimicking the behavior of complex systems and is the subject of Chapter 4.

3.6 Summary

Modeling random behavior begins with transforming the output produced by a random number generator into observations (random variates) from an appropriate statistical distribution. The values of the random variates are combined with logical operators in a computer program to compute output that mimics the performance behavior of stochastic systems. Performance estimates for stochastic


Part I Study Chapters

simulations are obtained by calculating the average value of the performance metric across several replications of the simulation. Models can realistically simulate a variety of systems.

3.7 Review Questions


1. What is the difference between a stochastic model and a deterministic
model in terms of the input variables and the way results are interpreted?
2. Give a statistical description of the numbers produced by a random
number generator.
3. What are the two statistical properties that random numbers must
satisfy?
4. Given these two LCGs:
Zi = (9Zi−1 + 3) mod(32)
Zi = (12Zi−1 + 5) mod(32)
a. Which LCG will achieve its maximum cycle length? Answer
the question without computing any Zi values.
b. Compute Z1 through Z5 from a seed of 29 (Z0 = 29) for the
second LCG.
5. What is a random variate, and how are random variates generated?
6. Apply the inverse transformation method to generate three variates from
the following distributions using U1 = 0.10, U2 = 0.53, and U3 = 0.15.
a. Probability density function:

   f(x) = 1/(β − α)   for α ≤ x ≤ β
          0           elsewhere

   where β = 7 and α = 4.
b. Probability mass function:

   p(x) = P(X = x) = x/15   for x = 1, 2, 3, 4, 5
                     0      elsewhere

7. How would a random number generator be used to simulate a 12 percent
chance of rejecting a part because it is defective?
8. Reproduce the spreadsheet simulation of the ATM system presented
in Section 3.5. Set the random number seeds Z1,0 = 29 and Z2,0 =
92 to compute the average time customers spend in the queue and in
the system.
a. Verify that the average time customers spend in the queue and in the
system match the values given for the second replication in Table 3.3.


b. Verify that the resulting random number Stream 1 and random
number Stream 2 are completely different from the corresponding
streams in Table 3.2. Is this a requirement for a new replication of
the simulation?


DISCRETE-EVENT
SIMULATION

When the only tool you have is a hammer, every problem begins to resemble a
nail.
Abraham Maslow

4.1 Introduction
Building on the foundation provided by Chapter 3 on how random numbers and random variates are used to simulate stochastic systems, the focus of this chapter is on discrete-event simulation, which is the main topic of this book. A discrete-event simulation is one in which changes in the state of the simulation model occur at discrete points in time as triggered by events. The events in the automatic teller machine (ATM) simulation example of Chapter 3 that occur at discrete points in time are the arrivals of customers to the ATM queue and the completion of their transactions at the ATM. However, you will learn in this chapter that the spreadsheet simulation of the ATM system in Chapter 3 was not technically executed as a discrete-event simulation.
This chapter first defines what a discrete-event simulation is compared to a continuous simulation. Next the chapter summarizes the basic technical issues related to discrete-event simulation to facilitate your understanding of how to effectively use the tool. Questions that will be answered include these:

How does discrete-event simulation work?
What do commercial simulation software packages provide?
What are the differences between simulation languages and simulators?
What is the future of simulation technology?

A manual dynamic, stochastic, discrete-event simulation of the ATM example system from Chapter 3 is given to further illustrate what goes on inside this type of simulation.

4.2 Discrete-Event versus Continuous Simulation

Commonly, simulations are classified according to the following characteristics:

Static or dynamic
Stochastic or deterministic
Discrete-event or continuous

In Chapter 3 we discussed the differences between the first two sets of characteristics. Now we direct our attention to discrete-event versus continuous simulations. A discrete-event simulation is one in which state changes occur at discrete points in time as triggered by events. Typical simulation events might include

The arrival of an entity to a workstation.
The failure of a resource.
The completion of an activity.
The end of a shift.

State changes in a model occur when some event happens. The state of the model becomes the collective state of all the elements in the model at a particular point in time. State variables in a discrete-event simulation are referred to as discrete-change state variables. A restaurant simulation is an example of a discrete-event simulation because all of the state variables in the model, such as the number of customers in the restaurant, are discrete-change state variables (see Figure 4.1). Most manufacturing and service systems are typically modeled using discrete-event simulation.
In continuous simulation, state variables change continuously with respect to time and are therefore referred to as continuous-change state variables. An example of a continuous-change state variable is the level of oil in an oil tanker that is being either loaded or unloaded, or the temperature of a building that is controlled by a heating and cooling system. Figure 4.2 compares a discrete-change state variable and a continuous-change state variable as they vary over time.
FIGURE 4.1
Discrete events cause discrete state changes.
[Plot omitted: number of customers in the restaurant (y-axis) versus time (x-axis); the count steps up at Event 1 and Event 2 (a customer arrives) and steps down at Event 3 (a customer departs).]

73

Chapter 4 Discrete-Event Simulation

FIGURE 4.2
Comparison of a discrete-change state variable and a continuous-change state variable.
[Plot omitted: value (y-axis) versus time (x-axis), showing a smoothly varying continuous-change state variable and a stepped discrete-change state variable.]

Continuous simulation products use either differential equations or difference equations to define the rates of change in state variables over time.

4.2.1 Differential Equations

The change that occurs in some continuous-change state variables is expressed in terms of the derivatives of the state variables. Equations involving derivatives of a state variable are referred to as differential equations. The state variable v, for example, might change as a function of both v and time t:

dv(t)/dt = v(t) + t

We then need a second equation to define the initial condition of v:

v(0) = K

On a computer, numerical integration is used to calculate the change in a particular response variable over time. Numerical integration is performed at the end of successive small time increments referred to as steps. Numerical analysis techniques, such as Runge-Kutta integration, are used to integrate the differential equations numerically for each incremental time step. One or more threshold values for each continuous-change state variable are usually defined that determine when some action is to be triggered, such as shutting off a valve or turning on a pump.
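As a sketch of how a simulation might step such an equation forward numerically, the following applies the simple Euler method to dv(t)/dt = v(t) + t with v(0) = K (Euler is used here for brevity; commercial packages typically use higher-order schemes such as Runge-Kutta):

```python
def euler_integrate(K, t_end, dt=0.001):
    """Advance v(t), where dv/dt = v(t) + t, from v(0) = K to time t_end."""
    v, t = float(K), 0.0
    while t < t_end:
        v += (v + t) * dt   # one Euler step: v(t + dt) ~ v(t) + (dv/dt) * dt
        t += dt
    return v
```

The exact solution of this equation is v(t) = (K + 1)e^t − t − 1, so the numerical result can be checked against it; a threshold test on v inside the loop is where a discrete action (shutting a valve, starting a pump) would be triggered.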

4.2.2 Difference Equations

Sometimes a continuous-change state variable can be modeled using difference equations. In such instances, time is decomposed into periods of length Δt. An algebraic expression is then used to calculate the value of the state variable at the end of period k + 1 based on the value of the state variable at the end of period k. For example, the following difference equation might be used to express the rate of change in the state variable v as a function of the current value of v, a rate of change (r), and the length of the time period (Δt):

v(k + 1) = v(k) + rΔt


Batch processing in which fluids are pumped into and out of tanks can often be modeled using difference equations.
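A minimal sketch of the difference-equation form, using a hypothetical tank filled at a constant rate r over periods of length Δt:

```python
def tank_level(v0, rate, dt, periods):
    """Compute v(k+1) = v(k) + r*dt for the given number of periods."""
    levels = [v0]
    for _ in range(periods):
        levels.append(levels[-1] + rate * dt)  # one period of filling
    return levels

# Fill an empty tank at 10 gallons/minute in half-minute periods.
print(tank_level(0.0, 10.0, 0.5, 4))  # [0.0, 5.0, 10.0, 15.0, 20.0]
```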

4.2.3 Combined Continuous and Discrete Simulation

Many simulation software products provide both discrete-event and continuous simulation capabilities. This enables systems that have both discrete-event and continuous characteristics to be modeled, resulting in a hybrid simulation. Most processing systems that have continuous-change state variables also have discrete-change state variables. For example, a truck or tanker arrives at a fill station (a discrete event) and begins filling a tank (a continuous process).
Four basic interactions occur between discrete- and continuous-change variables:
1. A continuous variable value may suddenly increase or decrease as the result of a discrete event (like the replenishment of inventory in an inventory model).
2. The initiation of a discrete event may occur as the result of reaching a threshold value in a continuous variable (like reaching a reorder point in an inventory model).
3. The change rate of a continuous variable may be altered as the result of a discrete event (a change in inventory usage rate as the result of a sudden change in demand).
4. An initiation or interruption of change in a continuous variable may occur as the result of a discrete event (the replenishment or depletion of inventory initiates or terminates a continuous change of the continuous variable).
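A toy sketch of interactions 1 and 2, assuming a hypothetical inventory that depletes continuously and is replenished by a discrete event whenever it crosses a reorder point:

```python
def simulate_inventory(level, usage_rate, reorder_point, reorder_qty, dt, t_end):
    """Deplete inventory continuously; trigger a discrete replenishment
    whenever the level crosses the reorder point."""
    t, orders = 0.0, 0
    while t < t_end:
        level -= usage_rate * dt      # continuous depletion over one step
        if level <= reorder_point:    # threshold reached: discrete event fires
            level += reorder_qty      # sudden jump in the continuous variable
            orders += 1
        t += dt
    return level, orders
```

Crossing the threshold (interaction 2) triggers the replenishment event, which in turn causes the sudden jump in the continuous variable (interaction 1).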

4.3 How Discrete-Event Simulation Works

Most simulation software, including ProModel, presents a process-oriented world view to the user for defining models. This is the way most humans tend to think about systems that process entities. When describing a system, it is natural to do so in terms of the process flow. Entities begin processing at activity A, then move on to activity B, and so on. In discrete-event simulation, these process flow definitions are translated into a sequence of events for running the simulation: first event 1 happens (an entity begins processing at activity A), then event 2 occurs (it completes processing at activity A), and so on. Events in simulation are of two types: scheduled and conditional, both of which create the activity delays in the simulation to replicate the passage of time.
A scheduled event is one whose time of occurrence can be determined beforehand and can therefore be scheduled in advance. Assume, for example, that an activity has just begun that will take X amount of time, where X is a normally distributed random variable with a mean of 5 minutes and a standard deviation of


1.2 minutes. At the start of the activity, a normal random variate is generated based on these parameters, say 4.2 minutes, and an activity completion event is scheduled for that time into the future. Scheduled events are inserted chronologically into an event calendar to await the time of their occurrence. Events that occur at predefined intervals theoretically all could be determined in advance and therefore be scheduled at the beginning of the simulation. For example, entities arriving every five minutes into the model could all be scheduled easily at the start of the simulation. Rather than preschedule all events at once that occur at a set frequency, however, they are scheduled only when the next occurrence must be determined. In the case of a periodic arrival, the next arrival would not be scheduled until the current scheduled arrival is actually pulled from the event calendar for processing. This postponement until the latest possible moment minimizes the size of the event calendar and eliminates the necessity of knowing in advance how many events to schedule when the length of the simulation may be unknown.
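The event calendar described above can be sketched as a priority queue ordered by event time; note how the next periodic arrival is scheduled only when the current one is pulled off the calendar (the names here are illustrative, not ProModel internals):

```python
import heapq
import itertools

calendar = []                    # event calendar: (time, sequence, event_type)
counter = itertools.count()      # tie-breaker for events at the same time

def schedule(time, event_type):
    heapq.heappush(calendar, (time, next(counter), event_type))

processed = []
schedule(5.0, "arrival")         # only the first arrival is pre-scheduled
while calendar:
    now, _, event = heapq.heappop(calendar)
    processed.append((now, event))
    if event == "arrival" and now < 20.0:
        schedule(now + 5.0, "arrival")   # next occurrence scheduled on demand

print(processed)   # arrivals processed at t = 5, 10, 15, 20
```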
Conditional events are triggered by a condition being met rather than by the passage of time. An example of a conditional event might be the capturing of a resource that is predicated on the resource being available. Another example would be an order waiting for all of the individual items making up the order to be assembled. In these situations, the event time cannot be known beforehand, so the pending event is simply placed into a waiting list until the conditions can be satisfied. Often multiple pending events in a list are waiting for the same condition. For example, multiple entities might be waiting to use the same resource when it becomes available. Internally, the resource would have a waiting list for all items currently waiting to use it. While in most cases events in a waiting list are processed first-in, first-out (FIFO), items could be inserted and removed using a number of different criteria. For example, items may be inserted according to item priority but be removed according to earliest due date.
Events, whether scheduled or conditional, trigger the execution of logic that is associated with that event. For example, when an entity frees a resource, the state and statistical variables for the resource are updated, the graphical animation is updated, and the input waiting list for the resource is examined to see what activity to respond to next. Any new events resulting from the processing of the current event are inserted into either the event calendar or another appropriate waiting list.
In real life, events can occur simultaneously so that multiple entities can be doing things at the same instant in time. In computer simulation, however, especially when running on a single processor, events can be processed only one at a time even though it is the same instant in simulated time. As a consequence, a method or rule must be established for processing events that occur at the exact same simulated time. For some special cases, the order in which events are processed at the current simulation time might be significant. For example, an entity that frees a resource and then tries to immediately get the same resource might have an unfair advantage over other entities that might have been waiting for that particular resource.
In ProModel, the entity, downtime, or other item that is currently being processed is allowed to continue processing as far as it can at the current simulation time. That means it continues processing until it reaches either a conditional

FIGURE 4.3
Logic diagram of how discrete-event simulation works.
[Flowchart: Start → create simulation database and schedule initial events → advance clock to next event time → if the event is the termination event, update statistics, generate the output report, and stop; otherwise process the event and schedule any new events, update statistics, state variables, and animation, process any conditional events that can now occur, and advance the clock again.]

event that cannot be satisfied or a timed delay that causes a future event to be scheduled. It is also possible that the object simply finishes all of the processing defined for it and, in the case of an entity, exits the system. As an object is being processed, any resources that are freed or other entities that might have been created as byproducts are placed in an action list and are processed one at a time in a similar fashion after the current object reaches a stopping point. To deliberately


suspend the current object in order to allow items in the action list to be processed, a zero delay time can be specified for the current object. This puts the current item into the future events list (event calendar) for later processing, even though it is still processed at the current simulation time.
When all scheduled and conditional events have been processed that are possible at the current simulation time, the clock advances to the next scheduled event and the process continues. When a termination event occurs, the simulation ends and statistical reports are generated. The ongoing cycle of processing scheduled and conditional events, updating state and statistical variables, and creating new events constitutes the essence of discrete-event simulation (see Figure 4.3).

4.4 A Manual Discrete-Event Simulation Example

To illustrate how discrete-event simulation works, a manual simulation is presented of the automatic teller machine (ATM) system of Chapter 3. Customers arrive to use an automatic teller machine (ATM) at a mean interarrival time of 3.0 minutes exponentially distributed. When customers arrive to the system, they join a queue to wait for their turn on the ATM. The queue has the capacity to hold an infinite number of customers. The ATM itself has a capacity of one (only one customer at a time can be processed at the ATM). Customers spend an average of 2.4 minutes exponentially distributed at the ATM to complete their transactions, which is called the service time at the ATM. Assuming that the simulation starts at time zero, simulate the ATM system for its first 22 minutes of operation and estimate the expected waiting time for customers in the queue (the average time customers wait in line for the ATM) and the expected number of customers waiting in the queue (the average number of customers waiting in the queue during the simulated time period). If you are wondering why 22 minutes was selected as the simulation run length, it is because 22 minutes of manual simulation will nicely fit on one page of the textbook. An entity flow diagram of the ATM system is shown in Figure 4.4.

4.4.1 Simulation Model Assumptions

We did not list our assumptions for simulating the ATM system back in Chapter 3, although it would have been a good idea to have done so. In any simulation,
FIGURE 4.4
Entity flow diagram for example automatic teller machine (ATM) system.
[Diagram: arriving customers (entities) → ATM queue (FIFO) → ATM server (resource) → departing customers (entities).]


certain assumptions must be made where information is not clear or complete. Therefore, it is important that assumptions be clearly documented. The assumptions we will be making for this simulation are as follows:

There are no customers in the system initially, so the queue is empty and the ATM is idle.
The move time from the queue to the ATM is negligible and therefore ignored.
Customers are processed from the queue on a first-in, first-out (FIFO) basis.
The ATM never experiences failures.

4.4.2 Setting Up the Simulation

A discrete-event simulation does not generate all of the events in advance as was done in the spreadsheet simulation of Chapter 3. Instead the simulation schedules the arrival of customer entities to the ATM queue and the departure of customer entities from the ATM as they occur over time. That is, the simulation calls a function to obtain an exponentially distributed interarrival time value only when a new customer entity needs to be scheduled to arrive to the ATM queue. Likewise, the simulation calls a function to obtain an exponentially distributed service time value at the ATM only after a customer entity has gained access (entry) to the ATM and the entity's future departure time from the ATM is needed. An event calendar is maintained to coordinate the processing of the customer arrival and customer departure events. As arrival and departure event times are created, they are placed on the event calendar in chronological order. As these events get processed, state variables (like the contents of a queue) get changed and statistical accumulators (like the cumulative time in the queue) are updated.
With this overview, let's list the structured components needed to conduct the manual simulation.
Simulation Clock
As the simulation transitions from one discrete event to the next, the simulation clock is fast forwarded to the time that the next event is scheduled to occur. There is no need for the clock to tick seconds away until reaching the time at which the next event in the list is scheduled to occur because nothing will happen that changes the state of the system until the next event occurs. Instead, the simulation clock advances through a series of time steps. Let ti denote the value of the simulation clock at time step i, for i = 0 to the number of discrete events to process. Assuming that the simulation starts at time zero, then the initial value of the simulation clock is denoted as t0 = 0. Using this nomenclature, t1 denotes the value of the simulation clock when the first discrete event in the list is processed, t2 denotes the value of the simulation clock when the second discrete event in the list is processed, and so on.
Entity Attributes
To capture some statistics about the entities being processed through the system, a discrete-event simulation maintains an array of entity attribute values. Entity


attributes are characteristics of the entity that are maintained for that entity until the entity exits the system. For example, to compute the amount of time an entity waited in a queue location, an attribute is needed to remember when the entity entered the location. For the ATM simulation, one entity attribute is used to remember the customer's time of arrival to the system. This entity attribute is called the Arrival Time attribute. The simulation program computes how long each customer entity waited in the queue by subtracting the time that the customer entity arrived to the queue from the value of the simulation clock when the customer entity gained access to the ATM.
State Variables
Two discrete-change state variables are needed to track how the status (state) of the system changes as customer entities arrive in and depart from the ATM system.

Number of Entities in Queue at time step i, NQi.
ATM Statusi to denote if the ATM is busy or idle at time step i.
Statistical Accumulators
The objective of the example manual simulation is to estimate the expected amount of time customers wait in the queue and the expected number of customers waiting in the queue. The average time customers wait in queue is a simple average. Computing this requires that we record how many customers passed through the queue and the amount of time each customer waited in the queue. The average number of customers in the queue is a time-weighted average, which is usually called a time average in simulation. Computing this requires that we not only observe the queue's contents during the simulation but that we also measure the amount of time that the queue maintained each of the observed values. We record each observed value after it has been multiplied (weighted) by the amount of time it was maintained.
Here's what the simulation needs to tally at each simulation time step i to compute the two performance measures at the end of the simulation.

Simple-average time in queue.
Record the number of customer entities processed through the queue, Total Processed. Note that the simulation may end before all customer entities in the queue get a turn at the ATM. This accumulator keeps track of how many customers actually made it through the queue.
For a customer processed through the queue, record the time that it waited in the queue. This is computed by subtracting the value of the simulation clock time when the entity enters the queue (stored in the entity attribute array Arrival Time) from the value of the simulation clock time when the entity leaves the queue, ti − Arrival Time.

Time-average number of customers in the queue.
For the duration of the last time step, which is ti − ti−1, and the number of customer entities in the queue during the last time step, which is NQi−1, record the product of ti − ti−1 and NQi−1. Call the product (ti − ti−1)NQi−1 the Time-Weighted Number of Entities in the Queue.
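The two accumulators can be sketched as follows; the time-average function assumes a recorded history of (event time, queue contents immediately after the event) pairs beginning at time zero:

```python
def simple_average_wait(wait_times):
    """Simple average: total time in queue / Total Processed."""
    return sum(wait_times) / len(wait_times)

def time_average_queue(history, t_end):
    """Time-weighted average number in queue: sum of (ti - ti-1)*NQi-1."""
    area = 0.0
    for (t_prev, nq_prev), (t_next, _) in zip(history, history[1:]):
        area += (t_next - t_prev) * nq_prev   # weight each value by its duration
    last_t, last_nq = history[-1]
    area += (t_end - last_t) * last_nq        # carry the final value to t_end
    return area / t_end

# Queue holds 0 customers for 2 min, 1 for 3 min, 2 for 1 min, then 0 again.
avg_nq = time_average_queue([(0, 0), (2, 1), (5, 2), (6, 0)], t_end=10.0)
```

Here the weighted area is 0*2 + 1*3 + 2*1 + 0*4 = 5 customer-minutes over 10 minutes, giving a time average of 0.5 customers.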


Events
There are two types of recurring scheduled events that change the state of the system: arrival events and departure events. An arrival event occurs when a customer entity arrives to the queue. A departure event occurs when a customer entity completes its transaction at the ATM. Each processing of a customer entity's arrival to the queue includes scheduling the future arrival of the next customer entity to the ATM queue. Each time an entity gains access to the ATM, its future departure from the system is scheduled based on its expected service time at the ATM. We actually need a third event to end the simulation. This event is usually called the termination event.
To schedule the time at which the next entity arrives to the system, the simulation needs to generate an interarrival time and add it to the current simulation clock time, ti. The interarrival time is exponentially distributed with a mean of 3.0 minutes for our example ATM system. Assume that the function E(3.0) returns an exponentially distributed random variate with a mean of 3.0 minutes. The future arrival time of the next customer entity can then be scheduled by using the equation ti + E(3.0).
The customer service time at the ATM is exponentially distributed with a mean of 2.4 minutes. The future departure time of an entity gaining access to the ATM is scheduled by the equation ti + E(2.4).
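The functions E(3.0) and E(2.4) can be sketched with the inverse transformation method from Chapter 3, which converts a uniform(0, 1) random number U into an exponential variate via X = -mean * ln(1 - U):

```python
import math
import random

def E(mean, u=None):
    """Exponentially distributed random variate (inverse transformation)."""
    if u is None:
        u = random.random()             # uniform(0,1) random number
    return -mean * math.log(1.0 - u)    # X = -mean * ln(1 - U)

t_i = 0.0
next_arrival_time = t_i + E(3.0)   # schedule the next arrival event
departure_time = t_i + E(2.4)      # schedule the departure from the ATM
```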
Event Calendar
The event calendar maintains the list of active events (events that have been scheduled and are waiting to be processed) in chronological order. The simulation progresses by removing the first event listed on the event calendar, setting the simulation clock, ti, equal to the time at which the event is scheduled to occur, and processing the event.

4.4.3 Running the Simulation

To run the manual simulation, we need a way to generate exponentially distributed random variates for interarrival times [the function E(3.0)] and service times [the function E(2.4)]. Rather than using our calculators to generate a random number and transform it into an exponentially distributed random variate each time one is needed for the simulation, let's take advantage of the work we did to build the spreadsheet simulation of the ATM system in Chapter 3. In Table 3.2 of Chapter 3, we generated 25 exponentially distributed interarrival times with a mean of 3.0 minutes and 25 exponentially distributed service times with a mean of 2.4 minutes in advance of running the spreadsheet simulation. So when we need a service time or an interarrival time for our manual simulation, let's use the values from the Service Time and Interarrival Time columns of Table 3.2 rather than computing them by hand. Note, however, that we do not need to generate all random variates in advance for our manual discrete-event simulation (that's one of its advantages over spreadsheet simulation). We are just using Table 3.2 to make our manual simulation a little less tedious, and it will be instructive to


compare the results of the manual simulation with those produced by the spreadsheet simulation.
Notice that Table 3.2 contains a subscript i in the leftmost column. This subscript denotes the customer entity number as opposed to the simulation time step. We wanted to point this out to avoid any confusion because of the different uses of the subscript. In fact, you can ignore the subscript in Table 3.2 as you pick values from the Service Time and Interarrival Time columns.
A discrete-event simulation logic diagram for the ATM system is shown in Figure 4.5 to help us carry out the manual simulation. Table 4.1 presents the results of the manual simulation after processing 12 events using the simulation logic diagram presented in Figure 4.5. The table tracks the creation and scheduling of events on the event calendar as well as how the state of the system changes and how the values of the statistical accumulators change as events are processed from the event calendar. Although Table 4.1 is completely filled in, it was initially blank until the instructions presented in the simulation logic diagram were executed. As you work through the simulation logic diagram, you should process the information in Table 4.1 from the first row down to the last row, one row at a time (completely filling in a row before going down to the next row). A dash (-) in a cell in Table 4.1 signifies that the simulation logic diagram does not require you to update that particular cell at the current simulation time step. An arrow (↑) in a cell in the table also signifies that the simulation logic diagram does not require you to update that cell at the current time step. However, the arrows serve as a reminder to look up one or more rows above your current position in the table to determine the state of the ATM system. Arrows appear under the Number of Entities in Queue, NQi column, and ATM Statusi column. The only exception to the use of dashes or arrows is that we keep a running total in the two Cumulative subcolumns in the table for each time step. Let's get the manual simulation started.
i = 0, t0 = 0. As shown in Figure 4.5, the first block after the start position indicates that the model is initialized to its starting conditions. The simulation time step begins at i = 0. The initial value of the simulation clock is zero, t0 = 0. The system state variables are set to ATM Status0 = Idle; Number of Entities in Queue, NQ0 = 0; and the Entity Attribute Array is cleared. This reflects the initial conditions of no customer entities in the queue and an idle ATM. The statistical accumulator Total Processed is set to zero. There are two different Cumulative variables in Table 4.1: one to accumulate the time in queue values of ti − Arrival Time, and the other to accumulate the values of the time-weighted number of entities in the queue, (ti − ti−1)NQi−1. Recall that ti − Arrival Time is the amount of time that entities, which gained access to the ATM, waited in queue. Both Cumulative variables are initialized to zero. Next, an initial arrival event and termination event are scheduled and placed under the Scheduled Future Events column. The listing of an event is formatted as (Entity Number, Event, Event Time). Entity Number denotes the customer number that the event pertains to (such as the first, second, or third customer). Event is the type of event: a customer arrives, a

FIGURE 4.5
Discrete-event simulation logic diagram for ATM system.

Start (i = 0): initialize variables and schedule the initial arrival event and the termination event (Scheduled Future Events). Then repeat for i = i + 1: update the Event Calendar by inserting the Scheduled Future Events in chronological order, advance the Clock, ti, to the time of the first event on the calendar, and process that event according to its type.

Arrive: schedule the arrival event for the next customer entity to occur at time ti + E(3). Then, is the ATM idle?
  Yes: store the current customer's Arrival Time in the first position of the Entity Attribute Array to reflect the customer entity entering the ATM, and change ATM Statusi to Busy. Schedule the departure event for the current customer entity to occur at time ti + E(2.4). Update the Entities Processed through Queue statistics:
    - Add 1 to Total Processed.
    - Record a Time in Queue of 0 for ti − Arrival Time and update Cumulative.
  Update the Time-Weighted Number of Entities in Queue statistic:
    - Record a value of 0 for (ti − ti−1)NQi−1 and update Cumulative.
  No: store the current customer's Arrival Time in the last position of the Entity Attribute Array to reflect the customer joining the queue, and add 1 to NQi, the Number of Entities in Queue. Update the Time-Weighted Number of Entities in Queue statistic:
    - Compute the value for (ti − ti−1)NQi−1 and update Cumulative.

Depart: are any customers in the queue?
  Yes: update the Entity Attribute Array by deleting the departed customer entity from the first position in the array and shifting waiting customers up, and subtract 1 from NQi, the Number of Entities in Queue. Schedule the departure event for the customer entity entering the ATM to occur at time ti + E(2.4). Update the Entities Processed through Queue statistics:
    - Add 1 to Total Processed.
    - Compute the Time in Queue value for ti − Arrival Time and update Cumulative.
  Update the Time-Weighted Number of Entities in Queue statistic:
    - Compute the value for (ti − ti−1)NQi−1 and update Cumulative.
  No: update the Entity Attribute Array by deleting the departed customer entity from the first position of the array, and change ATM Statusi to Idle.

End: update statistics and generate the output report.

Harrell-Ghosh-Bowden: Simulation Using ProModel, Second Edition. I. Study Chapters. 4. Discrete-Event Simulation. © The McGraw-Hill Companies.
TABLE 4.1
Manual Discrete-Event Simulation of ATM System

Each row records one time step: the step i and clock value ti; the processed event (Entity Number, Event); the system state after processing the event (the Entity Attribute Array, where * marks the entity using the ATM in array position 1 and entities waiting in queue occupy array positions 2, 3, . . . ; ATM Statusi; and the Number of Entities in Queue, NQi); the statistical accumulators (Time in Queue, ti − Arrival Time, with its Cumulative Σ(ti − Arrival Time); the time-weighted value (ti − ti−1)NQi−1 with its Cumulative; and Total Processed); and the Scheduled Future Events created while processing the event, listed as (Entity Number, Event, Time). The Event Calendar at each step holds the events scheduled in earlier steps in chronological order; the first event on the calendar is the one processed. A dash (–) means no update is required; an arrow (↑) means the value is unchanged from the row above.

i = 0, t0 = 0.00: initialize. Array: ( ); ATM Status0 = Idle; NQ0 = 0; Total Processed = 0; both Cumulatives = 0. Scheduled: (1, Arrive, 2.18), (_, End, 22.00).
i = 1, t1 = 2.18, process (1, Arrive). Array: *(1, 2.18); Busy; NQ1 ↑; Time in Queue 0 (Cum. 0); (t1 − t0)NQ0 = 0 (Cum. 0); Total Processed = 1. Scheduled: (2, Arrive, 7.91), (1, Depart, 2.28).
i = 2, t2 = 2.28, process (1, Depart). Array: *( ); Idle; NQ2 ↑; accumulators –. No new events.
i = 3, t3 = 7.91, process (2, Arrive). Array: *(2, 7.91); Busy; NQ3 ↑; Time in Queue 0 (Cum. 0); 0 (Cum. 0); Total Processed = 2. Scheduled: (3, Arrive, 15.00), (2, Depart, 12.37).
i = 4, t4 = 12.37, process (2, Depart). Array: *( ); Idle; NQ4 ↑; accumulators –. No new events.
i = 5, t5 = 15.00, process (3, Arrive). Array: *(3, 15.00); Busy; NQ5 ↑; Time in Queue 0 (Cum. 0); 0 (Cum. 0); Total Processed = 3. Scheduled: (4, Arrive, 15.17), (3, Depart, 18.25).
i = 6, t6 = 15.17, process (4, Arrive). Array: *(3, 15.00) (4, 15.17); Busy ↑; NQ6 = 1; Time in Queue –; (t6 − t5)NQ5 = 0 (Cum. 0). Scheduled: (5, Arrive, 15.74).
i = 7, t7 = 15.74, process (5, Arrive). Array: *(3, 15.00) (4, 15.17) (5, 15.74); Busy ↑; NQ7 = 2; (t7 − t6)NQ6 = 0.57 (Cum. 0.57). Scheduled: (6, Arrive, 18.75).
i = 8, t8 = 18.25, process (3, Depart). Array: *(4, 15.17) (5, 15.74); Busy ↑; NQ8 = 1; Time in Queue 3.08 (Cum. 3.08); (t8 − t7)NQ7 = 5.02 (Cum. 5.59); Total Processed = 4. Scheduled: (4, Depart, 20.50).
i = 9, t9 = 18.75, process (6, Arrive). Array: *(4, 15.17) (5, 15.74) (6, 18.75); Busy ↑; NQ9 = 2; (t9 − t8)NQ8 = 0.50 (Cum. 6.09). Scheduled: (7, Arrive, 19.88).
i = 10, t10 = 19.88, process (7, Arrive). Array: *(4, 15.17) (5, 15.74) (6, 18.75) (7, 19.88); Busy ↑; NQ10 = 3; (t10 − t9)NQ9 = 2.26 (Cum. 8.35). Scheduled: (8, Arrive, 22.53).
i = 11, t11 = 20.50, process (4, Depart). Array: *(5, 15.74) (6, 18.75) (7, 19.88); Busy ↑; NQ11 = 2; Time in Queue 4.76 (Cum. 7.84); (t11 − t10)NQ10 = 1.86 (Cum. 10.21); Total Processed = 5. Scheduled: (5, Depart, 24.62).
i = 12, t12 = 22.00, process (_, End). Array ↑; (t12 − t11)NQ11 = 3.00 (Cum. 13.21). The simulation terminates with (8, Arrive, 22.53) and (5, Depart, 24.62) still on the calendar.


customer departs, or the simulation ends. Time is the future time that the event is to occur. The event (1, Arrive, 2.18) under the Scheduled Future Events column prescribes that the first customer entity is scheduled to arrive at time 2.18 minutes. The arrival time was generated using the equation t0 + E(3.0). To obtain the value returned from the function E(3.0), we went to Table 3.2, read the first random variate from the Interarrival Time column (a value of 2.18 minutes), and added it to the current value of the simulation clock, t0 = 0. The simulation is to be terminated after 22 minutes. Note the (_, End, 22.00) under the Scheduled Future Events column. For the termination event, no value is assigned to Entity Number because it is not relevant.
i = 1, t1 = 2.18. After the initialization step, the list of scheduled future events is added to the event calendar in chronological order in preparation for the next simulation time step i = 1. The simulation clock is fast forwarded to the time that the next event is scheduled to occur, which is t1 = 2.18 (the arrival time of the first customer to the ATM queue), and then the event is processed. Following the simulation logic diagram, arrival events are processed by first scheduling the future arrival event for the next customer entity using the equation t1 + E(3.0) = 2.18 + 5.73 = 7.91 minutes. Note the value of 5.73 returned by the function E(3.0) is the second random variate listed under the Interarrival Time column of Table 3.2. This future event is placed under the Scheduled Future Events column in Table 4.1 as (2, Arrive, 7.91). Checking the status of the ATM from the previous simulation time step reveals that the ATM is idle (ATM Status0 = Idle). Therefore, the arriving customer entity immediately flows through the queue to the ATM to conduct its transaction. The future departure event of this entity from the ATM is scheduled using the equation t1 + E(2.4) = 2.18 + 0.10 = 2.28 minutes. See (1, Depart, 2.28) under the Scheduled Future Events column, denoting that the first customer entity is scheduled to depart the ATM at time 2.28 minutes. Note that the value of 0.10 returned by the function E(2.4) is the first random variate listed under the Service Time column of Table 3.2. The arriving customer entity's arrival time is then stored in the first position of the Entity Attribute Array to signify that it is being served by the ATM. The ATM Status1 is set to Busy, and the statistical accumulators for Entities Processed through Queue are updated. Add 1 to Total Processed, and since this entity entered the queue and immediately advanced to the idle ATM for processing, record zero minutes in the Time in Queue, t1 − Arrival Time, subcolumn and update this statistic's cumulative value. The statistical accumulators for Time-Weighted Number of Entities in the Queue are updated next. Record zero for (t1 − t0)NQ0 since there were no entities in queue during the previous time step, NQ0 = 0, and update this statistic's cumulative value. Note the arrow entered under the Number of Entities in Queue, NQ1 column. Recall that the arrow is placed there to signify that the number of entities waiting in the queue has not changed from its previous value.


i = 2, t2 = 2.28. Following the loop back around to the top of the simulation logic diagram, we place the two new future events onto the event calendar in chronological order in preparation for the next simulation time step i = 2. The simulation clock is fast forwarded to t2 = 2.28, and the departure event for the first customer entity arriving to the system is processed. Given that there are no customers in the queue from the previous time step, NQ1 = 0 (follow the arrows up to get this value), we simply remove the departed customer from the first position of the Entity Attribute Array and change the status of the ATM to idle, ATM Status2 = Idle. The statistical accumulators do not require updating because there are no customer entities waiting in the queue or leaving the queue. The dashes (–) entered under the statistical accumulator columns indicate that updates are not required. No new future events are scheduled.
As before, we follow the loop back to the top of the simulation logic diagram, and place any new events (of which there are none at the end of time step i = 2) onto the event calendar in chronological order in preparation for the next simulation time step i = 3. The simulation clock is fast forwarded to t3 = 7.91, and the arrival of the second customer entity to the ATM queue is processed. Complete the processing of this event and continue the manual simulation until the termination event (_, End, 22.00) reaches the top of the event calendar.
As you work through the simulation logic diagram to complete the manual simulation, you will see that the fourth customer arriving to the system requires that you use logic from a different path in the diagram. When the fourth customer entity arrives to the ATM queue, at simulation time step i = 6, the ATM is busy (see ATM Status5) processing customer entity 3's transaction. Therefore, the fourth customer entity joins the queue and waits to use the ATM. (Don't forget that it invoked the creation of the fifth customer's arrival event.) The fourth entity's arrival time of 15.17 minutes is stored in the last position of the Entity Attribute Array in keeping with the first-in, first-out (FIFO) rule. The Number of Entities in Queue, NQ6, is incremented to 1. Further, the Time-Weighted Number of Entities in the Queue statistical accumulators are updated by first computing (t6 − t5)NQ5 = (15.17 − 15.00)0 = 0 and then recording the result. Next, this statistic's cumulative value is updated. Customers 5, 6, and 7 also arrive to the system finding the ATM busy and therefore take their place in the queue to wait for the ATM.
The fourth customer waited a total of 3.08 minutes in the queue (see the ti − Arrival Time subcolumn) before it gained access to the ATM in time step i = 8 as the third customer departed. The 3.08 minutes in the queue for the fourth customer was computed in time step i = 8 as t8 − Arrival Time = 18.25 − 15.17 = 3.08 minutes. Note that Arrival Time is the time that the fourth customer arrived to the queue, and that the value is stored in the Entity Attribute Array.
At time t12 = 22.00 minutes the simulation terminates and tallies the final values for the statistical accumulators, indicating that a total of five customer entities were processed through the queue. The total amount of time that these five


customers waited in the queue is 7.84 minutes. The final cumulative value for Time-Weighted Number of Entities in the Queue is 13.21 minutes. Note that at the end of the simulation, two customers are in the queue (customers 6 and 7) and one is at the ATM (customer 5). A few quick observations are worth considering before we discuss how the accumulated values are used to calculate summary statistics for a simulation output report.
This simple and brief (while tedious) manual simulation is relatively easy to follow. But imagine a system with dozens of processes and dozens of factors influencing behavior, such as downtimes, mixed routings, resource contention, and others. You can see how essential computers are for performing a simulation of any magnitude. Computers have no difficulty tracking the many relationships and updating the numerous statistics that are present in most simulations. Equally important, computers are not error prone and can perform millions of instructions per second with absolute accuracy. We also want to point out that the simulation logic diagram (Figure 4.5) and Table 4.1 were designed to convey the essence of what happens inside a discrete-event simulation program. When you view a trace report of a ProModel simulation in Lab Chapter 8 you will see the similarities between the trace report and Table 4.1. Although the basic process presented is sound, its efficiency could be improved. For example, there is no need to keep both a scheduled future events list and an event calendar. Instead, future events can be inserted directly onto the event calendar as they are created. We separated them to facilitate our describing the flow of information in the discrete-event framework.
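The bookkeeping in Figure 4.5 and Table 4.1 maps directly onto a short program. The sketch below is ours (not code from the text): it replays the manual ATM simulation in Python, using a min-heap as the event calendar and the same interarrival and service values drawn from Table 3.2; all variable names are our own.

```python
import heapq

# Random variates as drawn in the manual simulation (from Table 3.2)
interarrivals = [2.18, 5.73, 7.09, 0.17, 0.57, 3.01, 1.13, 2.65]
services = [0.10, 4.46, 3.25, 2.25, 4.12]

calendar = []  # event calendar: (time, sequence number, event type, entity)
seq = 0

def schedule(time, event, entity=None):
    """Push a future event onto the event calendar (a min-heap)."""
    global seq
    heapq.heappush(calendar, (time, seq, event, entity))
    seq += 1

arrival_iter = iter(interarrivals)
service_iter = iter(services)
schedule(next(arrival_iter), "Arrive", 1)  # (1, Arrive, 2.18)
schedule(22.0, "End")                      # (_, End, 22.00)

queue = []            # waiting entities in FIFO order: (entity, arrival time)
atm_busy = False
last_t = 0.0          # t_{i-1}
nq = 0                # NQ_i, number of entities in queue
total_processed = 0
cum_wait = 0.0        # cumulative of t_i - Arrival Time
cum_tw = 0.0          # cumulative of (t_i - t_{i-1}) * NQ_{i-1}
next_entity = 2

while calendar:
    t, _, event, entity = heapq.heappop(calendar)
    cum_tw += (t - last_t) * nq   # time-weighted accumulator uses NQ_{i-1}
    last_t = t
    if event == "End":
        break
    if event == "Arrive":
        ia = next(arrival_iter, None)
        if ia is not None:        # schedule the next customer's arrival
            schedule(t + ia, "Arrive", next_entity)
            next_entity += 1
        if not atm_busy:          # entity flows straight to the idle ATM
            atm_busy = True
            total_processed += 1  # waited 0 minutes in queue
            schedule(t + next(service_iter), "Depart", entity)
        else:                     # ATM busy: join the queue
            queue.append((entity, t))
            nq += 1
    elif event == "Depart":
        if queue:                 # first waiting entity moves to the ATM
            e, arrived = queue.pop(0)
            nq -= 1
            total_processed += 1
            cum_wait += t - arrived
            schedule(t + next(service_iter), "Depart", e)
        else:
            atm_busy = False

print(total_processed)        # 5
print(round(cum_wait, 2))     # 7.84
print(round(cum_tw, 2))       # 13.21
```

Running the loop reproduces the final row of Table 4.1: five customers processed, 7.84 minutes of total queue time, and 13.21 for the time-weighted queue accumulator.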

4.4.4 Calculating Results


When a simulation terminates, statistics that summarize the model's behavior are calculated and displayed in an output report. Many of the statistics reported in a simulation output report consist of average values, although most software reports a multitude of statistics, including the maximum and minimum values observed during the simulation. Average values may be either simple averages or time averages.
Simple-Average Statistic
A simple-average statistic is calculated by dividing the sum of all observation values of a response variable by the number of observations:

    Simple average = (Σⁿᵢ₌₁ xi)/n

where n is the number of observations and xi is the value of the ith observation. Example simple-average statistics include the average time entities spent in the system (from system entry to system exit), the average time entities spent at a specific location, or the average time per use for a resource. An average of an observation-based response variable in ProModel is computed as a simple average.


The average time that customer entities waited in the queue for their turn on the ATM during the manual simulation reported in Table 4.1 is a simple-average statistic. Recall that the simulation processed five customers through the queue. Let xi denote the amount of time that the ith customer processed spent in the queue. The average waiting time in queue based on the n = 5 observations is

    Average time in queue = (Σ⁵ᵢ₌₁ xi)/5 = (0 + 0 + 0 + 3.08 + 4.76)/5 = 7.84/5 = 1.57 minutes

The values necessary for computing this average are accumulated under the Entities Processed through Queue columns of the manual simulation table (see the last row of Table 4.1 for the cumulative value Σ(ti − Arrival Time) = 7.84 and Total Processed = 5).
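As a quick check, the same simple average can be computed directly from the five recorded waiting times; the list below restates the values from Table 4.1.

```python
# Waiting times (minutes) of the five customers processed through the queue
waits = [0.0, 0.0, 0.0, 3.08, 4.76]

simple_average = sum(waits) / len(waits)
print(round(simple_average, 2))  # 1.57
```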
Time-Average Statistic
A time-average statistic, sometimes called a time-weighted average, reports the average value of a response variable weighted by the time duration for each observed value of the variable:

    Time average = (Σⁿᵢ₌₁ Ti·xi)/T

where xi denotes the value of the ith observation, Ti denotes the time duration of the ith observation (the weighting factor), and T denotes the total duration over which the observations were collected. Example time-average statistics include the average number of entities in a system, the average number of entities at a location, and the average utilization of a resource. An average of a time-weighted response variable in ProModel is computed as a time average.
The average number of customer entities waiting in the queue location for their turn on the ATM during the manual simulation is a time-average statistic. Figure 4.6 is a plot of the number of customer entities in the queue during the manual simulation recorded in Table 4.1. The 12 discrete events manually simulated in Table 4.1 are labeled t1, t2, t3, . . . , t11, t12 on the plot. Recall that ti denotes the value of the simulation clock at time step i in Table 4.1, and that its initial value is zero, t0 = 0.
Using the notation from the time-average equation just given, the total simulation time illustrated in Figure 4.6 is T = 22 minutes. The Ti denotes the duration of time step i (the distance between adjacent discrete events in Figure 4.6). That is, Ti = ti − ti−1 for i = 1, 2, 3, . . . , 12. The xi denotes the queue's contents (number of customer entities in the queue) during each Ti time interval. Therefore, xi = NQi−1 for i = 1, 2, 3, . . . , 12 (recall that in Table 4.1, NQi−1 denotes the number of customer entities in the queue from ti−1 to ti). The time-average

FIGURE 4.6
Number of customers in the queue during the manual simulation.
[Step plot of the number of customers in queue versus simulation time, T = 22. The event times t0, t1, . . . , t12 partition the time axis into the intervals T1 = 2.18, T2 = 0.1, T3 = 7.91 − 2.28 = 5.63, T4 = 12.37 − 7.91 = 4.46, T5 = 2.63, T6 = 0.17, T7 = 0.57, T8 = 2.51, . . . , T12 = 1.50.]


number of customer entities in the queue (let's call it Average NQ) is

    Average NQ = (Σ¹²ᵢ₌₁ Ti·xi)/T = (Σ¹²ᵢ₌₁ (ti − ti−1)NQi−1)/T

    Average NQ = [(2.18)(0) + (0.1)(0) + (5.63)(0) + (4.46)(0) + (2.63)(0) + (0.17)(0) + (0.57)(1) + (2.51)(2) + · · · + (1.5)(2)]/22

    Average NQ = 13.21/22 = 0.60 customers

You may recognize that the numerator of this equation, Σ¹²ᵢ₌₁ (ti − ti−1)NQi−1, calculates the area under the plot of the queue's contents during the simulation (Figure 4.6). The values necessary for computing this area are accumulated under the Time-Weighted Number of Entities in Queue column of Table 4.1 (see the Cumulative value of 13.21 in the table's last row).
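The area-under-the-plot computation can be reproduced from the event times and queue contents recorded in Table 4.1; the lists below restate those values, and the names are ours.

```python
# Event times t0 .. t12 (minutes) and queue contents NQ0 .. NQ11 from Table 4.1
t = [0.0, 2.18, 2.28, 7.91, 12.37, 15.00, 15.17,
     15.74, 18.25, 18.75, 19.88, 20.50, 22.00]
nq = [0, 0, 0, 0, 0, 0, 1, 2, 1, 2, 3, 2]  # NQ_{i-1} holds over (t_{i-1}, t_i)

# Area under the queue-contents plot: sum of (t_i - t_{i-1}) * NQ_{i-1}
area = sum((t[i] - t[i - 1]) * nq[i - 1] for i in range(1, len(t)))
average_nq = area / t[-1]  # divide by total simulation time T = 22

print(round(area, 2))        # 13.21
print(round(average_nq, 2))  # 0.6
```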

4.4.5 Issues
Even though this example is a simple and somewhat crude simulation, it provides a good illustration of basic simulation issues that need to be addressed when conducting a simulation study. First, note that the simulation start-up conditions can bias the output statistics. Since the system started out empty, queue content statistics are slightly less than what they might be if we began the simulation with customers already in the system. Second, note that we ran the simulation for only 22 minutes before calculating the results. Had we run longer, it is very likely that the long-run average time in the queue would have been somewhat different (most likely greater) than the time from the short run because the simulation did not have a chance to reach a steady state.
These are the kinds of issues that should be addressed whenever running a simulation. The modeler must carefully analyze the output and understand the significance of the results that are given. This example also points to the need for considering beforehand just how long a simulation should be run. These issues are addressed in Chapters 9 and 10.

4.5 Commercial Simulation Software


While a simulation model may be programmed using any development language
such as C++ or Java, most models are built using commercial simulation
software such as ProModel. Commercial simulation software consists of
several modules for performing different functions during simulation modeling.
Typical modules are shown in Figure 4.7.

4.5.1 Modeling Interface Module


A modeler defines a model using an input or modeling interface module. This module provides graphical tools, dialogs, and other text editing capabilities for


FIGURE 4.7
Typical components of simulation software.
[Block diagram: a modeling interface feeds a modeling processor, which produces the model data; the simulation processor, controlled through a simulation interface, reads the model and simulation data and writes output data; an output processor and output interface present the output data.]

entering and editing model information. External files used in the simulation are specified here, as well as run-time options (number of replications and so on).

4.5.2 Model Processor
When a completed model is run, the model processor takes the model database, and any other external data files that are used as input data, and creates a simulation database. This involves data translation and possibly language compilation. This data conversion is performed because the model database and external files are generally in a format that cannot be used efficiently by the simulation processor during the simulation. Some simulation languages are interpretive, meaning that the simulation works with the input data the way they were entered. This allows simulations to start running immediately without going through a translator, but it slows down the speed at which the simulation executes.
In addition to translating the model input data, other data elements are created for use during simulation, including statistical counters and state variables. Statistical counters are used to log statistical information during the simulation, such as the cumulative number of entries into the system or the cumulative activity time at a workstation.

4.5.3 Simulation Interface Module


The simulation interface module displays the animation that occurs during the
simulation run. It also permits the user to interact with the simulation to control
the current animation speed, trace events, debug logic, query state variables,


request snapshot reports, pan or zoom the layout, and so forth. If visual interactive capability is provided, the user is even permitted to make changes dynamically to model variables with immediate visual feedback of the effects of such changes.
The animation speed can be adjusted and animation can even be disabled by the user during the simulation. When unconstrained, a simulation is capable of running as fast as the computer can process all of the events that occur within the simulated time. The simulation clock advances instantly to each scheduled event; the only central processing unit (CPU) time of the computer that is used is what is necessary for processing the event logic. This is how simulation is able to run in compressed time. It is also the reason why large models with millions of events take so long to simulate. Ironically, in real life activities take time while events take no time. In simulation, events take time while activities take no time. To slow down a simulation, delay loops or system timers are used to create pauses between events. These techniques give the appearance of elapsing time in an animation. In some applications, it may even be desirable to run a simulation at the same rate as a real clock. These real-time simulations are achieved by synchronizing the simulation clock with the computer's internal system clock. Human-in-the-loop (such as operator training simulators) and hardware-in-the-loop (testing of new equipment and control systems) are examples of real-time simulations.

4.5.4 Simulation Processor
The simulation processor processes the simulated events and updates the statistical accumulators and state variables. A typical simulation processor consists of the following basic components:
Clock variable: a variable used to keep track of the elapsed time during the simulation.
Event calendar: a list of scheduled events arranged chronologically according to the time at which they are to occur.
Event dispatch manager: the internal logic used to update the clock and manage the execution of events as they occur.
Event logic: algorithms that describe the logic to be executed and statistics to be gathered when an event occurs.
Waiting lists: one or more lists (arrays) containing events waiting for a resource or other condition to be satisfied before continuing processing.
Random number generator: an algorithm for generating one or more streams of pseudo-random numbers between 0 and 1.
Random variate generators: routines for generating random variates from specified probability distributions.
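The last two components work together: a uniform pseudo-random number is transformed into a draw from a target distribution. For the exponential interarrival and service times used in the ATM example, one standard approach is the inverse-transform method; the sketch below is ours, not code from the book, and the function name is our own.

```python
import math
import random

def exponential_variate(mean, u=None):
    """Exponential random variate via the inverse-transform method:
    X = -mean * ln(1 - U), where U ~ Uniform(0, 1)."""
    if u is None:
        u = random.random()  # pseudo-random number between 0 and 1
    return -mean * math.log(1.0 - u)

# A variate from E(3.0), the mean-3.0-minute interarrival distribution
print(round(exponential_variate(3.0, u=0.5), 3))  # 2.079
```

Feeding a recorded stream of U values into such a routine is what makes a simulation run reproducible, which is exactly why the manual simulation could reuse the variates listed in Table 3.2.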

4.5.5 Animation Processor


The animation processor interacts with the simulation database to update
the graphical data to correspond to the changing state data. Animation is
usually


displayed during the simulation itself, although some simulation products create an animation file that can be played back at the end of the simulation. In addition to animated figures, dynamic graphs and history plots can be displayed during the simulation.
Animation and dynamically updated displays and graphs provide a visual representation of what is happening in the model while the simulation is running. Animation comes in varying degrees of realism, from three-dimensional animation to simple animated flowcharts. Often, the only output from the simulation that is of interest is what is displayed in the animation. This is particularly true when simulation is used for facilitating conceptualization or for communication purposes.
A lot can be learned about model behavior by watching the animation (a picture is worth a thousand words, and an animation is worth a thousand pictures). Animation can be as simple as circles moving from box to box, or as detailed as realistic graphical representations. The strategic use of graphics should be planned in advance to make the best use of them. While insufficient animation can weaken the message, excessive use of graphics can distract from the central point to be made. It is always good to dress up the simulation graphics for the final presentation; however, such embellishments should be deferred at least until after the model has been debugged.
For most simulations where statistical analysis is required, animation is no substitute for the postsimulation summary, which gives a quantitative overview of the entire system performance. Basing decisions on the animation alone reflects shallow thinking and can even result in unwarranted conclusions.

4.5.6 Output Processor
The output processor summarizes the statistical data collected during the simulation run and creates an output database. Statistics are reported on such performance measures as

Resource utilization.
Queue sizes.
Waiting times.
Processing rates.

In addition to standard output reporting, most simulation software provides the ability to write user-specified data out to external files.

4.5.7 Output Interface Module
The output interface module provides a user interface for displaying the output results from the simulation. Output results may be displayed in the form of reports or charts (bar charts, pie charts, or the like). Output data analysis capabilities such as correlation analysis and confidence interval calculations are also often provided. Some simulation products even point out potential problem areas such as bottlenecks.


4.6 Simulation Using ProModel
ProModel is a powerful, yet easy-to-use, commercial simulation package that is designed to effectively model any discrete-event processing system. It also has continuous modeling capabilities for modeling flow in and out of tanks and other vessels. The online tutorial in ProModel describes the building, running, and output analysis of simulation models. A brief overview of how ProModel works is presented here. The labs in this book provide a more detailed, hands-on approach for actually building and running simulations using ProModel.

4.6.1 Building a Model
A model is defined in ProModel using simple graphical tools, data entry tables, and fill-in-the-blank dialog boxes. In ProModel, a model consists of entities (the items being processed), locations (the places where processing occurs), resources (agents used to process and move entities), and paths (aisles and pathways along which entities and resources traverse). Dialogs are associated with each of these modeling elements for defining operational behavior such as entity arrivals and processing logic. Schedules, downtimes, and other attributes can also be defined for entities, resources, and locations.
Most of the system elements are defined graphically in ProModel (see Figure 4.8). A graphic representing a location, for example, is placed on the layout to create a new location in the model (see Figure 4.9). Information about this location can then be entered, such as its name, capacity, and so on. Default values are provided to help simplify this process. Defining objects graphically provides a highly intuitive and visual approach to model building. The use of graphics is optional, and a model can even be defined without using any graphics. In addition to graphic objects provided by the modeling software, import capability is available to bring in graphics from other packages. This includes complete facility layouts created using CAD software such as AutoCAD.

4.6.2 Running the Simulation
When running a model created in ProModel, the model database is translated or compiled to create the simulation database. The animation in ProModel is displayed concurrently with the simulation. Animation graphics are classified as either static or dynamic. Static graphics include walls, aisles, machines, screen text, and others. Static graphics provide the background against which the animation takes place. This background might be a CAD layout imported into the model. The dynamic animation objects that move around on the background during the simulation include entities (parts, customers, and so on) and resources (people, fork trucks, and so forth). Animation also includes dynamically updated counters, indicators, gauges, and graphs that display count, status, and statistical information (see Figure 4.9).

FIGURE 4.8
Sample of ProModel graphic objects.

FIGURE 4.9
ProModel animation provides useful feedback.

4.6.3 Output Analysis
The output processor in ProModel provides both summary and detailed statistics on key performance measures. Simulation results are presented in the form of reports, plots, histograms, pie charts, and others. Output data analysis capabilities such as confidence interval estimation are provided for more precise analysis. Outputs from multiple replications and multiple scenarios can also be summarized and compared. Averaging performance across replications and showing multiple scenario output side-by-side make the results much easier to interpret.
Summary Reports
Summary reports show totals, averages, and other overall values of interest. Figure 4.10 shows an output report summary generated from a ProModel simulation run.


FIGURE 4.10
Summary report of simulation activity.
--------------------------------------------------------------------------------
General Report
Output from C:\ProMod4\models\demos\Mfg_cost.mod [Manufacturing Costing Optimization]
Date: Feb/27/2003
Time: [Link] PM
--------------------------------------------------------------------------------
Scenario        : Model Parameters
Replication     : 1 of 1
Warmup Time     : 5 hr
Simulation Time : 15 hr
--------------------------------------------------------------------------------
LOCATIONS

                                          Average
Location     Scheduled           Total    Minutes    Average   Maximum   Current
Name         Hours     Capacity  Entries  Per Entry  Contents  Contents  Contents
-----------  ---------  --------  -------  ---------  --------  --------  --------
Receive      10         2         21       57.1428    2         2         2
NC Lathe 1   10         1         57       10.1164    0.961065  1         1
NC Lathe 2   10         1         57       9.8918     0.939725  1         1
Degrease     10         2         114      10.1889    1.9359    2         2
Inspect      10         1         113      4.6900     0.883293  1         1
Bearing Que  10         100       90       34.5174    5.17762   13        11
Loc1         10         5         117      25.6410    5         5         5

RESOURCES

                              Number    Average  Average  Average
                              Of        Minutes  Minutes  Minutes
Resource         Scheduled    Times     Per      Travel   Travel   % Blocked
Name      Units  Hours        Used      Usage    To Use   To Park  In Travel  % Util
--------  -----  ---------    --------  -------  -------  -------  ---------  ------
CellOp.1  1      10           122       2.7376   0.1038   0.0000   0.00       57.76
CellOp.2  1      10           118       2.7265   0.1062   0.0000   0.00       55.71
CellOp.3  1      10           115       2.5416   0.1020   0.0000   0.00       50.67
CellOp    3      30           355       2.6704   0.1040   0.0000   0.00       54.71

ENTITY ACTIVITY

                 Current    Average  Average  Average    Average    Average
          Total  Quantity   Minutes  Minutes  Minutes    Minutes    Minutes
Entity    Exits  In System  In       Moving   Wait for   In         Blocked
Name                        System            Res, etc.  Operation
--------  -----  ---------  -------  -------  ---------  ---------  -------
Pallet    19     2          63.1657  0.0000   31.6055    1.0000     30.5602
Blank     0      7          –        –        –          –          –
Cog       79     3          52.5925  0.8492   3.2269     33.5332    14.9831
Reject    33     0          49.5600  0.8536   2.4885     33.0656    13.1522
Bearing   78     12         42.1855  0.0500   35.5899    0.0000     6.5455

FIGURE 4.11
Time-series graph showing changes in queue size over time.

FIGURE 4.12
Histogram of queue contents.

Part I Study Chapters


Time Series Plots and Histograms

Sometimes a summary report is too general to capture the information being sought. Fluctuations in model behavior over time can be viewed using a time series report. In ProModel, time series output data are displayed graphically so that patterns can be identified easily. Time series plots can show how inventory levels fluctuate throughout the simulation or how changes in workload affect resource utilization. Figure 4.11 is a time series graph showing how the length of a queue fluctuates over time.
Once collected, time series data can be grouped in different ways to show patterns or trends. One common way of showing patterns is using a histogram. The histogram in Figure 4.12 breaks down contents in a queue to show the percentage of time different quantities were in that queue.
As depicted in Figure 4.12, over 95 percent of the time there were fewer than 12 items in the queue. This is more meaningful than simply knowing what the average length of the queue was.
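A percentage-of-time histogram like this is a time-weighted calculation: each observed queue level is weighted by how long the queue stayed at that level. A minimal sketch of the idea in plain Python (the queue history here is invented, not taken from Figure 4.12):

```python
def time_weighted_histogram(changes, end_time):
    """Fraction of simulation time spent at each queue level.

    changes: list of (time, contents) pairs recording each point at which
             the queue contents changed, starting at time 0.
    end_time: total simulated time.
    """
    totals = {}
    # Pair each observation with the next change time to get its duration.
    for (t, level), (t_next, _) in zip(changes, changes[1:] + [(end_time, None)]):
        totals[level] = totals.get(level, 0.0) + (t_next - t)
    return {level: duration / end_time for level, duration in totals.items()}

# Queue contents over a 10-hour run (hypothetical observations).
history = [(0, 0), (2, 1), (5, 2), (6, 1), (9, 0)]
hist = time_weighted_histogram(history, end_time=10)
print(hist)  # level 1 held for (5-2)+(9-6) = 6 hours -> 0.6
```

Summing the fractions for all levels below some threshold gives statements of the form "95 percent of the time there were fewer than 12 items in the queue."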

4.7 Languages versus Simulators

A distinction is frequently made between simulation languages and simulators. Simulation languages are often considered more general purpose and have fewer predefined constructs for modeling specific types of systems. Simulators, on the other hand, are designed to handle specific problems: for example, a job shop simulator or a clinic simulator. The first simulators that appeared provided little, if any, programming capability, just as the first simulation languages provided few, if any, special modeling constructs to facilitate modeling. Consequently, simulators acquired the reputation of being easy to use but inflexible, while simulation languages were branded as being very flexible but difficult to use (see Figure 4.13).
Over time, the distinction between simulation languages and simulators has become blurred as specialized modeling constructs have been added to general-purpose simulation languages, making them easier to use. During the same period, general programming extensions have been added to simulators, making them more flexible. The most popular simulation tools today combine powerful industry-specific constructs with flexible programming capabilities, all accessible from an intuitive graphical user interface (Bowden 1998). Some tools are even configurable, allowing the software to be adapted to specific applications yet still retaining programming capability. Rather than put languages and simulators on opposite ends of the same spectrum as though flexibility and ease of use were mutually exclusive, it is more appropriate to measure the flexibility and ease of use for all simulation software along two separate axes (Figure 4.14).
FIGURE 4.13
Old paradigm that polarized ease of use and flexibility. (A single spectrum running from ease of use to flexibility, with simulators at one end and languages at the other.)

FIGURE 4.14
New paradigm that views ease of use and flexibility as independent characteristics. (Two axes: ease of use from hard to easy, flexibility from low to high. Early simulators plot as easy but low in flexibility, early languages as hard but high in flexibility, and current best-of-breed products as both easy and highly flexible.)

4.8 Future of Simulation

Simulation products have evolved to provide more than simply stand-alone simulation capability. Modern simulation products have open architectures based on component technology and standard data access methods (like SQL) to provide interfacing capability with other applications such as CAD programs and enterprise planning tools. Surveys reported annually in Industrial Engineering Solutions show that most simulation products have the following features:

Input data analysis for distribution fitting.
Point-and-click graphical user interface.
Reusable components and templates.
Two- (2-D) and three-dimensional (3-D) animation.
Online help and tutorials.
Interactive debugging.
Automatic model generation.
Output analysis tools.
Optimization.
Open architecture and database connectivity.

Simulation is a technology that will continue to evolve as related technologies improve and more time is devoted to the development of the software. Products will become easier to use, with more intelligence being incorporated into the software itself. Evidence of this trend can already be seen in the optimization and other time-saving utilities that are appearing in simulation products. Animation and other graphical visualization techniques will continue to play an important role in simulation. As 3-D and other graphic technologies advance, these features will also be incorporated into simulation products.


Simulation products targeted at vertical markets are on the rise. This trend is driven by efforts to make simulation easier to use and more solution oriented. Specific areas where dedicated simulators have been developed include call center management, supply chain management, and high-speed processing. While many simulation applications are becoming more narrowly focused, others are becoming more global and look at the entire enterprise or value chain in a hierarchical fashion from top to bottom.
Perhaps the most dramatic change in simulation will be in the area of software interoperability and technology integration. Historically, simulation has been viewed as a stand-alone, project-based technology. Simulation models were built to support an analysis project, to predict the performance of complex systems, and to select the best alternative from a few well-defined alternatives. Typically these projects were time-consuming and expensive, and relied heavily on the expertise of a simulation analyst or consultant. The models produced were generally single-use models that were discarded after the project.
In recent years, the simulation industry has seen increasing interest in extending the useful life of simulation models by using them on an ongoing basis (Harrell and Hicks 1998). Front-end spreadsheets and push-button user interfaces are making such models more accessible to decision makers. In these flexible simulation models, controlled changes can be made to models throughout the system life cycle. This trend is growing to include dynamic links to databases and other data sources, enabling entire models actually to be built and run in the background using data already available from other enterprise applications.
The trend to integrate simulation as an embedded component in enterprise applications is part of a larger development of software components that can be distributed over the Internet. This movement is being fueled by three emerging information technologies: (1) component technology that delivers true object orientation; (2) the Internet or World Wide Web, which connects business communities and industries; and (3) Web service technologies such as J2EE and Microsoft's .NET. These technologies promise to enable parallel and distributed model execution and provide a mechanism for maintaining distributed model repositories that can be shared by many modelers (Fishwick 1997). The interest in Web-based simulation, like all other Web-based applications, continues to grow.

4.9 Summary

Most manufacturing and service systems are modeled using dynamic, stochastic, discrete-event simulation. Discrete-event simulation works by converting all activities to events and consequent reactions. Events are either time-triggered or condition-triggered, and are therefore processed either chronologically or when a satisfying condition has been met.
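The chronological processing of scheduled events can be sketched with a simple event list. This toy Python loop illustrates only the core mechanism, not ProModel's internal engine:

```python
import heapq

def run(events, end_time):
    """Process time-ordered events from a priority-queue event list.

    events: initial list of (time, name) pairs; in a full simulator,
            event handlers would push newly scheduled events onto the heap.
    Returns the log of events processed in chronological order.
    """
    heapq.heapify(events)  # turn the list into a min-heap keyed on time
    clock, log = 0.0, []
    while events:
        time, name = heapq.heappop(events)  # always the earliest event
        if time > end_time:
            break
        clock = time               # advance the clock to the event time
        log.append((clock, name))  # "process" the event
    return log

log = run([(4.0, "end_service"), (1.0, "arrival"), (2.5, "arrival")], end_time=10)
print(log)  # events come out in time order regardless of insertion order
```

Between events nothing happens, so the clock jumps directly from one event time to the next; this is what makes discrete-event simulation efficient compared with advancing time in small fixed steps.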
Simulation models are generally defined using commercial simulation software that provides convenient modeling constructs and analysis tools. Simulation software consists of several modules with which the user interacts. Internally, model data are converted to simulation data, which are processed during the simulation. At the end of the simulation, statistics are summarized in an output database that can be tabulated or graphed in various forms. The future of simulation is promising and will continue to incorporate exciting new technologies.

4.10 Review Questions

1. Give an example of a discrete-change state variable and a continuous-change state variable.
2. In simulation, the completion time of an activity whose duration is random must be known at the start of the activity. Why is this necessary?
3. Give an example of an activity whose completion is a scheduled event and one whose completion is a conditional event.
4. For the first 10 customers processed completely through the ATM spreadsheet simulation presented in Table 3.2 of Chapter 3, construct a table similar to Table 4.1 as you carry out a manual discrete-event simulation of the ATM system to
   a. Compute the average amount of time the first 10 customers spent in the system. Hint: Add time in system and corresponding cumulative columns to the table.
   b. Compute the average amount of time the first 10 customers spent in the queue.
   c. Plot the number of customers in the queue over the course of the simulation and compute the average number of customers in the queue for the simulation.
   d. Compute the utilization of the ATM. Hint: Define a utilization variable that is equal to zero when the ATM is idle and equal to 1 when the ATM is busy. At the end of the simulation, compute a time-weighted average of the utilization variable.
5. Identify whether each of the following output statistics would be computed as a simple or as a time-weighted average value.
   a. Average utilization of a resource.
   b. Average time entities spend in a queue.
   c. Average time entities spend waiting for a resource.
   d. Average number of entities waiting for a particular resource.
   e. Average repair time for a machine.
6. Give an example of a situation where a time series graph would be more useful than just seeing an average value.
7. In real life, activities take time and events take no time. During a simulation, activities take no time and events take all of the time. Explain this paradox.


8. What are the main components of simulation software?
9. Look up a simulation product on the Internet or in a trade journal and describe two promotional features of the product being advertised.
10. For each of the following simulation applications identify one discrete- and one continuous-change state variable.
   a. Inventory control of an oil storage and pumping facility.
   b. Study of beverage production in a soft drink production facility.
   c. Study of congestion in a busy traffic intersection.
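The distinction behind questions 4d and 5 comes down to two different averaging formulas: a simple average weights each observation equally, while a time-weighted average weights each value by how long it persisted. A brief Python sketch with invented data:

```python
def simple_average(observations):
    """Observation-based statistic: each observation counts equally
    (e.g., the time each entity spent in a queue)."""
    return sum(observations) / len(observations)

def time_weighted_average(changes, end_time):
    """Time-persistent statistic: each value is weighted by how long
    it held (e.g., number waiting, or a 0/1 utilization variable)."""
    total = 0.0
    for (t, value), (t_next, _) in zip(changes, changes[1:] + [(end_time, None)]):
        total += value * (t_next - t)
    return total / end_time

# Utilization: ATM busy (1) from t = 2 to t = 7 in a 10-minute window.
util = time_weighted_average([(0, 0), (2, 1), (7, 0)], end_time=10)
print(util)  # busy 5 of 10 minutes -> 0.5
```

Statistics observed per entity (time in queue, repair time) use the simple form; statistics that describe system state over time (queue length, utilization) use the time-weighted form.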

References

Bowden, Royce. "The Spectrum of Simulation Software." IIE Solutions, May 1998, pp. 44-46.
Fishwick, Paul A. "Web-Based Simulation." In Proceedings of the 1997 Winter Simulation Conference, ed. S. Andradottir, K. J. Healy, D. H. Withers, and B. L. Nelson. Institute of Electrical and Electronics Engineers, Piscataway, NJ, 1997, pp. 100-109.
Gottfried, Byron S. Elements of Stochastic Process Simulation. Englewood Cliffs, NJ: Prentice Hall, 1984, p. 8.
Haider, S. W., and J. Banks. "Simulation Software Products for Analyzing Manufacturing Systems." Industrial Engineering, July 1986, p. 98.
Harrell, Charles R., and Don Hicks. "Simulation Software Component Architecture for Simulation-Based Enterprise Applications." In Proceedings of the 1998 Winter Simulation Conference, ed. D. J. Medeiros, E. F. Watson, J. S. Carson, and M. S. Manivannan. Institute of Electrical and Electronics Engineers, Piscataway, NJ, 1998, pp. 1717-21.

GETTING STARTED

For which of you, intending to build a tower, sitteth not down first, and counteth the cost, whether he have sufficient to finish it? Lest haply, after he hath laid the foundation, and is not able to finish it, all that behold it begin to mock him, Saying, This man began to build, and was not able to finish.
Luke 14:28-30

5.1 Introduction
In this chapter we look at how to begin a simulation project. Specifically, we discuss how to select a project and set up a plan for successfully completing it. Simulation is not something you do simply because you have a tool and a process to which it can be applied. Nor should you begin a simulation without forethought and preparation. A simulation project should be carefully planned following basic project management principles and practices. Questions to be answered in this chapter are

How do you prepare to do a simulation study?


What are the steps for doing a simulation study?
What are typical objectives for a simulation study?
What is required to successfully complete a simulation project?
What are some pitfalls to avoid when doing simulation?

While specific tasks may vary from project to project, the basic procedure for doing simulation is essentially the same. Much as in building a house, you are better off following a time-proven methodology than approaching it haphazardly. In this chapter, we present the preliminary activities for preparing to conduct a simulation study. We then cover the steps for successfully completing a simulation project. Subsequent chapters elaborate on these steps. Here we focus primarily on the first step: defining the objective, scope, and requirements of the study. Poor planning, ill-defined objectives, unrealistic expectations, and unanticipated costs can turn a simulation project sour. For a simulation project to succeed, the objectives and scope should be clearly defined and the requirements for conducting the project identified and quantified.

5.2 Preliminary Activities

Simulation is not a tool to be applied indiscriminately with little or no forethought. The decision to use simulation itself requires some consideration. Is the application appropriate for simulation? Are other approaches equally as effective yet less expensive? These and other questions should be raised when exploring the potential use of simulation. Once it has been determined that simulation is the right approach, other preparations should be made to ensure that the necessary personnel and resources are in place to conduct the study. Personnel must be identified and trained. The right software should also be carefully selected. The nature of the study and the policies of the organization will largely dictate how to prepare for a simulation project.

5.2.1 Selecting an Application

The decision to use simulation usually begins with a systemic problem for which a solution is being sought. The problem might be as simple as getting better utilization from a key resource or as complex as how to increase throughput while at the same time reducing cycle time in a large factory. Such problems become opportunities for system improvement through the use of simulation.
While many opportunities may exist for using simulation, beginners in simulation should select an application that is not too complicated yet can have an impact on an organization's bottom line. On the one hand, you want to focus on problems of significance, but you don't want to get in over your head. The first simulation project should be one that is well defined and can be completed within a few weeks. This pilot project might consist of a small process such as a manufacturing cell or a service operation with only a few process steps. The goal at this point should be focused more on gaining experience in simulation than on making dramatic, enterprisewide improvements. Conducting a pilot project is much like a warm-up exercise. It should be treated as a learning experience as well as a confidence and momentum builder.
As one gains experience in the use of simulation, more challenging projects can be undertaken. As each project is considered, it should be evaluated first for its suitability for simulation. The following questions can help you determine whether a process is a good candidate for simulation:
Is the process well defined?
Is process information readily available?

Chapter 5 Getting Started


Does the process have interdependencies?
Does the process exhibit variability?
Are the potential cost savings greater than the cost of doing the study?
If it is a new process, is there time to perform a simulation analysis?
If it is an existing process, would it be less costly to experiment on the actual system?
Is management willing to support the project?
The wisdom of asking these questions should be clear. You don't want to waste time on efforts or problems that are too unstructured or for which simulation is not well suited. The last question, which relates to management support, is a reminder that no simulation project should be undertaken if management is unwilling to support the study or take the recommendations of the study seriously. Obviously, multiple projects may fit the criteria just listed, in which case the list of candidates for simulation should be prioritized using a cost-benefit analysis (a.k.a. biggest-bang-for-the-buck analysis). Management preferences and timing may also come into play in prioritization.

5.2.2 Personnel Identification

One of the first preparations for a simulation project is to have management assign a project leader who works alone or with a project team. When assessing personnel requirements for the simulation, consideration should be given to factors such as level of expertise, length of the assign