Harrell, Ghosh, Bowden: Simulation Using ProModel, Second Edition
I. Study Chapters. 1. Introduction to Simulation
© The McGraw-Hill Companies

INTRODUCTION TO SIMULATION
1.1 Introduction
On March 19, 1999, the following story appeared in The Wall Street Journal:
Captain Chet Rivers knew that his 747-400 was loaded to the limit. The giant plane, weighing almost 450,000 pounds by itself, was carrying a full load of passengers and baggage, plus 400,000 pounds of fuel for the long flight from San Francisco to Australia. As he revved his four engines for takeoff, Capt. Rivers noticed that San Francisco's famous fog was creeping in, obscuring the hills to the north and west of the airport.
At full throttle the plane began to roll ponderously down the runway, slowly at first but building up to flight speed well within normal limits. Capt. Rivers pulled the throttle back and the airplane took to the air, heading northwest across the San Francisco peninsula towards the ocean. It looked like the start of another routine flight. Suddenly the plane began to shudder violently. Several loud explosions shook the craft and smoke and flames, easily visible in the midnight sky, illuminated the right wing. Although the plane was shaking so violently that it was hard to read the instruments, Capt. Rivers was able to tell that the right inboard engine was malfunctioning, backfiring violently. He immediately shut down the engine, stopping the explosions and shaking.
However, this introduced a new problem. With two engines on the left wing at full power and only one on the right, the plane was pushed into a right turn, bringing it directly towards San Bruno Mountain, located a few miles northwest of the airport. Capt. Rivers instinctively turned his control wheel to the left to bring the plane back on course. That action extended the ailerons (control surfaces on the trailing edges of the wings) to tilt the plane back to the left. However, it also extended the spoilers (panels on the tops of the wings), increasing drag and lowering lift. With the nose still pointed up, the heavy jet began to slow. As the plane neared stall speed, the control stick began to shake to warn the pilot to bring the nose down to gain airspeed. Capt. Rivers immediately did so, removing that danger, but now San Bruno Mountain was directly ahead. Capt. Rivers was unable to see the mountain due to the thick fog that had rolled in, but the plane's ground proximity sensor sounded an automatic warning, calling "terrain, terrain, pull up, pull up." Rivers frantically pulled back on the stick to clear the peak, but with the spoilers up and the plane still in a skidding right turn, it was too late. The plane and its full load of 100 tons of fuel crashed with a sickening explosion into the hillside just above a densely populated housing area.
"Hey Chet, that could ruin your whole day," said Capt. Rivers's supervisor, who was sitting beside him watching the whole thing. "Let's rewind the tape and see what you did wrong." "Sure, Mel," replied Chet as the two men stood up and stepped outside the 747 cockpit simulator. "I think I know my mistake already. I should have used my rudder, not my wheel, to bring the plane back on course. Say, I need a breather after that experience. I'm just glad that this wasn't the real thing."
The incident above was never reported in the nation's newspapers, even though it would have been one of the most tragic disasters in aviation history, because it never really happened. It took place in a cockpit simulator, a device which uses computer technology to predict and recreate an airplane's behavior with gut-wrenching realism.
The relief you undoubtedly felt to discover that this disastrous incident was just a simulation gives you a sense of the impact that simulation can have in averting real-world catastrophes. This story illustrates just one of the many ways simulation is being used to help minimize the risk of making costly and sometimes fatal mistakes in real life. Simulation technology is finding its way into an increasing number of applications ranging from training for aircraft pilots to the testing of new product prototypes. The one thing that these applications have in common is that they all provide a virtual environment that helps prepare for real-life situations, resulting in significant savings in time, money, and even lives.
One area where simulation is finding increased application is in manufacturing and service system design and improvement. Its unique ability to accurately predict the performance of complex systems makes it ideally suited for systems planning. Just as a flight simulator reduces the risk of making costly errors in actual flight, system simulation reduces the risk of having systems that operate inefficiently or that fail to meet minimum performance requirements. While this may not be life-threatening to an individual, it certainly places a company (not to mention careers) in jeopardy.
In this chapter we introduce the topic of simulation and answer the
following questions:
What is simulation?
Why is simulation used?
How is simulation performed?
When and where should simulation be used?
FIGURE 1.1
Simulation provides animation capability.
The power of simulation lies in the fact that it provides a method of analysis that is not only formal and predictive, but is capable of accurately predicting the performance of even the most complex systems. Deming (1989) states, "Management of a system is action based on prediction. Rational prediction requires systematic learning and comparisons of predictions of short-term and long-term results from possible alternative courses of action." The key to sound management decisions lies in the ability to accurately predict the outcomes of alternative courses of action. Simulation provides precisely that kind of foresight. By simulating alternative production schedules, operating policies, staffing levels, job priorities, decision rules, and the like, a manager can more accurately predict outcomes and therefore make more informed and effective management decisions. With the importance in today's competitive market of getting it right the first time, the lesson is becoming clear: if at first you don't succeed, you probably should have simulated it.
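To make the idea of comparing alternative staffing levels concrete, here is a minimal sketch (not from the book, and far simpler than a full ProModel model): a Monte Carlo simulation of one queue served by a configurable number of workers, with exponential interarrival and service times assumed purely for illustration.

```python
import heapq
import random

def average_wait(num_servers, arrival_rate, service_rate,
                 num_customers=50_000, seed=42):
    """Estimate the mean customer wait in a single queue served by
    `num_servers` identical workers (exponential times assumed)."""
    rng = random.Random(seed)
    free_at = [0.0] * num_servers      # when each server next becomes free
    heapq.heapify(free_at)
    clock = total_wait = 0.0
    for _ in range(num_customers):
        clock += rng.expovariate(arrival_rate)   # next customer arrives
        soonest = heapq.heappop(free_at)         # earliest available server
        start = max(clock, soonest)              # wait if everyone is busy
        total_wait += start - clock
        heapq.heappush(free_at, start + rng.expovariate(service_rate))
    return total_wait / num_customers

# Two staffing alternatives under an identical workload:
wait_2 = average_wait(num_servers=2, arrival_rate=1.8, service_rate=1.0)
wait_3 = average_wait(num_servers=3, arrival_rate=1.8, service_rate=1.0)
```

With two servers the system runs at 90 percent utilization and waits are long; a third server cuts them dramatically. The same compare-the-alternatives pattern applies to schedules, priorities, and decision rules.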
By using a computer to model a system before it is built or to test operating policies before they are actually implemented, many of the pitfalls that are often encountered in the start-up of a new system or the modification of an existing system can be avoided. Improvements that traditionally took months and even years of fine-tuning to achieve can be attained in a matter of days or even hours. Because simulation runs in compressed time, weeks of system operation can be simulated in only a few minutes or even seconds. The characteristics of simulation that make it such a powerful planning and decision-making tool can be summarized as follows:
FIGURE 1.2
Simulation provides a virtual method for doing system experimentation. (Diagram relating the real System, its Concept, and the Model.)
FIGURE 1.3
The process of simulation experimentation. (Flowchart: Start → Formulate a hypothesis → Develop a simulation model → Run simulation experiment → Hypothesis correct? No: return to formulate a hypothesis; Yes: End.)
Simulation is no longer considered a method of last resort, nor is it a technique reserved only for simulation experts. The availability of easy-to-use simulation software and the ubiquity of powerful desktop computers have made simulation not only more accessible, but also more appealing to planners and managers who tend to avoid any kind of solution that appears too complicated. A solution tool is not of much use if it is more complicated than the problem that it is intended to solve. With simple data entry tables and automatic output reporting and graphing, simulation is becoming much easier to use and the reluctance to use it is disappearing.
The primary use of simulation continues to be in the area of manufacturing. Manufacturing systems, which include warehousing and distribution systems, tend to have clearly defined relationships and formalized procedures that are well suited to simulation modeling. They are also the systems that stand to benefit the most from such an analysis tool since capital investments are so high and changes are so disruptive. Recent trends to standardize and systematize other business processes such as order processing, invoicing, and customer support are boosting the application of simulation in these areas as well. It has been observed that 80 percent of all business processes are repetitive and can benefit from the same analysis techniques used to improve manufacturing systems (Harrington 1991). With this being the case, the use of simulation in designing and improving business processes of every kind will likely continue to grow.
While the primary use of simulation is in decision support, it is by no means limited to applications requiring a decision. An increasing use of simulation is in the area of communication and visualization. Modern simulation software incorporates visual animation that stimulates interest in the model and effectively communicates complex system dynamics. A proposal for a new system design can be sold much more easily if it can actually be shown how it will operate.
On a smaller scale, simulation is being used to provide interactive, computer-based training in which a management trainee is given the opportunity to practice decision-making skills by interacting with the model during the simulation. It is also being used in real-time control applications where the model interacts with the real system to monitor progress and provide master control. The power of simulation to capture system dynamics both visually and functionally opens up numerous opportunities for its use in an integrated environment.
Since the primary use of simulation is in decision support, most of our discussion will focus on the use of simulation to make system design and operational decisions. As a decision support tool, simulation has been used to help plan and improve operations in areas such as:
Workflow planning.
Capacity planning.
Cycle time reduction.
Staff and resource planning.
Work prioritization.
Bottleneck analysis.
Quality improvement.
Cost reduction.
Inventory reduction.
Throughput analysis.
Productivity improvement.
Layout analysis.
Line balancing.
Batch size optimization.
Production scheduling.
Resource scheduling.
Maintenance scheduling.
Control system design.
This does not mean that there can be no uncertainty in the system. If random behavior can be described using probability expressions and distributions, it can be simulated. It is only when it isn't even possible to make reasonable assumptions of how a system operates (because either no information is available or behavior is totally erratic) that simulation (or any other analysis tool for that matter) becomes useless. Likewise, one-time projects or processes that are never repeated the same way twice are poor candidates for simulation. If the scenario you are modeling is likely never going to happen again, it is of little benefit to do a simulation.
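For instance, random machine behavior might be described with fitted distributions and then sampled to generate a plausible history of events. The sketch below is illustrative only; the particular distributions and parameters are assumptions, not from the text. It draws exponential times between failures and lognormal repair times:

```python
import random

rng = random.Random(7)

def time_between_failures(mean_hours=40.0):
    # Exponential: breakdowns occur "at random" with a known long-run mean.
    return rng.expovariate(1.0 / mean_hours)

def repair_time(mu=0.5, sigma=0.3):
    # Lognormal: repair durations are positive and right-skewed.
    return rng.lognormvariate(mu, sigma)

# One randomly generated month (720 hours) of breakdown history.
clock, history = 0.0, []
while True:
    clock += time_between_failures()
    if clock >= 720.0:
        break
    duration = repair_time()
    history.append((round(clock, 1), round(duration, 2)))
    clock += duration
```

Each run with a different seed yields a different, but statistically consistent, month of operation, which is exactly what a simulation feeds on.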
Activities and events should be interdependent and variable. A system may have lots of activities, but if they never interfere with each other or are deterministic (that is, they have no variation), then using simulation is probably unnecessary. It isn't the number of activities that makes a system difficult to analyze. It is the number of interdependent, random activities. The effect of simple interdependencies is easy to predict if there is no variability in the activities. Determining the flow rate for a system consisting of 10 processing activities is very straightforward if all activity times are constant and activities are never interrupted. Likewise, random activities that operate independently of each other are usually easy to analyze. For example, 10 machines operating in isolation from each other can be expected to produce at a rate that is based on the average cycle time of each machine less any anticipated downtime. It is the combination of interdependencies and random behavior that really produces the unpredictable results. Simpler analytical methods such as mathematical calculations and spreadsheet software become less adequate as the number of activities that are both interdependent and random increases. For this reason, simulation is primarily suited to systems involving both interdependencies and variability.
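This effect is easy to demonstrate. The sketch below (an illustrative Python model with uniformly distributed times assumed, not an example from the book) compares a two-station serial line with no buffer between the stations. With constant times the line runs at the obvious analytic rate; giving the same activities variable times with the same mean lowers throughput, because the stations now block and starve each other:

```python
import random

def line_throughput(variable, num_parts=20_000, seed=1):
    """Two stations in series with no buffer: a part finished at station 1
    cannot leave until station 2 has released the previous part."""
    rng = random.Random(seed)
    def t():
        return rng.uniform(0.5, 1.5) if variable else 1.0  # mean 1.0 either way
    c1 = c2 = 0.0   # completion time of the latest part at each station
    for _ in range(num_parts):
        c1 = max(c1 + t(), c2)   # station 1 may be blocked by station 2
        c2 = c1 + t()            # station 2 starts when the part arrives
    return num_parts / c2        # parts per unit time

constant_rate = line_throughput(variable=False)   # ~1.0, the straightforward answer
variable_rate = line_throughput(variable=True)    # noticeably lower
```

The deterministic line produces at the bottleneck rate, while the variable line loses throughput to interaction effects that no per-station average reveals. Ten such stations, or random downtime, only amplifies the gap, which is why spreadsheet averages misjudge such lines.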
The cost impact of the decision should be greater than the cost of doing the simulation. Sometimes the impact of the decision itself is so insignificant that it doesn't warrant the time and effort to conduct a simulation. Suppose, for example, you are trying to decide whether a worker should repair rejects as they occur or wait until four or five accumulate before making repairs. If you are certain that the next downstream activity is relatively insensitive to whether repairs are done sooner rather than later, the decision becomes inconsequential and simulation is a wasted effort.
The cost to experiment on the actual system should be greater than the cost of simulation. While simulation avoids the time delay and cost associated with experimenting on the real system, in some situations it may actually be quicker and more economical to experiment on the real system. For example, the decision in a customer mailing process of whether to seal envelopes before or after they are addressed can easily be made by simply trying each method and comparing the results. The rule of thumb here is that if a question can be answered through direct experimentation quickly, inexpensively, and with minimal impact to the current operation, then don't use simulation. Experimenting on the actual system also eliminates some of the drawbacks associated with simulation, such as proving model validity.
There may be other situations where simulation is appropriate independent of the criteria just listed (see Banks and Gibson 1997). This is certainly true in the case of models built purely for visualization purposes. If you are trying to sell a system design or simply communicate how a system works, a realistic animation created using simulation can be very useful, even though nonbeneficial from an analysis point of view.
Systems engineering.
Statistical analysis and design of experiments.
Modeling principles and concepts.
Basic programming and computer skills.
Training on one or more simulation products.
Familiarity with the system being investigated.
Experience has shown that some people learn simulation more rapidly and become more adept at it than others. People who are good abstract thinkers yet also pay close attention to detail seem to be the best suited for doing simulation. Such individuals are able to see the forest while still keeping an eye on the trees (these are people who tend to be good at putting together 1,000-piece puzzles). They are able to quickly scope a project, gather the pertinent data, and get a useful model up and running without lots of starts and stops. A good modeler is somewhat of a sleuth, eager yet methodical and discriminating in piecing together all of the evidence that will help put the model pieces together.
If short on time, talent, resources, or interest, the decision maker need not
despair. Plenty of consultants who are professionally trained and experienced
can provide simulation services. A competitive bid will help get the best price,
but one should be sure that the individual assigned to the project has good
credentials. If the use of simulation is only occasional, relying on a consultant
may be the preferred approach.
FIGURE 1.4
Cost of making changes at subsequent stages of system development. (Plot: cost versus system stage, across the Concept, Design, Installation, and Operation stages.)

FIGURE 1.5
Comparison of cumulative system costs with and without simulation. (Plot: system costs over the design, implementation, and operation phases; the cost-without-simulation curve overtakes the cost-with-simulation curve.)
While the short-term cost may be slightly higher due to the added labor and software costs associated with simulation, the long-term costs associated with capital investments and system operation are considerably lower due to better efficiencies realized through simulation. Dismissing the use of simulation on the basis of sticker price is myopic and shows a lack of understanding of the long-term savings that come from having well-designed, efficiently operating systems.
Many examples can be cited to show how simulation has been used to avoid costly errors in the start-up of a new system. Simulation prevented an unnecessary expenditure when a Fortune 500 company was designing a facility for producing and storing subassemblies and needed to determine the number of containers required for holding the subassemblies. It was initially felt that 3,000 containers were needed until a simulation study showed that throughput did not improve significantly when the number of containers was increased from 2,250 to 3,000. By purchasing 2,250 containers instead of 3,000, a savings of $528,375 was expected in the first year, with annual savings thereafter of over $200,000 due to the savings in floor space and storage resulting from having 750 fewer containers (Law and McComas 1988).
Even if dramatic savings are not realized each time a model is built, simulation at least inspires confidence that a particular system design is capable of meeting required performance objectives and thus minimizes the risk often associated with new start-ups. The economic benefit associated with instilling confidence was evidenced when an entrepreneur, who was attempting to secure bank financing to start a blanket factory, used a simulation model to show the feasibility of the proposed factory. Based on the processing times and equipment lists supplied by industry experts, the model showed that the output projections in the business plan were well within the capability of the proposed facility. Although unfamiliar with the blanket business, bank officials felt more secure in agreeing to support the venture (Bateman et al. 1997).
Often simulation can help improve productivity by exposing ways of making better use of existing assets. By looking at a system holistically, long-standing problems such as bottlenecks, redundancies, and inefficiencies that previously went unnoticed start to become more apparent and can be eliminated. "The trick is to find waste, or muda," advises Shingo; "after all, the most damaging kind of waste is the waste we don't recognize" (Shingo 1992). Consider the following actual examples where simulation helped uncover and eliminate wasteful practices:
GE Nuclear Energy was seeking ways to improve productivity without
investing large amounts of capital. Through the use of simulation, the
company was able to increase the output of highly specialized reactor
parts by 80 percent. The cycle time required for production of each part
was reduced by an average of 50 percent. These results were obtained
by running a series of models, each one solving production problems
highlighted by the previous model (Bateman et al. 1997).
A large manufacturing company with stamping plants located throughout
the world produced stamped aluminum and brass parts on order according
to customer specications. Each plant had from 20 to 50 stamping presses
that were utilized anywhere from 20 to 85 percent. A simulation study
was conducted to experiment with possible ways of increasing capacity
utilization. As a result of the study, machine utilization improved from an
average of 37 to 60 percent (Hancock, Dissen, and Merten 1977).
A diagnostic radiology department in a community hospital was modeled to evaluate patient and staff scheduling, and to assist in expansion planning over the next five years. Analysis using the simulation model enabled improvements to be discovered in operating procedures that precluded the necessity for any major expansions in department size (Perry and Baum 1976).
In each of these examples, significant productivity improvements were realized without the need for making major investments. The improvements came through finding ways to operate more efficiently and utilize existing resources more effectively. These capacity improvement opportunities were brought to light through the use of simulation.
behavior such as the way entities arrive and their routings can be defined with little, if any, programming using the data entry tables that are provided. ProModel is used by thousands of professionals in manufacturing and service-related industries and is taught in hundreds of institutions of higher learning.
Part III contains case study assignments that can be used for student projects to apply the theory they have learned from Part I and to try out the skills they have acquired from doing the lab exercises (Part II). It is recommended that students be assigned at least one simulation project during the course. Preferably this is a project performed for a nearby company or institution so it will be meaningful. If such a project cannot be found, or as an additional practice exercise, the case studies provided should be useful. Student projects should be selected early in the course so that data gathering can get started and the project completed within the allotted time. The chapters in Part I are sequenced to parallel an actual simulation project.
1.11 Summary
Businesses today face the challenge of quickly designing and implementing complex production and service systems that are capable of meeting growing demands for quality, delivery, affordability, and service. With recent advances in computing and software technology, simulation tools are now available to help meet this challenge. Simulation is a powerful technology that is being used with increasing frequency to improve system performance by providing a way to make better design and management decisions. When used properly, simulation can reduce the risks associated with starting up a new operation or making improvements to existing operations.
Because simulation accounts for interdependencies and variability, it provides insights that cannot be obtained any other way. Where important system decisions of an operational nature are being made, simulation is an invaluable decision-making tool. Its usefulness increases as variability and interdependency increase and the importance of the decision becomes greater.
Lastly, simulation actually makes designing systems fun! Not only can a designer try out new design concepts to see what works best, but the visualization gives the model a realism that is like watching an actual system in operation. Through simulation, decision makers can play what-if games with a new system or modified process before it actually gets implemented. This engaging process stimulates creative thinking and results in good design decisions.
1.12 Review Questions
1. Define simulation.
2. What reasons are there for the increased popularity of computer
simulation?
3. What are two specific questions that simulation might help answer in a bank? In a manufacturing facility? In a dental office?
4. What are three advantages that simulation has over alternative
approaches to systems design?
5. Does simulation itself optimize a system design? Explain.
6. How does simulation follow the scientific method?
7. A restaurant gets extremely busy during lunch (11:00 A.M. to 2:00 P.M.)
and is trying to decide whether it should increase the number of waitresses
from two to three. What considerations would you look at to determine
whether simulation should be used to make this decision?
8. How would you develop an economic justification for using simulation?
9. Is a simulation exercise wasted if it exposes no problems in a system
design? Explain.
10. A simulation run was made showing that a modeled factory could produce 130 parts per hour. What information would you want to know about the simulation study before placing any confidence in the results?
11. A PC board manufacturer has high work-in-process (WIP) inventories, yet machines and equipment seem underutilized. How could simulation help solve this problem?
12. How important is a statistical background for doing simulation?
13. How can a programming background be useful in doing simulation?
14. Why are good project management and communication skills important in simulation?
15. Why should the process owner be heavily involved in a simulation project?
16. For which of the following problems would simulation likely be useful?
a. Increasing the throughput of a production line.
b. Increasing the pace of a worker on an assembly line.
c. Decreasing the time that patrons at an amusement park spend waiting in line.
d. Determining the percentage defective from a particular machine.
e. Determining where to place inspection points in a process.
f. Finding the most efficient way to fill out an order form.
References
Auden, Wystan Hugh, and L. Kronenberger. The Faber Book of Aphorisms. London: Faber and Faber, 1964.
Banks, J., and R. Gibson. "10 Rules for Determining When Simulation Is Not Appropriate." IIE Solutions, September 1997, pp. 30–32.
Bateman, Robert E.; Royce O. Bowden; Thomas J. Gogg; Charles R. Harrell; and Jack R. A. Mott. System Improvement Using Simulation. Utah: PROMODEL Corp., 1997.
Deming, W. E. "Foundation for Management of Quality in the Western World." Paper read at a meeting of the Institute of Management Sciences, Osaka, Japan, 24 July 1989.
Glenney, Neil E., and Gerald T. Mackulak. "Modeling & Simulation Provide Key to CIM Implementation Philosophy." Industrial Engineering, May 1985.
Hancock, Walton; R. Dissen; and A. Merten. "An Example of Simulation to Improve Plant Productivity." AIIE Transactions, March 1977, pp. 2–10.
Harrell, Charles R., and Donald Hicks. "Simulation Software Component Architecture for Simulation-Based Enterprise Applications." In Proceedings of the 1998 Winter Simulation Conference, ed. D. J. Medeiros, E. F. Watson, J. S. Carson, and M. S. Manivannan, pp. 1717–21. Piscataway, NJ: Institute of Electrical and Electronics Engineers.
Harrington, H. James. Business Process Improvement. New York: McGraw-Hill, 1991.
Hoover, Stewart V., and Ronald F. Perry. Simulation: A Problem-Solving Approach. Reading, MA: Addison-Wesley, 1989.
Law, A. M., and M. G. McComas. "How Simulation Pays Off." Manufacturing Engineering, February 1988, pp. 37–39.
Mott, Jack, and Kerim Tumay. "Developing a Strategy for Justifying Simulation." Industrial Engineering, July 1992, pp. 38–42.
Oxford American Dictionary. New York: Oxford University Press, 1980. Compiled by Eugene Ehrlich et al.
Perry, R. F., and R. F. Baum. "Resource Allocation and Scheduling for a Radiology Department." In Cost Control in Hospitals. Ann Arbor, MI: Health Administration Press, 1976.
Rohrer, Matt, and Jerry Banks. "Required Skills of a Simulation Analyst." IIE Solutions, May 1998, pp. 7–23.
Schriber, T. J. "The Nature and Role of Simulation in the Design of Manufacturing Systems." In Simulation in CIM and Artificial Intelligence Techniques, ed. J. Retti and K. E. Wichmann, pp. 5–8. San Diego, CA: Society for Computer Simulation, 1987.
Shannon, Robert E. "Introduction to the Art and Science of Simulation." In Proceedings of the 1998 Winter Simulation Conference, ed. D. J. Medeiros, E. F. Watson, J. S. Carson, and M. S. Manivannan, pp. 7–14. Piscataway, NJ: Institute of Electrical and Electronics Engineers.
Shingo, Shigeo. The Shingo Production Management System: Improving Process Functions. Trans. Andrew P. Dillon. Cambridge, MA: Productivity Press, 1992.
Solberg, James. "Design and Analysis of Integrated Manufacturing Systems." In W. Dale Compton, ed. Washington, D.C.: National Academy Press, 1988, p. 4.
The Wall Street Journal, March 19, 1999. "United 747's Near Miss Sparks a Widespread Review of Pilot Skills," p. A1.
SYSTEM DYNAMICS
2.1 Introduction
Knowing how to do simulation doesn't make someone a good systems designer any more than knowing how to use a CAD system makes one a good product designer. Simulation is a tool that is useful only if one understands the nature of the problem to be solved. It is designed to help solve systemic problems that are operational in nature. Simulation exercises fail to produce useful results more often because of a lack of understanding of system dynamics than a lack of knowing how to use the simulation software. The challenge is in understanding how the system operates, knowing what you want to achieve with the system, and being able to identify key leverage points for best achieving desired objectives. To illustrate the nature of this challenge, consider the following actual scenario:
The pipe mill for the XYZ Steel Corporation was an important profit center, turning steel slabs selling for under $200/ton into a product with virtually unlimited demand selling for well over $450/ton. The mill took coils of steel of the proper thickness and width through a series of machines that trimmed the edges, bent the steel into a cylinder, welded the seam, and cut the resulting pipe into appropriate lengths, all on a continuously running line. The line was even designed to weld the end of one coil to the beginning of the next one on the fly, allowing the line to run continually for days on end.
Unfortunately the mill was able to run only about 50 percent of its theoretical capacity over the long term, costing the company tens of millions of dollars a year in lost revenue. In an effort to improve the mill's productivity, management studied each step in the process. It was fairly easy to find the slowest step in the line, but additional study showed that only a small percentage of lost production was due to problems at this bottleneck operation. Sometimes a step upstream from the bottleneck would
have a problem, causing the bottleneck to run out of work, or a downstream step would go down temporarily, causing work to back up and stop the bottleneck. Sometimes the bottleneck would get so far behind that there was no place to put incoming, newly made pipe. In this case the workers would stop the entire pipe-making process until the bottleneck was able to catch up. Often the bottleneck would then be idle waiting until the newly started line was functioning properly again and the new pipe had a chance to reach it. Sometimes problems at the bottleneck were actually caused by improper work at a previous location.
In short, there was no single cause for the poor productivity seen at this plant. Rather, several separate causes all contributed to the problem in complex ways. Management was at a loss to know which of several possible improvements (additional or faster capacity at the bottleneck operation, additional storage space between stations, better rules for when to shut down and start up the pipe-forming section of the mill, better quality control, or better training at certain critical locations) would have the most impact for the least cost. Yet the poor performance of the mill was costing enormous amounts of money. Management was under pressure to do something, but what should it be?
This example illustrates the nature and difficulty of the decisions that an operations manager faces. Managers need to make decisions that are the best in some sense. To do so, however, requires that they have clearly defined goals and understand the system well enough to identify cause-and-effect relationships.
While every system is different, just as every product design is different, the basic elements and types of relationships are the same. Knowing how the elements of a system interact and how overall performance can be improved is essential to the effective use of simulation. This chapter reviews basic system dynamics and answers the following questions:
What is a system?
What are the elements of a system?
What makes systems so complex?
What are useful system metrics?
What is a systems approach to systems planning?
How do traditional systems analysis techniques compare with simulation?
2.2 System Definition
We live in a society that is composed of complex, human-made systems that we depend on for our safety, convenience, and livelihood. Routinely we rely on transportation, health care, production, and distribution systems to provide needed goods and services. Furthermore, we place high demands on the quality, convenience, timeliness, and cost of the goods and services that are provided by these systems. Remember the last time you were caught in a traffic jam, or waited for what seemed like an eternity in a restaurant or doctor's office? Contrast that experience with the satisfaction that comes when you find a store that sells quality merchandise at discount prices or when you locate a health care organization that
2.3 System Elements
From a simulation perspective, a system can be said to consist of entities, activities, resources, and controls (see Figure 2.1). These elements define the who, what, where, when, and how of entity processing. This model for describing a
FIGURE 2.1 Elements of a system: incoming entities are processed by activities, supported by resources and governed by controls, and leave the system as outgoing entities.
2.3.1 Entities
Entities are the items processed through the system such as products, customers,
and documents. Different entities may have unique characteristics such as cost,
shape, priority, quality, or condition. Entities may be further subdivided into the
following types:
Human or animate (customers, patients, etc.).
Inanimate (parts, documents, bins, etc.).
Intangible (calls, electronic mail, etc.).
For most manufacturing and service systems, the entities are discrete items. This is the case for discrete part manufacturing and is certainly the case for nearly all service systems that process customers, documents, and others. For some production systems, called continuous systems, a nondiscrete substance is processed rather than discrete entities. Examples of continuous systems are oil refineries and paper mills.
2.3.2 Activities
Activities are the tasks performed in the system that are either directly or indirectly involved in the processing of entities. Examples of activities include servicing a customer, cutting a part on a machine, or repairing a piece of equipment. Activities usually consume time and often involve the use of resources.
Activities may be classified as
Entity processing (check-in, treatment, inspection, fabrication, etc.).
Entity and resource movement (forklift travel, riding in an elevator, etc.).
Resource adjustments, maintenance, and repairs (machine setups, copy
machine repair, etc.).
2.3.3 Resources
Resources are the means by which activities are performed. They provide the supporting facilities, equipment, and personnel for carrying out activities. While resources facilitate entity processing, inadequate resources can constrain processing by limiting the rate at which processing can take place.
Resources have characteristics such as capacity, speed, cycle time, and reliability. Like entities, resources can be categorized as
Human or animate (operators, doctors, maintenance personnel, etc.).
Inanimate (equipment, tooling, floor space, etc.).
Intangible (information, electrical power, etc.).
2.3.4 Controls
Controls dictate how, when, and where activities are performed. Controls impose order on the system. At the highest level, controls consist of schedules, plans, and policies. At the lowest level, controls take the form of written procedures and machine control logic. At all levels, controls provide the information and decision logic for how the system should operate. Examples of controls include
Routing sequences.
Production plans.
Work schedules.
Task prioritization.
Control software.
Instruction sheets.
While the sheer number of elements in a system can stagger the mind (the number of different entities, activities, resources, and controls can easily exceed 100), the interactions of these elements are what make systems so complex and difficult to analyze. System complexity is a function of two factors:
Interdependencies
Variability
These two factors characterize virtually all human-made systems and make system behavior difficult to analyze and predict. As shown in Figure 2.2, the degree of analytical difficulty increases exponentially as the number of interdependencies and random variables increases.
2.4.1 Interdependencies
FIGURE 2.2 Analytical difficulty as a function of the number of interdependencies and random variables.
two or more activities. A doctor treating one patient, for example, may be unable to immediately respond to another patient needing his or her attention. This delay in response may also set other forces in motion.
It should be clear that the complexity of a system has less to do with the number of elements in the system than with the number of interdependent relationships. Even interdependent relationships can vary in degree, causing more or less impact on overall system behavior. System interdependency may be either tight or loose depending on how closely elements are linked. Elements that are tightly coupled have a greater impact on system operation and performance than elements that are only loosely connected. When an element such as a worker or machine is delayed in a tightly coupled system, the impact is immediately felt by other elements in the system and the entire process may be brought to a screeching halt.
In a loosely coupled system, activities have only a minor, and often delayed, impact on other elements in the system. Systems guru Peter Senge (1990) notes that for many systems, "Cause and effect are not closely related in time and space." Sometimes the distance in time and space between cause-and-effect relationships becomes quite sizable. If enough reserve inventory has been stockpiled, a trucker's strike cutting off the delivery of raw materials to a transmission plant in one part of the world may not affect automobile assembly in another part of the world for weeks. Cause-and-effect relationships are like a ripple of water that diminishes in impact as the distance in time and space increases.
Obviously, the preferred approach to dealing with interdependencies is to eliminate them altogether. Unfortunately, this is not entirely possible for most situations and actually defeats the purpose of having systems in the first place. The whole idea of a system is to achieve a synergy that otherwise would be unattainable if every component were to function in complete isolation. Several methods are used to decouple system elements or at least isolate their influence so disruptions are not felt so easily. These include providing buffer inventories, implementing redundant or backup measures, and dedicating resources to single tasks. The downside to these mitigating techniques is that they often lead to excessive inventories and underutilized resources. The point to be made here is that interdependencies, though they may be minimized somewhat, are simply a fact of life and are best dealt with through effective coordination and management.
2.4.2 Variability
Variability is a characteristic inherent in any system involving humans and machinery. Uncertainty in supplier deliveries, random equipment failures, unpredictable absenteeism, and fluctuating demand all combine to create havoc in planning system operations. Variability compounds the already unpredictable effect of interdependencies, making systems even more complex and unpredictable. Variability propagates in a system so that highly variable outputs from one workstation become highly variable inputs to another (Hopp and Spearman 2000).
TABLE 2.1 Types of Random Variability: activity times, decisions, quantities, event intervals, and attributes.
Table 2.1 identifies the types of random variability that are typical of most manufacturing and service systems.
The tendency in systems planning is to ignore variability and calculate system capacity and performance based on average values. Many commercial scheduling packages such as MRP (material requirements planning) software work this way. Ignoring variability distorts the true picture and leads to inaccurate performance predictions. Designing systems based on average requirements is like deciding whether to wear a coat based on the average annual temperature or prescribing the same eyeglasses for everyone based on average eyesight. Adults have been known to drown in water that was only four feet deep, on the average! Wherever variability occurs, an attempt should be made to describe the nature or pattern of the variability and assess the range of the impact that variability might have on system performance.
Perhaps the most illustrative example of the impact that variability can have on system behavior is the simple situation where entities enter into a single queue to wait for a single server. An example of this might be customers lining up in front of an ATM. Suppose that the time between customer arrivals is exponentially distributed with an average time of one minute and that customers take an average time of one minute, exponentially distributed, to transact their business. In queuing theory, this is called an M/M/1 queuing system. If we calculate system performance based solely on average time, there will never be any customers waiting in the queue: every minute a customer arrives, the previous customer finishes his or her transaction. Now if we calculate the number of customers waiting in line, taking into account the variation, we will discover that the waiting line grows to infinity (the technical term is that the system "explodes"). Who would guess that in a situation involving only one interdependent relationship, variation alone would make the difference between zero items waiting in a queue and an infinite number in the queue?
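The ATM situation is easy to reproduce numerically. The sketch below is plain Python, not ProModel, and the function name and parameters are illustrative; it uses Lindley's recursion to estimate the average time customers spend waiting in line for a single server:

```python
import random

def simulate_queue_waits(n_customers, mean_interarrival, mean_service, seed=42):
    """Average time spent waiting in line for a single-server queue,
    estimated with Lindley's recursion: each customer's delay equals the
    previous customer's delay plus that customer's service time, minus
    the gap until the next arrival (never less than zero)."""
    rng = random.Random(seed)
    wait = 0.0
    total_wait = 0.0
    for _ in range(n_customers):
        total_wait += wait
        service = rng.expovariate(1.0 / mean_service)   # exponential service time
        gap = rng.expovariate(1.0 / mean_interarrival)  # exponential interarrival gap
        wait = max(0.0, wait + service - gap)
    return total_wait / n_customers

# Average-value reasoning predicts zero waiting when both means are 1 minute,
# but variability alone produces substantial and ever-growing delays.
avg_wait = simulate_queue_waits(10_000, mean_interarrival=1.0, mean_service=1.0)
print(f"average wait with variability: {avg_wait:.1f} minutes")  # far above zero
```

Rerunning with a longer customer stream drives the average wait higher still, which is the "explosion" the queuing-theory result predicts.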
By all means, variability should be reduced and even eliminated wherever possible. System planning is much easier if you don't have to contend with it. Where it is inevitable, however, simulation can help predict the impact it will have on system performance. Likewise, simulation can help identify the degree of improvement that can be realized if variability is reduced or even eliminated. For example, it can tell you how much reduction in overall flow time and flow time variation can be achieved if operation time variation can be reduced by, say, 20 percent.
Variance: the degree of fluctuation that can and often does occur in any of the preceding metrics. Variance introduces uncertainty, and therefore risk, in achieving desired performance goals. Manufacturers and service providers are often interested in reducing variance in delivery and service times. For example, cycle times and throughput rates are going to have some variance associated with them. Variance is reduced by controlling activity times, improving resource reliability, and adhering to schedules.
These metrics can be given for the entire system, or they can be broken down by individual resource, entity type, or some other characteristic. By relating these metrics to other factors, additional meaningful metrics can be derived that are useful for benchmarking or other comparative analysis. Typical relational metrics include minimum theoretical flow time divided by actual flow time (flow time efficiency), cost per unit produced (unit cost), annual inventory sold divided by average inventory (inventory turns or turnover ratio), or units produced per cost or labor input (productivity).
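These relational metrics are simple ratios, so they translate directly into code. A minimal sketch in Python (the function names and the sample numbers are illustrative, not from the text):

```python
def flow_time_efficiency(theoretical_flow_time, actual_flow_time):
    """Minimum theoretical flow time divided by actual flow time."""
    return theoretical_flow_time / actual_flow_time

def unit_cost(total_cost, units_produced):
    """Cost per unit produced."""
    return total_cost / units_produced

def inventory_turns(annual_inventory_sold, average_inventory):
    """Annual inventory sold divided by average inventory."""
    return annual_inventory_sold / average_inventory

def productivity(units_produced, labor_hours):
    """Units produced per unit of labor input."""
    return units_produced / labor_hours

# Hypothetical numbers for illustration:
print(flow_time_efficiency(2.0, 8.0))       # 0.25: only 25% of flow time adds value
print(inventory_turns(1_200_000, 150_000))  # 8.0 turns per year
```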
often based on whether the cost to implement a change produces a higher return
in performance.
FIGURE 2.3 Cost curves for determining the optimum number of resources: total cost is the sum of resource costs and waiting costs, and reaches its minimum at the optimum number of resources.
As shown in Figure 2.3, the number of resources at which the sum of the resource costs and waiting costs is at a minimum is the optimum number of resources to have. It also becomes the optimum acceptable waiting time.
In systems design, arriving at an optimum system design is not always realistic, given the almost endless configurations that are sometimes possible and limited time that is available. From a practical standpoint, the best that can be expected is a near optimum solution that gets us close enough to our objective, given the time constraints for making the decision.
FIGURE 2.4 Four-step iterative approach to systems improvement: (1) identify problems and opportunities; (2) develop alternative solutions; (3) evaluate the solutions; (4) select and implement the best solution.
identify possible areas of focus and leverage points for applying a solution. Techniques such as cause-and-effect analysis and Pareto analysis are useful here.
Once a problem or opportunity has been identified and key decision variables isolated, alternative solutions can be explored. This is where most of the design and engineering expertise comes into play. Knowledge of best practices for common types of processes can also be helpful. The designer should be open to all possible alternative feasible solutions so that the best possible solutions don't get overlooked.
Generating alternative solutions requires creativity as well as organizational and engineering skills. Brainstorming sessions, in which designers exhaust every conceivably possible solution idea, are particularly useful. Designers should use every stretch of the imagination and not be stifled by conventional solutions alone. The best ideas come when system planners begin to think innovatively and break from traditional ways of doing things. Simulation is particularly helpful in this process in that it encourages thinking in radical new ways.
FIGURE 2.5 Simulation improves performance predictability. (The figure plots system predictability, from 0 to 100 percent, against system complexity: low-complexity systems such as call centers, doctor's offices, and machining cells; medium-complexity systems such as banks, emergency rooms, and production lines; and high-complexity systems such as airports, hospitals, and factories. Predictability declines as complexity grows, but far less steeply with simulation than without it.)
these techniques still can provide rough estimates but fall short in producing the insights and accurate answers that simulation provides. Systems implemented using these techniques usually require some adjustments after implementation to compensate for inaccurate calculations. For example, if after implementing a system it is discovered that the number of resources initially calculated is insufficient to meet processing requirements, additional resources are added. This adjustment can create extensive delays and costly modifications if special personnel training or custom equipment is involved. As a precautionary measure, a safety factor is sometimes added to resource and space calculations to ensure they are adequate. Overdesigning a system, however, also can be costly and wasteful.
As system interdependency and variability increase, not only does system performance decrease, but the ability to accurately predict system performance decreases as well (Lloyd and Melton 1997). Simulation enables a planner to accurately predict the expected performance of a system design and ultimately make better design decisions.
Systems analysis tools, in addition to simulation, include simple calculations, spreadsheets, operations research techniques (such as linear programming and queuing theory), and special computerized tools for scheduling, layout, and so forth. While these tools can provide quick and approximate solutions, they tend to make oversimplifying assumptions, perform only static calculations, and are limited to narrow classes of problems. Additionally, they fail to fully account for interdependencies and variability of complex systems and therefore are not as accurate as simulation in predicting complex system performance (see Figure 2.5). They all lack the power, versatility, and visual appeal of simulation. They do provide quick solutions, however, and for certain situations produce adequate results. They are important to cover here, not only because they sometimes provide a good alternative to simulation, but also because they can complement simulation by
providing initial design estimates for input to the simulation model. They also can be useful to help validate the results of a simulation by comparing them with results obtained using an analytic model.
2.9.2 Spreadsheets
Spreadsheet software comes in handy when calculations, sometimes involving hundreds of values, need to be made. Manipulating rows and columns of numbers on a computer is much easier than doing it on paper, even with a calculator handy. Spreadsheets can be used to perform rough-cut analysis such as calculating average throughput or estimating machine requirements. The drawback to spreadsheet software is the inability (or, at least, limited ability) to include variability in activity times, arrival rates, and so on, and to account for the effects of interdependencies.
What-if experiments can be run on spreadsheets based on expected values (average customer arrivals, average activity times, mean time between equipment failures) and simple interactions (activity A must be performed before activity B). This type of spreadsheet simulation can be very useful for getting rough performance estimates. For some applications with little variability and component interaction, a spreadsheet simulation may be adequate. However, calculations based on only average values and oversimplified interdependencies potentially can be misleading and result in poor decisions. As one ProModel user reported, "We just completed our final presentation of a simulation project and successfully saved approximately $600,000. Our management was prepared to purchase an additional overhead crane based on spreadsheet analysis. We subsequently built a ProModel simulation that demonstrated an additional crane will not be necessary. The simulation also illustrated some potential problems that were not readily apparent with spreadsheet analysis."
Another weakness of spreadsheet modeling is the fact that all behavior is assumed to be period-driven rather than event-driven. Perhaps you have tried to figure out how your bank account balance fluctuated during a particular period when all you had to go on was your monthly statements. Using ending balances does not reflect changes as they occurred during the period. You can know the current state of the system at any point in time only by updating the state variables of the system each time an event or transaction occurs. When it comes to dynamic models, spreadsheet simulation suffers from the "curse of dimensionality" because the size of the model becomes unmanageable.
FIGURE 2.6 Queuing system configuration: arriving entities wait in a queue until a server is available, then depart after service.
one or more queues and one or more servers (see Figure 2.6). Entities, referred to in queuing theory as the calling population, enter the queuing system and either are immediately served if a server is available or wait in a queue until a server becomes available. Entities may be serviced using one of several queuing disciplines: first-in, first-out (FIFO); last-in, first-out (LIFO); priority; and others. The system capacity, or number of entities allowed in the system at any one time, may be either finite or, as is often the case, infinite. Several different entity queuing behaviors can be analyzed such as balking (rejecting entry), reneging (abandoning the queue), or jockeying (switching queues). Different interarrival time distributions (such as constant or exponential) may also be analyzed, coming from either a finite or infinite population. Service times may also follow one of several distributions such as exponential or constant.
Kendall (1953) devised a simple system for classifying queuing systems in the form A/B/s, where A is the type of interarrival distribution, B is the type of service time distribution, and s is the number of servers. Typical distribution types for A and B are
M for Markovian or exponential distributions,
G for general distributions, and
D for deterministic or constant values.
For an M/M/1 system with arrival rate λ, service rate μ, and utilization ρ = λ/μ (where ρ < 1), the steady-state performance measures are

L = ρ/(1 − ρ)

Lq = ρL = ρ²/(1 − ρ)

W = 1/(μ − λ)

Wq = ρ/(μ − λ)

Pn = (1 − ρ)ρⁿ for n = 0, 1, . . .

If either the expected number of entities in the system or the expected waiting time is known, the other can be calculated easily using Little's law (1961):

L = λW

Little's law also can be applied to the queue length and waiting time:

Lq = λWq
Example: Suppose customers arrive to use an automatic teller machine (ATM) at an interarrival time of 3 minutes exponentially distributed and spend an average of 2.4 minutes exponentially distributed at the machine. What is the expected number of customers in the system and in the queue? What is the expected waiting time for customers in the system and in the queue?
λ = 20 per hour
μ = 25 per hour
ρ = λ/μ = 20/25 = .8

Solving for L:

L = ρ/(1 − ρ) = λ/(μ − λ) = 20/(25 − 20) = 20/5 = 4

Solving for Lq:

Lq = ρ²/(1 − ρ) = .8²/(1 − .8) = .64/.2 = 3.2

Solving for W using Little's formula:

W = L/λ = 4/20 = .20 hours = 12 minutes

Solving for Wq using Little's formula:

Wq = Lq/λ = 3.2/20 = .16 hours = 9.6 minutes
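The worked example can be checked with a few lines of code. This sketch is plain Python (the function name is illustrative); it computes the standard M/M/1 measures and applies Little's law:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state M/M/1 measures (requires arrival_rate < service_rate):
    expected number in system (L) and in queue (Lq), and expected time
    in system (W) and in queue (Wq), in the rates' time units."""
    rho = arrival_rate / service_rate   # server utilization
    L = rho / (1 - rho)                 # entities in system
    Lq = rho ** 2 / (1 - rho)           # entities in queue
    W = L / arrival_rate                # Little's law: L = lambda * W
    Wq = Lq / arrival_rate              # Little's law: Lq = lambda * Wq
    return L, Lq, W, Wq

# The ATM example: lambda = 20 per hour, mu = 25 per hour
L, Lq, W, Wq = mm1_metrics(20, 25)
print(L, Lq, W * 60, Wq * 60)  # approx. 4 in system, 3.2 in queue, 12 min, 9.6 min
```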
Descriptive OR techniques such as queuing theory are useful for the most basic problems, but as systems become even moderately complex, the problems get very complicated and quickly become mathematically intractable. In contrast, simulation provides close estimates for even the most complex systems (assuming the model is valid). In addition, the statistical output of simulation is not limited to only one or two metrics but instead provides information on all performance measures. Furthermore, while OR techniques give only average performance measures, simulation can generate detailed time-series data and histograms providing a complete picture of performance over time.
2.10 Summary
An understanding of system dynamics is essential to using any tool for planning system operations. Manufacturing and service systems consist of interrelated elements (personnel, equipment, and so forth) that interactively function to produce a specified outcome (an end product, a satisfied customer, and so on). Systems are made up of entities (the objects being processed), resources (the personnel, equipment, and facilities used to process the entities), activities (the process steps), and controls (the rules specifying the who, what, where, when, and how of entity processing).
The two characteristics of systems that make them so difficult to analyze are interdependencies and variability. Interdependencies cause the behavior of one element to affect other elements in the system either directly or indirectly. Variability compounds the effect of interdependencies in the system, making system behavior nearly impossible to predict without the use of simulation.
The variables of interest in systems analysis are decision, response, and state variables. Decision variables define how a system works; response variables indicate how a system performs; and state variables indicate system conditions at specific points in time. System performance metrics or response variables are generally time, utilization, inventory, quality, or cost related. Improving system performance requires the correct manipulation of decision variables. System optimization seeks to find the best overall setting of decision variable values that maximizes or minimizes a particular response variable value.
Given the complex nature of system elements and the requirement to make good design decisions in the shortest time possible, it is evident that simulation can play a vital role in systems planning. Traditional systems analysis techniques are effective in providing quick but often rough solutions to dynamic systems problems. They generally fall short in their ability to deal with the complexity and dynamically changing conditions in manufacturing and service systems. Simulation is capable of imitating complex systems of nearly any size and to nearly any level of detail. It gives accurate estimates of multiple performance metrics and leads designers toward good design decisions.
References
Blanchard, Benjamin S. System Engineering Management. New York: John Wiley & Sons, 1991.
Hopp, Wallace J., and M. Spearman. Factory Physics. New York: Irwin/McGraw-Hill, 2000, p. 282.
Kendall, D. G. "Stochastic Processes Occurring in the Theory of Queues and Their Analysis by the Method of Imbedded Markov Chains." Annals of Mathematical Statistics 24 (1953), pp. 338–54.
Kofman, Fred, and P. Senge. "Communities of Commitment: The Heart of Learning Organizations." Sarita Chawla and John Renesch (eds.). Portland, OR: Productivity Press, 1995.
Law, Averill M., and W. David Kelton. Simulation Modeling and Analysis. New York: McGraw-Hill, 2000.
Little, J. D. C. "A Proof for the Queuing Formula: L = λW." Operations Research 9, no. 3 (1961), pp. 383–87.
Lloyd, S., and K. Melton. "Using Statistical Process Control to Obtain More Precise Distribution Fitting Using Distribution Fitting Software." Simulators International XIV 29, no. 3 (April 1997), pp. 193–98.
Senge, Peter. The Fifth Discipline. New York: Doubleday, 1990.
Simon, Herbert A. Models of Man. New York: John Wiley & Sons, 1957, p. 198.
SIMULATION BASICS
3.1 Introduction
Simulation is much more meaningful when we understand what it is actually doing. Understanding how simulation works helps us to know whether we are applying it correctly and what the output results mean. Many books have been written that give thorough and detailed discussions of the science of simulation (see Banks et al. 2001; Hoover and Perry 1989; Law and Kelton 2000; Pooch and Wall 1993; Ross 1990; Shannon 1975; Thesen and Travis 1992; and Widman, Loparo, and Nielsen 1989). This chapter attempts to summarize the basic technical issues related to simulation that are essential to understand in order to get the greatest benefit from the tool. The chapter discusses the different types of simulation and how random behavior is simulated. A spreadsheet simulation example is given in this chapter to illustrate how various techniques are combined to simulate the behavior of a common system.
will look at what the first two characteristics mean in this chapter and focus on what the third characteristic means in Chapter 4.
FIGURE 3.1 Examples of (a) a deterministic simulation, where constant inputs produce constant outputs, and (b) a stochastic simulation, where random inputs produce random outputs.
FIGURE 3.2 Examples of (a) a discrete probability distribution p(x) and (b) a continuous probability distribution f(x).
FIGURE 3.3 The uniform(0,1) distribution of a random number generator, with density

f(x) = 1 for 0 ≤ x ≤ 1, and 0 elsewhere

mean μ = 1/2, and variance σ² = 1/12.
where the constant a is called the multiplier, the constant c the increment, and the constant m the modulus (Law and Kelton 2000). The user must provide a seed or starting value, denoted Z0, to begin generating the sequence of integer values. Z0, a, c, and m are all nonnegative integers. The value of Zi is computed by dividing (aZi−1 + c) by m and setting Zi equal to the remainder part of the division, which is the result returned by the mod function. Therefore, the Zi values are bounded by 0 ≤ Zi ≤ m − 1 and are uniformly distributed in the discrete case.
However, we desire the continuous version of the uniform distribution with values ranging between a low of zero and a high of one, which we will denote as Ui for i = 1, 2, 3, . . . . Accordingly, the value of Ui is computed by dividing Zi by m.
In a moment, we will consider some requirements for selecting the values for a, c, and m to ensure that the random number generator produces a long sequence of numbers before it begins to repeat them. For now, however, let's assign the following values a = 21, c = 3, and m = 16 and generate a few pseudo-random numbers. Table 3.1 contains a sequence of 20 random numbers generated from the recursive formula

Zi = (21Zi−1 + 3) mod(16)

An integer value of 13 was somewhat arbitrarily selected between 0 and m − 1 = 16 − 1 = 15 as the seed (Z0 = 13) to begin generating the sequence of random numbers shown in Table 3.1.
TABLE 3.1 Example LCG Zi = (21Zi−1 + 3) mod(16), with Z0 = 13

 i    21Zi−1 + 3     Zi     Ui = Zi/16
 0       (seed)      13
 1        276         4       0.2500
 2         87         7       0.4375
 3        150         6       0.3750
 4        129         1       0.0625
 5         24         8       0.5000
 6        171        11       0.6875
 7        234        10       0.6250
 8        213         5       0.3125
 9        108        12       0.7500
10        255        15       0.9375
11        318        14       0.8750
12        297         9       0.5625
13        192         0       0.0000
14          3         3       0.1875
15         66         2       0.1250
16         45        13       0.8125
17        276         4       0.2500
18         87         7       0.4375
19        150         6       0.3750
20        129         1       0.0625
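The recursion that produced Table 3.1 is easy to implement. This sketch is plain Python, not ProModel's generator, and the names are illustrative; it reproduces the Zi and Ui columns:

```python
def lcg(a, c, m, seed):
    """Linear congruential generator yielding (Z_i, U_i) pairs,
    where Z_i = (a * Z_{i-1} + c) mod m and U_i = Z_i / m."""
    z = seed
    while True:
        z = (a * z + c) % m
        yield z, z / m

# The small generator from Table 3.1: a = 21, c = 3, m = 16, Z0 = 13.
gen = lcg(a=21, c=3, m=16, seed=13)
pairs = [next(gen) for _ in range(20)]
print(pairs[:4])  # [(4, 0.25), (7, 0.4375), (6, 0.375), (1, 0.0625)]
```

The run also shows the generator cycling: the 17th value repeats the 1st, confirming the full cycle length of 16 for this choice of a, c, and m.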
Following this guideline, the LCG can achieve a full cycle length of over 2.1 billion (2³¹ to be exact) random numbers.
Frequently, the long sequence of random numbers is subdivided into smaller segments. These subsegments are referred to as streams. For example, Stream 1 could begin with the random number in the first position of the sequence and continue down to the random number in the 200,000th position of the sequence. Stream 2, then, would start with the random number in the 200,001st position of the sequence and end at the 400,000th position, and so on. Using this approach, each type of random event in the simulation model can be controlled by a unique stream of random numbers. For example, Stream 1 could be used to generate the arrival pattern of cars to a restaurant's drive-through window and Stream 2 could be used to generate the time required for the driver of the car to place an order. This assumes that no more than 200,000 random numbers are needed to simulate each type of event. The practical and statistical advantages of assigning unique streams to each type of event in the model are described in Chapter 10.
To subdivide the generator's sequence of random numbers into streams, you first need to decide how many random numbers to place in each stream. Next, you begin generating the entire sequence of random numbers (cycle length) produced by the generator and recording the Zi values that mark the beginning of each stream. Therefore, each stream has its own starting or seed value. When using the random number generator to drive different events in a simulation model, the previously generated random number from a particular stream is used as input to the generator to generate the next random number from that stream. For convenience, you may want to think of each stream as a separate random number generator to be used in different places in the model. For example, see Figure 10.5 in Chapter 10.
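The stream-seeding procedure just described can be sketched with the small LCG from Table 3.1. This is a toy illustration (stream length 4 rather than 200,000; the function name is ours, not from the text):

```python
def stream_seeds(a, c, m, seed, stream_length, n_streams):
    """Walk the generator's sequence once, recording the Z value that
    begins each stream; each recorded value then seeds its own stream."""
    seeds = []
    z = seed
    for i in range(stream_length * n_streams):
        if i % stream_length == 0:
            seeds.append(z)  # this Z marks the start of a new stream
        z = (a * z + c) % m
    return seeds

# Four streams of four numbers each from the Table 3.1 generator.
seeds = stream_seeds(a=21, c=3, m=16, seed=13, stream_length=4, n_streams=4)
print(seeds)  # [13, 1, 5, 9], i.e. Z0, Z4, Z8, Z12 from Table 3.1
```

Seeding a copy of the generator with any of these values reproduces exactly that segment of the master sequence, which is what makes per-event streams repeatable.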
There are two types of linear congruential generators: the mixed congruential generator and the multiplicative congruential generator. Mixed congruential generators are designed by assigning c > 0. Multiplicative congruential generators are designed by assigning c = 0. The multiplicative generator is more efficient than the mixed generator because it does not require the addition of c. The maximum cycle length for a multiplicative generator can be set within one unit of the maximum cycle length of the mixed generator by carefully selecting values for a and m. From a practical standpoint, the difference in cycle length is insignificant considering that both types of generators can boast cycle lengths of more than 2.1 billion.
ProModel uses the following multiplicative generator:

Zi = (630,360,016Zi−1) mod(2³¹ − 1)
important properties dened at the beginning of this section. The numbers produced by the random number generator must be (1) independent and (2)
uniformly distributed between zero and one (uniform(0,1)). To verify that the
generator satises these properties, you rst generate a sequence of random
numbers U1, U2, U3, . . . and then subject them to an appropriate test of
hypothesis.
The hypotheses for testing the independence property are

H0: Ui values from the generator are independent
H1: Ui values from the generator are not independent

Several statistical methods have been developed for testing these hypotheses at a specified significance level α. One of the most commonly used methods is the runs test. Banks et al. (2001) review three different versions of the runs test for conducting this independence test. Additionally, two runs tests are implemented in Stat::Fit: the Runs Above and Below the Median Test and the Runs Up and Runs Down Test. Chapter 6 contains additional material on tests for independence.
The hypotheses for testing the uniformity property are

H0: Ui values are uniform(0,1)
H1: Ui values are not uniform(0,1)

Several statistical methods have also been developed for testing these hypotheses at a specified significance level α. The Kolmogorov-Smirnov test and the chi-square test are perhaps the most frequently used tests. (See Chapter 6 for a description of the chi-square test.) The objective is to determine if the uniform(0,1) distribution fits or describes the sequence of random numbers produced by the random number generator. These tests are included in the Stat::Fit software and are further described in many introductory textbooks on probability and statistics (see, for example, Johnson 1994).
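As a sketch of how such a uniformity test works, the chi-square version can be computed by dividing the interval [0, 1) into k equal-width classes and comparing observed class counts with the expected count. The function name and class count below are our own choices for illustration, not Stat::Fit's implementation:

```python
import math
import random

def chi_square_uniform_stat(numbers, k=10):
    """Chi-square statistic for testing H0: numbers are uniform(0,1),
    using k equal-width classes on [0, 1)."""
    observed = [0] * k
    for u in numbers:
        observed[min(int(u * k), k - 1)] += 1
    expected = len(numbers) / k
    return sum((o - expected) ** 2 / expected for o in observed)

random.seed(1)
us = [random.random() for _ in range(1000)]
stat = chi_square_uniform_stat(us)
# Compare stat against a chi-square table value with k - 1 = 9 degrees
# of freedom; at significance level 0.05 the critical value is 16.92.
print(stat)
```

A statistic exceeding the tabled critical value leads to rejecting H0, the uniformity hypothesis, at the chosen significance level.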
The inverse transformation method, which can be applied to both discrete and continuous distributions, is described starting first with the continuous case. For a review of the other methods, see Law and Kelton (2000).
Continuous Distributions
The application of the inverse transformation method to generate random variates from continuous distributions is straightforward and efficient for many continuous distributions. For a given probability density function f(x), find the cumulative distribution function of X. That is, F(x) = P(X ≤ x). Next, set U = F(x), where U is uniform(0,1), and solve for x. Solving for x yields x = F^−1(U). The equation x = F^−1(U) transforms U into a value for x that conforms to the given distribution f(x).
As an example, suppose that we need to generate variates from the exponential distribution with mean β. The probability density function f(x) and corresponding cumulative distribution function F(x) are

f(x) = (1/β) e^(−x/β)   for x > 0
       0                elsewhere

F(x) = 1 − e^(−x/β)   for x > 0
       0              elsewhere

Setting U = F(x) = 1 − e^(−x/β) and solving for x gives

e^(−x/β) = 1 − U
ln(e^(−x/β)) = ln(1 − U)
−x/β = ln(1 − U)
x = −β ln(1 − U)

The random variate x in the above equation is exponentially distributed with mean β, provided U is uniform(0,1).
Suppose three observations of an exponentially distributed random variable with mean β = 2 are desired. The next three numbers generated by the random number generator are U1 = 0.27, U2 = 0.89, and U3 = 0.13. The three numbers are transformed into variates x1, x2, and x3 from the exponential distribution with mean β = 2 as follows:

x1 = −2 ln(1 − U1) = −2 ln(1 − 0.27) = 0.63
x2 = −2 ln(1 − U2) = −2 ln(1 − 0.89) = 4.41
x3 = −2 ln(1 − U3) = −2 ln(1 − 0.13) = 0.28
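The transformation is easy to program. The following sketch (the function name is our own) reproduces the three variates from the example:

```python
import math

def exponential_variate(u, mean):
    """Inverse transformation method: transform a uniform(0,1) random
    number u into an exponential variate with the given mean."""
    return -mean * math.log(1.0 - u)

# The three uniform(0,1) numbers from the example, with mean beta = 2
for u in (0.27, 0.89, 0.13):
    print(round(exponential_variate(u, 2.0), 2))
# prints 0.63, 4.41, 0.28
```

Each call consumes exactly one uniform(0,1) random number, which is why separate streams are used later for interarrival and service times.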
Figure 3.4 provides a graphical representation of the inverse transformation method in the context of this example. The first step is to generate U, where U is uniform(0,1). Next, locate U on the y axis and draw a horizontal line from that point to the cumulative distribution function [F(x) = 1 − e^(−x/2)]. From this point
FIGURE 3.4
Graphical explanation of inverse transformation method for continuous variates. (The plot shows F(x) = 1 − e^(−x/2); U1 = 0.27 maps to x1 = −2 ln(1 − 0.27) = 0.63, and U2 = 0.89 maps to x2 = −2 ln(1 − 0.89) = 4.41.)
of intersection with F(x), a vertical line is dropped down to the x axis to obtain the corresponding value of the variate. This process is illustrated in Figure 3.4 for generating variates x1 and x2 given U1 = 0.27 and U2 = 0.89.
Application of the inverse transformation method is straightforward as long as there is a closed-form formula for the cumulative distribution function, which is the case for many continuous distributions. However, the normal distribution is one exception: its cumulative distribution function has no closed-form expression, so a simple equation for generating normally distributed variates cannot be derived this way. For these cases, there are other methods that can be used to generate the random variates. See, for example, Law and Kelton (2000) for a description of additional methods for generating random variates from continuous distributions.
Discrete Distributions
The application of the inverse transformation method to generate variates from discrete distributions is basically the same as for the continuous case. The difference is in how it is implemented. For example, consider the following probability mass function:

p(x) = P(X = x) = 0.10   for x = 1
                  0.30   for x = 2
                  0.60   for x = 3
The random variate x has three possible values. The probability that x is equal to
1 is 0.10, P(X = 1) = 0.10; P(X = 2) = 0.30; and P(X = 3) = 0.60. The
cumulative distribution function F(x) is shown in Figure 3.5. The random variable x
could be used in a simulation to represent the number of defective components
on a circuit board or the number of drinks ordered from a drive-through window,
for example.
Suppose that an observation from the above discrete distribution is desired. The first step is to generate U, where U is uniform(0,1). Using Figure 3.5, locate U on the y axis and read the corresponding variate from the cumulative distribution function: values of U in (0, 0.10] produce x = 1, values in (0.10, 0.40] produce x = 2, and values in (0.40, 1.00] produce x = 3.
FIGURE 3.5
Graphical explanation of inverse transformation method for discrete variates. (The plot shows the stepped F(x) with jumps to 0.10, 0.40, and 1.00; U3 = 0.05 maps to x3 = 1, U1 = 0.27 maps to x1 = 2, and U2 = 0.89 maps to x2 = 3.)
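The table lookup implied by Figure 3.5 can be sketched in a few lines (the function name is our own): return the first value whose cumulative probability is at least U.

```python
def discrete_variate(u, values, probabilities):
    """Inverse transformation method for a discrete distribution:
    return the first value whose cumulative probability covers u."""
    cumulative = 0.0
    for x, p in zip(values, probabilities):
        cumulative += p
        if u <= cumulative:
            return x
    return values[-1]  # guard against floating-point round-off

values = (1, 2, 3)
probabilities = (0.10, 0.30, 0.60)
for u in (0.05, 0.27, 0.89):
    print(discrete_variate(u, values, probabilities))
# prints 1, 2, 3, matching Figure 3.5
```

The loop is a direct search through the cumulative distribution function; for long value lists a binary search over precomputed cumulative probabilities would be more efficient.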
(Figure: the ATM system, showing customers waiting in the FIFO ATM queue for the ATM server, a resource, and departing customer entities; the 6th customer arrives at 16.2 minutes and the 7th at 21.0 minutes, an interarrival time of 4.8 minutes.)
Xi = −β ln(1 − Ui)   for i = 1, 2, 3, . . . , 25

where Xi represents the ith value realized from the exponential distribution with mean β, and Ui is the ith random number drawn from a uniform(0,1) distribution. The i = 1, 2, 3, . . . , 25 indicates that we will compute 25 values from the transformation equation. However, we need to have two different versions of this equation to generate the two sets of 25 exponentially distributed random variates needed to simulate 25 customers because the mean interarrival time of β = 3.0 minutes is different than the mean service time of β = 2.4 minutes. Let X1i denote the interarrival
TABLE 3.2

Arrivals to ATM                            ATM Processing Time
      Stream 1   Random    Interarrival    Stream 2   Random    Service
 i    (Z1i)      Number    Time (X1i)      (Z2i)      Number    Time (X2i)
                 (U1i)                                (U2i)
 0      3          -          -             122         -          -
 1     66        0.516       2.18             5       0.039       0.10
 2    109        0.852       5.73           108       0.844       4.46
 3    116        0.906       7.09            95       0.742       3.25
 4      7        0.055       0.17            78       0.609       2.25
 5     22        0.172       0.57           105       0.820       4.12
 6     81        0.633       3.01            32       0.250       0.69
 7     40        0.313       1.13            35       0.273       0.77
 8     75        0.586       2.65            98       0.766       3.49
 9     42        0.328       1.19            13       0.102       0.26
10    117        0.914       7.36            20       0.156       0.41
11     28        0.219       0.74            39       0.305       0.87
12     79        0.617       2.88            54       0.422       1.32
13    126        0.984      12.41           113       0.883       5.15
14     89        0.695       3.56            72       0.563       1.99
15     80        0.625       2.94           107       0.836       4.34
16     19        0.148       0.48            74       0.578       2.07
17     18        0.141       0.46            21       0.164       0.43
18    125        0.977      11.32            60       0.469       1.52
19     68        0.531       2.27           111       0.867       4.84
20     23        0.180       0.60            30       0.234       0.64
21    102        0.797       4.78           121       0.945       6.96
22     97        0.758       4.26           112       0.875       4.99
23    120        0.938       8.34            51       0.398       1.22
24     91        0.711       3.72            50       0.391       1.19
25    122        0.953       9.17            29       0.227       0.62

Customer   Arrival   Begin       Service   Departure        Time in          Time in
Number     Time      Service     Time      Time             Queue            System
(1)        (2)       Time (3)    (4)       (5) = (3)+(4)    (6) = (3)−(2)    (7) = (5)−(2)
 1          2.18      2.18       0.10       2.28             0.00             0.10
 2          7.91      7.91       4.46      12.37             0.00             4.46
 3         15.00     15.00      3.25      18.25             0.00             3.25
 4         15.17     18.25      2.25      20.50             3.08             5.33
 5         15.74     20.50      4.12      24.62             4.76             8.88
 6         18.75     24.62      0.69      25.31             5.87             6.56
 7         19.88     25.31      0.77      26.08             5.43             6.20
 8         22.53     26.08      3.49      29.57             3.55             7.04
 9         23.72     29.57      0.26      29.83             5.85             6.11
10         31.08     31.08      0.41      31.49             0.00             0.41
11         31.82     31.82      0.87      32.69             0.00             0.87
12         34.70     34.70      1.32      36.02             0.00             1.32
13         47.11     47.11      5.15      52.26             0.00             5.15
14         50.67     52.26      1.99      54.25             1.59             3.58
15         53.61     54.25      4.34      58.59             0.64             4.98
16         54.09     58.59      2.07      60.66             4.50             6.57
17         54.55     60.66      0.43      61.09             6.11             6.54
18         65.87     65.87      1.52      67.39             0.00             1.52
19         68.14     68.14      4.84      72.98             0.00             4.84
20         68.74     72.98      0.64      73.62             4.24             4.88
21         73.52     73.62      6.96      80.58             0.10             7.06
22         77.78     80.58      4.99      85.57             2.80             7.79
23         86.12     86.12      1.22      87.34             0.00             1.22
24         89.84     89.84      1.19      91.03             0.00             1.19
25         99.01     99.01      0.62      99.63             0.00             0.62
Average                                                     1.94             4.26
time and X2i denote the service time generated for the ith customer simulated in the system. The equation for transforming a random number into an interarrival time observation from the exponential distribution with mean β = 3.0 minutes becomes

X1i = −3.0 ln(1 − U1i)   for i = 1, 2, 3, . . . , 25

where U1i denotes the ith value drawn from the random number generator using Stream 1. This equation is used in the Arrivals to ATM section of Table 3.2 under the Interarrival Time (X1i) column.
The equation for transforming a random number into an ATM service time observation from the exponential distribution with mean β = 2.4 minutes becomes

X2i = −2.4 ln(1 − U2i)   for i = 1, 2, 3, . . . , 25

where U2i denotes the ith value drawn from the random number generator using Stream 2. This equation is used in the ATM Processing Time section of Table 3.2 under the Service Time (X2i) column.
Let's produce the sequence of U1i values that feeds the transformation equation (X1i) for interarrival times using a linear congruential generator (LCG) similar to the one used in Table 3.1. The equations are

Z1i = (21 Z1i−1 + 3) mod(128)
U1i = Z1i/128   for i = 1, 2, 3, . . . , 25
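These two equations, combined with the transformation equation, can be sketched in Python to reproduce the Stream 1 columns of Table 3.2 (the function name is ours; U values are rounded to three decimal places before the transformation, matching the rounding used in the table):

```python
import math

def lcg_stream(seed, count, a=21, c=3, m=128):
    """Generate `count` values from the mixed LCG
    Z_i = (a * Z_{i-1} + c) mod m, starting from the given seed."""
    z = seed
    out = []
    for _ in range(count):
        z = (a * z + c) % m
        out.append(z)
    return out

# Stream 1 (seed Z10 = 3) feeds the interarrival-time transformation.
zs = lcg_stream(seed=3, count=25)
us = [round(z / 128, 3) for z in zs]              # rounded as in Table 3.2
xs = [round(-3.0 * math.log(1 - u), 2) for u in us]
print(zs[:3])  # prints [66, 109, 116]
print(xs[:3])  # prints [2.18, 5.73, 7.09]
```

Changing the seed to 122 and the mean to 2.4 reproduces the Stream 2 service-time columns in the same way.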
FIGURE 3.7
Microsoft Excel snapshot of the ATM spreadsheet illustrating the equations for the Arrivals to ATM section.
The value of 2.18 minutes is the first value appearing under the column Interarrival Time (X1i). To compute the next interarrival time value X12, we start by using the value of Z11 to compute Z12.

Given Z11 = 66:

Z12 = (21 Z11 + 3) mod(128) = (21(66) + 3) mod(128) = 109
U12 = Z12/128 = 109/128 = 0.852
X12 = −3.0 ln(1 − U12) = −3.0 ln(1 − 0.852) = 5.73 minutes

Figure 3.7 illustrates how the equations were programmed in Microsoft Excel for the Arrivals to ATM section of the spreadsheet. Note that the U1i and X1i values in Table 3.2 are rounded to three and two places to the right of the decimal, respectively. The same rounding rule is used for U2i and X2i.
It would be useful for you to verify a few of the service time values with mean β = 2.4 minutes appearing in Table 3.2 using

Z20 = 122
Z2i = (21 Z2i−1 + 3) mod(128)
U2i = Z2i/128
X2i = −2.4 ln(1 − U2i)   for i = 1, 2, 3, . . .
The equations started out looking a little difficult to manipulate but turned out not to be so bad when we put some numbers in them and organized them in a spreadsheet, though it was a bit tedious. The important thing to note here is that although it is transparent to the user, ProModel uses a very similar method to produce exponentially distributed random variates, and you now understand how it is done.
The LCG just given has a maximum cycle length of 128 random numbers
(you may want to verify this), which is more than enough to generate 25 interarrival time values and 25 service time values for this simulation. However, it is a
poor random number generator compared to the one used by ProModel. It was
chosen because it is easy to program into a spreadsheet and to compute by hand
to facilitate our understanding. The biggest difference between it and the
random number generator in ProModel is that the ProModel random number
generator manipulates much larger numbers to pump out a much longer stream
of numbers that pass all statistical tests for randomness.
Before moving on, let's take a look at why we chose Z10 = 3 and Z20 = 122. Our goal was to make sure that we did not use the same uniform(0,1) random number to generate both an interarrival time and a service time. If you look carefully at Table 3.2, you will notice that the seed value Z20 = 122 is the Z1,25 value from random number Stream 1. Stream 2 was merely defined to start where Stream 1 ended. Thus our spreadsheet used a unique random number to generate each interarrival and service time. Now let's add the necessary logic to our spreadsheet to conduct the simulation of the ATM system.
The Service Time column simply records the simulated amount of time required for the customer to complete their transaction at the ATM. These values are copies of the service time X2i values generated in the ATM Processing Time section of the spreadsheet.
The Departure Time column records the moment in time at which a customer departs the system after completing their transaction at the ATM. To compute the time at which a customer departs the system, we take the time at which the customer gained access to the ATM to begin service, column (3), and add to that the length of time the service required, column (4). For example, the first customer gained access to the ATM to begin service at 2.18 minutes, column (3). The service time for the customer was determined to be 0.10 minutes in column (4). So, the customer departs 0.10 minutes later, at time 2.18 + 0.10 = 2.28 minutes. This customer's short service time must be because they forgot their PIN and could not conduct their transaction.
The Time in Queue column records how long a customer waits in the queue before gaining access to the ATM. To compute the time spent in the queue, we take the time at which the ATM began serving the customer, column (3), and subtract from that the time at which the customer arrived to the system, column (2). The fourth customer arrives to the system at time 15.17 and begins getting service from the ATM at 18.25 minutes; thus, the fourth customer's time in the queue is 18.25 − 15.17 = 3.08 minutes.
The Time in System column records how long a customer was in the system. To compute the time spent in the system, we subtract the customer's arrival time, column (2), from the customer's departure time, column (5). The fifth customer arrives to the system at 15.74 minutes and departs the system at 24.62 minutes. Therefore, this customer spent 24.62 − 15.74 = 8.88 minutes in the system.
Now let's go back to the Begin Service Time column, which records the time at which a customer begins to be served by the ATM. The very first customer to arrive to the system when it opens for service advances directly to the ATM. There is no waiting time in the queue; thus the value recorded for the time that the first customer begins service at the ATM is the customer's arrival time. With the exception of the first customer to arrive to the system, we have to capture the logic that a customer cannot begin service at the ATM until the previous customer using the ATM completes his or her transaction. One way to do this is with an IF statement as follows:
IF (Current Customer's Arrival Time < Previous Customer's
Departure Time)
THEN (Current Customer's Begin Service Time = Previous
Customer's Departure Time)
ELSE (Current Customer's Begin Service Time = Current
Customer's Arrival Time)
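The same IF logic, together with columns (2) through (7), can be sketched as a short Python function (the function and variable names are our own):

```python
def simulate_single_server(interarrival_times, service_times):
    """Compute arrival, begin-service, and departure times for a
    single-server FIFO queue, mirroring columns (2)-(7) of Table 3.2."""
    rows = []
    arrival = 0.0
    previous_departure = 0.0
    for ia, st in zip(interarrival_times, service_times):
        arrival += ia                             # column (2)
        # A customer begins service at the later of its arrival time and
        # the previous customer's departure time (the IF statement above).
        begin = max(arrival, previous_departure)  # column (3)
        departure = begin + st                    # column (5) = (3) + (4)
        rows.append((arrival, begin, st, departure,
                     begin - arrival,             # (6) time in queue
                     departure - arrival))        # (7) time in system
        previous_departure = departure
    return rows

rows = simulate_single_server([2.18, 5.73, 7.09, 0.17],
                              [0.10, 4.46, 3.25, 2.25])
print([round(v, 2) for v in rows[3]])
# fourth customer: arrival 15.17, begin service 18.25, time in queue 3.08
```

Using max() is equivalent to the IF statement: the condition "arrival time < previous departure time" selects which of the two times becomes the begin-service time.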
FIGURE 3.8
Microsoft Excel snapshot of the ATM spreadsheet illustrating the IF statement for the Begin Service Time column.
The Excel spreadsheet cell L10 (column L, row 10) in Figure 3.8 is the Begin Service Time for the second customer to arrive to the system and is programmed with IF(K10<N9,N9,K10). Since the second customer's arrival time (Excel cell K10) is not less than the first customer's departure time (Excel cell N9), the logical test evaluates to false and the second customer's time to begin service is set to his or her arrival time (Excel cell K10). The fourth customer shown in Figure 3.8 provides an example of when the logical test evaluates to true, which results in the fourth customer beginning service when the third customer departs the ATM.
3.5.3 Simulation Replications and Output Analysis
Replication    Average Time in Queue    Average Time in System
1              1.94 minutes             4.26 minutes
2              0.84 minutes             2.36 minutes
Average        1.39 minutes             3.31 minutes
3.6 Summary
for x
f (x) =
0
elsewhere
where = 7 and = 4.
b. Probability mass function:

p(x) = P(X = x) = x/15   for x = 1, 2, 3, 4, 5
                  0      elsewhere
References
Banks, Jerry; John S. Carson II; Barry L. Nelson; and David M. Nicol. Discrete-Event
System Simulation. Englewood Cliffs, NJ: Prentice Hall, 2001.
Hoover, Stewart V., and Ronald F. Perry. Simulation: A Problem-Solving Approach.
Reading, MA: Addison-Wesley, 1989.
Johnson, R. A. Miller and Freund's Probability and Statistics for Engineers. 5th ed.
Englewood Cliffs, NJ: Prentice Hall, 1994.
Law, Averill M., and David W. Kelton. Simulation Modeling and Analysis. New York:
McGraw-Hill, 2000.
L'Ecuyer, P. "Random Number Generation." In Handbook of Simulation: Principles,
Methodology, Advances, Applications, and Practice, ed. J. Banks, pp. 93-137.
New York: John Wiley & Sons, 1998.
Pooch, Udo W., and James A. Wall. Discrete Event Simulation: A Practical Approach.
Boca Raton, FL: CRC Press, 1993.
Pritsker, A. A. B. Introduction to Simulation and SLAM II. 4th ed. New York: John Wiley
& Sons, 1995.
Ross, Sheldon M. A Course in Simulation. New York: Macmillan, 1990.
Shannon, Robert E. System Simulation: The Art and Science. Englewood Cliffs, NJ:
Prentice Hall, 1975.
Thesen, Arne, and Laurel E. Travis. Simulation for Decision Making. Minneapolis, MN:
West Publishing, 1992.
Widman, Lawrence E.; Kenneth A. Loparo; and Norman R. Nielsen. Artificial Intelligence,
Simulation, and Modeling. New York: John Wiley & Sons, 1989.
DISCRETE-EVENT
SIMULATION
When the only tool you have is a hammer, every problem begins to resemble a
nail.
Abraham Maslow
4.1 Introduction
Building on the foundation provided by Chapter 3 on how random numbers and random variates are used to simulate stochastic systems, the focus of this chapter is on discrete-event simulation, which is the main topic of this book. A discrete-event simulation is one in which changes in the state of the simulation model occur at discrete points in time as triggered by events. The events in the automatic teller machine (ATM) simulation example of Chapter 3 that occur at discrete points in time are the arrivals of customers to the ATM queue and the completion of their transactions at the ATM. However, you will learn in this chapter that the spreadsheet simulation of the ATM system in Chapter 3 was not technically executed as a discrete-event simulation.
This chapter first defines what a discrete-event simulation is compared to a continuous simulation. Next the chapter summarizes the basic technical issues related to discrete-event simulation to facilitate your understanding of how to effectively use the tool. Questions that will be answered include these:
State changes in a model occur when some event happens. The state of the model becomes the collective state of all the elements in the model at a particular point in time. State variables in a discrete-event simulation are referred to as discrete-change state variables. A restaurant simulation is an example of a discrete-event simulation because all of the state variables in the model, such as the number of customers in the restaurant, are discrete-change state variables (see Figure 4.1). Most manufacturing and service systems are typically modeled using discrete-event simulation.
In continuous simulation, state variables change continuously with respect to time and are therefore referred to as continuous-change state variables. An example of a continuous-change state variable is the level of oil in an oil tanker that is being either loaded or unloaded, or the temperature of a building that is controlled by a heating and cooling system. Figure 4.2 compares a discrete-change state variable and a continuous-change state variable as they vary over time.
FIGURE 4.1
Discrete events cause discrete state changes. (The plot shows the number of customers in a restaurant over time, stepping up at Event 1 and Event 2, customer arrivals, and stepping down at Event 3, a customer departure.)
FIGURE 4.2
Comparison of a discrete-change state variable and a continuous-change state variable. (The plot shows the value of each type of variable over time: the discrete-change variable moves in steps, while the continuous-change variable varies smoothly.)
Continuous simulation products use either differential equations or difference equations to define the rates of change in state variables over time. For example, the rate of change of a state variable v with respect to time t might be defined by the differential equation

dv(t)/dt = v(t) + t

We then need a second equation to define the initial condition of v:

v(0) = K
On a computer, numerical integration is used to calculate the change in a particular response variable over time. Numerical integration is performed at the end of successive small time increments referred to as steps. Numerical analysis techniques, such as Runge-Kutta integration, are used to integrate the differential equations numerically for each incremental time step. One or more threshold values for each continuous-change state variable are usually defined that determine when some action is to be triggered, such as shutting off a valve or turning on a pump.
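As an illustrative sketch (not ProModel's implementation), the following uses the simplest numerical integration scheme, the Euler step, to advance a continuous-change state variable until a threshold triggers an action; the function name, step size, and threshold are our own choices:

```python
def integrate_until_threshold(rate, v0, threshold, dt=0.01, t_max=100.0):
    """Advance a continuous-change state variable v with Euler steps
    v <- v + rate(v, t) * dt until it crosses `threshold`, the point at
    which some action (such as shutting off a valve) is triggered."""
    t, v = 0.0, v0
    while v < threshold and t < t_max:
        v += rate(v, t) * dt
        t += dt
    return t, v

# Example rate function dv/dt = v(t) + t with initial condition v(0) = K = 1;
# trigger the action when v reaches 5.
t, v = integrate_until_threshold(lambda v, t: v + t, v0=1.0, threshold=5.0)
print(round(t, 2), round(v, 2))
```

Runge-Kutta integration follows the same loop structure but evaluates the rate function several times per step to achieve much better accuracy for the same step size.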
Batch processing in which fluids are pumped into and out of tanks can often be modeled using difference equations.
1.2 minutes. At the start of the activity, a normal random variate is generated based on these parameters, say 4.2 minutes, and an activity completion event is scheduled for that time into the future. Scheduled events are inserted chronologically into an event calendar to await the time of their occurrence. Events that occur at predefined intervals theoretically all could be determined in advance and therefore be scheduled at the beginning of the simulation. For example, entities arriving every five minutes into the model could all be scheduled easily at the start of the simulation. Rather than preschedule all events at once that occur at a set frequency, however, they are scheduled only when the next occurrence must be determined. In the case of a periodic arrival, the next arrival would not be scheduled until the current scheduled arrival is actually pulled from the event calendar for processing. This postponement until the latest possible moment minimizes the size of the event calendar and eliminates the necessity of knowing in advance how many events to schedule when the length of the simulation may be unknown.
Conditional events are triggered by a condition being met rather than by the passage of time. An example of a conditional event might be the capturing of a resource that is predicated on the resource being available. Another example would be an order waiting for all of the individual items making up the order to be assembled. In these situations, the event time cannot be known beforehand, so the pending event is simply placed into a waiting list until the conditions can be satisfied. Often multiple pending events in a list are waiting for the same condition. For example, multiple entities might be waiting to use the same resource when it becomes available. Internally, the resource would have a waiting list for all items currently waiting to use it. While in most cases events in a waiting list are processed first-in, first-out (FIFO), items could be inserted and removed using a number of different criteria. For example, items may be inserted according to item priority but be removed according to earliest due date.
Events, whether scheduled or conditional, trigger the execution of logic that is associated with that event. For example, when an entity frees a resource, the state and statistical variables for the resource are updated, the graphical animation is updated, and the input waiting list for the resource is examined to see what activity to respond to next. Any new events resulting from the processing of the current event are inserted into either the event calendar or another appropriate waiting list.
In real life, events can occur simultaneously so that multiple entities can be doing things at the same instant in time. In computer simulation, however, especially when running on a single processor, events can be processed only one at a time even though it is the same instant in simulated time. As a consequence, a method or rule must be established for processing events that occur at the exact same simulated time. For some special cases, the order in which events are processed at the current simulation time might be significant. For example, an entity that frees a resource and then tries to immediately get the same resource might have an unfair advantage over other entities that might have been waiting for that particular resource.
In ProModel, the entity, downtime, or other item that is currently being processed is allowed to continue processing as far as it can at the current simulation time. That means it continues processing until it reaches either a conditional
FIGURE 4.3
Logic diagram of how discrete-event simulation works: Start by creating the simulation database and scheduling initial events. Advance the clock to the next event time. If the event is a termination event, update statistics, generate the output report, and stop. Otherwise, process the event and schedule any new events; update statistics, state variables, and animation; process any conditional events that can now occur; and advance the clock again.
event that cannot be satisfied or a timed delay that causes a future event to be scheduled. It is also possible that the object simply finishes all of the processing defined for it and, in the case of an entity, exits the system. As an object is being processed, any resources that are freed or other entities that might have been created as byproducts are placed in an action list and are processed one at a time in a similar fashion after the current object reaches a stopping point. To deliberately
suspend the current object in order to allow items in the action list to be processed, a zero delay time can be specified for the current object. This puts the current item into the future events list (event calendar) for later processing, even though it is still processed at the current simulation time.
When all scheduled and conditional events have been processed that are possible at the current simulation time, the clock advances to the next scheduled event and the process continues. When a termination event occurs, the simulation ends and statistical reports are generated. The ongoing cycle of processing scheduled and conditional events, updating state and statistical variables, and creating new events constitutes the essence of discrete-event simulation (see Figure 4.3).
(Figure: the ATM system, showing the FIFO ATM queue, the ATM server as a resource, and departing customer entities.)
Entity attributes are characteristics of the entity that are maintained for that entity until the entity exits the system. For example, to compute the amount of time an entity waited in a queue location, an attribute is needed to remember when the entity entered the location. For the ATM simulation, one entity attribute is used to remember the customer's time of arrival to the system. This entity attribute is called the Arrival Time attribute. The simulation program computes how long each customer entity waited in the queue by subtracting the time that the customer entity arrived to the queue from the value of the simulation clock when the customer entity gained access to the ATM.
State Variables
Two discrete-change state variables are needed to track how the status (state) of
the system changes as customer entities arrive in and depart from the ATM
system.
Number of Entities in Queue at time step i, NQi.
ATM Statusi to denote if the ATM is busy or idle at time step i.
Statistical Accumulators
The objective of the example manual simulation is to estimate the expected amount of time customers wait in the queue and the expected number of customers waiting in the queue. The average time customers wait in queue is a simple average. Computing this requires that we record how many customers passed through the queue and the amount of time each customer waited in the queue. The average number of customers in the queue is a time-weighted average, which is usually called a time average in simulation. Computing this requires that we not only observe the queue's contents during the simulation but that we also measure the amount of time that the queue maintained each of the observed values. We record each observed value after it has been multiplied (weighted) by the amount of time it was maintained.
Here's what the simulation needs to tally at each simulation time step i to compute the two performance measures at the end of the simulation.
Simple-average time in queue.
Record the number of customer entities processed through the queue, Total Processed. Note that the simulation may end before all customer entities in the queue get a turn at the ATM. This accumulator keeps track of how many customers actually made it through the queue.
For a customer processed through the queue, record the time that it waited in the queue. This is computed by subtracting the value of the simulation clock time when the entity enters the queue (stored in the entity attribute array Arrival Time) from the value of the simulation clock time when the entity leaves the queue: ti − Arrival Time.
Time-average number of customers in the queue.
For the duration of the last time step, which is ti − ti−1, and the number of customer entities in the queue during the last time step, which is NQi−1, record the product of ti − ti−1 and NQi−1. Call the product (ti − ti−1)NQi−1 the Time-Weighted Number of Entities in the Queue.
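The time-weighted accumulator can be sketched as follows (the function name and data layout are our own): it sums the products of each queue length and the time that length was maintained, then divides by the total simulated time.

```python
def time_average_in_queue(changes, end_time):
    """Time-weighted average number in queue. `changes` is a list of
    (time, queue_length) pairs recorded whenever the queue length
    changes; each length is weighted by how long it was maintained."""
    area = 0.0
    last_t, last_nq = changes[0]
    for t, nq in changes[1:]:
        area += (t - last_t) * last_nq   # (t_i - t_{i-1}) * NQ_{i-1}
        last_t, last_nq = t, nq
    area += (end_time - last_t) * last_nq
    return area / end_time

# Queue empty until 2.0, one waiting customer from 2.0 to 5.0,
# two from 5.0 to 6.0, empty afterward; run length 10 minutes.
changes = [(0.0, 0), (2.0, 1), (5.0, 2), (6.0, 0)]
print(time_average_in_queue(changes, end_time=10.0))
# prints 0.5, since (3 * 1 + 1 * 2) / 10 = 0.5
```

The simple average, by contrast, only needs a running sum of the individual ti − Arrival Time waiting times divided by Total Processed.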
Events
There are two types of recurring scheduled events that change the state of the system: arrival events and departure events. An arrival event occurs when a customer entity arrives to the queue. A departure event occurs when a customer entity completes its transaction at the ATM. Each processing of a customer entity's arrival to the queue includes scheduling the future arrival of the next customer entity to the ATM queue. Each time an entity gains access to the ATM, its future departure from the system is scheduled based on its expected service time at the ATM. We actually need a third event to end the simulation. This event is usually called the termination event.
To schedule the time at which the next entity arrives to the system, the simulation needs to generate an interarrival time and add it to the current simulation clock time, ti. The interarrival time is exponentially distributed with a mean of 3.0 minutes for our example ATM system. Assume that the function E(3.0) returns an exponentially distributed random variate with a mean of 3.0 minutes. The future arrival time of the next customer entity can then be scheduled by using the equation ti + E(3.0).
The customer service time at the ATM is exponentially distributed with a mean of 2.4 minutes. The future departure time of an entity gaining access to the ATM is scheduled by the equation ti + E(2.4).
Event Calendar
The event calendar maintains the list of active events (events that have been scheduled and are waiting to be processed) in chronological order. The simulation progresses by removing the first event listed on the event calendar, setting the simulation clock, ti, equal to the time at which the event is scheduled to occur, and processing the event.
compare the results of the manual simulation with those produced by the spreadsheet simulation.
Notice that Table 3.2 contains a subscript i in the leftmost column. This subscript denotes the customer entity number as opposed to the simulation time step. We wanted to point this out to avoid any confusion because of the different uses of the subscript. In fact, you can ignore the subscript in Table 3.2 as you pick values from the Service Time and Interarrival Time columns.
A discrete-event simulation logic diagram for the ATM system is shown in Figure 4.5 to help us carry out the manual simulation. Table 4.1 presents the results of the manual simulation after processing 12 events using the simulation logic diagram presented in Figure 4.5. The table tracks the creation and scheduling of events on the event calendar as well as how the state of the system changes and how the values of the statistical accumulators change as events are processed from the event calendar. Although Table 4.1 is completely filled in, it was initially blank until the instructions presented in the simulation logic diagram were executed. As you work through the simulation logic diagram, you should process the information in Table 4.1 from the first row down to the last row, one row at a time (completely filling in a row before going down to the next row). A dash (-) in a cell in Table 4.1 signifies that the simulation logic diagram does not require you to update that particular cell at the current simulation time step. An arrow (→) in a cell in the table also signifies that the simulation logic diagram does not require you to update that cell at the current time step. However, the arrows serve as a reminder to look up one or more rows above your current position in the table to determine the state of the ATM system. Arrows appear under the Number of Entities in Queue, NQi, column and the ATM Statusi column. The only exception to the use of dashes or arrows is that we keep a running total in the two Cumulative subcolumns in the table for each time step. Let's get the manual simulation started.
i = 0, t0 = 0. As shown in Figure 4.5, the first block after the start position indicates that the model is initialized to its starting conditions. The simulation time step begins at i = 0. The initial value of the simulation clock is zero, t0 = 0. The system state variables are set to ATM Status0 = Idle; Number of Entities in Queue, NQ0 = 0; and the Entity Attribute Array is cleared. This reflects the initial conditions of no customer entities in the queue and an idle ATM. The statistical accumulator Total Processed is set to zero. There are two different Cumulative variables in Table 4.1: one to accumulate the time in queue values of ti − Arrival Time, and the other to accumulate the values of the time-weighted number of entities in the queue, (ti − ti−1)NQi−1. Recall that ti − Arrival Time is the amount of time that entities that gained access to the ATM waited in queue. Both Cumulative variables, Σ(ti − Arrival Time) and Σ(ti − ti−1)NQi−1, are initialized to zero. Next, an initial arrival event and termination event are scheduled and placed under the Scheduled Future Events column. The listing of an event is formatted as (Entity Number, Event, Event Time). Entity Number denotes the customer number that the event pertains to (such as the first, second, or third customer). Event is the type of event: a customer arrives, a
FIGURE 4.5
Discrete-event simulation logic diagram for ATM system.

[Flowchart summary: Start with i = 0; initialize variables and schedule the initial arrival event and the termination event (Scheduled Future Events). Then repeat: set i = i + 1; update the event calendar by inserting the Scheduled Future Events in chronological order; advance the clock, ti, to the time of the first event on the calendar and process the event. If the event type is Arrive, schedule the next customer's arrival event; then, if the ATM is idle, store the current customer's Arrival Time in the first position of the Entity Attribute Array and schedule the entity's departure event to occur at time ti + E(2.4); otherwise, add 1 to NQi, the Number of Entities in Queue, and store the current customer's Arrival Time in the last position of the Entity Attribute Array to reflect the customer joining the queue. In either case, compute the value for the Time-Weighted Number of Entities in Queue statistic. If the event type is Depart, check whether any customers are in the queue and update the system state and statistics accordingly. If the event type is End, update statistics, generate the output report, and stop.]
TABLE 4.1
Manual discrete-event simulation of the ATM system. Each row lists, for time step i: the clock, ti; the processed event (Entity Number, Event); the system state (ATM Statusi; Number of Entities in Queue, NQi); the Entity Attribute Array* (Entity Number, Arrival Time); the statistical accumulators (Total Processed; Time in Queue, ti − Arrival Time, with its Cumulative; Time-Weighted Number of Entities in Queue, (ti − ti−1)NQi−1, with its Cumulative); and the Scheduled Future Events (Entity Number, Event, Time) generated while processing the event. A dash (—) marks a cell not updated at that step; an arrow (↑) marks a value unchanged from the row above.

i=0,  t0=0.00:  initialize. ATM Status0 = Idle; NQ0 = 0; array cleared; Total Processed = 0; both Cumulatives = 0. Scheduled: (1, Arrive, 2.18), (_, End, 22.00).
i=1,  t1=2.18,  (1, Arrive): Busy; ↑; *(1, 2.18); 1; 0 (Cum 0); 0 (Cum 0). Scheduled: (2, Arrive, 7.91), (1, Depart, 2.28).
i=2,  t2=2.28,  (1, Depart): Idle; ↑; *( ); —; — (Cum 0); 0 (Cum 0). Scheduled: no new events.
i=3,  t3=7.91,  (2, Arrive): Busy; ↑; *(2, 7.91); 2; 0 (Cum 0); 0 (Cum 0). Scheduled: (3, Arrive, 15.00), (2, Depart, 12.37).
i=4,  t4=12.37, (2, Depart): Idle; ↑; *( ); —; — (Cum 0); 0 (Cum 0). Scheduled: no new events.
i=5,  t5=15.00, (3, Arrive): Busy; ↑; *(3, 15.00); 3; 0 (Cum 0); 0 (Cum 0). Scheduled: (4, Arrive, 15.17), (3, Depart, 18.25).
i=6,  t6=15.17, (4, Arrive): ↑; 1; *(3, 15.00) (4, 15.17); —; — (Cum 0); 0 (Cum 0). Scheduled: (5, Arrive, 15.74).
i=7,  t7=15.74, (5, Arrive): ↑; 2; *(3, 15.00) (4, 15.17) (5, 15.74); —; — (Cum 0); 0.57 (Cum 0.57). Scheduled: (6, Arrive, 18.75).
i=8,  t8=18.25, (3, Depart): ↑; 1; *(4, 15.17) (5, 15.74); 4; 3.08 (Cum 3.08); 5.02 (Cum 5.59). Scheduled: (4, Depart, 20.50).
i=9,  t9=18.75, (6, Arrive): ↑; 2; *(4, 15.17) (5, 15.74) (6, 18.75); —; — (Cum 3.08); 0.50 (Cum 6.09). Scheduled: (7, Arrive, 19.88).
i=10, t10=19.88, (7, Arrive): ↑; 3; *(4, 15.17) (5, 15.74) (6, 18.75) (7, 19.88); —; — (Cum 3.08); 2.26 (Cum 8.35). Scheduled: (8, Arrive, 22.53).
i=11, t11=20.50, (4, Depart): ↑; 2; *(5, 15.74) (6, 18.75) (7, 19.88); 5; 4.76 (Cum 7.84); 1.86 (Cum 10.21). Scheduled: (5, Depart, 24.62).
i=12, t12=22.00, (_, End): —; —; —; —; — (Cum 7.84); 3.00 (Cum 13.21). Scheduled: —.

*Entity in ATM, array position 1; entities waiting in queue, array positions 2, 3, ...
customer departs, or the simulation ends. Time is the future time at which the event is to occur. The event (1, Arrive, 2.18) under the Scheduled Future Events column prescribes that the first customer entity is scheduled to arrive at time 2.18 minutes. The arrival time was generated using the equation t0 + E(3.0). To obtain the value returned from the function E(3.0), we went to Table 3.2, read the first random variate from the Interarrival Time column (a value of 2.18 minutes), and added it to the current value of the simulation clock, t0 = 0. The simulation is to be terminated after 22 minutes. Note the (_, End, 22.00) under the Scheduled Future Events column. For the termination event, no value is assigned to Entity Number because it is not relevant.
i = 1, t1 = 2.18. After the initialization step, the list of scheduled future events is added to the event calendar in chronological order in preparation for the next simulation time step, i = 1. The simulation clock is fast-forwarded to the time that the next event is scheduled to occur, which is t1 = 2.18 (the arrival time of the first customer to the ATM queue), and then the event is processed. Following the simulation logic diagram, arrival events are processed by first scheduling the future arrival event for the next customer entity using the equation t1 + E(3.0) = 2.18 + 5.73 = 7.91 minutes. Note that the value of 5.73 returned by the function E(3.0) is the second random variate listed under the Interarrival Time column of Table 3.2. This future event is placed under the Scheduled Future Events column in Table 4.1 as (2, Arrive, 7.91). Checking the status of the ATM from the previous simulation time step reveals that the ATM is idle (ATM Status0 = Idle). Therefore, the arriving customer entity immediately flows through the queue to the ATM to conduct its transaction. The future departure event of this entity from the ATM is scheduled using the equation t1 + E(2.4) = 2.18 + 0.10 = 2.28 minutes. See (1, Depart, 2.28) under the Scheduled Future Events column, denoting that the first customer entity is scheduled to depart the ATM at time 2.28 minutes. Note that the value of 0.10 returned by the function E(2.4) is the first random variate listed under the Service Time column of Table 3.2. The arriving customer entity's arrival time is then stored in the first position of the Entity Attribute Array to signify that it is being served by the ATM. The ATM Status1 is set to Busy, and the statistical accumulators for Entities Processed through Queue are updated. Add 1 to Total Processed, and since this entity entered the queue and immediately advanced to the idle ATM for processing, record zero minutes in the Time in Queue, t1 − Arrival Time, subcolumn and update this statistic's cumulative value. The statistical accumulators for Time-Weighted Number of Entities in the Queue are updated next. Record zero for (t1 − t0)NQ0 since there were no entities in queue during the previous time step, NQ0 = 0, and update this statistic's cumulative value. Note the arrow entered under the Number of Entities in Queue, NQ1, column. Recall that the arrow is placed there to signify that the number of entities waiting in the queue has not changed from its previous value.
customers waited in the queue is 7.84 minutes. The final cumulative value for Time-Weighted Number of Entities in the Queue is 13.21 minutes. Note that at the end of the simulation, two customers are in the queue (customers 6 and 7) and one is at the ATM (customer 5). A few quick observations are worth considering before we discuss how the accumulated values are used to calculate summary statistics for a simulation output report.
This simple and brief (while tedious) manual simulation is relatively easy to follow. But imagine a system with dozens of processes and dozens of factors influencing behavior, such as downtimes, mixed routings, resource contention, and others. You can see how essential computers are for performing a simulation of any magnitude. Computers have no difficulty tracking the many relationships and updating the numerous statistics that are present in most simulations. Equally important, computers are not error prone and can perform millions of instructions per second with absolute accuracy. We also want to point out that the simulation logic diagram (Figure 4.5) and Table 4.1 were designed to convey the essence of what happens inside a discrete-event simulation program. When you view a trace report of a ProModel simulation in Lab Chapter 8, you will see the similarities between the trace report and Table 4.1. Although the basic process presented is sound, its efficiency could be improved. For example, there is no need to keep both a scheduled future events list and an event calendar. Instead, future events can be inserted directly onto the event calendar as they are created. We separated them to facilitate our describing the flow of information in the discrete-event framework.
Simple-Average Statistic
A simple-average statistic reports the sum of n observations of a response variable divided by the number of observations:

    Simple average = [Σ i=1..n xi] / n
The average time that customer entities waited in the queue for their turn on the ATM during the manual simulation reported in Table 4.1 is a simple-average statistic. Recall that the simulation processed five customers through the queue. Let xi denote the amount of time that the ith customer processed spent in the queue. The average waiting time in queue based on the n = 5 observations is

    Average time in queue = [Σ i=1..5 xi] / 5 = (0 + 0 + 0 + 3.08 + 4.76) / 5 = 7.84 / 5 = 1.57 minutes
The values necessary for computing this average are accumulated under the Entities Processed through Queue columns of the manual simulation table (see the last row of Table 4.1 for the cumulative value Σ(ti − Arrival Time) = 7.84 and Total Processed = 5).
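The same arithmetic takes two lines of Python, using the five time-in-queue observations from Table 4.1:

```python
# Time-in-queue observations for the five customers processed through the queue
waits = [0, 0, 0, 3.08, 4.76]
average_time_in_queue = sum(waits) / len(waits)  # 7.84 / 5 ≈ 1.57 minutes
```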
Time-Average Statistic
A time-average statistic, sometimes called a time-weighted average, reports the average value of a response variable weighted by the time duration for each observed value of the variable:

    Time average = [Σ i=1..n (Ti xi)] / T

where xi denotes the value of the ith observation, Ti denotes the time duration of the ith observation (the weighting factor), and T denotes the total duration over which the observations were collected. Example time-average statistics include the average number of entities in a system, the average number of entities at a location, and the average utilization of a resource. An average of a time-weighted response variable in ProModel is computed as a time average.
The average number of customer entities waiting in the queue location for their turn on the ATM during the manual simulation is a time-average statistic. Figure 4.6 is a plot of the number of customer entities in the queue during the manual simulation recorded in Table 4.1. The 12 discrete events manually simulated in Table 4.1 are labeled t1, t2, t3, ..., t11, t12 on the plot. Recall that ti denotes the value of the simulation clock at time step i in Table 4.1, and that its initial value is zero, t0 = 0.
Using the notation from the time-average equation just given, the total simulation time illustrated in Figure 4.6 is T = 22 minutes. The Ti denotes the duration of time step i (the distance between adjacent discrete events in Figure 4.6). That is, Ti = ti − ti−1 for i = 1, 2, 3, ..., 12. The xi denotes the queue's contents (number of customer entities in the queue) during each Ti time interval. Therefore, xi = NQi−1 for i = 1, 2, 3, ..., 12 (recall that in Table 4.1, NQi−1 denotes the number of customer entities in the queue from ti−1 to ti). The time-average number of customer entities in the queue is then
FIGURE 4.6
Number of customers in the queue during the manual simulation.

[Step plot of the queue's contents versus simulation time from t0 = 0 to T = 22 minutes, with the 12 events t1, t2, ..., t12 marked on the time axis and the step durations Ti = ti − ti−1 labeled (for example, T1 = 2.18, T2 = 0.1, T6 = 0.17, T7 = 0.57, T8 = 2.51, T12 = 1.50).]
    Average NQ = [Σ i=1..12 (Ti xi)] / T = [Σ i=1..12 (ti − ti−1)NQi−1] / T

    Average NQ = [(2.18)(0) + (0.1)(0) + (5.63)(0) + (4.46)(0) + (2.63)(0) + (0.17)(0) + (0.57)(1) + (2.51)(2) + ... + (1.5)(2)] / 22

    Average NQ = 13.21 / 22 = 0.60 customers
The numerator of this time-average computation calculates the area under the plot of the queue's contents during the simulation (Figure 4.6). The values necessary for computing this area are accumulated under the Time-Weighted Number of Entities in Queue column of Table 4.1 (see the Cumulative value of 13.21 in the table's last row).
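The same computation can be checked in a few lines of Python; the durations Ti and contents NQi−1 below are read straight from Table 4.1 and Figure 4.6:

```python
def time_average(durations, values):
    """Time-weighted average: sum(Ti * xi) / T, where T = sum(Ti)."""
    return sum(t * x for t, x in zip(durations, values)) / sum(durations)

# Ti = ti - ti-1 for i = 1..12 (these sum to T = 22 minutes)
durations = [2.18, 0.10, 5.63, 4.46, 2.63, 0.17, 0.57, 2.51, 0.50, 1.13, 0.62, 1.50]
# xi = NQi-1, the queue contents during each interval
contents  = [0,    0,    0,    0,    0,    0,    1,    2,    1,    2,    3,    2]

average_nq = time_average(durations, contents)   # 13.21 / 22 ≈ 0.60 customers
```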
4.4.5 Issues
Even though this example is a simple and somewhat crude simulation, it provides a good illustration of basic simulation issues that need to be addressed when conducting a simulation study. First, note that the simulation start-up conditions can bias the output statistics. Since the system started out empty, queue content statistics are slightly less than what they might be if we began the simulation with customers already in the system. Second, note that we ran the simulation for only 22 minutes before calculating the results. Had we run longer, it is very likely that the long-run average time in the queue would have been somewhat different (most likely greater) than the time from the short run because the simulation did not have a chance to reach a steady state.
These are the kinds of issues that should be addressed whenever running a simulation. The modeler must carefully analyze the output and understand the significance of the results that are given. This example also points to the need for considering beforehand just how long a simulation should be run. These issues are addressed in Chapters 9 and 10.
FIGURE 4.7
Typical components of simulation software.

[Block diagram: a modeling interface connected to a modeling processor, which produces model data; the model data and simulation data feed a simulation processor, driven through a simulation interface; and an output processor, reached through an output interface, works from the output data produced by the simulation.]
entering and editing model information. External files used in the simulation are specified here, as well as run-time options (the number of replications and so on).
request snapshot reports, pan or zoom the layout, and so forth. If visual interactive capability is provided, the user is even permitted to make changes dynamically to model variables with immediate visual feedback of the effects of such changes.
The animation speed can be adjusted, and animation can even be disabled by the user during the simulation. When unconstrained, a simulation is capable of running as fast as the computer can process all of the events that occur within the simulated time. The simulation clock advances instantly to each scheduled event; the only central processing unit (CPU) time of the computer that is used is what is necessary for processing the event logic. This is how simulation is able to run in compressed time. It is also the reason why large models with millions of events take so long to simulate. Ironically, in real life activities take time while events take no time. In simulation, events take time while activities take no time. To slow down a simulation, delay loops or system timers are used to create pauses between events. These techniques give the appearance of elapsing time in an animation. In some applications, it may even be desirable to run a simulation at the same rate as a real clock. These real-time simulations are achieved by synchronizing the simulation clock with the computer's internal system clock. Human-in-the-loop (such as operator training simulators) and hardware-in-the-loop (testing of new equipment and control systems) are examples of real-time simulations.
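A real-time (here, scaled) simulation loop can be sketched by sleeping until the wall clock catches up with each event's simulated time. This Python fragment is a simplified illustration of the synchronization idea, not how any particular product implements it; the scale factor and event list are arbitrary demo values:

```python
import heapq
import time

SCALE = 0.01   # wall-clock seconds per simulated minute (arbitrary demo value)

events = [(2.18, "Arrive"), (2.28, "Depart"), (7.91, "Arrive")]
heapq.heapify(events)

start = time.monotonic()
while events:
    event_time, kind = heapq.heappop(events)
    # Sleep until the wall clock reaches this event's scaled simulated time.
    delay = start + event_time * SCALE - time.monotonic()
    if delay > 0:
        time.sleep(delay)
    # ... process the event here ...
```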
displayed during the simulation itself, although some simulation products create an animation file that can be played back at the end of the simulation. In addition to animated figures, dynamic graphs and history plots can be displayed during the simulation.
Animation and dynamically updated displays and graphs provide a visual representation of what is happening in the model while the simulation is running. Animation comes in varying degrees of realism, from three-dimensional animation to simple animated flowcharts. Often, the only output from the simulation that is of interest is what is displayed in the animation. This is particularly true when simulation is used for facilitating conceptualization or for communication purposes.
A lot can be learned about model behavior by watching the animation (a picture is worth a thousand words, and an animation is worth a thousand pictures). Animation can range from simple circles moving from box to box to detailed, realistic graphical representations. The strategic use of graphics should be planned in advance to make the best use of them. While insufficient animation can weaken the message, excessive use of graphics can distract from the central point to be made. It is always good to dress up the simulation graphics for the final presentation; however, such embellishments should be deferred at least until after the model has been debugged.
For most simulations where statistical analysis is required, animation is no substitute for the postsimulation summary, which gives a quantitative overview of the entire system performance. Basing decisions on the animation alone reflects shallow thinking and can even result in unwarranted conclusions.
• Resource utilization
• Queue sizes
• Waiting times
• Processing rates
FIGURE 4.9
ProModel animation provides useful feedback.

takes place. This background might be a CAD layout imported into the model. The dynamic animation objects that move around on the background during the simulation include entities (parts, customers, and so on) and resources (people, fork trucks, and so forth). Animation also includes dynamically updated counters, indicators, gauges, and graphs that display count, status, and statistical information (see Figure 4.9).
FIGURE 4.10
Summary report of simulation activity.

--------------------------------------------------------------------------------
General Report
Output from C:\ProMod4\models\demos\Mfg_cost.mod [Manufacturing Costing Optimization]
Date: Feb/27/2003
Time: [Link] PM
--------------------------------------------------------------------------------
Scenario        : Model Parameters
Replication     : 1 of 1
Warmup Time     : 5 hr
Simulation Time : 15 hr
--------------------------------------------------------------------------------
LOCATIONS

                                        Average
Location     Scheduled           Total  Minutes    Average   Maximum   Current
Name         Hours     Capacity Entries Per Entry  Contents  Contents  Contents
-----------  --------- -------- ------- ---------  --------  --------  --------
Receive      10        2        21      57.1428    2         2         2
NC Lathe 1   10        1        57      10.1164    0.961065  1         1
NC Lathe 2   10        1        57      9.8918     0.939725  1         1
Degrease     10        2        114     10.1889    1.9359    2         2
Inspect      10        1        113     4.6900     0.883293  1         1
Bearing Que  10        100      90      34.5174    5.17762   13        11
Loc1         10        5        117     25.6410    5         5         5

RESOURCES

                              Number    Average  Average  Average
Resource         Scheduled    Of Times  Minutes  Minutes  Minutes  % Blocked
Name      Units  Hours        Used      Per      Travel   Travel   In Travel  % Util
                                        Usage    To Use   To Park
--------  -----  ---------    --------  -------  -------  -------  ---------  ------
CellOp.1  1      10           122       2.7376   0.1038   0.0000   0.00       57.76
CellOp.2  1      10           118       2.7265   0.1062   0.0000   0.00       55.71
CellOp.3  1      10           115       2.5416   0.1020   0.0000   0.00       50.67
CellOp    3      30           355       2.6704   0.1040   0.0000   0.00       54.71

ENTITY ACTIVITY

                  Current    Average  Average  Average    Average    Average
Entity   Total    Quantity   Minutes  Minutes  Minutes    Minutes    Minutes
Name     Exits    In System  In       Moving   Wait for   In         Blocked
                             System            Res, etc.  Operation
-------  -----    ---------  -------  -------  ---------  ---------  -------
Pallet   19       2          63.1657  0.0000   31.6055    1.0000     30.5602
Blank    0        7          -        -        -          -          -
Cog      79       3          52.5925  0.8492   3.2269     33.5332    14.9831
Reject   33       0          49.5600  0.8536   2.4885     33.0656    13.1522
Bearing  78       12         42.1855  0.0500   35.5899    0.0000     6.5455
FIGURE 4.11
Time-series graph showing changes in queue size over time.

FIGURE 4.12
Histogram of queue contents.
FIGURE 4.14
[Chart residue: simulation products plotted on axes of ease of use (hard to easy) versus flexibility (low to high), contrasting simulators and languages. Early simulators rank easy to use but low in flexibility; early languages rank high in flexibility but hard to use; current best-of-breed products combine high ease of use with high flexibility.]
Simulation is a technology that will continue to evolve as related technologies improve and more time is devoted to the development of the software. Products will become easier to use, with more intelligence being incorporated into the software itself. Evidence of this trend can already be seen in the optimization and other time-saving utilities that are appearing in simulation products. Animation and other graphical visualization techniques will continue to play an important role in simulation. As 3-D and other graphic technologies advance, these features will also be incorporated into simulation products.
Simulation products targeted at vertical markets are on the rise. This trend is driven by efforts to make simulation easier to use and more solution oriented. Specific areas where dedicated simulators have been developed include call center management, supply chain management, and high-speed processing. At the same time many simulation applications are becoming more narrowly focused, others are becoming more global and look at the entire enterprise or value chain in a hierarchical fashion from top to bottom.
Perhaps the most dramatic change in simulation will be in the area of software interoperability and technology integration. Historically, simulation has been viewed as a stand-alone, project-based technology. Simulation models were built to support an analysis project, to predict the performance of complex systems, and to select the best alternative from a few well-defined alternatives. Typically these projects were time-consuming and expensive, and relied heavily on the expertise of a simulation analyst or consultant. The models produced were generally single-use models that were discarded after the project.
In recent years, the simulation industry has seen increasing interest in extending the useful life of simulation models by using them on an ongoing basis (Harrell and Hicks 1998). Front-end spreadsheets and push-button user interfaces are making such models more accessible to decision makers. In these flexible simulation models, controlled changes can be made to models throughout the system life cycle. This trend is growing to include dynamic links to databases and other data sources, enabling entire models actually to be built and run in the background using data already available from other enterprise applications.
The trend to integrate simulation as an embedded component in enterprise applications is part of a larger development of software components that can be distributed over the Internet. This movement is being fueled by three emerging information technologies: (1) component technology that delivers true object orientation; (2) the Internet or World Wide Web, which connects business communities and industries; and (3) Web service technologies such as J2EE and Microsoft's .NET. These technologies promise to enable parallel and distributed model execution and provide a mechanism for maintaining distributed model repositories that can be shared by many modelers (Fishwick 1997). The interest in Web-based simulation, like all other Web-based applications, continues to grow.
4.9 Summary
Most manufacturing and service systems are modeled using dynamic, stochastic, discrete-event simulation. Discrete-event simulation works by converting all activities to events and consequent reactions. Events are either time-triggered or condition-triggered, and are therefore processed either chronologically or when a satisfying condition has been met.
Simulation models are generally defined using commercial simulation software that provides convenient modeling constructs and analysis tools. Simulation software consists of several modules with which the user interacts. Internally, model data are converted to simulation data, which are processed during the simulation. At the end of the simulation, statistics are summarized in an output database that can be tabulated or graphed in various forms. The future of simulation is promising and will continue to incorporate exciting new technologies.
4.10 Review Questions
1. Give an example of a discrete-change state variable and a continuous-change state variable.
2. In simulation, the completion time of an activity whose duration is random must be known at the start of the activity. Why is this necessary?
3. Give an example of an activity whose completion is a scheduled event and one whose completion is a conditional event.
4. For the first 10 customers processed completely through the ATM spreadsheet simulation presented in Table 3.2 of Chapter 3, construct a table similar to Table 4.1 as you carry out a manual discrete-event simulation of the ATM system to
   a. Compute the average amount of time the first 10 customers spent in the system. Hint: Add a time-in-system column and a corresponding cumulative column to the table.
   b. Compute the average amount of time the first 10 customers spent in the queue.
   c. Plot the number of customers in the queue over the course of the simulation and compute the average number of customers in the queue for the simulation.
   d. Compute the utilization of the ATM. Hint: Define a utilization variable that is equal to zero when the ATM is idle and equal to 1 when the ATM is busy. At the end of the simulation, compute a time-weighted average of the utilization variable.
5. Identify whether each of the following output statistics would be computed as a simple or as a time-weighted average value.
   a. Average utilization of a resource.
   b. Average time entities spend in a queue.
   c. Average time entities spend waiting for a resource.
   d. Average number of entities waiting for a particular resource.
   e. Average repair time for a machine.
6. Give an example of a situation where a time-series graph would be more useful than just seeing an average value.
7. In real life, activities take time and events take no time. During a simulation, activities take no time and events take all of the time. Explain this paradox.
References
Bowden, Royce. "The Spectrum of Simulation Software." IIE Solutions, May 1998, pp. 44-46.
Fishwick, Paul A. "Web-Based Simulation." In Proceedings of the 1997 Winter Simulation Conference, ed. S. Andradottir, K. J. Healy, D. H. Withers, and B. L. Nelson. Piscataway, NJ: Institute of Electrical and Electronics Engineers, 1997, pp. 100-109.
Gottfried, Byron S. Elements of Stochastic Process Simulation. Englewood Cliffs, NJ: Prentice Hall, 1984, p. 8.
Haider, S. W., and J. Banks. "Simulation Software Products for Analyzing Manufacturing Systems." Industrial Engineering, July 1986, p. 98.
Harrell, Charles R., and Don Hicks. "Simulation Software Component Architecture for Simulation-Based Enterprise Applications." In Proceedings of the 1998 Winter Simulation Conference, ed. D. J. Medeiros, E. F. Watson, J. S. Carson, and M. S. Manivannan. Piscataway, NJ: Institute of Electrical and Electronics Engineers, 1998, pp. 1717-1721.
GETTING STARTED
For which of you, intending to build a tower, sitteth not down first, and counteth the cost, whether he have sufficient to finish it? Lest haply, after he hath laid the foundation, and is not able to finish it, all that behold it begin to mock him, Saying, This man began to build, and was not able to finish.
Luke 14:28-30
5.1 Introduction
In this chapter we look at how to begin a simulation project. Specifically, we discuss how to select a project and set up a plan for successfully completing it. Simulation is not something you do simply because you have a tool and a process to which it can be applied. Nor should you begin a simulation without forethought and preparation. A simulation project should be carefully planned following basic project management principles and practices. Questions to be answered in this chapter are
While specific tasks may vary from project to project, the basic procedure for doing simulation is essentially the same. Much as in building a house, you are better off following a time-proven methodology than approaching it haphazardly. In this chapter, we present the preliminary activities for preparing to conduct a simulation study. We then cover the steps for successfully completing a simulation project. Subsequent chapters elaborate on these steps. Here we focus primarily
on the first step: defining the objective, scope, and requirements of the study. Poor planning, ill-defined objectives, unrealistic expectations, and unanticipated costs can turn a simulation project sour. For a simulation project to succeed, the objectives and scope should be clearly defined and requirements identified and quantified for conducting the project.