
SIMULATION OUTPUT ANALYSIS

Dave Goldsman

School of ISyE, Georgia Tech, Atlanta, Georgia, USA

May 26, 2010

Outline
1 Introduction
2 Finite-Horizon Simulation
3 Initialization Problems
4 Steady-State Analysis
Batch Means
Independent Replications
Overlapping Batch Means
Other Methods
5 Comparison of Systems
Classical Confidence Intervals
Common Random Numbers
Antithetic Random Numbers
Ranking, Selection, and Multiple Comparisons Methods
6 What Lies Ahead?
Introduction

Steps in a Simulation Study:


Preliminary Analysis of the System
Model Building
Verification & Validation
Experimental Design & Simulation Runs
Statistical Analysis of Output Data
Implementation


Input processes driving a simulation are random variables (e.g., interarrival times, service times, and breakdown times), so we must regard the output from the simulation as random.
Runs of the simulation only yield estimates of measures of system performance (e.g., the mean customer waiting time).
These estimators are themselves random variables, and are therefore subject to sampling error.
Sampling error must be taken into account to make valid inferences concerning system performance.


Problem: simulations almost never produce raw output that is independent and identically distributed (i.i.d.) normal data.
Example: Customer waiting times from a queueing system. . .
(1) Are not independent — typically, they are serially correlated.
If one customer at the post office waits in line a long time, then
the next customer is also likely to wait a long time.
(2) Are not identically distributed. Customers showing up early
in the morning might have a much shorter wait than those who
show up just before closing time.
(3) Are not normally distributed — they are usually skewed to
the right (and are certainly never less than zero).


Thus, it’s difficult to apply “classical” statistical techniques to the analysis of simulation output.
Our purpose: Give methods to perform statistical analysis of
output from discrete-event computer simulations.
Why all the fuss?
You have to be careful — improper statistical analysis can invalidate all results.
Tremendous applications if you can get it right.
Lots of cool research problems out there.


Types of Simulations
To facilitate the presentation, we identify two types of
simulations with respect to output analysis: Finite-Horizon
(Terminating) and Steady-State simulations.
Finite-Horizon Simulations: The termination of a
finite-horizon simulation takes place at a specific time or is
caused by the occurrence of a specific event. Examples are:
Mass transit system during rush hour.
Distribution system over one month.
Production system until a set of machines breaks down.
Start-up phase of any system — stationary or
nonstationary


Steady-state simulations: The purpose of a steady-state simulation is the study of the long-run behavior of a system. A performance measure is called a steady-state parameter if it is a characteristic of the equilibrium distribution of an output stochastic process. Examples are:
Continuously operating communication system where the
objective is the computation of the mean delay of a packet
in the long run.
Distribution system over a long period of time.


Techniques to analyze output from terminating simulations are based on the method of indep. replications (discussed in §2).
Additional problems arise for steady-state simulations. . .
Must now worry about the problem of starting the simulation —
how should it be initialized at time zero (§3), and
How long must it be run before data representative of steady
state can be collected?
§4 deals with point and confidence interval estimation for
steady-state simulation performance parameters.
§5 concerns the problem of comparing a number of competing
systems.

Finite-Horizon Simulation

Here we simulate some system of interest over a finite time horizon.
For now, assume we obtain discrete simulation output
Y1 , Y2 , . . . , Ym , where the number of observations m can be a
constant or a random variable.
Example: The experimenter can specify the number m of
customer waiting times Y1 , Y2 , . . . , Ym to be taken from a
queueing simulation.
Or m could denote the random number of customers observed
during a specified time period [0, T ].


Alternatively, we might observe continuous simulation output {Y(t) | 0 ≤ t ≤ T} over a specified interval [0, T].
Example: If we are interested in estimating the time-averaged
number of customers waiting in a queue during [0, T ], the
quantity Y (t) would be the number of customers in the queue
at time t.


Easiest Goal: Estimate the expected value of the sample mean of the observations,

θ ≡ E[Ȳm],

where the sample mean in the discrete case is

Ȳm ≡ (1/m) Σ_{i=1}^{m} Yi

(with a similar expression for the continuous case).


Example: We might be interested in estimating the expected
average waiting time of all customers at a shopping center
during the period 10 a.m. to 2 p.m.


Although Ȳm is an unbiased estimator for θ, a proper statistical analysis requires that we also provide an estimate of Var(Ȳm).
Since the Yi’s are not necessarily i.i.d. random variables, it may be that Var(Ȳm) ≠ Var(Yi)/m, a case not covered in elementary statistics textbooks.
For this reason, the familiar sample variance,

S² ≡ Σ_{i=1}^{m} (Yi − Ȳm)² / (m − 1),

is likely to be highly biased as an estimator of m Var(Ȳm).
In fact, if the Yi’s are positively correlated, then it may very well be the case that E[S²] ≪ m Var(Ȳm).


One should not use S²/m to estimate Var(Ȳm).
So what happens if you dare to use it?
Here’s a typical 100(1 − α)% confidence interval for the mean µ of i.i.d. normal observations with unknown variance:

µ ∈ Ȳm ± tα/2,m−1 √(S²/m),

where tα/2,m−1 is a t-distribution quantile, and 1 − α is the desired coverage level.
Since E[S²/m] ≪ Var(Ȳm), the confidence interval will have true coverage ≪ 1 − α! Oops!


The way around the problem is via the method of independent replications (IR).
IR estimates Var(Ȳm ) by conducting b independent simulation
runs (replications) of the system under study, where each
replication consists of m observations.
It is easy to make the replications independent — just
re-initialize each replication with a different pseudo-random
number seed.


Notation and Stuff. Denote the sample mean from replication i by

Zi ≡ (1/m) Σ_{j=1}^{m} Yi,j,

where Yi,j is observation j from replication i, for i = 1, 2, . . . , b and j = 1, 2, . . . , m.
If each run is started under the same operating conditions (e.g.,
all queues empty and idle), then the replication sample means
Z1 , Z2 , . . . , Zb are i.i.d. random variables.


Then the obvious point estimator for Var(Ȳm) = Var(Zi) is

V̂R ≡ (1/(b − 1)) Σ_{i=1}^{b} (Zi − Z̄b)²,

where the grand mean is defined as

Z̄b ≡ (1/b) Σ_{i=1}^{b} Zi.

Note that the forms of V̂R and S²/m resemble each other. But since the replicate sample means are i.i.d., V̂R is usually much less biased for Var(Ȳm) than is S²/m.


In light of the above, we see that V̂R/b is a reasonable estimator for Var(Z̄b).
If the number of observations per replication, m, is large enough, a central limit theorem tells us that the replicate sample means are approximately i.i.d. normal.
Then we have an approximate 100(1 − α)% two-sided confidence interval (CI) for θ,

θ ∈ Z̄b ± tα/2,b−1 √(V̂R/b).    (1)


Example: Suppose we want to estimate the expected average waiting time for the first 5000 customers in a certain queueing system. We will make five independent replications of the system, with each run initialized empty and idle and consisting of 5000 waiting times. The resulting replicate means are:

i:   1    2    3    4    5
Zi:  3.2  4.3  5.1  4.2  4.6

Then Z̄5 = 4.28 and V̂R = 0.487. For level α = 0.05, we have t0.025,4 = 2.78, and (1) gives [3.41, 5.15] as a 95% CI for the expected average waiting time for the first 5000 customers.
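The example’s arithmetic is easy to verify with a few lines of Python (not part of the original slides; the quantile t0.025,4 ≈ 2.776 is hard-coded so the sketch needs no statistics library):

```python
import math

# Replicate means from the example (b = 5 independent replications)
Z = [3.2, 4.3, 5.1, 4.2, 4.6]
b = len(Z)

zbar = sum(Z) / b                               # grand mean Z-bar_5
vr = sum((z - zbar) ** 2 for z in Z) / (b - 1)  # V-hat_R
t = 2.776                                       # t_{0.025,4}, hard-coded
half = t * math.sqrt(vr / b)                    # CI half-width

lo, hi = zbar - half, zbar + half               # 95% CI, about [3.41, 5.15]
```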


Independent replications can be used to calculate variance estimates for statistics other than sample means.
Then the method can be used to get CI’s for quantities other
than E[Ȳm ], e.g., quantiles.
See any of the standard simulation texts for additional uses of
independent replications.
Research Issue: Sequential procedures that deliver a CI of
fixed size.

Initialization Problems

Before a simulation can be run, one must provide initial values for all of the simulation’s state variables.
Since the experimenter may not know what initial values are
appropriate for the state variables, these values might be
chosen somewhat arbitrarily.
For instance, we might decide that it is “most convenient” to
initialize a queue as empty and idle.
Such a choice of initial conditions can have a significant but
unrecognized impact on the simulation run’s outcome.
Thus, the initialization bias problem can lead to errors,
particularly in steady-state output analysis.


Examples of problems concerning simulation initialization.


Visual detection of initialization effects is sometimes
difficult — especially in the case of stochastic processes
having high intrinsic variance such as queueing systems.
How should the simulation be initialized? Suppose that a
machine shop closes at a certain time each day, even if
there are jobs waiting to be served. One must therefore be
careful to start each day with a demand that depends on
the number of jobs remaining from the previous day.
Initialization bias can lead to point estimators for
steady-state parameters having high mean squared error,
as well as CI’s having poor coverage.


Since initialization bias raises important concerns, how do we detect and deal with it? We first list methods to detect it.
Attempt to detect the bias visually by scanning a realization of
the simulated process. This might not be easy, since visual
analysis can miss bias. Further, a visual scan can be tedious.
To make the visual analysis more efficient, one might transform
the data (e.g., take logs or square roots), smooth it, average it
across several indep. replications, or construct CUSUM plots.
Conduct statistical tests for initialization bias. Various
procedures check to see if mean or variance of process
changes over time: ASAP3 (Wilson et al.), change point
detection from statistical literature, etc.


If initialization bias is detected, one may want to do something about it. Two simple methods for dealing with bias. . .
(a) Truncate the output by allowing the simulation to “warm up”
before data are retained for analysis.
Experimenter hopes that the remaining data are representative
of the steady-state system.
Output truncation is probably the most popular method for dealing with initialization bias, and all of the major simulation languages have built-in truncation functions.


But how can one find a good truncation point? If the output is
truncated “too early,” significant bias might still exist in the
remaining data. If it is truncated “too late,” then good
observations might be wasted.
Unfortunately, no simple rule for determining truncation points performs well in general.
A reasonable practice is to average observations across
several replications, and then visually choose a truncation point
based on the averaged run; see Welch (1983) for a nice
visual/graphical approach.
This is where the new, sophisticated sequential change-point
detection algorithms come into play.
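The averaged-run practice is easy to sketch in code: average observation j across replications (the data behind a Welch plot), smooth the averaged run, and then pick the truncation point visually where the smoothed curve levels off. A minimal sketch (the helper names are mine, not from the slides):

```python
def averaged_process(reps):
    """Average observation j across replications -- the data behind a
    Welch-style plot. reps: list of replications, each a list of length m."""
    b, m = len(reps), len(reps[0])
    return [sum(r[j] for r in reps) / b for j in range(m)]

def moving_average(x, w):
    """Trailing moving average with window w, used to smooth the averaged
    run before eyeballing a truncation point."""
    out = []
    for j in range(len(x)):
        lo = max(0, j - w + 1)
        out.append(sum(x[lo:j + 1]) / (j + 1 - lo))
    return out
```

One would plot `moving_average(averaged_process(reps), w)` for a few window sizes w and truncate where the curve appears flat.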


(b) Make a very long run to overwhelm the effects of initialization bias.
This method of bias control is conceptually simple to carry out
and may yield point estimators having lower mean squared
errors than the analogous estimators from truncated data (see,
e.g., Fishman 1978).
However, a problem with this approach is that it can be wasteful
with observations; for some systems, an excessive run length
might be required before the initialization effects are rendered
negligible.

Steady-State Analysis

Now assume that we have on hand stationary (steady-state) simulation output, Y1, Y2, . . . , Yn.
Our goal: Estimate some parameter of interest, e.g., the mean
customer waiting time or the expected profit produced by a
certain factory configuration.
In particular, suppose the mean of this output is the unknown
quantity µ. We’ll use the sample mean Ȳn to estimate µ.
As in the case of terminating simulations, we must accompany
the value of any point estimator with a measure of its variance.


A number of methodologies have been proposed in the literature for conducting steady-state output analysis: batch means, independent replications, standardized time series, spectral analysis, regeneration, ARMA time series modeling, etc.
Let’s first examine the two most popular: batch means and
independent replications.
(Recall: As discussed earlier, confidence intervals for
terminating simulations usually use independent replications.)

Batch Means

The method of batch means is often used to estimate Var(Ȳn) or to calculate confidence intervals for µ.
Idea: Divide one long simulation run into a number of contiguous batches, and then appeal to a central limit theorem to assume that the resulting batch sample means are approximately i.i.d. normal.
In particular, suppose that we partition Y1, Y2, . . . , Yn into b nonoverlapping, contiguous batches, each consisting of m observations (assume that n = bm):

Y1, . . . , Ym | Ym+1, . . . , Y2m | · · · | Y(b−1)m+1, . . . , Ybm
  (batch 1)        (batch 2)                   (batch b)


The ith batch mean is the sample mean of the m observations from batch i, i = 1, 2, . . . , b,

Zi ≡ (1/m) Σ_{j=1}^{m} Y(i−1)m+j.

Similar to independent replications, we define the batch means estimator for Var(Zi) as

V̂B ≡ (1/(b − 1)) Σ_{i=1}^{b} (Zi − Z̄b)²,

where

Ȳn = Z̄b ≡ (1/b) Σ_{i=1}^{b} Zi

is the grand sample mean.



If m is large, then the batch means are approximately i.i.d. normal, and (as for IR) we obtain an approximate 100(1 − α)% CI for µ,

µ ∈ Z̄b ± tα/2,b−1 √(V̂B/b).

This equation is similar to (1). The difference is that batch means divides one long run into a number of batches, whereas independent replications uses a number of independent shorter runs.
Consider the old IR example from §2 with the understanding that the Zi’s must now be regarded as batch means (instead of replicate means); then the same numbers carry through the example.
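The batch-means recipe fits in a few lines of Python (a hypothetical helper, not from the slides; the t quantile is supplied by the caller so the sketch stays dependency-free):

```python
import math

def batch_means_ci(Y, b, t_quantile):
    """Approximate CI for the mean via nonoverlapping batch means.
    Y: stationary output of length n = b*m (leftover observations ignored);
    t_quantile: the t_{alpha/2, b-1} value supplied by the caller."""
    m = len(Y) // b
    Z = [sum(Y[i * m:(i + 1) * m]) / m for i in range(b)]   # batch means
    zbar = sum(Z) / b                                       # grand mean
    vb = sum((z - zbar) ** 2 for z in Z) / (b - 1)          # V-hat_B
    half = t_quantile * math.sqrt(vb / b)
    return zbar - half, zbar + half
```

On the toy input [1, 2, 3, 4] with b = 2 batches and a unit quantile, the batch means are 1.5 and 3.5, so the interval is (1.5, 3.5).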


The technique of batch means is intuitively appealing and easy to understand.
But problems can come up if the Yj ’s are not stationary (e.g., if
significant initialization bias is present), if the batch means are
not normal, or if the batch means are not independent.
If any of these assumption violations exist, poor confidence
interval coverage may result — unbeknownst to the analyst.
To ameliorate the initialization bias problem, the user can
truncate some of the data or make a long run as discussed
in §3.
In addition, the lack of independence or normality of the batch
means can be countered by increasing the batch size m.

Independent Replications

Of the difficulties encountered when using batch means, the possibility of correlation among the batch means might be the most troublesome.
This problem is explicitly avoided by the method of
independent replications, described in the context of
terminating simulations in §2. In fact, the replicate means are
independent by their construction.
Unfortunately, since each of the b reps has to be started
properly, initialization bias presents more trouble when using IR
than when using batch means.
Recommendation: Because of initialization bias in each of the
replications, use batch means over independent reps.
(Alexopoulos and Goldsman, “To Batch or not to Batch?”)

Overlapping Batch Means

Suppose that we use the following overlapping batches:

batch 1: Y1, Y2, Y3, . . . , Ym
batch 2: Y2, Y3, Y4, . . . , Ym+1
batch 3: Y3, Y4, Y5, . . . , Ym+2
. . .

with batch means

Zi ≡ (1/m) Σ_{j=i}^{i+m−1} Yj,   i = 1, . . . , n − m + 1.

Turns out: Even though the Zi’s are highly correlated, we can deal with it!


The OBM estimator for µ is Ȳn (no surprise), and the OBM estimator for Var(Ȳn) is

V̂O ≡ (1/(n − m + 1)) Σ_{i=1}^{n−m+1} (Zi − Ȳn)².

Facts: As n and m get large,

E[V̂O]/E[V̂B] → 1   and   Var(V̂O)/Var(V̂B) → 2/3.

So OBM has the same bias as, but lower variance than, regular BM — good! (Meketon and Schmeiser 1984, “Overlapping Batch Means: Something for Nothing?”)
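A sketch of the estimator exactly as displayed above, with a rolling batch sum so the n − m + 1 overlapping means cost O(n) rather than O(nm) (hypothetical helper, not from the slides):

```python
def obm_variance(Y, m):
    """V-hat_O as displayed above: the average squared deviation of the
    n - m + 1 overlapping batch means from the grand mean."""
    n = len(Y)
    ybar = sum(Y) / n
    k = n - m + 1                        # number of overlapping batches
    s = sum(Y[:m])                       # sum of the first batch
    total = (s / m - ybar) ** 2
    for i in range(1, k):
        s += Y[i + m - 1] - Y[i - 1]     # slide the batch window by one
        total += (s / m - ybar) ** 2
    return total / k
```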

Other Methods

Several other methods exist for obtaining variance estimators for the sample mean and CI’s for the steady-state process mean µ.
Spectral Estimation. This method estimates Var(Ȳn) (as well as the analogous CI’s for µ) in a manner completely different from that of batch means.
This approach operates in the so-called frequency domain, whereas batch means uses the time domain.
Spectral estimation sometimes takes a little effort, but it works well enough to suggest that the reader consult the relevant references, e.g., Lada and Wilson’s work on WASSP.


Regeneration. Many simulations can be broken into i.i.d. blocks that probabilistically “start over” at certain regeneration points.
Example: An M/M/1 queue’s waiting time process, where the
i.i.d. blocks are defined by groups of customers whose
endpoints have zero waiting times.
Regeneration uses the i.i.d. structure and, under certain
conditions, gives great estimators for Var(Ȳn ) and CI’s for µ.
The method effectively eliminates any initialization problems.
On the other hand, it may be difficult to define natural
regeneration points, and extremely long simulation runs are
often needed to obtain a reasonable number of i.i.d. blocks.


Standardized Time Series. One often uses the central limit theorem to standardize i.i.d. random variables into an (asymptotically) normal random variable.
Schruben and various colleagues generalize this idea in many
ways by using a process central limit theorem to standardize a
stationary simulation process into a Brownian bridge process.
Properties of Brownian bridges are then used to calculate a
number of good estimators for Var(Ȳn ) and CI’s for µ.
This method is easy to apply and has some asymptotic
advantages over batch means.
Research Issue: Combine various strategies together to obtain
even-better variance estimators.

Comparison of Systems

One of the most important uses of simulation output analysis involves the comparison of competing systems or alternative system configurations.
Example: Evaluate two different “re-start” strategies that an airline can invoke following a major traffic disruption such as a snowstorm in the Northeast — which policy minimizes a certain cost function associated with the re-start?
Simulation is uniquely equipped to help the experimenter conduct this type of comparison analysis.
Many techniques: (i) classical statistical CI’s, (ii) common random numbers, (iii) antithetic variates, and (iv) ranking and selection procedures.

Classical Confidence Intervals

With our airline example in mind, let Zi,j be the cost from the
jth simulation replication of strategy i, i = 1, 2, j = 1, 2, . . . , bi .
Assume that Zi,1 , Zi,2 , . . . , Zi,bi are i.i.d. normal with unknown
mean µi and unknown variance, i = 1, 2. Justification?. . .
(a) Get independent data by controlling the random numbers
between replications.
(b) Get identically distributed costs between reps by performing
the reps under identical conditions.
(c) Get approximately normal data by adding up (or averaging)
many sub-costs to get overall costs for both strategies.


Goal: Obtain a 100(1 − α)% CI for the difference µ1 − µ2.
Suppose that the Z1,j ’s are independent of the Z2,j ’s, and define

Z̄i,bi ≡ (1/bi) Σ_{j=1}^{bi} Zi,j,   i = 1, 2,

and

Si² ≡ (1/(bi − 1)) Σ_{j=1}^{bi} (Zi,j − Z̄i,bi)²,   i = 1, 2.


An approximate 100(1 − α)% CI is

µ1 − µ2 ∈ Z̄1,b1 − Z̄2,b2 ± tα/2,ν √(S1²/b1 + S2²/b2),

where the (approx.) d.f. ν is given in any statistics text.
Suppose (as in the airline example) that small cost is good. Then if the interval lies entirely to the left [right] of zero, then system 1 [2] is better; if the interval contains zero, then the two systems must be regarded, in a statistical sense, as about the same.
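The interval above is a one-liner once the sample moments are in hand (hypothetical helper, not from the slides; the caller supplies the tα/2,ν quantile, with ν from the usual approximation):

```python
import math

def two_sample_ci(Z1, Z2, t_quantile):
    """Approximate CI for mu1 - mu2 from independent samples with
    unequal variances; t_quantile = t_{alpha/2, nu} supplied by caller."""
    b1, b2 = len(Z1), len(Z2)
    m1, m2 = sum(Z1) / b1, sum(Z2) / b2              # sample means
    s1 = sum((z - m1) ** 2 for z in Z1) / (b1 - 1)   # sample variances
    s2 = sum((z - m2) ** 2 for z in Z2) / (b2 - 1)
    half = t_quantile * math.sqrt(s1 / b1 + s2 / b2)
    return (m1 - m2) - half, (m1 - m2) + half
```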


An alternative classical strategy: Use a CI that is analogous to a paired-t test.
Here take b replications from both strategies and set the difference Dj ≡ Z1,j − Z2,j for j = 1, 2, . . . , b.
Calculate the sample mean and variance of the differences:

D̄b ≡ (1/b) Σ_{j=1}^{b} Dj   and   SD² ≡ (1/(b − 1)) Σ_{j=1}^{b} (Dj − D̄b)².

The resulting 100(1 − α)% CI is

µ1 − µ2 ∈ D̄b ± tα/2,b−1 √(SD²/b).

These paired-t intervals are very efficient if Corr(Z1,j, Z2,j) > 0, j = 1, 2, . . . , b.
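The paired-t recipe, sketched the same way (hypothetical helper, not from the slides):

```python
import math

def paired_t_ci(Z1, Z2, t_quantile):
    """Paired-t CI for mu1 - mu2 from b paired replications;
    t_quantile = t_{alpha/2, b-1} supplied by the caller."""
    D = [z1 - z2 for z1, z2 in zip(Z1, Z2)]          # paired differences
    b = len(D)
    dbar = sum(D) / b
    sd2 = sum((d - dbar) ** 2 for d in D) / (b - 1)  # S_D^2
    half = t_quantile * math.sqrt(sd2 / b)
    return dbar - half, dbar + half
```

Note that when the paired costs move together (positive correlation), the Dj’s vary little, SD² is small, and the interval tightens — the effect the slide describes.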
Common Random Numbers

Idea behind the above trick: Use common random numbers, i.e., use the same pseudo-random numbers in exactly the same ways for corresponding runs of each of the competing systems.
Example: Use the same customer arrival times when
simulating different proposed configurations of a job shop.
By subjecting the alternative systems to identical experimental
conditions, we hope to make it easy to distinguish which
systems are best even though the respective estimators are
subject to sampling error.


Consider the case in which we compare two queueing systems, A and B, on the basis of their expected customer transit times, θA and θB — the smaller θ-value corresponds to the better system.
Suppose we have estimators θ̂A and θ̂B for θA and θB , resp.
We’ll declare A as the better system if θ̂A < θ̂B . If θ̂A and θ̂B are
simulated independently, then the variance of their difference,

Var(θ̂A − θ̂B ) = Var(θ̂A ) + Var(θ̂B ),

could be very large; then our declaration might lack conviction.


If we could reduce Var(θ̂A − θ̂B), then we could be much more confident about our declaration.
CRN sometimes induces a high positive correlation between the point estimators θ̂A and θ̂B. Then we have

Var(θ̂A − θ̂B) = Var(θ̂A) + Var(θ̂B) − 2Cov(θ̂A, θ̂B) < Var(θ̂A) + Var(θ̂B),

and we obtain a savings in variance.
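The effect is easy to see on a toy model (entirely illustrative — the “cost” function and rates are my assumptions, not from the slides): drive two systems’ exponential delays from the same uniforms, and compare the variance of the difference against independent sampling.

```python
import math
import random
import statistics

def cost(rate, u):
    """Hypothetical one-replication 'cost': an exponential delay with the
    given service rate, generated by inversion from the uniform u."""
    return -math.log(1.0 - u) / rate

random.seed(42)
reps = 10_000

# Common random numbers: the SAME uniform drives both systems in each rep.
d_crn = []
for _ in range(reps):
    u = random.random()
    d_crn.append(cost(1.0, u) - cost(1.2, u))

# Independent sampling: a fresh uniform for each system.
d_ind = [cost(1.0, random.random()) - cost(1.2, random.random())
         for _ in range(reps)]

var_crn = statistics.variance(d_crn)
var_ind = statistics.variance(d_ind)
# Both estimators of the mean difference are unbiased, but the CRN
# version's variance is far smaller because the costs are monotone in u.
```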

Antithetic Random Numbers

Antithetic random numbers. Alternatively, if we can induce negative correlation between two unbiased estimators, θ̂1 and θ̂2, for some parameter θ, then the unbiased estimator (θ̂1 + θ̂2)/2 might have low variance.
Most simulation texts give advice on how to run the simulations
of the competing systems so as to induce positive or negative
correlation between them.
Consensus: if conducted properly, CRN and ARN can lead to
tremendous variance reductions.
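A minimal antithetic sketch (illustrative only — the performance function h is my assumption): pair each uniform U with 1 − U, so that for monotone h the two evaluations are negatively correlated, and compare against plain Monte Carlo with the same budget of draws.

```python
import math
import random
import statistics

def h(u):
    """Hypothetical monotone performance function; E[h(U)] = e - 1."""
    return math.exp(u)

random.seed(7)
n = 10_000

# Antithetic pairs: average h(U) with h(1 - U).
anti = [(h(u) + h(1.0 - u)) / 2.0
        for u in (random.random() for _ in range(n))]

# Same budget of 2n draws, but all independent.
plain = [(h(random.random()) + h(random.random())) / 2.0 for _ in range(n)]

var_anti = statistics.variance(anti)
var_plain = statistics.variance(plain)
# Both lists have mean near e - 1; the antithetic pairs' variance is
# much smaller because Cov(h(U), h(1-U)) < 0.
```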

Ranking, Selection, and Multiple Comparisons Methods

Ranking, selection, and multiple comparisons methods form another class of statistical techniques used to compare alternative systems.
Here, the experimenter is interested in selecting the best of a number (≥ 2) of competing processes.
The experimenter specifies the desired probability of correctly selecting the best process, especially if the best process is significantly better than its competitors.
These methods are simple to use, fairly general, and intuitively appealing (see Bechhofer, Santner, and Goldsman 1995).

What Lies Ahead?

Use of more-sophisticated variance estimators


Automated sequential run-control procedures that control
for initialization bias and deliver valid confidence intervals
of specified length
Change-point detection algorithms for initialization bias
tests
Incorporating combinations of variance reduction tools
Multivariate confidence intervals
Better ranking and selection techniques


If you like this stuff, here are some General References. . .


Alexopoulos, C. and A. F. Seila. 1998. Output data analysis,
Chapter 7 in Handbook of Simulation: Principles, Methodology,
Advances, Applications and Practice, J. Banks, Ed., John Wiley,
New York.
Goldsman, D. and B. L. Nelson. 1998. Comparing systems via
simulation, Chapter 7 in Handbook of Simulation: Principles,
Methodology, Advances, Applications and Practice, J. Banks,
Ed., John Wiley, New York.
Law, A. M. 2006. Simulation Modeling and Analysis, 4th Ed.,
McGraw-Hill, New York.
