1996, Annals of Statistics
The Markov chain simulation method has been successfully used in many problems, including some that arise in Bayesian statistics. We give a self-contained proof of the convergence of this method in general state spaces under conditions that are easy to verify.
1998
Markov chain Monte Carlo (MCMC) methods, including the Gibbs sampler and the Metropolis-Hastings algorithm, are very commonly used in Bayesian statistics for sampling from complicated, high-dimensional posterior distributions. A continuing source of uncertainty is how long such a sampler must be run in order to converge approximately to its target stationary distribution. Rosenthal (1995b) presents a method to compute rigorous theoretical upper bounds on the number of iterations required to achieve a specified degree of convergence in total variation distance by verifying drift and minorization conditions. We propose the use of auxiliary simulations to estimate the numerical values needed in Rosenthal's theorem. Our simulation method makes it possible to compute quantitative convergence bounds for models for which the requisite analytical computations would be prohibitively difficult or impossible. On the other hand, although our method appears to perform well in our example problems, it cannot provide the guarantees offered by an analytical proof.
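As a rough, hedged illustration of the kind of auxiliary simulation described above (not the authors' actual procedure), one can estimate the constants λ and b in a drift condition E[V(X_1) | X_0 = x] ≤ λV(x) + b by simulating many one-step transitions from a grid of starting points and fitting the simulated values of E[V(X_1) | X_0 = x] against V(x) by least squares. The toy chain, the drift function V, the grid of starting points, and the replication count below are all assumptions chosen for illustration.

import numpy as np

# Hypothetical one-step transition of the chain under study (an AR(1)-type chain,
# purely for illustration) and a candidate drift function V.
def step(x, rng):
    return 0.5 * x + rng.standard_normal()

def V(x):
    return 1.0 + x**2

rng = np.random.default_rng(0)
starts = np.linspace(-10.0, 10.0, 41)          # grid of starting points x
reps = 2000                                    # auxiliary simulations per starting point
est = np.array([np.mean([V(step(x, rng)) for _ in range(reps)]) for x in starts])

# Fit E[V(X_1) | X_0 = x] ~ lambda * V(x) + b by least squares
A = np.column_stack([V(starts), np.ones_like(starts)])
lam, b = np.linalg.lstsq(A, est, rcond=None)[0]
print(lam, b)                                  # plug-in estimates for a drift bound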
2020
Note that the estimator π̂(f) (the sample average of f over i.i.d. draws Z_1, ..., Z_N from π(·)) is a random variable with expected value π(f) (i.e. the estimator is unbiased) and standard deviation of order O(1/√N). By the CLT, the error π̂(f) − π(f) has a limiting normal distribution as N → ∞. Therefore we can estimate π(f) from samples. The problem is that if π_u is complicated, it is very difficult to simulate i.i.d. random variables from π(·). The MCMC solution is to construct a Markov chain on X which has π(·) as a stationary distribution, i.e.

∫_X π(dx) P(x, dy) = π(dy).

Then for large n the distribution of X_n will be approximately stationary. We can set Z_1 = X_n, restart the chain, and obtain Z_2, Z_3, ... repeatedly.

Remark. In practice, instead of starting a fresh Markov chain every time, we take the successive X_n's and average them after a burn-in B, for example (N − B)^{-1} Σ_{i=B+1}^{N} f(X_i). We tend to ignore the dependence problem, as many of the mathematical issues are similar in either implementation.

Remark. There are other methods of estimation, such as rejection sampling and importance sampling, but MCMC algorithms are the most widely applied.

2 MCMC and its construction

This section explains how MCMC algorithms are constructed. We first introduce reversibility.

Definition. A Markov chain on state space X is reversible with respect to a probability distribution π(·) on X if

π(dx) P(x, dy) = π(dy) P(y, dx),  x, y ∈ X.

Proposition. If a Markov chain is reversible with respect to π(·), then π(·) is a stationary distribution for the chain.

Proof. By reversibility,

∫_{x∈X} π(dx) P(x, dy) = ∫_{x∈X} π(dy) P(y, dx) = π(dy) ∫_{x∈X} P(y, dx) = π(dy).

The simplest way to construct an MCMC algorithm satisfying reversibility is the Metropolis-Hastings algorithm.

2.1 The Metropolis-Hastings Algorithm

Suppose that π(·) has a (possibly unnormalized) density π_u. Let Q(x, ·) be essentially any other Markov chain whose transitions also have a (possibly unnormalized) density, i.e. Q(x, dy) ∝ q(x, y) dy. First choose some X_0. Then, given X_n, generate a proposal Y_{n+1} from Q(X_n, ·). At the same time, flip an independent biased coin with probability of heads equal to α(X_n, Y_{n+1}), where

α(x, y) = min{1, [π_u(y) q(y, x)] / [π_u(x) q(x, y)]}  if π_u(x) q(x, y) ≠ 0,

and α(x, y) = 1 if π_u(x) q(x, y) = 0. If the coin is heads, we accept the proposal and set X_{n+1} = Y_{n+1}; if it is tails, we reject the proposal and set X_{n+1} = X_n. Then we replace n by n + 1 and repeat. The reason for this choice of α(x, y) is explained by the following result.

Proposition. The Metropolis-Hastings algorithm produces a Markov chain {X_n} which is reversible with respect to π(·).

Proof. We want to show that for any x, y ∈ X, π(dx) P(x, dy) = π(dy) P(y, dx). […]

… where Ȳ_i = (1/J) Σ_j Y_{ij}. The Gibbs sampler then proceeds by updating the K + 3 variables according to the above conditional distributions. This is feasible since the conditional distributions are all easily simulated (IG and N).
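To make the acceptance rule above concrete, here is a minimal random-walk Metropolis-Hastings sketch in Python. The unnormalized target, the normal random-walk proposal, the proposal scale, and the burn-in length are illustrative assumptions, not taken from the excerpt; with a symmetric proposal, q(y, x)/q(x, y) = 1, so the ratio reduces to π_u(y)/π_u(x).

import numpy as np

def metropolis_hastings(target_u, x0, n_steps, prop_scale=1.0, rng=None):
    # Random-walk Metropolis-Hastings for an unnormalized 1-D target density target_u.
    rng = np.random.default_rng() if rng is None else rng
    x = float(x0)
    chain = np.empty(n_steps)
    for n in range(n_steps):
        y = x + prop_scale * rng.standard_normal()         # proposal Y_{n+1} ~ Q(X_n, .)
        # symmetric proposal, so alpha = min(1, pi_u(y)/pi_u(x)); alpha = 1 if pi_u(x) = 0
        alpha = 1.0 if target_u(x) == 0 else min(1.0, target_u(y) / target_u(x))
        if rng.random() < alpha:                            # heads: accept, X_{n+1} = Y_{n+1}
            x = y
        chain[n] = x                                        # tails: keep X_{n+1} = X_n
    return chain

# Example: unnormalized standard normal target; the ergodic average estimates pi(f) for f(x) = x
chain = metropolis_hastings(lambda x: np.exp(-0.5 * x**2), x0=0.0, n_steps=50_000)
B = 5_000                                                   # burn-in, as in the remark above
print(chain[B:].mean())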
Given a Markov chain with uncertain transition probabilities modelled in a Bayesian way, we investigate a technique for analytically approximating the mean transition frequency counts over a finite horizon. Conventional techniques for addressing this problem either require the enumeration of a set of generalized process "hyperstates" whose cardinality grows exponentially with the terminal horizon, or are limited to the two-state case and expressed in terms of hypergeometric series. Our approach makes use of a diffusion approximation technique for modelling the evolution of information state components of the hyperstate process. Interest in this problem stems from a consideration of the policy evaluation step of policy iteration algorithms applied to Markov decision processes with uncertain transition probabilities.
‖P^m(x, ·) − Π(·)‖_TV → 0 as m → ∞.
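For a chain on a finite state space this convergence can be checked numerically: the total variation distance is half the L1 distance between the m-step distribution and the stationary distribution. A small sketch with an arbitrary 3-state transition matrix, chosen purely for illustration:

import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])              # illustrative transition matrix

# Stationary distribution Pi: left eigenvector of P for eigenvalue 1, normalized
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()

delta = np.zeros(3)
delta[0] = 1.0                               # start at state x = 0
Pm = np.eye(3)
for m in range(1, 21):
    Pm = Pm @ P
    tv = 0.5 * np.abs(delta @ Pm - pi).sum() # ||P^m(x,.) - Pi(.)||_TV
    if m % 5 == 0:
        print(m, tv)                         # distance shrinks toward 0 as m grows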
Mathematics and Computers in Simulation, 2010
We describe a quasi-Monte Carlo method for the simulation of discrete time Markov chains with continuous multi-dimensional state space. The method simulates copies of the chain in parallel. At each step the copies are reordered according to their successive coordinates. We prove the convergence of the method when the number of copies increases. We illustrate the method with numerical examples where the simulation accuracy is improved by large factors compared with Monte Carlo simulation.
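As a much-simplified, one-dimensional sketch of the sort-then-sample idea described in this abstract (not the authors' multi-dimensional construction), the copies can be reordered by their coordinate at each step and driven by stratified uniforms instead of plain i.i.d. uniforms; the AR(1)-type transition below is an assumption for illustration.

import numpy as np

def sorted_stratified_chain(n_copies, n_steps, rng=None):
    # Toy 1-D illustration: reorder the copies, then advance them with a more
    # uniformly spread set of driving numbers (stratified uniforms).
    rng = np.random.default_rng() if rng is None else rng
    x = np.zeros(n_copies)
    for _ in range(n_steps):
        x = np.sort(x)                                                # reorder copies by coordinate
        u = (np.arange(n_copies) + rng.random(n_copies)) / n_copies   # stratified uniforms on (0, 1)
        x = 0.5 * x + (2.0 * u - 1.0)                                 # transition via inverse CDF of a Uniform(-1, 1) innovation
    return x

print(sorted_stratified_chain(1024, 100).mean())   # estimate of the stationary mean (0 for this toy chain)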
The Annals of Statistics, 2011
Adaptive and interacting Markov chain Monte Carlo (MCMC) algorithms have recently been introduced in the literature. These novel simulation algorithms are designed to increase simulation efficiency when sampling complex distributions. Motivated by some recently introduced algorithms (such as the adaptive Metropolis algorithm and the interacting tempering algorithm), we develop a general methodological and theoretical framework to establish both the convergence of the marginal distribution and a strong law of large numbers. This framework weakens the conditions introduced in the pioneering paper by Roberts and Rosenthal [J. Appl. Probab. 44 (2007) 458-475]. It also covers the case when the target distribution π is sampled by using Markov transition kernels with a stationary distribution that differs from π.
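A hedged, one-dimensional sketch of the adaptive Metropolis idea mentioned above (the target, the adaptation schedule, and the constants are assumptions, not the authors' algorithm): the random-walk proposal scale is refreshed on the fly from the empirical spread of the chain's own history.

import numpy as np

def adaptive_metropolis(log_target, x0, n_steps, rng=None):
    # Toy 1-D adaptive Metropolis sketch: the random-walk proposal scale is
    # periodically re-tuned from the empirical variance of the chain so far.
    rng = np.random.default_rng() if rng is None else rng
    x, chain = float(x0), np.empty(n_steps)
    scale = 1.0
    for n in range(n_steps):
        y = x + scale * rng.standard_normal()
        if np.log(rng.random()) < log_target(y) - log_target(x):
            x = y
        chain[n] = x
        if n >= 100 and n % 50 == 0:                        # adapt using the history
            scale = 2.38 * np.sqrt(chain[:n + 1].var() + 1e-6)
    return chain

# Example: standard normal target (log density up to a constant)
out = adaptive_metropolis(lambda x: -0.5 * x**2, 0.0, 20_000)
print(out[2_000:].mean(), out[2_000:].std())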
Operations Research, 2008
We introduce and study a randomized quasi-Monte Carlo method for the simulation of Markov chains up to a random (and possibly unbounded) stopping time. The method simulates n copies of the chain in parallel, using a (d + 1)-dimensional, highly uniform point set of cardinality n, randomized independently at each step, where d is the number of uniform random numbers required at each transition of the Markov chain. The general idea is to obtain a better approximation of the state distribution, at each step of the chain, than with standard Monte Carlo. The technique can be used in particular to obtain a low-variance unbiased estimator of the expected total cost when state-dependent costs are paid at each step. It is generally more effective when the state space has a natural order related to the cost function.
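The randomized quasi-Monte Carlo construction itself is beyond a short sketch, but the quantity being estimated, the expected total of state-dependent costs paid until a random stopping time, can be illustrated with an ordinary Monte Carlo baseline; the chain, the cost function, and the stopping rule below are assumptions for illustration.

import numpy as np

def expected_total_cost(n_rep, rng=None):
    # Plain Monte Carlo baseline for E[sum of costs until the chain first exceeds 5];
    # the method in the paper instead drives n parallel copies with a randomized,
    # highly uniform point set to reduce the variance of this estimator.
    rng = np.random.default_rng() if rng is None else rng
    totals = np.empty(n_rep)
    for r in range(n_rep):
        x, total = 0.0, 0.0
        while x <= 5.0:                        # random (possibly unbounded) stopping time
            x = 0.9 * x + rng.exponential()    # illustrative transition
            total += x**2                      # state-dependent cost paid at each step
        totals[r] = total
    return totals.mean()

print(expected_total_cost(5_000))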
arXiv: Probability, 2020
This review paper provides an introduction to Markov chains and their convergence rates, an important and interesting mathematical topic that also has important applications for the very widely used Markov chain Monte Carlo (MCMC) algorithms. We first discuss eigenvalue analysis for Markov chains on finite state spaces. Then, using the coupling construction, we prove two quantitative bounds based on minorization and drift conditions, and provide descriptive and intuitive examples to show how these theorems can be applied in practice. This paper is meant to provide a general overview of the subject and spark interest in new Markov chain research areas.
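The eigenvalue analysis mentioned here can be illustrated with a small example: for a finite, irreducible, aperiodic chain the distance to stationarity decays geometrically at a rate governed by the second-largest eigenvalue modulus (SLEM). The transition matrix below is an arbitrary illustrative choice.

import numpy as np

P = np.array([[0.6, 0.4, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])               # illustrative reversible (birth-death) chain

eigvals = np.linalg.eigvals(P)
slem = sorted(np.abs(eigvals))[-2]            # second-largest modulus (the largest is 1)
print("SLEM:", slem)                          # ||P^m(x,.) - pi||_TV decays roughly like slem**m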
Markov Chain Monte Carlo (MCMC) simulation is a powerful technique for performing numerical integration. It can be used to numerically estimate complex econometric models. In this paper I describe the intuition behind the process and show its flexibility and applicability. I conclude by demonstrating that these methods are often simpler to implement than many common techniques such as MLE.
Markov Chain Monte Carlo Simulations and Their Statistical Analysis, 2004
This article is a tutorial on Markov chain Monte Carlo simulations and their statistical analysis. The theoretical concepts are illustrated through many numerical assignments from the author's book on the subject. Computer code (in Fortran) is available for all subjects covered and can be downloaded from the web.