2008, Computational Statistics & Data Analysis
In Bertail & Clémençon (2005a), a novel methodology for bootstrapping general Harris Markov chains was proposed, which crucially exploits their renewal properties (where necessary, after extending the chain via the Nummelin splitting technique) and has second-order theoretical properties that surpass those of other existing methods in the Markovian framework (moving block bootstrap, sieve bootstrap, etc.). This paper discusses practical issues related to the implementation of this resampling method and presents various simulation studies investigating its finite-sample performance and comparing it to other bootstrap resampling schemes that stand as natural candidates in the Markov setting.
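To make the regenerative idea concrete, here is a minimal sketch of a regeneration-block bootstrap in the simplest atomic case: the trajectory is cut into blocks between successive visits to an atom, and whole blocks are resampled with replacement. All names (`a`, the mean statistic, the resampling variant) are illustrative assumptions, not the authors' implementation.

```python
# Minimal regeneration-block bootstrap sketch for an atomic Markov chain.
# One simple variant: resample as many blocks as were observed.
import numpy as np

def regeneration_blocks(x, a):
    """Cut trajectory x into blocks between successive visits to atom a,
    discarding the incomplete first and last segments."""
    hits = np.flatnonzero(x == a)
    return [x[hits[k]:hits[k + 1]] for k in range(len(hits) - 1)]

def block_bootstrap_mean(x, a, n_boot=999, rng=None):
    """Resample whole regeneration blocks with replacement and recompute
    the sample mean on each pseudo-trajectory."""
    rng = np.random.default_rng(rng)
    blocks = regeneration_blocks(np.asarray(x), a)
    boots = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(blocks), size=len(blocks))
        pseudo = np.concatenate([blocks[i] for i in idx])
        boots[b] = pseudo.mean()
    return boots
```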
Computational Statistics & Data Analysis, 2006
2003
In this paper, we show how the original bootstrap method introduced by Datta & McCormick (1993), namely the regeneration-based bootstrap, for approximating the sampling distribution of sample-mean statistics in the atomic Markovian setting, may be modified so as to be second-order correct. We prove that the drawback of the original construction lies mainly in a wrong estimation of the skewness of the sampling distribution, and that it can be corrected by a suitable standardization of the regeneration-based bootstrap statistic and a recentering of the bootstrap distribution. An asymptotic result establishing the second-order accuracy of this bootstrap estimate up to O(n^{-1} log(n)) (close to the rate obtained in an i.i.d. setting) is also stated under weak moment assumptions.
Bernoulli, 2006
Working papers do not reflect the position of INSEE and are solely the responsibility of their authors.
arXiv (Cornell University), 2016
This paper proposes a new type of recurrence in which we divide a Markov chain into intervals that start when the chain enters a subset A of the state space, then visit another subset B far away from A, and end when the chain returns to A. The lengths of these intervals have the same distribution and, when A and B are far apart, are almost independent of each other. A and B may be any subsets of the state space that are far apart and such that the movement between them is repeated many times in a long Markov chain. The expected interval length is used in a function that describes the mixing properties of the chain and improves our understanding of Markov chains. The paper proves a theorem giving a bound on the variance of the estimate of π(A), the probability of A under the limiting density of the Markov chain. This may be used to find the length of Markov chain needed to explore the state space sufficiently. It is shown that the lengths of the periods between successive entries of the chain into A have a heavy-tailed distribution, which increases the upper bound on the variance of the estimate of π(A). The paper gives a general guideline on how to find the optimal scaling of the parameters of the Metropolis-Hastings simulation algorithm, which implicitly determines the acceptance rate. We find examples where it is optimal to have a much smaller acceptance rate than is generally recommended in the literature, and also examples where the optimal acceptance rate vanishes in the limit.
Annals of Statistics, 1996
The Markov chain simulation method has been successfully used in many problems, including some that arise in Bayesian statistics. We give a self-contained proof of the convergence of this method in general state spaces under conditions that are easy to verify.
2020
Note this is a random variable with expected value π(f) (i.e. the estimator is unbiased) and standard deviation of order O(1/√N). By the CLT, the error π̂(f) − π(f) has a limiting normal distribution as N → ∞. Therefore we can compute π(f) by averaging samples. But the problem is that if π_u is complicated, it is very difficult to simulate i.i.d. random variables from π(·). The MCMC solution is to construct a Markov chain on X which has π(·) as a stationary distribution, i.e.

∫_X π(dx) P(x, dy) = π(dy).

Then for large n the distribution of X_n will be approximately stationary. We can set Z_1 = X_n and, by repeating, obtain Z_2, Z_3, ..., Z_n.

Remark. In practice, instead of starting a fresh Markov chain every time, we take the successive X_n's and average after a burn-in B, for example (N − B)^{-1} Σ_{i=B+1}^{N} f(X_i). We tend to ignore the dependence problem, as many of the mathematical issues are similar in either implementation.

Remark. There are other estimation methods, such as rejection sampling and importance sampling, but MCMC algorithms are the most widely applied.

2. MCMC and its construction

This section explains how an MCMC algorithm is constructed. We first introduce reversibility.

Definition. A Markov chain on a state space X is reversible with respect to a probability distribution π(·) on X if π(dx) P(x, dy) = π(dy) P(y, dx) for all x, y ∈ X.

Proposition. If a Markov chain is reversible with respect to π(·), then π(·) is a stationary distribution for the chain.

Proof. By reversibility, ∫_{x∈X} π(dx) P(x, dy) = ∫_{x∈X} π(dy) P(y, dx) = π(dy) ∫_{x∈X} P(y, dx) = π(dy).

The simplest way to construct an MCMC algorithm satisfying reversibility is the Metropolis-Hastings algorithm.

2.1 The Metropolis-Hastings algorithm

Suppose that π(·) has a (possibly unnormalized) density π_u. Let Q(x, ·) be essentially any other Markov chain whose transitions also have a (possibly unnormalized) density, i.e. Q(x, dy) ∝ q(x, y) dy. First choose some X_0. Then, given X_n, generate a proposal Y_{n+1} from Q(X_n, ·). Independently, flip a biased coin with probability of heads α(X_n, Y_{n+1}), where

α(x, y) = min{1, [π_u(y) q(y, x)] / [π_u(x) q(x, y)]}  when π_u(x) q(x, y) ≠ 0,

and α(x, y) = 1 when π_u(x) q(x, y) = 0. If the coin comes up heads, accept the proposal and set X_{n+1} = Y_{n+1}; if tails, reject the proposal and set X_{n+1} = X_n. Then replace n by n + 1 and repeat. The reason for this choice of α(x, y) is the following.

Proposition. The Metropolis-Hastings algorithm produces a Markov chain {X_n} which is reversible with respect to π(·).

Proof. We want to show that π(dx) P(x, dy) = π(dy) P(y, dx) for any x, y ∈ X. For x ≠ y this follows since π_u(x) q(x, y) α(x, y) = min{π_u(x) q(x, y), π_u(y) q(y, x)}, which is symmetric in x and y.

[...] where Ȳ_i = (1/J) Σ_j Y_ij. The Gibbs sampler then proceeds by updating the K + 3 variables according to the above conditional distributions. This is feasible since the conditional distributions are all easily simulated (IG and N).
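The following is a minimal sketch of the Metropolis-Hastings construction just described, assuming a symmetric random-walk proposal (so the ratio q(y, x)/q(x, y) cancels) and an illustrative unnormalized target π_u; the function names and the proposal scale are assumptions, not part of the notes above.

```python
# Random-walk Metropolis-Hastings: symmetric proposal, so the acceptance
# probability reduces to alpha(x, y) = min(1, pi_u(y) / pi_u(x)).
import numpy as np

def metropolis_hastings(pi_u, x0, n_steps, scale=1.0, rng=None):
    rng = np.random.default_rng(rng)
    x = float(x0)
    chain = np.empty(n_steps)
    for n in range(n_steps):
        y = x + scale * rng.normal()          # proposal Y_{n+1} ~ Q(x, .)
        alpha = min(1.0, pi_u(y) / pi_u(x))   # acceptance probability
        if rng.random() < alpha:              # "coin is heads": accept
            x = y
        chain[n] = x                          # else reject: X_{n+1} = X_n
    return chain

# Example: sample from an unnormalized standard normal density.
samples = metropolis_hastings(lambda t: np.exp(-0.5 * t * t), 0.0, 10_000)
```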
We give a new probabilistic proof of the Markov renewal theorem for Markov random walks with positive drift and Harris recurrent driving chain. It forms an alternative to the one recently given in [1] and follows more closely the probabilistic proofs provided for Blackwell's theorem in the literature by making use of ladder variables, the stationary Markov delay distribution and a coupling argument. A major advantage is that the arguments can be refined to yield convergence rate results.
Monte Carlo and Quasi-Monte Carlo Methods 2008, 2009
We study the convergence behavior of a randomized quasi-Monte Carlo (RQMC) method for the simulation of discrete-time Markov chains, known as array-RQMC. The goal is to estimate the expectation of a smooth function of the sample path of the chain. The method simulates n copies of the chain in parallel, using highly uniform point sets randomized independently at each step. The copies are sorted after each step, according to some multidimensional order, for the purpose of assigning the RQMC points to the chains. In this paper, we provide some insight on why the method works, explain what would need to be done to bound its convergence rate, discuss and compare different ways of realizing the sort and assignment, and report empirical experiments on the convergence rate of the variance and of the mean square discrepancy between the empirical and theoretical distribution of the states, as a function of n, for various types of discrepancies.
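As a rough illustration of the sort-and-assign idea, here is a sketch for a one-dimensional chain, using a freshly randomized stratified point set at each step as a simple stand-in for the highly uniform point sets studied in the paper; the AR(1) test chain and all names are illustrative assumptions.

```python
# Array-RQMC sketch: simulate n copies in parallel; at each step, sort the
# states and drive the i-th state (in sorted order) with the i-th point of
# a randomized stratified point set, converted to a normal via inversion.
import numpy as np
from scipy.stats import norm

def array_rqmc_mean(n=1024, n_steps=50, rho=0.9, rng=None):
    rng = np.random.default_rng(rng)
    x = np.zeros(n)
    for _ in range(n_steps):
        order = np.argsort(x)                    # sort states (1-D order)
        u = (np.arange(n) + rng.random(n)) / n   # stratified uniforms, sorted
        z = norm.ppf(u)                          # inversion: U -> N(0,1)
        x[order] = rho * x[order] + np.sqrt(1 - rho**2) * z
    return x.mean()                              # estimate of E[X_T]
```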
Publication interne n°1038 | July 1996 | 15 pages. Abstract: Evaluation studies of computer systems often deal with the analysis of random phenomena that arise when contention for system resources leads to overall behavior that is not predictable in a deterministic fashion. In these cases, the statistical regularities that are nevertheless present allow the construction of a probabilistic model of the observed system. In this paper we present a performance comparison between two stable approaches for computing some steady-state measures of Markov chains. Both approaches prove particularly suitable when the infinitesimal generator of the Markov chain is ill-conditioned. Our analysis is carried out by means of a few numerical case studies, dealing with different structures and different dimensions of the infinitesimal generators.
arXiv: Probability, 2020
This review paper provides an introduction to Markov chains and their convergence rates, an important and interesting mathematical topic that also has important applications to the very widely used Markov chain Monte Carlo (MCMC) algorithms. We first discuss eigenvalue analysis for Markov chains on finite state spaces. Then, using the coupling construction, we prove two quantitative bounds based on minorization and drift conditions, and provide descriptive and intuitive examples showing how these theorems can be applied in practice. This paper is meant to provide a general overview of the subject and to spark interest in new areas of Markov chain research.
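A numerical illustration of the minorization-based bound: if P(x, ·) ≥ ε ν(·) for every x (a Doeblin condition), the coupling argument gives a worst-case total-variation distance to stationarity of at most (1 − ε)^n after n steps. The 3-state transition matrix below is an illustrative assumption.

```python
# Check the coupling bound max_x ||P^n(x, .) - pi||_TV <= (1 - eps)^n,
# with eps = sum_y min_x P(x, y) (whole-space minorization constant).
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
eps = P.min(axis=0).sum()                # minorization constant

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

Pn = np.eye(3)
for n in range(1, 11):
    Pn = Pn @ P
    tv = 0.5 * np.abs(Pn - pi).sum(axis=1).max()  # worst-case TV distance
    assert tv <= (1 - eps) ** n + 1e-12           # the coupling bound holds
```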
Linear Algebra and its Applications, 2016
Computational procedures for the stationary probability distribution, the group inverse of the Markovian kernel, and the mean first passage times of a finite irreducible Markov chain are developed using perturbations. The derivation of these expressions involves the solution of systems of linear equations and, structurally, inevitably the inversion of matrices. Using a perturbation technique, starting from a simple base where no such derivations are formally required, we update a sequence of matrices, linking the solution procedures via generalized matrix inverses and matrix and vector multiplications. Four different algorithms are given, some modifications are discussed, and numerical comparisons are made using a test example. The derivations are based upon the ideas outlined in Hunter.
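For reference, a direct (non-perturbative) computation of the same three quantities via the Kemeny-Snell fundamental matrix Z = (I − P + eπ')^{-1}, with group inverse (I − P)# = Z − eπ' and mean first passage times m_ij = (z_jj − z_ij)/π_j. The test matrix is an illustrative assumption; the paper's algorithms deliberately avoid forming these inverses directly.

```python
# Stationary vector, group inverse of I - P, and mean first passage times.
import numpy as np

P = np.array([[0.0, 0.5, 0.5],
              [0.7, 0.0, 0.3],
              [0.4, 0.6, 0.0]])
n = P.shape[0]

w, v = np.linalg.eig(P.T)                      # stationary distribution pi
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
A_group = Z - np.outer(np.ones(n), pi)         # group inverse of I - P

M = (np.diag(Z)[None, :] - Z) / pi[None, :]    # mean first passage times
np.fill_diagonal(M, 1.0 / pi)                  # diagonal: mean return times
```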
Mathematical Problems in Engineering, 2013
This paper presents a new class of accelerated restarted GMRES methods for calculating the stationary probability vector of an irreducible Markov chain. We focus on the mechanism of this new hybrid method, showing how to periodically combine GMRES and a vector extrapolation method into a much more efficient scheme that improves the convergence rate on Markov chain problems. Numerical experiments are carried out to demonstrate the efficiency of our new algorithm on several typical Markov chain problems.
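For orientation, here is a plain GMRES baseline for the same task (not the authors' hybrid scheme): the singular system (I − P')x = 0 is made nonsingular by replacing one equation with the normalization Σ x_i = 1. The small matrix is an illustrative assumption.

```python
# Baseline: stationary vector of an irreducible chain via restarted GMRES.
import numpy as np
from scipy.sparse.linalg import gmres

P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])
A = np.eye(3) - P.T
A[-1, :] = 1.0                     # replace last equation by normalization
b = np.zeros(3); b[-1] = 1.0
x, info = gmres(A, b, restart=20, atol=1e-12)
assert info == 0 and np.allclose(x @ P, x)   # x is the stationary vector
```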
Lecture Notes in Statistics, 2006
In this paper we present how the renewal properties of Harris recurrent Markov chains, or of specific extensions of the latter, may be used in practice for statistical inference in various settings. In the regenerative case, procedures can be implemented from data blocks corresponding to consecutive observed regeneration times of the chain. The main idea for extending these statistical techniques to general Harris chains X consists in first generating a sequence of approximate renewal times for a regenerative extension of X from data X_1, ..., X_n and the parameters of a minorization condition satisfied by its transition probability kernel. Numerous applications of this estimation principle may be considered in both the stationary and nonstationary (including the null recurrent) frameworks. This article treats some important procedures based on (approximate) regeneration data blocks, from both practical and theoretical viewpoints, for the following topics: mean and variance estimation, confidence intervals, U-statistics, the bootstrap, robust estimation, and the statistical study of extreme values.
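A minimal sketch of the first two items (mean and variance estimation from regeneration blocks), in the simplest atomic case; the chain, the atom `a`, and f are illustrative assumptions. The estimators are the standard regenerative ratio estimator and the block-based asymptotic variance estimator.

```python
# Regenerative estimation of mu = pi(f) and its asymptotic variance,
# from the block sums S_k and block lengths l_k between visits to atom a:
# mu_hat = sum(S) / sum(l),  sigma2_hat = mean((S - mu_hat * l)^2) / mean(l).
import numpy as np

def regenerative_mean_var(x, a, f=lambda t: t):
    x = np.asarray(x)
    hits = np.flatnonzero(x == a)
    blocks = [x[hits[k]:hits[k + 1]] for k in range(len(hits) - 1)]
    s = np.array([f(b).sum() for b in blocks])      # block sums of f
    l = np.array([len(b) for b in blocks])          # block lengths
    mu = s.sum() / l.sum()                          # mean estimate
    sigma2 = ((s - mu * l) ** 2).mean() / l.mean()  # asymptotic variance
    return mu, sigma2
```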
Monte Carlo Methods and Applications, 2019
The standard coupling-from-the-past (CFTP) algorithm is an interesting tool for sampling exactly from the steady-state probability distribution of a Markov chain. CFTP detects, with probability one, the end of the transient phase of the chain (the burn-in period) and consequently the beginning of its stationary phase. For large and/or stiff Markov chains, the burn-in period is expensive in computing time. In this work, we propose a kind of dual form of CFTP, called D-CFTP, that in many situations reduces the Monte Carlo simulation time and does not need to store the history of the random numbers used from one iteration to another. A performance comparison of CFTP and D-CFTP is discussed, and some numerical Monte Carlo simulations are carried out to show the smooth running of the proposed D-CFTP.
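For readers unfamiliar with the standard algorithm, here is a minimal CFTP sketch for a small finite chain (not the D-CFTP variant proposed in the paper): run a grand coupling from every state, starting further and further in the past, reusing the same random numbers on overlapping time ranges, and return the common value at time 0 once all trajectories coalesce.

```python
# Standard CFTP for a finite chain: one shared uniform per time step drives
# all trajectories via inverse-CDF updates; coalescence at time 0 yields an
# exact draw from the stationary distribution.
import numpy as np

def cftp(P, rng=None):
    rng = np.random.default_rng(rng)
    C = np.cumsum(P, axis=1)               # row-wise CDFs for inversion
    us = []                                # us[k] drives step -(k+1) -> -k
    T = 1
    while True:
        while len(us) < T:
            us.append(rng.random())        # extend history into the past
        x = np.arange(P.shape[0])          # one trajectory per start state
        for k in range(T - 1, -1, -1):     # run from time -T up to time 0
            x = np.array([np.searchsorted(C[i], us[k]) for i in x])
        if np.all(x == x[0]):              # coalescence: exact sample
            return x[0]
        T *= 2                             # otherwise restart further back
    # Note: the *same* us[k] must be reused when T grows, as above.
```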
Lecture Notes in Control and Information Sciences, 2006
For the numerous applications of Markov chains (in particular MCMC methods), the problem of detecting an instant at which the convergence takes place is crucial. The 'cut-off phenomenon', or abrupt convergence, provides an answer to this problem. When a sample of Markov chains, or more generally of exponentially converging processes, is simulated in parallel, it remains far from its stationary distribution until a deterministic instant, and approaches it exponentially fast afterwards. The cut-off instant is explicitly known, and can be detected algorithmically using appropriate stopping times. The technique is illustrated on the Ornstein-Uhlenbeck diffusion.
Numerical Linear Algebra with Applications, 2011
This special issue contains a selection of papers from the Sixth International Workshop on the Numerical Solution of Markov Chains, held in Williamsburg, Virginia on September 16-17, 2010. The papers cover a broad range of topics including perturbation theory for absorbing chains, bounding techniques, steady-state and transient solution methods, multilevel algorithms, preconditioning, and applications.
Probability in the Engineering and Informational Sciences, 2012
A class of Markov chains we call successively lumpable is specified, for which it is shown that the stationary probabilities can be obtained by successively computing the stationary probabilities of a propitiously constructed sequence of Markov chains. Each of the latter chains has a (typically much) smaller state space, and this yields significant computational improvements. We discuss how the results for discrete-time Markov chains extend to semi-Markov processes and continuous-time Markov processes. Finally, we study applications of successively lumpable Markov chains to classical reliability and queueing models.
Operations Research Letters, 1985
A standard strategy in simulation, for comparing two stochastic systems, is to use a common sequence of random numbers to drive both systems. Since regenerative output analysis of the steady state of a system requires that the process be regenerative, it is of interest to derive conditions under which the method of common random numbers yields a regenerative process. It is shown here that if the stochastic systems are positive recurrent Markov chains with countable state space, then the coupled system is necessarily regenerative; in fact, we allow couplings more general than those induced by common random numbers. An example shows that the regenerative property can fail to hold in a general state space, even if the individual systems are regenerative. Keywords: statistical analysis of simulation; Markov processes; renewal processes.
2020
We present three classical methods for studying the dynamic and stationary characteristics of processes of Markovian or semi-Markovian type that possess points of regeneration. Our focus is on the stationary distributions and the conditions for their existence and use. The first approach is based on a detailed probabilistic analysis of time-dependent passages between the states of the process at a given moment; we call this the Kolmogorov approach. The second approach uses the probabilistic meaning of the Laplace-Stieltjes transform and of probability generating functions. Some additional artificial constructions are used to show how to derive direct relationships between these functions and how to find them explicitly. The third approach obtains relationships between the stationary characteristics of the process by use of so-called "equations of equilibrium": the input flow into each state must equal the respective output flow from that state, so that no accumulation occurs in any state once the process reaches equilibrium. In all illustrations of these approaches we analyze a dynamic Marshall-Olkin reliability model with dependent components functioning in parallel. The results for this example are new.