1996, Random Structures and Algorithms
The paper explores methods for sampling nearly uniformly from large sets using Markov chains. It discusses the importance of defining a Markov chain with a uniform distribution as its stationary distribution and examines various techniques for estimating the mixing rate of such chains. The limitations of existing methods, particularly eigenvalue approaches, are highlighted in relation to specific applications such as volume estimation and enumeration problems.
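The core construction sketched in this abstract, a Markov chain whose stationary distribution is uniform over a large set, can be illustrated with a Metropolis-adjusted random walk on a small graph. This is a minimal sketch under assumed parameters (the graph, step counts, and function name are illustrative, not taken from the paper):

```python
import random

def metropolis_uniform_step(neighbors, x, rng):
    # Propose a uniform neighbour y and accept with min(1, deg(x)/deg(y)).
    # Detailed balance then makes the UNIFORM distribution stationary,
    # even though the plain walk would favour high-degree vertices.
    y = rng.choice(neighbors[x])
    if rng.random() < min(1.0, len(neighbors[x]) / len(neighbors[y])):
        return y
    return x

# A small irregular graph: a triangle (0, 1, 2) with a pendant vertex 3.
G = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
rng = random.Random(1)
counts = {v: 0 for v in G}
for _ in range(20000):
    x = 0
    for _ in range(50):          # 50 steps is ample mixing on 4 vertices
        x = metropolis_uniform_step(G, x, rng)
    counts[x] += 1
# counts should be roughly equal across all four vertices.
```

Rejected proposals act as self-loops, so the chain is aperiodic without an explicit laziness parameter.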
Electronic Communications in Probability, 2015
We explore a method introduced by Chatterjee and Ledoux in a paper on eigenvalues of principal submatrices. The method provides a tool to prove concentration of measure in cases where there is a Markov chain meeting certain conditions, and where the spectral gap of the chain is known. We provide several additional applications of this method. These applications include results on operator compressions using the Kac walk on SO(n) and a Kac walk coupled to a thermostat, and a concentration of measure result for the length of the longest increasing subsequence of a random walk distributed under the invariant measure for the asymmetric exclusion process.
The Electronic Journal of Combinatorics, 2009
Let $G$ be a graph randomly selected from ${\bf G}_{n, p}$, the space of Erdős-Rényi random graphs with parameters $n$ and $p$, where $p \geq {\log^6 n\over n}$. Also, let $A$ be the adjacency matrix of $G$, and $v_1$ be the first eigenvector of $A$. We provide two short proofs of the following statement: For all $i \in [n]$ and some constant $c>0$, $$\left|v_1(i) - {1\over\sqrt{n}}\right| \leq c {1\over\sqrt{n}} {\log n\over\log (np)} \sqrt{{\log n\over np}}$$ with probability $1 - o(1)$. This gives nearly optimal bounds on the entrywise stability of the first eigenvector of Erdős-Rényi random graphs. This question about entrywise bounds was motivated by a problem in unsupervised spectral clustering. We make some progress towards solving that problem.
Mathematics and Computers in Simulation, 2017
This paper relates the study of random walks on graphs and directed graphs to random walks that arise in Monte Carlo methods applied to optimization problems. Previous results on simple graphs are surveyed and new results on the mixing times of Markov chains are derived.
Combinatorics, Probability & Computing, 2017
In network modeling of complex systems one is often required to sample random realizations of networks that obey a given set of constraints, usually in the form of graph measures. A much studied class of problems targets uniform sampling of simple graphs with given degree sequence, or also with given degree correlations expressed in the form of a joint degree matrix. One approach is to use Markov chains based on edge switches (swaps) that preserve the constraints, are irreducible (ergodic) and fast mixing. In 1999, Kannan, Tetali and Vempala (KTV) proposed a simple swap Markov chain for sampling graphs with given degree sequence and conjectured that it mixes rapidly (in poly-time) for arbitrary degree sequences. While the conjecture is still open, it was proven for special degree sequences, in particular, for those of undirected and directed regular simple graphs, of half-regular bipartite graphs, and of graphs with certain bounded maximum degrees. Here we prove the fast mixing KTV conjecture for novel, exponentially large classes of irregular degree sequences. Our method is based on a canonical decomposition of degree sequences into split graph degree sequences, a structural theorem for the space of graph realizations and on a factorization theorem for Markov chains. After introducing bipartite split degree sequences, we also generalize the canonical split graph decomposition for bipartite and directed graphs.
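A single step of the swap chain discussed above can be sketched as follows. This is a hedged illustration of the general switch move, not the KTV paper's exact chain; the starting graph and step count are arbitrary choices:

```python
import random

def degrees(edges):
    # Degree of every vertex incident to some edge.
    d = {}
    for e in edges:
        for v in e:
            d[v] = d.get(v, 0) + 1
    return d

def edge_swap_step(edges, rng):
    # Pick two edges {a,b}, {c,d} and try to replace them by {a,c}, {b,d}.
    # Moves creating a self-loop or a multi-edge are rejected, so the
    # degree sequence is preserved exactly at every step.
    (a, b), (c, d) = rng.sample(list(edges), 2)
    if rng.random() < 0.5:           # also consider the {a,d},{b,c} pairing
        c, d = d, c
    new1, new2 = frozenset((a, c)), frozenset((b, d))
    if len(new1) < 2 or len(new2) < 2 or new1 in edges or new2 in edges:
        return                        # rejected: stay at the current graph
    edges -= {frozenset((a, b)), frozenset((c, d))}
    edges |= {new1, new2}

edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3)]}   # a path on 4 vertices
rng = random.Random(0)
start_deg = degrees(edges)
for _ in range(2000):
    edge_swap_step(edges, rng)
# The walk has moved through realizations, but degrees are unchanged.
```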
Various aspects of the theory of random walks on graphs are surveyed. In particular, estimates on the important parameters of access time, commute time, cover time and mixing time are discussed. Connections with the eigenvalues of graphs and with electrical networks, and the use of these connections in the study of random walks, are described. We also sketch recent algorithmic applications of random walks, in particular to the problem of sampling.
Cornell University - arXiv, 2021
We study Markov population processes on large graphs, where the local state transition rates of a single vertex are a linear function of its neighborhood. A simple way to approximate such processes is by a system of ODEs called the homogeneous mean-field approximation (HMFA). Our main result shows that the HMFA is guaranteed to be the large-graph limit of the stochastic dynamics on a finite time horizon if and only if the graph sequence is quasi-random. An explicit error bound is given, of order $1/\sqrt{N}$ plus the largest discrepancy of the graph. For Erdős-Rényi and random regular graphs we show an error bound of order the inverse square root of the average degree. In general, a diverging average degree is shown to be a necessary condition for the HMFA to be accurate. Under special conditions, some of these results also apply to more detailed types of approximations such as the inhomogeneous mean-field approximation (IHMFA). We pay special attention to epidemic applications such as the SIS process.
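For the SIS application mentioned above, the homogeneous mean-field ODE takes a simple logistic form. The sketch below integrates it with Euler steps; the rates `beta`, `gamma` and mean degree `k` are illustrative values, not taken from the paper:

```python
def hmfa_sis(i0, beta, gamma, k, dt=1e-3, t_max=30.0):
    # Euler integration of di/dt = beta*k*i*(1-i) - gamma*i, the
    # homogeneous mean-field approximation for the infected fraction i(t)
    # of an SIS epidemic on a graph with mean degree k.
    i, t = i0, 0.0
    while t < t_max:
        i += dt * (beta * k * i * (1.0 - i) - gamma * i)
        t += dt
    return i

# Above the epidemic threshold (beta*k > gamma) the trajectory approaches
# the endemic equilibrium 1 - gamma/(beta*k); here that is 1 - 0.5/1.0 = 0.5.
i_inf = hmfa_sis(i0=0.01, beta=0.2, gamma=0.5, k=5.0)
```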
Stochastic Processes and their Applications, 2007
The paper presents two results. The first one provides separate conditions for the upper and lower estimate of the distribution of the exit time from balls of a random walk on a weighted graph. The main result of the paper is that the lower estimate follows from the elliptic Harnack inequality. The second result is an off-diagonal lower bound for the transition probability of the random walk.
We prove tail estimates for variables of the form $\sum_i f(X_i)$, where $(X_i)_i$ is a sequence of states drawn from a reversible Markov chain, or, equivalently, from a random walk on an undirected graph. The estimates are in terms of the range of the function $f$, its variance, and the spectrum of the graph. The purpose of our estimates is to determine the number of chain/walk samples which are required for approximating the expectation of a distribution on vertices of a graph, especially an expander. The estimates must therefore provide information for a fixed number of samples (as in [Gillman]) rather than just asymptotic information. Our proofs are more elementary than other proofs in the literature, and our results are sharper. We obtain Bernstein and Bennett-type inequalities, as well as an inequality for subgaussian variables.
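The estimation setting described above, approximating an expectation by averaging $f$ along a walk trajectory, can be sketched in a few lines. The graph ($K_4$, where the walk's stationary law is uniform) and sample sizes are assumptions for illustration only:

```python
import random

def walk_average(neighbors, f, start, burn_in, samples, rng):
    # Average f along a single random-walk trajectory. On an undirected
    # graph the walk is reversible with stationary law pi(v) ~ deg(v),
    # so on a regular graph this estimates the plain mean of f.
    x = start
    for _ in range(burn_in):
        x = rng.choice(neighbors[x])
    total = 0.0
    for _ in range(samples):
        x = rng.choice(neighbors[x])
        total += f(x)
    return total / samples

# Complete graph K4: regular, so pi is uniform and the walk average should
# approach (0 + 1 + 2 + 3) / 4 = 1.5.
K4 = {v: [u for u in range(4) if u != v] for v in range(4)}
est = walk_average(K4, lambda v: v, start=0, burn_in=100, samples=20000,
                   rng=random.Random(0))
```

How fast `est` concentrates around 1.5 for a fixed number of samples is exactly what spectral tail bounds of the above kind quantify.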
2010
Estimating characteristics of large graphs via sampling is a vital part of the study of complex networks. Current sampling methods such as independent random vertex sampling and random walks are useful but have drawbacks. Random vertex sampling may require too many resources (time, bandwidth, or money). Random walks, which normally require fewer resources per sample, can suffer from large estimation errors in the presence of disconnected or loosely connected graphs. In this work we propose a new m-dimensional random walk that uses m dependent random walkers. We show that the proposed sampling method, which we call Frontier sampling, exhibits all of the nice sampling properties of a regular random walk. At the same time, our simulations over large real-world graphs show that, in the presence of disconnected or loosely connected components, Frontier sampling exhibits lower estimation errors than regular random walks. We also show that Frontier sampling is more suitable than random vertex sampling for sampling the tail of the degree distribution of the graph.
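One plausible reading of the m-walker scheme described above is sketched below: at each step one walker is chosen with probability proportional to the degree of the vertex it occupies, then moved to a uniform neighbour. The graph, walker count and step budget are illustrative assumptions, not the paper's experimental setup:

```python
import random

def frontier_sampling(neighbors, m, steps, rng):
    # m dependent walkers; each step, pick a walker with probability
    # proportional to its current vertex degree and advance it one hop.
    # Returns the sequence of visited vertices (with repetition).
    vertices = list(neighbors)
    walkers = [rng.choice(vertices) for _ in range(m)]
    visited = []
    for _ in range(steps):
        weights = [len(neighbors[v]) for v in walkers]
        i = rng.choices(range(m), weights=weights, k=1)[0]
        walkers[i] = rng.choice(neighbors[walkers[i]])
        visited.append(walkers[i])
    return visited

# A triangle with a pendant vertex, just to exercise the sampler.
G = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
visited = frontier_sampling(G, m=3, steps=500, rng=random.Random(0))
```

Because the m walkers can start in different components, the joint walk degrades more gracefully on loosely connected graphs than a single walker does.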
2000
We analyze several random random walks on one-dimensional lattices using spectral analysis and probabilistic methods. Through our analysis, we develop insight into the pre-asymptotic convergence of Markov chains.
Journal of Mathematical Physics, 2012
We study the linear eigenvalue statistics of large random graphs in the regime where the mean number of edges for each vertex tends to infinity. We prove that for a rather wide class of test functions the fluctuations of linear eigenvalue statistics converge in distribution to a Gaussian random variable with zero mean and variance coinciding with the "non-Gaussian" part of the Wigner ensemble variance.
Random walks and discrete potential …, 1999
Abstract. We observe that the spectral measure of the Markov operator depends continuously on the graph in the space of graphs with uniformly bounded degree. We investigate the behaviour of the largest eigenvalue and the density of eigenvalues for ...
ESAIM: Probability and Statistics, 1999
We study the convergence to equilibrium of n-samples of independent Markov chains in discrete and continuous time. They are defined as Markov chains on the n-fold Cartesian product of the initial state space with itself, and they converge to the direct product of n copies of the initial stationary distribution. Sharp estimates for the convergence speed are given in terms of the spectrum of the initial chain. A cutoff phenomenon occurs in the sense that, as n tends to infinity, the total variation distance between the distribution of the chain and the asymptotic distribution tends to 1 or 0 at all times. As an application, an algorithm is proposed for producing an n-sample of the asymptotic distribution of the initial chain, with an explicit stopping test.
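The cutoff behaviour described above can be observed numerically for the simplest case: n independent copies of a two-state chain. For iid copies the likelihood ratio depends only on the number of ones, so the total variation distance between product measures reduces to one between binomials. The chain parameters and times below are assumed for illustration:

```python
from math import comb

def binom_pmf(n, p, k):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def tv_product(n, p, q):
    # TV distance between n iid Bernoulli(p) copies and n iid Bernoulli(q)
    # copies, computed as the TV distance between Binomial(n, p) and
    # Binomial(n, q) (valid because the likelihood ratio is a function of
    # the number of ones only).
    return 0.5 * sum(abs(binom_pmf(n, p, k) - binom_pmf(n, q, k))
                     for k in range(n + 1))

# Two-state chain with transition matrix [[3/4, 1/4], [1/4, 3/4]], started
# in state 0: P(X_t = 1) = 0.5 * (1 - 0.5**t), second eigenvalue 1/2,
# stationary law (1/2, 1/2).
n = 200
dist = {t: tv_product(n, 0.5 * (1 - 0.5**t), 0.5) for t in (1, 4, 12)}
# Cutoff: the 200-fold product is far from equilibrium at t = 1 and
# essentially mixed by t = 12, with the transition happening sharply.
```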
Journal of Physics A: Mathematical and Theoretical
We consider discrete-time Markov chains and study large deviations of the pair empirical occupation measure, which is useful for computing fluctuations of pure-additive and jump-type observables. We provide an exact expression for the finite-time moment generating function, which splits into cycle and path contributions, and for the scaled cumulant generating function of the pair empirical occupation measure, via a graph-combinatorial approach. The expression obtained allows us to give a physical interpretation of interaction and entropic terms, and of the Lagrange multipliers, and may serve as a starting point for sub-leading asymptotics. We illustrate the use of the method for a simple two-state Markov chain.
Revista Matemática Iberoamericana, 2000
This paper studies on- and off-diagonal upper estimates and two-sided transition probability estimates for random walks on weighted graphs.
Computing Research Repository, 2009
Many applications in network analysis require algorithms to sample uniformly at random from the set of all graphs with a prescribed degree sequence. We present a Markov chain based approach which converges to the uniform distribution over all realizations for both the directed and undirected case. It remains an open challenge whether these Markov chains are rapidly mixing. For the …
Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 2009
The aim of this article is to discuss some of the notions and applications of random walks on finite graphs, especially as they apply to random graphs. In this section we give some basic definitions; in Section 2 we review applications of random walks in computer science; and in Section 3 we focus on walks in random graphs. Given a graph $G = (V, E)$, let $d_G(v)$ denote the degree of vertex $v$ for all $v \in V$. The random walk $W_v = (W_v(t),\ t = 0, 1, \ldots)$ is defined as follows: $W_v(0) = v$, and given $x = W_v(t)$, $W_v(t+1)$ is a randomly chosen neighbour of $x$. When one thinks of a random walk, one often thinks of Pólya's classical result for a walk on the $d$-dimensional lattice $\mathbb{Z}^d$, $d \geq 1$. In this graph two vertices $x = (x_1, x_2, \ldots, x_d)$ and $y = (y_1, y_2, \ldots, y_d)$ are adjacent iff there is an index $i$ such that (i) $x_j = y_j$ for $j \neq i$ and (ii) $|x_i - y_i| = 1$. Pólya [33] showed that if $d \leq 2$ then a walk starting at the origin returns to the origin with probability 1, and that if $d \geq 3$ then it returns with probability $p(d) < 1$. See also Doyle and Snell [22]. A random walk on a graph $G$ defines a Markov chain on the vertices $V$. If $G$ is a finite, connected and non-bipartite graph, then this chain has a stationary distribution $\pi$ given by $\pi_v = d_G(v)/(2|E|)$. Thus if $P^{(t)}_v(w) = \Pr(W_v(t) = w)$, then $\lim_{t\to\infty} P^{(t)}_v(w) = \pi_w$, independent of the starting vertex $v$. In this paper we only consider finite graphs, and we will focus on two aspects of a random walk: the mixing time and the cover time.
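The stationary law pi_v = d(v)/(2|E|) stated above is easy to verify empirically: on a small connected non-bipartite graph, occupation frequencies of a long walk converge to it. The graph (a triangle with a pendant vertex) and run length are illustrative choices:

```python
import random

def occupation_frequencies(neighbors, steps, rng):
    # Run a simple random walk and record how often each vertex is visited.
    counts = {v: 0 for v in neighbors}
    x = next(iter(neighbors))
    for _ in range(steps):
        x = rng.choice(neighbors[x])
        counts[x] += 1
    return {v: c / steps for v, c in counts.items()}

# Triangle (0, 1, 2) with a pendant vertex 3: connected and non-bipartite,
# so the walk converges to pi_v = d(v) / (2|E|) = (3/8, 2/8, 2/8, 1/8).
G = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
two_E = sum(len(ns) for ns in G.values())          # 2|E| = 8
pi = {v: len(ns) / two_E for v, ns in G.items()}
freq = occupation_frequencies(G, 200000, random.Random(0))
# freq[v] should be close to pi[v] for every vertex v.
```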
Springer Series in Synergetics, 2011
Journal of Physics A: Mathematical and General, 2005
Random walks on graphs are widely used in all sciences to describe a great variety of phenomena where dynamical random processes are affected by topology. In recent years, relevant mathematical results have been obtained in this field, and new ideas have been introduced, which can be fruitfully extended to different areas and disciplines. Here we aim at giving a brief but comprehensive perspective of these progresses, with a particular emphasis on physical aspects. Contents: 1 Introduction; 2 Mathematical description of graphs; 3 The random walk problem; 4 The generating functions; 5 Random walks on finite graphs; 6 Infinite graphs; 7 Random walks on infinite graphs; 8 Recurrence and transience: the type problem; 9 The local spectral dimension; 10 Averages on infinite graphs; 11 The type problem on the average; 12 The average spectral dimension; 13 A survey of analytical results on specific networks; 13.1 Renormalization techniques …