Epidemic-style (gossip-based) techniques have recently emerged as a scalable class of protocols for peer-to-peer reliable multicast dissemination in large process groups. These protocols provide probabilistic guarantees on reliability and scalability. However, popular implementations of epidemic-style dissemination are reputed to suffer from two major drawbacks: (a) (Network Overhead) when deployed on a WAN-wide or VPN-wide scale, they generate a large number …
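Although the abstract is truncated in the source, the basic epidemic mechanism is easy to illustrate. The sketch below is a minimal push-gossip simulation (not the paper's protocol): in each round every informed node forwards the message to one uniformly random node, and the number of rounds to inform all n nodes grows roughly as log n.

```python
import math
import random

def push_gossip_rounds(n, seed=None):
    """Basic push gossip: each round, every informed node contacts one
    uniformly random node. Returns rounds until all n nodes are informed."""
    rng = random.Random(seed)
    informed = {0}              # node 0 holds the message initially
    rounds = 0
    while len(informed) < n:
        targets = {rng.randrange(n) for _ in informed}
        informed |= targets
        rounds += 1
    return rounds

if __name__ == "__main__":
    for n in (100, 1000, 10000):
        avg = sum(push_gossip_rounds(n, seed=s) for s in range(20)) / 20
        print(f"n={n:6d}  avg rounds={avg:5.1f}  log2(n)={math.log2(n):.1f}")
```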
We consider a queueing system with multiple heterogeneous servers serving a multiclass population. The classes are distinguished by their time costs. All customers have i.i.d. service requirements. Arriving customers do not see the instantaneous queue occupancy. Arrivals are randomly routed to one of the servers and the routing probabilities are determined centrally to optimize the expected waiting cost. This is, in general, a difficult optimization problem and we obtain the structure of the routing matrix. Next we consider a system in which each queue charges an admission price. The arrivals are routed randomly to minimize an individual objective function that includes the expected waiting cost and the admission price. Once again, we obtain the structure of the equilibrium routing matrix for this case. Finally, we determine the admission prices to make the equilibrium routing probability matrix equal to a given optimal routing probability matrix.
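The centralized routing problem can be made concrete with a small numerical sketch. The version below is heavily simplified relative to the paper: a single class, two servers modelled as M/M/1 queues, and the standard sojourn-time formula 1/(mu - lambda_i); all rates and names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

lam = 1.5                  # total arrival rate (assumed)
mu = np.array([1.0, 2.0])  # heterogeneous service rates (assumed)

def expected_wait_cost(p):
    """Mean sojourn-time cost sum_i p_i / (mu_i - p_i*lam), M/M/1 queues."""
    rates = mu - p * lam
    if np.any(rates <= 1e-9):
        return 1e6         # routing would make some queue unstable
    return float(np.sum(p / rates))

# optimize the routing probabilities over the probability simplex
cons = ({"type": "eq", "fun": lambda p: p.sum() - 1.0},)
res = minimize(expected_wait_cost, x0=np.array([0.5, 0.5]),
               bounds=[(0, 1)] * 2, constraints=cons)
print("routing probabilities:", res.x, " cost:", res.fun)
```

Unsurprisingly, the optimizer sends a larger fraction of traffic to the faster server, but not all of it: splitting the load keeps both waiting times moderate.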
Gossip-based protocols have received considerable attention for broadcast applications due to their attractive scalability and reliability properties. The reliability of probabilistic gossip schemes studied so far depends on each user having knowledge of the global membership and choosing gossip targets uniformly at random. The requirement of global knowledge is undesirable in large-scale distributed systems. In this paper, we present a novel peer-to-peer membership service which operates in a completely decentralized manner in that nobody has global knowledge of membership. However, membership information is replicated robustly enough to support gossip with high reliability. Our scheme is completely self-organizing in the sense that the size of local views naturally converges to the 'right' value for gossip to succeed. This 'right' value is a function of system size, but is achieved without any node having to know the system size. We present the design, theoretical analysis and preliminary evaluation of SCAMP. Simulations show that its performance is comparable to that of previous schemes which use global knowledge of membership at each node.
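A simplified sketch of SCAMP-style subscription forwarding follows. The keep-probability 1/(1 + |view|) and the idea of the contact creating extra copies follow the published high-level description, but the code itself is an illustrative reconstruction (the TTL guard, parameter C, and bootstrap are assumptions, not the paper's specification).

```python
import random

C = 3  # extra subscription copies created by the contact (design parameter)

def subscribe(new_id, contact, views, rng, ttl=500):
    """Forward a new node's subscription through the overlay: the contact
    sends it to every member of its view plus C extra random targets; each
    recipient keeps it w.p. 1/(1+|view|), otherwise forwards it onward."""
    views[new_id] = {contact}           # new node starts by knowing its contact
    base = list(views[contact])
    pending = base + [rng.choice(base) for _ in range(C)]
    while pending and ttl > 0:
        ttl -= 1
        node = pending.pop()
        if node == new_id:
            continue
        view = views[node]
        if new_id not in view and rng.random() < 1.0 / (1 + len(view)):
            view.add(new_id)            # keep the subscription
        else:
            pending.append(rng.choice(list(view)))  # forward it

rng = random.Random(1)
views = {0: {1}, 1: {0}}                # two-node bootstrap
for nid in range(2, 200):
    subscribe(nid, rng.randrange(nid), views, rng)
sizes = [len(v) for v in views.values()]
print("mean view size:", sum(sizes) / len(sizes))  # grows roughly like log(n)
```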
Contents: 3.1 Queues with Correlated Inputs; 3.2 Queues with Many Sources and Power-Law Scalings; 3.3 Queues with Large Buffers and Power-Law Scalings.
In recent work, Jon Kleinberg considered a small-world network model consisting of a d-dimensional lattice augmented with shortcuts. The probability of a shortcut being present between two points decays as a power, r^{-α}, of the distance r between them. Kleinberg showed that greedy routing is efficient if α = d and that there is no efficient decentralised routing algorithm if α ≠ d. The results were extended to a continuum model by Franceschetti and Meester. In our work, we extend the result to more realistic models constructed from a Poisson point process, wherein each point is connected to all its neighbours within some fixed radius, as well as possessing random shortcuts to more distant nodes as described above.
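The phenomenon is easy to reproduce in the simplest setting: a one-dimensional ring rather than the Poisson continuum model of the paper. In the sketch below (all names and parameters illustrative), each node gets one shortcut whose length r is drawn with probability proportional to r^{-α}; greedy routing is then run between random pairs. The α = d = 1 case should yield markedly shorter routes.

```python
import random

def build_shortcuts(n, alpha, rng):
    """One shortcut per node on an n-node ring; shortcut length r is drawn
    with probability proportional to r**(-alpha)."""
    dists = list(range(1, n // 2 + 1))
    weights = [r ** (-alpha) for r in dists]
    return [(i + rng.choices(dists, weights)[0]) % n for i in range(n)]

def greedy_hops(n, shortcuts, src, dst):
    """Greedy routing: hop to whichever neighbour (ring or shortcut) is
    closest to the destination in ring distance."""
    ring = lambda a, b: min((a - b) % n, (b - a) % n)
    cur, hops = src, 0
    while cur != dst:
        cur = min([(cur + 1) % n, (cur - 1) % n, shortcuts[cur]],
                  key=lambda v: ring(v, dst))
        hops += 1
    return hops

rng = random.Random(0)
n = 2000
for alpha in (0.5, 1.0, 2.0):      # alpha = d = 1 is the efficient case
    sc = build_shortcuts(n, alpha, rng)
    avg = sum(greedy_hops(n, sc, rng.randrange(n), rng.randrange(n))
              for _ in range(200)) / 200
    print(f"alpha={alpha}: average greedy hops = {avg:.1f}")
```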
Let (X_k), k ∈ ℕ, be a sequence of i.i.d. random variables taking values in some set, and consider the problem of estimating the law of X_1 in a Bayesian framework. We prove that under mild conditions on the support of the prior, the sequence of posterior distributions satisfies a moderate deviation principle.
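For reference, the scalar prototype of a moderate deviation principle reads as follows (this is the standard textbook formulation for centred i.i.d. sums; the paper's theorem concerns posterior distributions and is more general):

```latex
% For centred i.i.d. sums S_n = X_1 + ... + X_n - n E[X_1] with variance
% sigma^2, and any speed b_n with sqrt(n) << b_n << n,
\[
  \lim_{n\to\infty} \frac{n}{b_n^{2}}
  \log \mathbb{P}\!\left( \frac{S_n}{b_n} \in \Gamma \right)
  \;=\; - \inf_{x \in \Gamma} \frac{x^{2}}{2\sigma^{2}}
\]
% for suitably regular (I-continuity) sets Gamma. The moderate regime
% interpolates between the CLT (b_n ~ sqrt(n)) and large deviations (b_n ~ n).
```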
In this paper, we analyze the performance of random load resampling and migration strategies in parallel server systems. Clients initially attach to an arbitrary server, but may switch servers independently at random instants of time in an attempt to improve their service rate. This approach to load balancing contrasts with traditional approaches where clients make smart server selections upon arrival (e.g., Join-the-Shortest-Queue policy and variants thereof). Load resampling is particularly relevant in scenarios where clients cannot predict the load of a server before being actually attached to it. An important example is in wireless spectrum sharing where clients try to share a set of frequency bands in a distributed manner.
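A minimal simulation of random resampling follows (the migration rule and parameters are illustrative assumptions, not the paper's model): each step, one client samples a uniformly random server and migrates only if that strictly improves its own occupancy.

```python
import random

def resample(n_clients, n_servers, steps, seed=0):
    """Each step, a uniformly chosen client samples a uniform random server
    and migrates iff doing so strictly reduces the load it experiences."""
    rng = random.Random(seed)
    server_of = [rng.randrange(n_servers) for _ in range(n_clients)]
    load = [0] * n_servers
    for s in server_of:
        load[s] += 1
    for _ in range(steps):
        c = rng.randrange(n_clients)
        cur, cand = server_of[c], rng.randrange(n_servers)
        if load[cand] + 1 < load[cur]:      # strictly better after moving
            load[cur] -= 1
            load[cand] += 1
            server_of[c] = cand
    return min(load), max(load)

print("(min, max) load:", resample(n_clients=1000, n_servers=20, steps=100_000))
```

Starting from an arbitrary (unbalanced) random attachment, the min and max loads converge toward each other without any client ever observing global state.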
Stationary tail probabilities in exponential server tandems with renewal arrivals
Queueing Systems, 1996
A. Ganesh (Department of Computer Science, University of Edinburgh, Edinburgh EH9 3JZ, UK; [email protected]) and V. Anantharam (EECS Department, University of California, Berkeley, CA 94720, USA; [email protected]).
Call admission in ATM networks involves a trade-off between ensuring an adequate quality of service to users and exploiting the scale efficiencies of statistical multiplexing. Achieving a good trade-off requires some knowledge of the source traffic. Its effective bandwidth has been proposed as a measure that captures characteristics which are relevant to quality of service provisioning. The effective bandwidth of a source is not known a priori, but needs to be estimated from an observation of its output. We show that direct estimators that have been proposed for this purpose are biased when the source traffic is autocorrelated. By explicitly computing the bias for auto-regressive and Markov sources, we devise a bias correction scheme that does not require knowledge of the model parameters. This is achieved by exploiting a scaling property of the bias that is insensitive to model parameters, and that has the same form for both auto-regressive and Markov sources. This leads us to conjecture that the scaling property may be valid in greater generality and can be used to obtain unbiased effective bandwidth estimates for real traffic. Use of our bias correction technique enables us to obtain accurate estimates of effective bandwidths using relatively short block lengths. The latter is important both because the variance of the estimator increases with the block length, and because real traffic may well be non-stationary, requiring that estimates be obtained from short data records.
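The direct block-based estimator of the effective bandwidth alpha(theta) = (1/(theta*t)) log E[exp(theta*A(0,t))] fits in a few lines. The AR(1) source below is used only to exhibit the block-length dependence (bias) that the paper analyzes; all parameter values are illustrative assumptions.

```python
import numpy as np

def effective_bandwidth_estimate(x, theta, block):
    """Direct estimator: split the trace into blocks of length `block`,
    average exp(theta * block_sum), then take (1/(theta*block)) * log."""
    n = len(x) // block
    sums = x[: n * block].reshape(n, block).sum(axis=1)
    return np.log(np.mean(np.exp(theta * sums))) / (theta * block)

# AR(1) source with mean 1: X_t = a*X_{t-1} + (1-a)*noise_t (autocorrelated)
rng = np.random.default_rng(0)
a, n = 0.8, 200_000
noise = rng.normal(1.0, 0.5, n)
x = np.empty(n)
x[0] = noise[0]
for t in range(1, n):
    x[t] = a * x[t - 1] + (1 - a) * noise[t]

theta = 0.1
for block in (10, 100, 1000):
    print(block, effective_bandwidth_estimate(x, theta, block))
```

With positive autocorrelation, short blocks systematically underestimate the effective bandwidth, which is exactly the bias the paper's correction scheme targets.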
This paper studies the performance of contention-based medium access control (MAC) protocols. In particular, a simple and accurate technique for estimating the throughput of the IEEE 802.11 DCF protocol is developed. The technique is based on a rigorous analysis of the Markov chain that corresponds to the time evolution of the back-off processes at the contending nodes. An extension of the technique is presented to handle the case where service differentiation is provided with the use of heterogeneous protocol parameters, as, for example, in the IEEE 802.11e EDCA protocol. Our results provide new insights into the operation of such protocols. The techniques developed in the paper are applicable to a wide variety of contention-based MAC protocols.
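For orientation, here is the classical fixed-point computation for saturated 802.11 DCF (Bianchi's model). The paper develops its own, more rigorous Markov-chain analysis, so this sketch is only the standard approximation for comparison; W and m are the usual minimum contention window and backoff-stage count.

```python
def dcf_fixed_point(n, W=32, m=5):
    """Bianchi's saturated-DCF fixed point: find tau satisfying
    tau = 2(1-2p) / ((1-2p)(W+1) + p*W*(1-(2p)**m)),  p = 1-(1-tau)**(n-1).
    Solved by bisection, since the composed map is monotone in tau."""
    def T(p):
        return 2 * (1 - 2 * p) / ((1 - 2 * p) * (W + 1)
                                  + p * W * (1 - (2 * p) ** m))
    lo, hi = 1e-9, 1 - 1e-9
    for _ in range(60):
        mid = (lo + hi) / 2
        p = 1 - (1 - mid) ** (n - 1)
        if T(p) > mid:
            lo = mid
        else:
            hi = mid
    tau = (lo + hi) / 2
    return tau, 1 - (1 - tau) ** (n - 1)

for n in (5, 10, 20, 50):
    tau, p = dcf_fixed_point(n)
    print(f"n={n:3d}  attempt prob tau={tau:.4f}  collision prob p={p:.4f}")
```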
We study the efficacy of patching and filtering countermeasures in protecting a network against scanning worms. Recent work has addressed the question of detecting worm scans and generating self-certifying alerts, specifically in order to combat zero-day worms. Alerts need to be propagated in the network, and this is typically done using an overlay of dedicated servers. Alerted servers are used for filtering worm traffic and for generating and distributing patches to end hosts within their subnet. Can alerts and patches be propagated fast enough to limit the spread of the worm? The answer will depend on the speeds of the different processes, namely, worm spread, alert spread, and downloading of patches from servers. We characterize the interplay between them and establish fundamental limits on the effectiveness of these countermeasures. Specifically, we show that (i) the number of nodes eventually infected grows approximately exponentially in the ratio of infection rate to patch rate; (ii) the patch rate required to ensure a bound on the final number of infectives grows only logarithmically with the number of servers in the overlay; and (iii) we introduce the concept of a minimum broadcast curve as an abstraction of the alert dissemination process on overlays, which unifies the analytical treatment of a variety of overlay networks. The results provide engineering guidelines for the design of alert propagation and patching systems. In particular, they specify the required frequency of automatic updates, and suggest that automatic patching is feasible provided that scan rates are limited to reasonable values. The results are obtained analytically, supplemented by simulations. The simulations demonstrate the accuracy of the analytical framework established in this paper.
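A caricature of the race between worm and countermeasure can be obtained by Euler integration of simple mean-field dynamics (the rate equations and parameters below are illustrative assumptions, not the paper's model): the worm infects at rate beta per infective, and patching removes hosts, susceptible or infected, at rate mu once the alert is out.

```python
def final_infected(beta, mu, n=1e6, dt=0.01, t_max=400.0):
    """Euler integration of mean-field worm-vs-patch dynamics: the worm
    infects at rate beta*I*S/n; patching removes S and I hosts at rate mu."""
    s, i, total = n - 1.0, 1.0, 1.0
    t = 0.0
    while t < t_max and i > 1e-6:
        new_inf = beta * i * s / n * dt
        s -= new_inf + mu * s * dt
        i += new_inf - mu * i * dt
        total += new_inf
        t += dt
    return total

mu = 0.2
for ratio in (1, 2, 3, 4, 5):          # infection-to-patch rate ratio
    print(f"beta/mu={ratio}: eventually infected ~ {final_infected(ratio*mu, mu):.3g}")
```

The output illustrates finding (i): the number eventually infected grows roughly exponentially in the ratio beta/mu.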
We study how the spread of computer viruses, worms, and other self-replicating malware is affected by the logical topology of the network over which they propagate. We consider a model in which each host can be in one of three possible states: susceptible, infected, or removed (cured, and no longer susceptible to infection). We characterise how the size of the population that eventually becomes infected depends on the network topology. Specifically, we show that if the ratio of cure to infection rates is larger than the spectral radius of the graph, and the initial infected population is small, then the final infected population is also small in a sense that can be made precise. Conversely, if this ratio is smaller than the spectral radius, then we show in some graph models of practical interest (including power law random graphs) that the final infected population is large. These results yield insights into what the critical parameters are in determining virus spread in networks.
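The spectral-radius threshold is easy to probe numerically. The sketch below uses an Erdős–Rényi topology for concreteness (graph size, edge probability, and rates are assumed, not from the paper), compares the cure-to-infection ratio with the spectral radius of the adjacency matrix, and runs a discrete-time SIR simulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p_edge = 500, 0.02
A = np.triu(rng.random((n, n)) < p_edge, 1).astype(float)
A = A + A.T                               # undirected Erdos-Renyi graph
rho = float(np.linalg.eigvalsh(A)[-1])    # spectral radius (Perron eigenvalue)
print(f"spectral radius: {rho:.2f}")

def sir(A, beta, delta):
    """Discrete-time SIR on graph A: per step, each infected node infects
    each susceptible neighbour w.p. beta and is removed w.p. delta."""
    m = A.shape[0]
    state = np.zeros(m, dtype=int)        # 0=S, 1=I, 2=R
    state[0] = 1
    while (state == 1).any():
        inf = state == 1
        k = A[:, inf].sum(axis=1)         # infected neighbours of each node
        catch = (state == 0) & (rng.random(m) < 1 - (1 - beta) ** k)
        removed = inf & (rng.random(m) < delta)
        state[catch] = 1
        state[removed] = 2
    return int((state > 0).sum())         # nodes ever infected

beta = 0.02
for ratio in (0.5, 2.0):                  # delta/beta relative to rho
    delta = ratio * rho * beta
    print(f"delta/beta = {ratio}*rho -> ever infected: {sir(A, beta, delta)}")
```

With delta/beta above the spectral radius the outbreak dies out quickly; below it, a large fraction of the graph is eventually infected, matching the dichotomy in the abstract.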
We consider a discrete-time queue with general service distribution and characterize a class of arrival processes that possess a large deviation rate function that remains unchanged in passing through the queue. This invariant rate function corresponds to a kind of exponential tilting of the service distribution. We establish a large deviations analogue of quasireversibility for this class of arrival processes. Finally, we prove the existence of stationary point processes that have a probability law that is preserved by the queueing operator and conjecture that they have large deviation rate functions which belong to the class of invariant rate functions described above.
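For concreteness, here is the generic exponential-tilting construction alluded to (standard change-of-measure notation; the paper's invariant rate functions specialise it to the service distribution):

```latex
% Given a distribution mu with moment generating function M(theta), the
% exponentially tilted law and the associated Legendre-transform rate
% function are
\[
  \tilde{\mu}_\theta(\mathrm{d}x) \;=\; \frac{e^{\theta x}\,\mu(\mathrm{d}x)}{M(\theta)},
  \qquad
  M(\theta) \;=\; \int e^{\theta x}\,\mu(\mathrm{d}x),
\]
\[
  I(x) \;=\; \sup_{\theta}\,\bigl(\theta x - \log M(\theta)\bigr).
\]
```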