1991, Lecture Notes in Computer Science
A self-stabilizing protocol for constructing a rooted spanning tree in an arbitrary asynchronous network of processors that communicate through shared memory is presented. The processors have unique identifiers but are otherwise identical. The network topology is assumed to be dynamic, that is, edges can join or leave the computation before it eventually stabilizes.
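A minimal sketch of the classic shared-memory rule that constructions like this build on: every process repeatedly recomputes a (root, distance, parent) triple from its neighbours' registers, preferring the largest claimed root identifier and then the shortest distance, with a distance cap to flush out root identifiers that do not exist. The names, the distance cap, and the round-based scheduler below are illustrative assumptions, not the paper's actual protocol.

```python
# Hedged sketch: a classic self-stabilizing spanning-tree rule in a
# shared-memory style. Each process keeps (root, dist, parent) and
# repeatedly recomputes the triple from its neighbours' values.
def stabilize(ids, adj, state, rounds=100):
    """ids: unique process ids; adj: id -> neighbour ids;
    state: id -> (root, dist, parent), possibly arbitrary (corrupted)."""
    n = len(ids)
    for _ in range(rounds):
        changed = False
        for v in ids:
            best = (v, 0, None)                    # claim myself as root
            for u in adj[v]:
                r, d, _ = state[u]
                if d + 1 < n:                      # cap flushes fake root claims
                    cand = (r, d + 1, u)
                    # prefer the largest root id, then the smallest distance
                    if (cand[0], -cand[1]) > (best[0], -best[1]):
                        best = cand
            if state[v] != best:
                state[v], changed = best, True
        if not changed:
            break
    return state

# A 4-process ring started from an arbitrary state (process 1 even claims
# a root id, 9, that does not exist); all processes converge to root 4.
ids = [1, 2, 3, 4]
adj = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [3, 1]}
state = {1: (9, 5, 3), 2: (2, 0, None), 3: (7, 1, 1), 4: (4, 0, None)}
print(stabilize(ids, adj, state))
```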
Lecture Notes in Computer Science, 2009
A self-stabilizing protocol can eventually recover its intended behavior even when started from an arbitrary configuration. Most self-stabilizing protocols require every pair of neighboring processes to communicate with each other repeatedly and forever, even after converging to legitimate configurations. Such permanent communication impairs efficiency, but it is inherent to self-stabilization: if we allow a process to stop communicating with other processes, that process may initially start, and remain forever, in a state inconsistent with the states of the other processes. It is therefore challenging to minimize the number of process pairs that keep communicating after convergence. We investigate the possibility of communication-efficient self-stabilization, which allows only O(n) pairs of neighboring processes to communicate repeatedly after convergence. For spanning-tree construction, we show the following results: (a) communication-efficiency is attainable when a unique root is designated a priori, (b) communication-efficiency is impossible to attain when each process has a unique identifier but no unique root is designated, and (c) communication-efficiency becomes attainable with process identifiers if each process initially knows an upper bound on the network size.
2015 IEEE International Parallel and Distributed Processing Symposium, 2015
Routing protocols are at the core of distributed system performance, especially in the presence of faults. A classical approach to this problem is to build a spanning tree of the distributed system. Numerous spanning tree construction algorithms exist, depending on the metric to optimize (total weight, height, distance with respect to a particular process, etc.), both in fault-free and faulty environments. In this paper, we aim at optimizing the diameter of the spanning tree by constructing a minimum diameter spanning tree. We target environments subject to transient faults (i.e. faults of finite duration). Hence, we present a self-stabilizing algorithm for the minimum diameter spanning tree construction problem in the state model. Our protocol has the following attractive features. First, it is the first algorithm for this problem that operates under the unfair and distributed adversary (or daemon); in other words, no restriction is made on the asynchronous behavior of the system. Second, our algorithm needs only O(log n) bits of memory per process (where n is the number of processes), which improves the previous result by a factor of n. These features are not achieved to the detriment of the convergence time, which stays polynomial.
2005
Awareness of the need for robustness in distributed systems increases as distributed systems become integral parts of day-to-day systems. Self-stabilization and tolerance of ongoing Byzantine faults are desirable properties of a distributed system. Many distributed tasks (e.g. clock synchronization) possess efficient non-stabilizing solutions tolerating Byzantine faults or, conversely, non-Byzantine but self-stabilizing solutions. In contrast, designing algorithms that self-stabilize while at the same time tolerating an eventual fraction of permanent Byzantine failures presents a special challenge, due to the ambition of malicious nodes to hamper stabilization whenever the system tries to recover from a corrupted state. This difficulty may be indicated by the remarkably few algorithms that are resilient to both fault models. We present the first scheme that takes a Byzantine distributed algorithm and produces its self-stabilizing Byzantine counterpart, while having a relatively low overhead of O(f) communication rounds, where f is the number of actual faults. Our protocol is based on a tight Byzantine self-stabilizing pulse synchronization procedure. The synchronized pulses are used as events for initializing Byzantine agreement on every node's local state. The set of local states is used for global predicate detection. Should the global state represent an illegal system state, then the target algorithm is reset.
HAL (Le Centre pour la Communication Scientifique Directe), 2018
We propose a general scheme, called Algorithm STlC, to compute spanning-tree-like data structures on arbitrary networks. STlC is self-stabilizing and silent and, despite its generality, is also efficient. It is written in the locally shared memory model with composite atomicity, assuming the distributed unfair daemon, the weakest scheduling assumption of the model. Its stabilization time is in O(n_maxCC) rounds, where n_maxCC is the maximum number of processes in a connected component. We also exhibit polynomial upper bounds on its stabilization time in steps and process moves, holding for large classes of instantiations of Algorithm STlC. We illustrate the versatility of our approach by proposing several such instantiations that efficiently solve classical problems such as leader election, as well as unconstrained and shortest-path spanning tree constructions.
We assume a link-register communication model under read/write atomicity, where every process can read from but cannot write into its neighbours' registers. The paper presents two self-stabilizing protocols for basic fair and reliable link communication primitives. The first primitive guarantees that any process writes a new value in its register(s) only after all its neighbours have read the previous value, whatever the initial scheduling of processes' actions. The second primitive implements a weak rendezvous communication mechanism by using an alternating bit protocol: whenever a process consecutively writes n values (possibly the same ones) in a register, each neighbour is guaranteed to read each value from the register at least once.
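A hedged sketch of the kind of read/write handshake these primitives describe, for a single writer and reader: the writer pairs each value with a flag bit and overwrites its register only once the reader has echoed that bit back into its own register. The class and method names below are illustrative assumptions, not the paper's protocol.

```python
# Hedged sketch: a flag/ack handshake over single-writer registers.
class Writer:
    def __init__(self):
        self.reg = (None, 0)               # (value, flag) readable by the neighbour

    def try_write(self, value, ack_bit):
        _, flag = self.reg
        if ack_bit == flag:                # neighbour has read the previous value
            self.reg = (value, 1 - flag)   # publish the new value, flip the flag
            return True
        return False                       # neighbour has not caught up yet

class Reader:
    def __init__(self):
        self.ack = 0                       # ack register readable by the writer

    def read(self, writer_reg):
        value, flag = writer_reg
        self.ack = flag                    # acknowledge by echoing the flag
        return value

# Interleaved usage: the second write is refused until the reader acknowledges.
w, r = Writer(), Reader()
assert w.try_write("a", r.ack)             # first write goes through
assert not w.try_write("b", r.ack)         # refused: "a" has not been read yet
print(r.read(w.reg))                       # reader consumes "a"
assert w.try_write("b", r.ack)             # now the next write is allowed
```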
Proceedings of the 3rd International Conference on Internet of Things, Big Data and Security, 2018
Consensus algorithms make a set of network nodes converge asymptotically to the same state, which depends on some function of their initial states. At the core of these algorithms is a linear iterative scheme where each node updates its current state based on its previous state and the states of its neighbors in the network. In this paper we review a proposal from control theory which uses linear iterative schemes of asymptotic consensus and observability theory to compute consensus states in a finite number of iterations. This proposal has low communication requirements, which makes it attractive for consensus problems in a limited-resource environment such as edge computing. However, it assumes static networks, unlike wireless edge computing networks, which are often dynamic and prone to attacks. The main purpose of this paper is to assess the network conditions and attack scenarios under which this algorithm can still be considered useful in practice for consensus problems in IoT applications. We introduce some new lower and exact bounds which further improve the communication performance of the algorithm. We also make technical contributions on how to speed up the mitigation of malicious activities and the handling of network instabilities. Numerical results confirm the communication performance of the algorithm and the existence of scenarios where the system can be considered cost-effectively resilient to errors injected intentionally or unintentionally.
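A hedged sketch of the linear iterative scheme referred to above: each node repeatedly replaces its state by its previous state plus a weighted sum of the differences with its neighbours' states. The graph, step size, and number of iterations below are illustrative assumptions.

```python
# Hedged sketch: synchronous linear consensus iteration
#   x_i(k+1) = x_i(k) + eps * sum_{j in N(i)} (x_j(k) - x_i(k))
def consensus_step(x, adj, eps):
    """One synchronous update; x: node -> state, adj: node -> neighbours."""
    return {i: x[i] + eps * sum(x[j] - x[i] for j in adj[i]) for i in x}

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}     # a 4-node path graph
x = {0: 0.0, 1: 4.0, 2: 8.0, 3: 12.0}            # initial states, average 6.0
eps = 0.25                                       # step size below 1/(max degree)

for _ in range(200):
    x = consensus_step(x, adj, eps)

print({i: round(v, 3) for i, v in x.items()})    # every state is close to 6.0
```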
Journal of Parallel and Distributed Computing, 2002
We define a finite-state message-passing model using guarded commands. This model is particularly appropriate for defining and reasoning about self-stabilizing protocols, due to the well-known result that self-stabilizing protocols on unbounded-channel models must have infinitely many legitimate states. We argue that our model is more realistic than other models, and demonstrate its use with a simple example. We conclude by discussing how self-stabilizing protocols defined on this model might be implemented directly on actual networks.
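A hedged sketch of what a guarded-command process can look like in such a finite-state message-passing setting: each process is a set of (guard, action) pairs, and a scheduler repeatedly fires one enabled action atomically. The two token-passing rules and the tiny random scheduler are illustrative assumptions, not the paper's example.

```python
# Hedged sketch: guarded commands over message-passing processes p and q.
import random

state = {"p": {"has_token": True, "inbox": []},
         "q": {"has_token": False, "inbox": []}}
peer = {"p": "q", "q": "p"}

def enabled_actions(name):
    me, other = state[name], state[peer[name]]
    actions = []
    if me["has_token"]:
        # Guard: I hold the token.  Action: release it and send it to my peer.
        actions.append(lambda: (me.update(has_token=False),
                                other["inbox"].append("token")))
    if "token" in me["inbox"]:
        # Guard: a token message is waiting.  Action: consume it and hold the token.
        actions.append(lambda: (me["inbox"].remove("token"),
                                me.update(has_token=True)))
    return actions

for _ in range(10):                     # a tiny nondeterministic scheduler
    name = random.choice(["p", "q"])
    actions = enabled_actions(name)
    if actions:
        random.choice(actions)()

print(state)                            # exactly one token, held or in transit
```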
Many real-world networks show a scale-free degree distribution, a structure that is known to be very stable in case of random failures. Unfortunately, the very same structure makes the network very sensitive to targeted attacks on its high-degree vertices. Under attack it is instead preferable to have a Poisson or normal degree distribution. The paper addresses the question of whether it is possible to design a network protocol that enables the vertices of a decentralized network to switch its topology according to whether it is attacked or merely suffers from random failures. We further require that this protocol is oblivious to the kind of removal scenario, i.e., that it does not contain any kind of attack detection. In this article we show how to design such a protocol that is oblivious, needs only local information, and is capable of switching between the two structures reasonably fast. The protocol is easy to implement and keeps the average degree in the graph constant. After an analytical and empirical evaluation of the results, the paper concludes with possible applications to internet-based communication networks.
Lecture Notes in Computer Science, 2010
Self-stabilization is a versatile approach to fault-tolerance since it permits a distributed system to recover from any transient fault that arbitrarily corrupts the contents of all memories in the system. Byzantine tolerance is an attractive feature of distributed systems that permits coping with arbitrary malicious behaviors. We consider the well-known problem of constructing a breadth-first spanning tree in this context. Combining these two properties proves difficult: we demonstrate that it is impossible to contain the impact of Byzantine nodes in a strictly or strongly stabilizing manner. We then adopt the weaker scheme of topology-aware strict stabilization and we present a similar weakening of strong stabilization. We prove that the classical min + 1 protocol has optimal Byzantine containment properties with respect to these criteria.
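The min + 1 rule mentioned above is simple enough to sketch: the root keeps level 0, and every other node repeatedly sets its level to one plus the minimum of its neighbours' levels, choosing such a neighbour as its parent. The fault-free, round-based rendering below is an illustrative assumption and does not model the Byzantine containment that is the paper's subject.

```python
# Hedged sketch: the classical "min + 1" breadth-first spanning tree rule.
def min_plus_one(adj, root, level, rounds=50):
    """adj: node -> neighbours; level: node -> int, possibly corrupted."""
    parent = {v: None for v in adj}
    for _ in range(rounds):
        for v in adj:
            if v == root:
                level[v] = 0
            else:
                u = min(adj[v], key=lambda w: level[w])    # closest neighbour
                level[v], parent[v] = level[u] + 1, u
    return level, parent

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}   # a 4-cycle
level = {0: 7, 1: 3, 2: 9, 3: 1}                      # arbitrary initial levels
print(min_plus_one(adj, 0, level))                    # BFS levels 0,1,1,2 from root 0
```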
SIAM Journal on Computing, 1997
Self-stabilizing message-driven protocols are defined and discussed. The class weak-exclusion, which contains many natural tasks such as ℓ-exclusion and token-passing, is defined, and it is shown that in any execution of any self-stabilizing protocol for a task in this class, the configuration size must grow at least at a logarithmic rate. This lower bound is valid even if the system is supported by a time-out mechanism that prevents communication deadlocks. We then present three self-stabilizing message-driven protocols for token-passing. The rate of growth of configuration size for all three protocols matches the aforementioned lower bound. Our protocols are presented for two-processor systems but can be easily adapted to rings of arbitrary size. Our results have an interesting interpretation in terms of automata theory.
2004
Maintaining spanning trees in a distributed fashion is central to many networking applications. In this paper, we propose a self-stabilizing algorithm for maintaining a spanning tree in a distributed fashion for a completely connected topology. Our algorithm requires a node to process O(1) messages on average in one cycle, as compared to previous algorithms, which need to process messages from every neighbor, resulting in O(n) work in a completely connected topology. Our algorithm also stabilizes faster than previous approaches. Our approach demonstrates a new methodology which uses the idea of core and non-core states for developing self-stabilizing algorithms. The algorithm is also useful in security-related applications due to its unique design.
CoRR, 2010
Self-stabilization is a versatile approach to fault-tolerance since it permits a distributed system to recover from any transient fault that arbitrarily corrupts the contents of all memories in the system. Byzantine tolerance is an attractive feature of distributed systems that permits coping with arbitrary malicious behaviors. Combining these two properties has proved difficult: it is impossible to contain the spatial impact of Byzantine nodes in a self-stabilizing context for global tasks such as tree orientation and tree construction. We present and illustrate a new concept of Byzantine containment in stabilization. Our property, called Strong Stabilization, makes it possible to contain the impact of Byzantine nodes if they actually perform too many Byzantine actions. We derive impossibility results for strong stabilization and present strongly stabilizing protocols for tree orientation and tree construction that are optimal with respect to the number of Byzantine nodes that can be tolerated in a self-stabilizing context.
Parallel Processing Letters, 2008
We provide self-stabilizing algorithms to obtain and maintain a maximal matching, maximal independent set or minimal dominating set in a given system graph. They converge in linear rounds under a distributed or synchronous daemon. They can be implemented in an ad hoc network by piggy-backing on the beacon messages that nodes already use.
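As an illustration of one of the three structures, here is a hedged sketch of a standard ID-based self-stabilizing rule for a maximal independent set: a node joins the set when no neighbour is in it and leaves when a higher-ID neighbour is in it. It is a generic rule in the spirit of the paper, not necessarily its exact algorithm, and the graph and scheduling below are illustrative assumptions.

```python
# Hedged sketch: ID-based self-stabilizing maximal independent set rule.
def stabilize_mis(adj, in_set, rounds=50):
    """adj: id -> neighbour ids; in_set: id -> bool, arbitrary initial values."""
    for _ in range(rounds):
        changed = False
        for v in adj:
            if in_set[v] and any(in_set[u] and u > v for u in adj[v]):
                in_set[v] = False          # leave: a higher-ID neighbour is in
                changed = True
            elif not in_set[v] and not any(in_set[u] for u in adj[v]):
                in_set[v] = True           # join: no neighbour is in the set
                changed = True
        if not changed:
            break
    return in_set

adj = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}   # a triangle plus a pendant node
in_set = {1: True, 2: True, 3: True, 4: False}        # corrupted start: not independent
print(stabilize_mis(adj, in_set))                     # stabilizes with only node 3 in the set
```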
arXiv, 2018
We study the problem of privately emulating shared memory in message-passing networks. The system includes clients that store and retrieve replicated information on N servers, out of which e are malicious. When a client accesses a malicious server, the data field of that server's response might differ from the value it originally stored. However, all other control variables in the server's reply, as well as its protocol actions, follow the server algorithm. For the coded atomic storage (CAS) algorithms by Cadambe et al., we present an enhancement that ensures no information leakage and malicious fault-tolerance. We also consider recovery after the occurrence of transient faults that violate the assumptions according to which the system is to behave. After their last occurrence, transient faults leave the system in an arbitrary state (while the program code stays intact). We present a self-stabilizing algorithm, which recovers after the occurrence of transient faults. This addition to C...
Self-stabilization was introduced by Dijkstra: it is the property of a system to eventually recover a legitimate state by itself after any perturbation modifying the memory state. This paper proposes a dynamic, automatic self-stabilizing protocol. The algorithm runs in the fully asynchronous message-passing model, in which messages can also be corrupted. The principle of the algorithm is to regularly compute a global state and, if necessary, to generate a global reset. When the system is stabilized, the message complexity is O(max(δ·m, n²)), where δ is the degree of the communication graph, m the number of links, and n the number of processes. This complexity allows a possible implementation.
Lecture Notes in Computer Science, 1994
We describe a method for transforming asynchronous network protocols into protocols that can sustain any transient fault, i.e., become self-stabilizing. We combine the known notion of local checking with a new notion of internal reset, and prove that given any self-stabilizing internal reset protocol, any locally-checkable protocol can be made self-stabilizing. Our proof is constructive in the sense that we provide explicit code. The method applies to many practical network problems, including spanning tree construction, topology update, and virtual circuit setup.
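A hedged sketch of the local-checking idea the method builds on: a global correctness condition is expressed as a conjunction of per-link predicates over the states of the two endpoints, and any violated link triggers a reset. The concrete predicate below (neighbouring spanning-tree distances differ by at most one) is an illustrative assumption.

```python
# Hedged sketch: local checking as a conjunction of per-link predicates.
def locally_check(adj, state, link_ok):
    """Return the links whose local predicate is violated."""
    return [(u, v) for u in adj for v in adj[u]
            if u < v and not link_ok(state[u], state[v])]

# Illustrative local predicate: neighbouring tree distances differ by at most one.
link_ok = lambda su, sv: abs(su["dist"] - sv["dist"]) <= 1

adj = {0: [1], 1: [0, 2], 2: [1]}                         # a 3-node path
state = {0: {"dist": 0}, 1: {"dist": 1}, 2: {"dist": 5}}  # corrupted at node 2

bad_links = locally_check(adj, state, link_ok)
if bad_links:
    print("reset triggered by links:", bad_links)         # -> [(1, 2)]
```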
2019
We present results on the last topic we collaborated on with our late friend, Professor Ajoy Kumar Datta (1958-2019). In this work, we shed new light on a self-stabilizing wave algorithm proposed by Colette Johnen in 1997. This algorithm constructs a BFS spanning tree in any connected rooted network. It remains the best existing self-stabilizing BFS spanning tree construction in terms of memory requirement, i.e., it only requires Θ(1) bits per edge. However, it had only been proven assuming a weakly fair daemon, and its stabilization time was unknown. Here, we study a slightly modified version of this algorithm, keeping the same memory requirement. We prove the self-stabilization of this variant under the distributed unfair daemon and show a stabilization time of O(D·n²) rounds, where D is the network diameter and n the number of processes.
2012
Recent years have seen significant interest in designing networks that are self-healing in the sense that they can automatically recover from adversarial attacks. Previous work shows that it is possible for a network to automatically recover even when an adversary repeatedly deletes nodes in the network. However, there have not yet been any algorithms that self-heal in the case where an adversary takes over nodes in the network. In this paper, we address this gap. In particular, we describe a communication network over n nodes that ensures the following properties, even when an adversary controls up to t ≤ (1/8 − ε)n nodes, for any non-negative ε. First, the network provides point-to-point communication with bandwidth and latency costs that are asymptotically optimal. Second, the expected total number of message corruptions is O(t (log* n)²) before the adversarially controlled nodes are effectively quarantined so that they cause no more corruptions. Empirical results show that our algorithm can reduce the bandwidth cost by up to a factor of 70.
Proceedings of the twenty-fifth annual ACM symposium on Principles of distributed computing, 2006
Byzantine agreement algorithms typically assume implicit initial state consistency and synchronization among the correct nodes and then operate in coordinated rounds of information exchange to reach agreement based on the input values. The implicit initial assumptions enable correct nodes to infer about the progression of the algorithm at other nodes from their local state. This paper considers a more severe fault model than permanent Byzantine failures, one in which the system can in addition be subject to severe transient failures that can temporarily throw the system out of its assumption boundaries. When the system eventually returns to behave according to the presumed assumptions it may be in an arbitrary state in which any synchronization among the nodes might be lost, and each node may be at an arbitrary state. We present a self-stabilizing Byzantine agreement algorithm that reaches agreement among the correct nodes in optimal time, by using only the assumption of bounded message transmission delay. In the process of solving the problem, two additional important and challenging building blocks were developed: a unique self-stabilizing protocol for assigning consistent relative times to protocol initialization and a Reliable Broadcast primitive that progresses at the speed of actual message delivery time.
Lecture Notes in Computer Science, 2005
Maintaining spanning trees in a distributed fashion is central to many networking applications, and self-stabilizing algorithms provide an elegant way of doing so in fault-prone environments. In this paper, we propose a self-stabilizing algorithm for maintaining a spanning tree in a distributed fashion for a completely connected topology. Our algorithm requires a node to process O(1) messages on average in one asynchronous round, as compared to previous algorithms, which need to process messages from every neighbor, resulting in O(n) work in a completely connected topology. Our algorithm also stabilizes faster than previous approaches. Our approach demonstrates a new methodology which uses the idea of core and non-core states for developing self-stabilizing algorithms. The algorithm is also useful in security-related applications due to its unique design.