2005
Abstract. Data replication is a key technology in distributed systems that enables higher availability and performance. This article surveys optimistic replication algorithms. They allow replica contents to diverge in the short term to support concurrent work practices and tolerate failures in low-quality communication links. The importance of such techniques is increasing as collaboration through wide-area and mobile networks becomes popular.
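A minimal illustration of the idea surveyed above (this sketch is mine, not the article's): a last-writer-wins register, one of the simplest optimistic replication schemes. Each replica accepts writes locally, so contents may diverge in the short term, and replicas converge later by keeping the value with the highest (clock, replica-id) stamp.

```python
class LWWRegister:
    """Last-writer-wins register: a toy optimistic replica (illustrative names)."""

    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.value = None
        self.stamp = (0, replica_id)  # (logical clock, tiebreaker)

    def write(self, value, clock):
        """Apply a local write immediately; no coordination with other replicas."""
        self.stamp = (clock, self.replica_id)
        self.value = value

    def merge(self, other):
        """Reconcile with another replica: the higher stamp wins."""
        if other.stamp > self.stamp:
            self.value, self.stamp = other.value, other.stamp

# Two replicas diverge while disconnected...
a, b = LWWRegister("a"), LWWRegister("b")
a.write("draft-1", clock=1)
b.write("draft-2", clock=2)
# ...and converge once they exchange state.
a.merge(b)
b.merge(a)
assert a.value == b.value == "draft-2"
```

The merge is commutative and idempotent, which is why the replicas can exchange state in any order and still converge.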
2002
Replication maintains replicas (copies) of critical data on multiple computers and allows access to any one of them. It is the critical enabling technology of distributed services, improving both their performance and availability. Availability is improved by allowing access to the data even when some of the replicas or network links are unavailable. Performance is improved in two ways. First, users can access nearby replicas, avoiding expensive remote network accesses and reducing latency.
Lecture Notes in Computer Science, 2003
In this paper, a protocol is proposed that provides the advantages of lazy approaches while forestalling their traditionally found disadvantages. Thus, our approach reduces abort rates and improves the performance of the system. It can also compute the protocol threshold dynamically, approximating its results to the optimal ones. In addition, fault tolerance has been included in the algorithm using a pseudo-optimistic approach that avoids blocking any local activity and minimizes the interference with any node in the system. A complete description of these algorithms is presented here. Finally, an empirical validation is also discussed.
… Journal of database …, 2010
This paper proposes a new optimistic replication strategy for maintaining eventual consistency in large-scale mobile distributed database systems. The proposed strategy consists of three components in order to support the characteristics of such systems: a replication architecture, an updates-propagation protocol, and a replication method. The purpose of the replication architecture is to provide a comprehensive infrastructure for distributing replicas among wide areas. The purpose of the propagation protocol is to transfer data updates between the components of the replication architecture in a manner that achieves the eventual consistency of data and improves the availability of recent updates to all replicas. The replication method provides a mechanism for implementing the propagation protocol and automating the propagation of updates among the different replicas. The effectiveness of the proposed strategy is compared with two baseline replication strategies, and the results show that it reduces update-propagation delay. The results also show that the horizontal extension provided by the proposed strategy is more suitable than the vertical extension for large-scale mobile distributed database systems.
2007
Abstract Replication is attractive for scaling databases up, as it does not require costly equipment and it enables fault tolerance. However, as the latency gap between local and remote accesses continues to widen, maintaining consistency between replicas remains a performance and complexity bottleneck. Optimistic replication (OR) addresses these problems.
Lecture Notes in Computer Science, 2009
In this paper, we propose a reliable group communication solution dedicated to a data replication algorithm to adapt it to unreliable environments. The data replication algorithm, named Adaptive Data Replication (ADR), already has an adaptiveness mechanism encapsulated in its dynamic replica placement strategy. Our extension of ADR to unreliable environments provides a data replication solution that is adaptive both in terms of replica placement and in terms of request routing. At the routing level, this solution takes the unreliability of the environment into account in order to maximize reliable delivery of requests. At the replica placement level, the dynamically changing origin and frequency of read/write requests are analyzed in order to define a set of replicas that minimizes communication cost. Performance evaluation shows that this original combination of two adaptive strategies makes it possible to ensure high request delivery while minimizing communication costs in the system.
Bokhari, Syed Mohtashim Abbas, and Oliver Theel. "A flexible hybrid approach to data replication in distributed systems." Intelligent Computing: Proceedings of the 2020 Computing Conference, Volume 1. Springer International Publishing, 2020
Data replication plays a very important role in distributed computing because a single replica is prone to failure, which is devastating for the availability of access operations. High availability and low cost for access operations, as well as maintaining data consistency, are major challenges for reliable services. Failures are often inevitable in a distributed paradigm and greatly affect the availability of services. Data replication mitigates such failures by masking them and makes the system more fault-tolerant: it is the concept by which highly available data access operations can be realized at a cost that is not too high. There are various state-of-the-art data replication strategies, but there exist infinitely many scenarios that demand designing new data replication strategies. In this regard, this work focuses on this problem and proposes a holistic hybrid approach to data replication based on voting structures.
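The building block behind the voting structures this abstract mentions can be sketched as weighted majority voting (an illustrative sketch, not the paper's hybrid scheme; all names and vote assignments below are assumptions): each replica holds votes, and a read or write proceeds only if the reachable replicas hold a read quorum R or write quorum W, chosen so that R + W > total votes and 2W > total votes, which guarantees any two conflicting operations overlap in at least one replica.

```python
# Illustrative majority-voting quorum check (hypothetical node names).
VOTES = {"n1": 1, "n2": 1, "n3": 1, "n4": 1, "n5": 1}
TOTAL = sum(VOTES.values())
R, W = 2, 4  # satisfies R + W > TOTAL and 2 * W > TOTAL

def gathered(up_nodes):
    """Votes held by the currently reachable replicas."""
    return sum(VOTES[n] for n in up_nodes)

def can_read(up_nodes):
    return gathered(up_nodes) >= R

def can_write(up_nodes):
    return gathered(up_nodes) >= W

# With two of five nodes down, reads still succeed, but writes do not:
up = {"n1", "n2", "n3"}
assert can_read(up)
assert not can_write(up)
```

Skewing R low and W high, as here, trades write availability for cheap highly available reads; hybrid schemes vary such trade-offs per scenario.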
1992
Weak-consistency replication protocols can be used to build wide-area services that are scalable, fault-tolerant, and useful for mobile computer systems. We have developed the timestamped anti-entropy protocol, which provides reliable eventual delivery with a variety of message orderings. Pairs of replicas periodically exchange update messages; in this way updates eventually propagate to all replicas. In this paper we present a detailed analysis of the fault tolerance and the consistency provided by this protocol. The protocol is extremely robust in the face of site and network failures, and it scales well to large numbers of replicas.

We are investigating an architecture for building distributed services that emphasizes scalability and fault tolerance. This allows applications to respond gracefully to changes in demand and to site and network failures. It also provides a single mechanism to support wide-area services and mobile computing systems. It uses weak-consistency replication techniques to build a flexible distributed service. We use data replication to meet availability demands and enable scalability. The replication is dynamic in that new servers can be added or removed to accommodate changes in demand. The system is asynchronous, and servers are as independent as possible; it never requires synchronous cooperation of large numbers of sites. This improves its ability to handle both communication and site failures.

Eventually or weakly consistent replication protocols do not perform synchronous updates. Instead, updates are first delivered to one site, then propagated asynchronously to others. The value a server returns to a client read request depends on whether that server has observed the update yet. Eventually, every server will observe the update. Several existing information systems, such as Usenet [1] and the Xerox Grapevine system [2], use similar techniques.
Delayed propagation means that clients do not wait for updates to reach distant sites, and the fault tolerance of the replicated data cannot be compromised by clients that misbehave. It also allows updates to be sent using bulk transfer protocols, which provide the best efficiency on high-bandwidth, high-latency networks. These transfers can occur at off-peak times. Replicas can be disconnected from the network for a period of time and will be updated once they are reconnected. On the other hand, clients must be able to tolerate some inconsistency, and the application may need to provide a mechanism to reconcile conflicting updates.

Large numbers of replicas allow replicas to be placed near clients and spread query load over more sites. This decreases both the communication latency for client requests and the amount of long-distance traffic that must be carried on backbone network links. Mobile computing systems can maintain a local replica, ensuring that users can access information even when disconnected from the network.

These protocols can be compared with consistent replication protocols, such as voting protocols. Consistent protocols cannot practically handle hundreds or thousands of replicas, while weak-consistency protocols can. Consistent protocols require the synchronous participation of a large number of replicas, which can be impossible when a client resides on a portable system or when the network is partitioned. It is also difficult to share processing load across many replicas. The communication traffic and associated latency are often unacceptably large for a service with replicas scattered over several continents.
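The pairwise exchange described above can be sketched as follows (a simplified sketch, not the paper's protocol: the class names, the per-origin summary vector, and the session shape are my assumptions). Each replica logs updates stamped (counter, origin); a summary vector records the highest counter observed per origin, so an anti-entropy session sends a partner only the updates it has not yet seen.

```python
class Replica:
    """Toy weak-consistency replica with a timestamped update log."""

    def __init__(self, name):
        self.name = name
        self.clock = 0
        self.log = []      # [((counter, origin), payload)]
        self.summary = {}  # origin -> highest counter observed

    def local_update(self, payload):
        """Accept an update locally; it propagates later, asynchronously."""
        self.clock += 1
        self.log.append(((self.clock, self.name), payload))
        self.summary[self.name] = self.clock

    def missing_for(self, partner_summary):
        """Select the updates the partner has not yet observed."""
        return [(stamp, p) for stamp, p in self.log
                if stamp[0] > partner_summary.get(stamp[1], 0)]

    def receive(self, updates):
        for (counter, origin), payload in updates:
            if counter > self.summary.get(origin, 0):
                self.log.append(((counter, origin), payload))
                self.summary[origin] = counter

def anti_entropy(a, b):
    """One pairwise session: each side pushes what the other is missing."""
    a_to_b = a.missing_for(b.summary)
    b_to_a = b.missing_for(a.summary)
    b.receive(a_to_b)
    a.receive(b_to_a)

r1, r2, r3 = Replica("r1"), Replica("r2"), Replica("r3")
r1.local_update("x=1")
r3.local_update("y=2")
anti_entropy(r1, r2)  # r2 learns x=1
anti_entropy(r2, r3)  # r3 learns x=1; r2 learns y=2
anti_entropy(r1, r2)  # r1 learns y=2 -> all three replicas converge
assert r1.summary == r2.summary == r3.summary
```

Note how the updates reach r3 and then r1 transitively, without any site ever coordinating synchronously with all the others; this is the delayed, pairwise propagation the paragraph above relies on.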
Performance Evaluation, 1990
Available copy protocols guarantee the consistency of replicated data objects against any combination of non-Byzantine failures that do not result in partial communication failures. While the original available copy protocol assumed instantaneous detection of failures and instantaneous propagation of this information, more realistic protocols that do not rely on these assumptions have been devised. Two such protocols are investigated in this paper: a naive available copy (NAC) protocol that does not maintain any state information, and an optimistic available copy (OAC) protocol that only maintains state information at write and recovery times. Markov models are used to compare the performance of these two protocols with that of the original available copy protocol. These protocols are shown to perform nearly as well as the original available copy protocol, which is shown to perform much better than quorum consensus protocols.
1994
This research was sponsored by the Air Force Materiel Command (AFMC) and the Advanced Research Projects Agency (ARPA) under contract number F19628-93-C-0193. Additional support was provided by the IBM Corporation, Digital Equipment Corporation, Bellcore, and Intel ...
2019 IEEE 18th International Symposium on Network Computing and Applications (NCA), 2019
Data management has become crucial. Distributed applications and users manipulate large amounts of data, and more and more distributed data management solutions arise, e.g. Cassandra or Cosmos DB. Some of them propose multiple consistency protocols; thus, for each piece of data, the developer or the user can choose a consistency protocol adapted to their needs. In this paper we explain why taking the consistency protocol into account is important while replicating (and especially placing) pieces of data, and we propose CAnDoR, an approach that dynamically adapts the replication according to the data usage (read/write frequencies and locations) and the consistency protocol used to manage the piece of data. Our simulations show that using CAnDoR to place and move data copies can improve the global average access latency by up to 40%.
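The kind of usage-driven placement decision described above can be illustrated with a toy cost model (entirely my assumption, not CAnDoR's actual model): pick the replica site that minimizes access-frequency-weighted latency over all client regions.

```python
# Hypothetical inter-site round-trip latencies in milliseconds.
LATENCY = {
    ("eu", "eu"): 5,    ("eu", "us"): 90,   ("eu", "asia"): 160,
    ("us", "eu"): 90,   ("us", "us"): 5,    ("us", "asia"): 120,
    ("asia", "eu"): 160, ("asia", "us"): 120, ("asia", "asia"): 5,
}
# Hypothetical read/write request rates per client region (requests/s).
ACCESS_FREQ = {"eu": 120, "us": 30, "asia": 10}

def placement_cost(site):
    """Frequency-weighted total latency if the copy is placed at `site`."""
    return sum(freq * LATENCY[(region, site)]
               for region, freq in ACCESS_FREQ.items())

best = min(ACCESS_FREQ, key=placement_cost)
assert best == "eu"  # most traffic originates in eu, so the copy goes there
```

A real system would re-evaluate this as frequencies shift, and would also fold in the chosen consistency protocol (e.g. how many replicas a write must reach), which is precisely the extra dimension the paper argues placement should account for.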
Lecture Notes in Computer Science, 2006
Database and Expert Systems …, 1996
Proceedings 19th IEEE Symposium on Reliable Distributed Systems SRDS-2000, 2000
International Journal of Current Microbiology and Applied Sciences, 2019
Journal of Information Processing Systems, 2012
Proceedings of the 9th Joint Conference on Information Sciences (JCIS), 2006
Concurrency and Computation: Practice and Experience, 2012
ACM SIGMOD Record, 1999
Lecture Notes in Computer Science, 2005