2000, Proceedings 19th IEEE Symposium on Reliable Distributed Systems SRDS-2000
Data replication is an increasingly important topic as databases are more and more deployed over clusters of workstations. One of the challenges in database replication is to introduce replication without severely affecting performance. Because of this difficulty, current database products use lazy replication, which is very efficient but can compromise consistency. As an alternative, eager replication guarantees consistency but most existing protocols have a prohibitive cost. In order to clarify the current state of the art and open up new avenues for research, this paper analyses existing eager techniques using three key parameters. In our analysis, we distinguish eight classes of eager replication protocols and, for each category, discuss its requirements, capabilities, and cost. The contribution lies in showing when eager replication is feasible and in spelling out the different aspects a database replication protocol must account for.
Proc. of the 5th …, 2003
COPLA is a software tool that provides an object-oriented view of a network of replicated relational databases. It supports a range of consistency protocols, each of which supports different consistency modes. The resulting scenario is a distributed environment where ...
Replication can be a success factor in database systems and is arguably a necessity given the proliferation, expansion, and rapid progress of databases and distributed technology. Yet there is a strong belief among database designers that most existing solutions are not feasible due to their complexity, poor performance, and lack of scalability. This paper provides an approach that can help designers implement eager and lazy replication mechanisms. The proposed approach has two phases. In the first phase, the database is designed with indicator fields that carry the update status, and the replication concepts are taken into account by classifying, categorizing, and determining the kinds and locations of data objects. In the second phase, an updating methodology is provided that makes the implementation of eager and lazy replication mechanisms easier and more reliable.
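To make the eager/lazy distinction above concrete, here is a minimal sketch of a replicated table that uses a per-row indicator field to carry the update status, in the spirit of the two-phase approach the abstract describes. All names (`Replica`, `ReplicatedTable`, the `PENDING`/`SYNCED` statuses) are illustrative assumptions, not the authors' implementation.

```python
class Replica:
    """One copy of the data; each row stores (value, status)."""
    def __init__(self, name):
        self.name = name
        self.rows = {}  # key -> (value, status)

    def apply(self, key, value, status="SYNCED"):
        self.rows[key] = (value, status)


class ReplicatedTable:
    def __init__(self, replicas):
        self.replicas = replicas

    def eager_update(self, key, value):
        # Eager: every replica is updated before the call returns,
        # so consistency is never compromised.
        for r in self.replicas:
            r.apply(key, value, "SYNCED")

    def lazy_update(self, key, value):
        # Lazy: only the local (first) replica is updated; the
        # indicator field marks the row as pending propagation.
        self.replicas[0].apply(key, value, "PENDING")

    def propagate(self):
        # A later background step pushes pending rows outward and
        # clears the indicator.
        src = self.replicas[0]
        for key, (value, status) in list(src.rows.items()):
            if status == "PENDING":
                for r in self.replicas[1:]:
                    r.apply(key, value, "SYNCED")
                src.rows[key] = (value, "SYNCED")


table = ReplicatedTable([Replica("A"), Replica("B"), Replica("C")])
table.eager_update("x", 1)   # consistent everywhere immediately
table.lazy_update("y", 2)    # only A sees it, marked PENDING
table.propagate()            # B and C catch up, status cleared
```

The indicator field is what lets a single schema serve both mechanisms: an eager write never leaves the `PENDING` state visible, while a lazy write relies on it to drive deferred propagation.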
Proceedings of the 18th International Database Engineering & Applications Symposium on - IDEAS '14, 2014
One of the most demanding needs in cloud computing is that of having scalable and highly available databases. One way to address these needs is to leverage the scalable replication techniques developed over the last decade, which increase both the availability and scalability of databases. Many replication protocols have been proposed during this period. The main research challenge has been how to scale under the eager replication model, the model that provides consistency across replicas. In this paper, we examine three eager database replication systems available today: Middle-R, C-JDBC, and MySQL Cluster, using the TPC-W benchmark. We analyze their architectures and replication protocols, and compare their performance both in the absence of failures and when failures occur.
Eager replication management is known to produce unacceptable performance as soon as the update rate or the number of replicas increases. Lazy replication protocols tackle this problem by decoupling transaction execution from the propagation of new values to replica sites, while guaranteeing correct and more efficient transaction processing and replica maintenance. However, they impose several restrictions on transaction models that often do not hold in practical database settings: for example, they require that each transaction execute at its initiation site, and/or are restricted to full replication schemes. Moreover, these protocols cannot guarantee that transactions will always see the freshest available replicas. This paper presents a new lazy replication protocol, called PDBREP, that is free of these restrictions.
2003
COPLA is a software tool that provides an object-oriented view of a network of replicated relational databases. It supports a range of consistency protocols, each of which supports different consistency modes. The resulting scenario is a distributed environment where applications may start multiple database sessions, which may use different consistency modes according to their needs. This paper describes the COPLA platform, its architecture, its support for database replication, and one of the consistency algorithms that have been implemented on it. A system of this kind may be used in the development of applications for companies that have several branch offices, such as banks, hypermarkets, etc. In such settings, several applications typically use on-site generated data in local branches, while other applications also use information generated in other branches and offices. The services provided by COPLA enable efficient support for both local and non-local data querying and processing.
2007
Replication is attractive for scaling databases up, as it does not require costly equipment and it enables fault tolerance. However, as the latency gap between local and remote accesses continues to widen, maintaining consistency between replicas remains a performance and complexity bottleneck. Optimistic replication (OR) addresses these problems.
In database replication, primary-copy systems easily solve the problem of keeping replicated data consistent by allowing updates only at the primary copy. While such systems are very efficient for workloads dominated by read-only transactions, the update-everywhere approach is more suitable for heavy update loads. However, that approach adds significant overhead when working with read-only transactions. We propose a new database replication paradigm, halfway between the primary-copy and update-everywhere approaches, which improves system performance by adapting its configuration to the workload, thanks to a deterministic database replication protocol that ensures that broadcast writesets will always be committed.
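The contrast between the two classical paradigms this abstract positions itself between can be sketched as a routing decision. The sketch below is our own illustration under assumed names (`Router`, the mode strings); it is not the paper's protocol, which additionally orders writesets deterministically before commit.

```python
import random

class Router:
    """Routes transactions to replicas under two classical
    replication paradigms (illustrative sketch, not the paper's
    halfway protocol)."""

    def __init__(self, replicas, primary=0):
        self.replicas = replicas
        self.primary = primary

    def route(self, is_update, mode):
        if mode == "primary-copy":
            # Updates must go to the primary copy; read-only
            # transactions can be served by any replica.
            if is_update:
                return self.replicas[self.primary]
            return random.choice(self.replicas)
        if mode == "update-everywhere":
            # Any replica accepts any transaction; conflicting
            # writesets must then be ordered (e.g. by total-order
            # broadcast) before commit, which is the overhead the
            # abstract refers to.
            return random.choice(self.replicas)
        raise ValueError(f"unknown mode: {mode}")


router = Router(["A", "B", "C"])
# Primary-copy: all updates land on replica "A".
target = router.route(is_update=True, mode="primary-copy")
```

A halfway design, as proposed above, would choose per-workload how many replicas may accept updates, rather than fixing the answer at one (primary-copy) or all (update-everywhere).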
2000
In this paper, we explore data replication protocols that provide both fault tolerance and good performance without compromising consistency. We do this by combining transactional concurrency control with group communication primitives. In our approach, transactions are executed at only one site, so that not all nodes incur the overhead of producing results. To further reduce latency, we use an optimistic multicast technique that overlaps transaction execution with total-order message delivery. The protocols presented in this paper provide correct executions while minimizing overhead and providing higher scalability.
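The optimistic-multicast idea described above can be sketched in a few lines: a replica starts executing a transaction as soon as the message is tentatively delivered, and commits only once the final total order confirms the tentative one. The class and method names below are our own illustrative assumptions, not the paper's protocol.

```python
class OptimisticReplica:
    """Overlaps transaction execution with total-order delivery
    (illustrative sketch)."""

    def __init__(self):
        self.tentative = []  # order in which we optimistically executed
        self.committed = []

    def opt_deliver(self, txn):
        # Optimistic delivery: begin executing immediately, in
        # parallel with the total-order agreement protocol.
        self.tentative.append(txn)

    def final_deliver(self, order):
        # The total order is now known. If it matches the tentative
        # execution order, the work already done can be committed;
        # otherwise the affected transactions are re-executed in the
        # final order before committing.
        order = list(order)
        if self.tentative[:len(order)] == order:
            self.committed = order          # gamble paid off
        else:
            self.tentative = order          # re-execute in final order
            self.committed = order


r = OptimisticReplica()
r.opt_deliver("T1")
r.opt_deliver("T2")
r.final_deliver(["T1", "T2"])  # tentative order confirmed, commit
```

The latency win comes from the common case where the tentative and final orders agree, so execution has already finished by the time the total order arrives.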
Proceedings 20th IEEE International Conference on Distributed Computing Systems, 2000
Replication is an area of interest to both distributed systems and databases. The solutions developed from these two perspectives are conceptually similar but differ in many aspects: model, assumptions, mechanisms, guarantees provided, and implementation. In this paper, we provide an abstract and "neutral" framework to compare replication techniques from both communities in spite of the many subtle differences. The framework has been designed to emphasize the role played by different mechanisms and to facilitate comparisons. With this, it is possible to get a functional comparison of many ideas that is valuable for both didactic and practical purposes. The paper describes the replication techniques used in both communities, compares them, and points out ways in which they can be integrated to arrive at better, more robust replication protocols.
2003
We describe an operational middleware platform for maintaining the consistency of replicated data objects, called COPla (Common Object Platform). It supports both eager and lazy update propagation for replicated data in networked relational databases. The purpose of replication is to enhance the availability of data objects and services in distributed database networks. Orthogonal to recovery strategies of backed-up snapshots, logs and other measures to alleviate database downtimes, COPla caters for high availability during downtimes of parts of the network by supporting a range of different consistency modes for distributed replications of critical data objects.
13th Pacific Rim International Symposium on Dependable Computing (PRDC 2007), 2007
International Journal of Cyber and IT Service Management, 2022
Lecture Notes in Computer Science, 2006
2007 International Multi-Conference on Computing in the Global Information Technology (ICCGI'07), 2007
Proceedings of the 2009 EDBT/ICDT Workshops on - EDBT/ICDT '09, 2009
2019 IEEE 18th International Symposium on Network Computing and Applications (NCA), 2019
Database and Expert Systems …, 1996
ISPRS International Journal of Geo-Information, 2020
International Journal on Information Theory, 2014