utgjiu.ro
Designing databases in a distributed environment is significantly more complex than designing databases in a centralized environment, largely because of the need to consider network resources, data partitioning schemes, redundant data placement alternatives, and replication approaches. In this paper we present two alternative designs for database partitioning at both primary and replicate locations.
International Journal of Cyber and IT Service Management, 2022
Today's computer applications demand ever-increasing database system capability and performance. The growing amount of data that has to be processed in a business company makes centralized data processing ineffective; this inefficiency shows itself as long response times, in direct opposition to the purpose of using databases in data processing, which is to reduce the time it takes to process data. A different database design is required to tackle this problem. Distributed database technology refers to an architecture in which several servers are linked together and each one may process and fulfill local queries. Each participating server is responsible for serving one or more requests. In a multi-master replication scenario, all sites are main sites, and all main sites communicate with one another. A distributed database system comprises numerous linked computers that work together as a single system.
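The multi-master arrangement described above can be sketched in a few lines: every site accepts local writes and propagates them to all of its peers. This is a minimal illustration only; the names (`Site`, `put`, `connect`) are assumptions, not taken from any system in the abstract, and real multi-master databases must additionally handle conflicts and failures.

```python
# Minimal sketch of multi-master replication: every site accepts local
# writes and synchronously propagates them to all peer sites.
class Site:
    def __init__(self, name):
        self.name = name
        self.data = {}
        self.peers = []

    def connect(self, others):
        # Register every other site as a peer of this one.
        self.peers = [s for s in others if s is not self]

    def put(self, key, value):
        # Local write, then propagation to every other main site.
        self.data[key] = value
        for peer in self.peers:
            peer.data[key] = value

sites = [Site("A"), Site("B"), Site("C")]
for s in sites:
    s.connect(sites)

sites[0].put("x", 1)   # a write accepted at site A ...
sites[2].put("y", 2)   # ... and an independent write accepted at site C
```

After both writes, every site holds the same data, which is the defining property of the multi-master scheme the abstract refers to.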
Innovations and Advances in Computer, Information, …
Distributed databases are an optimal solution for managing large amounts of data. We present an algorithm for replication in distributed databases which improves query performance. This algorithm can move information between databases, replicating pieces of data across databases.
Bioscience Biotechnology Research Communications
Replication structures are a research area for all distributed databases. In this paper we provide an overview comparing replication strategies for such database systems. The problems considered are data consistency and scalability: preserving continuity between actual real-time events in the external world and their images on the replicas spread across multiple nodes. A framework for a replicated real-time database is discussed in which all time constraints are preserved. To broaden the concept of modeling a large database, a general outline is presented which aims to improve the consistency of the data.
Database and Expert Systems …, 1996
In this paper we investigate the performance issues of data replication in a loosely coupled distributed database system, where a set of database servers are connected via a network. A database replication scheme, Replication with Divergence, which allows some degree of divergence between the primary and the secondary copies of the same data object, is compared to two other schemes that, respectively, disallow replication and keep all replicated copies consistent at all times. The impact of some tunable factors, such as cache size and the update propagation probability, on the performance of Replication with Divergence is also investigated. These results shed light on performance issues that were not addressed in previous studies on replication in distributed database systems.
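The "update propagation probability" mentioned above can be illustrated with a small sketch: the primary copy always receives each write, while the secondary copy is refreshed only with probability `p`, so staleness can accumulate. The class name, the fixed `p`, and the divergence measure below are assumptions for illustration, not the paper's actual model.

```python
import random

# Sketch of divergence-tolerant replication: the primary always gets the
# update; the secondary is refreshed only with propagation probability p.
class DivergentReplica:
    def __init__(self, p, seed=0):
        self.p = p                      # update propagation probability
        self.primary = {}
        self.secondary = {}
        self.rng = random.Random(seed)  # seeded for reproducibility

    def write(self, key, value):
        self.primary[key] = value
        if self.rng.random() < self.p:
            self.secondary[key] = value  # lazy refresh of the secondary

    def divergence(self):
        # Count keys whose secondary copy is stale or missing.
        return sum(1 for k, v in self.primary.items()
                   if self.secondary.get(k) != v)

r = DivergentReplica(p=0.5)
for i in range(100):
    r.write(f"k{i}", i)
print(r.divergence())   # some fraction of the keys lag behind the primary
```

Raising `p` trades network traffic for freshness, which is exactly the tunable-factor trade-off the abstract says the paper studies.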
2011
The necessity of the ever-increasing use of distributed data in computer networks is obvious to all. One technique performed on distributed data to increase efficiency and reliability is data replication. In this paper, after introducing this technique and its advantages, we examine some dynamic data replication methods. We examine their characteristics under several usage scenarios and then propose some suggestions for their improvement.
Current Journal of Applied Science and Technology, 2018
Due to the huge amount of computer data stored in databases, a single centralized database cannot provide good performance and availability when it contains huge amounts of data used by a large number of users. The distributed database is thus a good technique to overcome this problem, by fragmenting the database and allocating each fragment to the right site. Many studies present static optimized algorithms for distributed database fragmentation, allocation, and replication (horizontal/vertical) at the initial stage of distributed database design, using different or similar techniques that affect the performance of the database system. Therefore, this study aims at reviewing and comparing the best of the presented algorithms from the design perspective, with the aim of identifying the strengths and weaknesses of each algorithm. Furthermore, this study could be considered the first attempt to identify the most critical criteria used for comparing the optimized algorithms that have been proposed and used in distributed database fragmentation and allocation.
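The fragmentation-and-allocation step the abstract describes can be sketched concretely: rows are split horizontally by a predicate on one column, and each fragment is placed at the site that accesses it most often. The tables, site names, and access counts below are invented for illustration and do not come from any of the surveyed algorithms.

```python
# Hedged sketch of horizontal fragmentation plus a simple allocation rule:
# split rows by region, then place each fragment at its heaviest accessor.
rows = [
    {"id": 1, "region": "EU", "amount": 10},
    {"id": 2, "region": "US", "amount": 20},
    {"id": 3, "region": "EU", "amount": 30},
]

# Horizontal fragmentation: one fragment per region value.
fragments = {}
for row in rows:
    fragments.setdefault(row["region"], []).append(row)

# Access frequencies per (site, fragment), e.g. taken from query logs.
access = {("site_eu", "EU"): 90, ("site_us", "EU"): 10,
          ("site_eu", "US"): 5,  ("site_us", "US"): 95}

# Allocation: assign each fragment to the site that queries it most.
allocation = {}
for frag in fragments:
    allocation[frag] = max(
        (site for site, f in access if f == frag),
        key=lambda site: access[(site, frag)])

print(allocation)   # {'EU': 'site_eu', 'US': 'site_us'}
```

Static algorithms of the kind the survey compares fix this placement once at design time; the criteria the study identifies concern how well such a placement matches the real query load.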
Abstract—In this paper, we propose a new replica control algorithm, NCP (node child protocol), for the management of replicated data in a distributed database system. The algorithm imposes a logical tree structure on a set of copies of an object. The proposed protocol reduces the size of the read and write quorums. With this algorithm, a read operation is executed by reading one copy in a failure-free environment, and even if a failure occurs it still reads a single copy. Fewer data copies are required for the write operation, and it provides a lower write-operation cost than the other protocols.
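The general idea of tree-structured quorums can be sketched as follows. This is not the NCP protocol itself (the abstract does not give its exact quorum rules); it is a simplified illustration of how a logical tree lets a read touch a single live copy while a write covers only a root-to-leaf path instead of a majority of all copies.

```python
# Illustrative tree-structured replica organization (NOT the NCP protocol):
# a read is served by one live copy, a write by one root-to-leaf path.
class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.alive = True
        self.children = list(children)

def read_quorum(root):
    # Read one live copy: the root if alive, otherwise recurse into a child.
    if root.alive:
        return [root.name]
    for child in root.children:
        q = read_quorum(child)
        if q:
            return q
    return []          # no live copy reachable

def write_quorum(root):
    # Write the root plus, recursively, one live child per level: a
    # root-to-leaf path, far fewer copies than a majority of all replicas.
    if not root.alive:
        return []
    if not root.children:
        return [root.name]
    for child in root.children:
        rest = write_quorum(child)
        if rest:
            return [root.name] + rest
    return []

tree = Node("r", [Node("a", [Node("c")]), Node("b", [Node("d")])])
print(read_quorum(tree))    # one copy in the failure-free case
tree.alive = False          # the root copy fails
print(read_quorum(tree))    # a single copy still suffices
```

The failure case shows the property the abstract emphasizes: a read still touches exactly one copy even when a node has failed.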
ACM SIGMETRICS Performance Evaluation Review, 1988
The objective of this research is to develop and integrate tools for the design of partially replicated distributed database systems. Many existing tools are inappropriate for designing large-scale distributed databases due to their large computational requirements. Our goal is to develop tools that solve the design problems reasonably quickly, typically by using heuristic algorithms that provide approximate or near-optimal solutions. In developing this design methodology, we assume that information regarding the types of user requests and their rates of arrival into the system is known a priori . The methodology assumes a general model for transaction execution. In this paper we discuss three aspects of the design methodology: the data allocation problem, the use of a static load-balancing scheme in coordination with the allocation scheme, and the design evaluation and review step. Our methodology employs iterative design techniques using performance evaluation as a means to iterate.
Proceedings 19th IEEE Symposium on Reliable Distributed Systems SRDS-2000, 2000
Data replication is an increasingly important topic as databases are more and more deployed over clusters of workstations. One of the challenges in database replication is to introduce replication without severely affecting performance. Because of this difficulty, current database products use lazy replication, which is very efficient but can compromise consistency. As an alternative, eager replication guarantees consistency but most existing protocols have a prohibitive cost. In order to clarify the current state of the art and open up new avenues for research, this paper analyses existing eager techniques using three key parameters. In our analysis, we distinguish eight classes of eager replication protocols and, for each category, discuss its requirements, capabilities, and cost. The contribution lies in showing when eager replication is feasible and in spelling out the different aspects a database replication protocol must account for.
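The eager/lazy distinction analysed above can be made concrete with a small sketch: an eager write updates every replica inside the transaction boundary, while a lazy write commits locally and queues the change for later propagation, so replicas can diverge until the queue drains. The class and method names are illustrative assumptions, not from any protocol in the paper.

```python
from collections import deque

# Sketch of eager vs. lazy replication over an in-memory replica set.
class ReplicatedDB:
    def __init__(self, n_replicas):
        self.replicas = [{} for _ in range(n_replicas)]
        self.lazy_queue = deque()

    def eager_write(self, key, value):
        # Eager: every replica is updated before the write returns.
        for r in self.replicas:
            r[key] = value

    def lazy_write(self, key, value):
        # Lazy: commit on replica 0 only and defer the rest.
        self.replicas[0][key] = value
        self.lazy_queue.append((key, value))

    def propagate(self):
        # Background step: drain the queue to the remaining replicas.
        while self.lazy_queue:
            key, value = self.lazy_queue.popleft()
            for r in self.replicas[1:]:
                r[key] = value

db = ReplicatedDB(3)
db.eager_write("x", 1)   # consistent everywhere, at the cost of 3 updates
db.lazy_write("y", 2)    # fast, but replicas 1 and 2 are temporarily stale
db.propagate()           # after propagation all replicas agree again
```

The window between `lazy_write` and `propagate` is exactly the consistency gap the abstract says lazy replication accepts in exchange for efficiency.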
Proceedings 20th IEEE International Conference on Distributed Computing Systems, 2000
Replication is an area of interest to both distributed systems and databases. The solutions developed from these two perspectives are conceptually similar but differ in many aspects: model, assumptions, mechanisms, guarantees provided, and implementation. In this paper, we provide an abstract and "neutral" framework to compare replication techniques from both communities in spite of the many subtle differences. The framework has been designed to emphasize the role played by different mechanisms and to facilitate comparisons. With this, it is possible to get a functional comparison of many ideas that is valuable for both didactic and practical purposes. The paper describes the replication techniques used in both communities, compares them, and points out ways in which they can be integrated to arrive at better, more robust replication protocols.