2004
59 pages
Wait-free Linearizable Queue Implementations
We consider wait-free linearizable implementations of shared objects which tolerate crash faults of any number of processes on a distributed message-passing system. We consider a system where each process has a local clock that runs at the same speed as the real-time clock and all message delays are in the range [d − u, d].
Proceedings of the 39th Symposium on Principles of Distributed Computing, 2020
In modern operating systems and programming languages adapted to multicore computer architectures, parallelism is abstracted by the notion of execution threads. Multi-threaded systems have two major specificities: 1) new threads can be created dynamically at runtime, so there is no bound on the number of threads participating in a long-running execution; 2) threads have access to a memory allocation mechanism that cannot allocate infinite arrays. This makes it challenging to adapt some algorithms to multi-threaded systems, especially those that assign one shared register per process. This paper explores the synchronization power of shared objects in multi-threaded systems by extending the famous wait-free hierarchy to take these constraints into consideration. It proposes to subdivide the set of objects with an infinite consensus number into five new degrees, depending on their ability to synchronize a bounded, finite or infinite number of processes, with or without the need to allocate an infinite array. It then exhibits one object illustrating each proposed degree.
CCS CONCEPTS: • Theory of computation → Distributed computing models; • Software and its engineering → Process synchronization; • Computer systems organization → Multicore architectures; Dependable and fault-tolerant systems and networks.
Research Square (Research Square), 2024
Herlihy proved that compare-and-set (CAS) is universal in the classical computing system model composed of an a priori known number of processes. For this, he proposed the first universal construction capable of emulating any data structure with a sequential specification. It has recently been proved that CAS is still universal in the infinite arrival computing model, a model where any number of processes can be created on the fly. This paper explores the complexity issues related to wait-free CAS-based universal constructions. We first prove that CAS does not allow one to implement wait-free and linearizable visible objects in the infinite arrival model with a space complexity bounded by the number of active processes. We then show that this lower bound is tight, in the sense that this dependency can be made as low as desired: we propose a wait-free and linearizable universal construction, using the CAS operation, whose space complexity's dependency on the number of operations ever issued is governed by a parameter that can be linked to any unbounded function. This paper also proves that the lower bound obtained for CAS-based algorithms can be avoided by the use of other synchronization primitives. As an example, we explore algorithms based on the memory-to-memory swap special instruction, which exchanges the contents of two shared registers. We propose a universal construction based on memory-to-memory swap whose complexity depends only on the contention, and we illustrate how compare-and-set and memory-to-memory swap can be used jointly within a wait-free queue algorithm.
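The two primitives discussed in this abstract can be sketched as follows. This is an illustrative model, not the paper's construction: the `Register` class and `mem_to_mem_swap` helper are assumed names, and the locks merely simulate the atomicity that real hardware CAS and memory-to-memory swap instructions provide in a single indivisible step.

```python
import threading

class Register:
    """A shared register exposing atomic compare-and-set.
    The lock only simulates hardware atomicity."""
    def __init__(self, value=None):
        self._value = value
        self._lock = threading.Lock()

    def read(self):
        with self._lock:
            return self._value

    def compare_and_set(self, expected, new):
        # Atomically: install `new` only if the register still holds `expected`.
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

def mem_to_mem_swap(r1, r2):
    """Memory-to-memory swap: atomically exchange the contents of two
    distinct shared registers. Locks are taken in a fixed order so the
    exchange appears as one indivisible step."""
    a, b = sorted((r1, r2), key=id)
    with a._lock, b._lock:
        r1._value, r2._value = r2._value, r1._value
```

Note the semantic difference the paper exploits: CAS conditionally writes one location and reports success, while memory-to-memory swap unconditionally exchanges two locations at once.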
2012
Disjoint-access parallelism and wait-freedom are two desirable properties for implementations of concurrent objects. Disjoint-access parallelism guarantees that processes operating on different parts of an implemented object do not interfere with each other by accessing common base objects. Thus, disjoint-access parallel algorithms allow for increased parallelism. Wait-freedom guarantees progress for each nonfaulty process, even when other processes run at arbitrary speeds or crash. A universal construction provides a general mechanism for obtaining a concurrent implementation of any object from its sequential code. We identify a natural property of universal constructions and prove that there is no universal construction (with this property) that ensures both disjoint-access parallelism and wait-freedom. This impossibility result also holds for transactional memory implementations that require a process to re-execute its transaction if it has been aborted and guarantee each transaction is aborted only a finite number of times. Our proof is obtained by considering a dynamic object that can grow arbitrarily large during an execution. In contrast, we present a universal construction which produces concurrent implementations that are both wait-free and disjoint-access parallel, when applied to objects that have a bound on the number of data items accessed by each operation they support.
We consider a wait-free linearizable implementation of shared objects on a distributed message-passing system. We assume that the system provides each process with a local clock that runs at the same speed as global time and that all message delays are in the range [d − u, d], where d and u (0 < u ≤ d) are constants known to every process. We present four wait-free linearizable implementations of read/write registers on reliable and unreliable broadcast models. We also present two wait-free linearizable implementations of general objects on a reliable broadcast model. The efficiency of an implementation is measured by the worst-case response time for each operation of the implemented object. Response times of our wait-free implementations of read/write registers on a reliable broadcast model are better than those of a previously known implementation in which wait-freedom is not taken into account.
Journal of Parallel and Distributed Computing, 2018
This paper studies two approaches to formalize helping in wait-free implementations of shared objects. The first approach is based on operation valency, and it allows us to make the important distinction between trivial and nontrivial helping. We show that a wait-free implementation of a queue from Common2 objects (e.g., Test&Set) requires nontrivial helping. In contrast, there is a wait-free implementation of a stack from Common2 objects with only trivial helping. This separation might shed light on the difficulty of implementing a queue from Common2 objects. The other approach formalizes the helping mechanism employed by Herlihy's universal wait-free construction and is based on having an operation by one process restrict the possible linearizations of operations by other processes. We show that objects possessing such universal helping can be used to solve consensus.
Proceedings of the sixteenth annual ACM symposium on Principles of distributed computing - PODC '97, 1997
This paper introduces two novel tools for the study of distributed computing and shows their utility by using them to exhibit a simple derivation of the Herlihy and Shavit characterization of wait-free shared-memory computation. The first tool is the notion of the iterated version of a given model. We show that the topological structure that corresponds to an iterated model has a nice recursive structure, and that the iterated version of the atomic snapshot memory solves any task solvable by the non-iterated model. The second tool is an iterated explicit simple convergence algorithm. In the Ph.D. thesis of the first author, these tools were used to characterize models more complex than read-write shared memory.
ACM Transactions on Programming Languages and Systems, 1990
A concurrent object is a data object shared by concurrent processes. Linearizability is a correctness condition for concurrent objects that exploits the semantics of abstract data types. It permits a high degree of concurrency, yet it permits programmers to specify and reason about concurrent objects using known techniques from the sequential domain. Linearizability provides the illusion that each operation applied by concurrent processes takes effect instantaneously at some point between its invocation and its response, implying that the meaning of a concurrent object's operations can be given by pre- and post-conditions. This paper defines linearizability, compares it to other correctness conditions, presents and demonstrates a method for proving the correctness of implementations, and shows how to reason about concurrent objects, given they are linearizable.
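The definition above can be made concrete with a small brute-force check: a history of register operations, each with an invocation and response time, is linearizable if some total order that respects real-time precedence satisfies the sequential read/write specification. The encoding below is an illustrative sketch (exponential in history length), not a method from the paper.

```python
from itertools import permutations

# An operation is (invocation_time, response_time, kind, argument, result).
# Sequential spec of a read/write register: a read returns the value of
# the most recent preceding write (or the initial value).
def is_linearizable(history, initial=0):
    indices = range(len(history))
    for order in permutations(indices):
        # Real-time order must be respected: if op `b` responded before
        # op `a` was invoked, `b` must precede `a` in the linearization.
        if any(history[order[j]][1] < history[order[i]][0]
               for i in indices for j in indices if i < j):
            continue
        value, ok = initial, True
        for idx in order:
            _, _, kind, arg, result = history[idx]
            if kind == "write":
                value = arg
            elif result != value:   # a read must observe the current value
                ok = False
                break
        if ok:
            return True
    return False
```

For example, a write(1) over [0, 2] overlapping a read over [1, 3] that returns 1 is linearizable (the write's linearization point can precede the read's), whereas a read issued after the write has responded that still returns the initial value is not.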
Science of Computer Programming, 1992
We define a class of operations called pseudo read-modify-write (PRMW) operations, and show that nontrivial shared data objects with such operations can be implemented in a bounded, wait-free manner from atomic registers. A PRMW operation is similar to a "true" read-modify-write (RMW) operation in that it modifies the value of a shared variable based upon the original value of that variable. However, unlike an RMW operation, a PRMW operation does not return the value of the variable that it modifies. We consider a class of shared data objects that can either be read, written, or modified by an associative, commutative PRMW operation, and show that any object in this class can be implemented without waiting from atomic registers. The implementations that we present are polynomial in both space and time and thus are an improvement over previously published ones, all of which have unbounded space complexity.
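The RMW/PRMW distinction can be sketched on a shared counter. This is an illustrative example with assumed names; the lock stands in for the atomicity each operation is presumed to have, and addition serves as the associative, commutative modification.

```python
import threading

class Counter:
    """Contrasts a true RMW operation with a PRMW one on a shared counter."""
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def fetch_and_add(self, delta):
        # True RMW: modifies the variable AND returns its prior value.
        with self._lock:
            old = self._value
            self._value += delta
            return old

    def add(self, delta):
        # PRMW: modifies the variable based on its current value but
        # returns nothing. Because addition is associative and commutative,
        # operations like this fall in the class the paper shows can be
        # implemented without waiting from atomic registers.
        with self._lock:
            self._value += delta

    def read(self):
        with self._lock:
            return self._value
```

The key point is that `add` reveals nothing about the order in which concurrent increments were applied, which is exactly the information a true RMW such as `fetch_and_add` exposes.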
2013
A universal construction is a method to execute sequential code in an asynchronous shared-memory system. To ensure fault-tolerance and enhance performance, universal constructions are designed to be wait-free and disjoint-access-parallel. In a previous paper we proved that no universal construction can ensure both wait-freedom and disjoint-access parallelism. To circumvent this impossibility, while still achieving enhanced parallelism, we propose a weaker version of disjoint-access parallelism, called timestamp-ignoring disjoint-access parallelism. It allows two operations to access a common timestamp object, even if they are working on disjoint parts of a data structure. We present a universal construction that ensures wait-freedom and timestamp-ignoring disjoint-access parallelism.
* Supported by the European Commission under the 7th Framework Program through the TransForm (FP7-MC-ITN-238639) project and by the ARISTEIA Action of the Operational Programme Education and Lifelong Learning, which is co-funded by the European Social Fund (ESF) and National Resources through the GreenVM project.