1982, Information Processing Letters
This research addresses communication protocols for distributed systems with local memories, focusing on the execution of indivisible operations by processes. The paper formulates a solution to the challenges of ensuring resource access and maintaining consistency in such systems. It introduces a ticket-based protocol that guarantees no process is indefinitely blocked when competing for resources, emphasizing the need for exclusive control over required resources during execution.
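The ticket discipline the abstract alludes to can be sketched in shared memory as follows. This is only a hedged illustration of ticket-ordered, starvation-free access to a resource (the TicketLock type and its fields are invented for the example); it is not the paper's message-passing protocol over local memories.

```go
// Minimal ticket-lock sketch: processes draw increasing tickets and are
// served strictly in ticket order, so none is overtaken forever.
// Illustration only; not the paper's protocol.
package main

import (
	"fmt"
	"runtime"
	"sync"
	"sync/atomic"
)

type TicketLock struct {
	next    uint64 // next ticket to hand out
	serving uint64 // ticket currently being served
}

// Lock draws a ticket and waits until that ticket is served.
func (t *TicketLock) Lock() {
	my := atomic.AddUint64(&t.next, 1) - 1
	for atomic.LoadUint64(&t.serving) != my {
		runtime.Gosched() // yield instead of spinning hot
	}
}

// Unlock serves the next ticket, giving FIFO progress.
func (t *TicketLock) Unlock() {
	atomic.AddUint64(&t.serving, 1)
}

func main() {
	var lock TicketLock
	var wg sync.WaitGroup
	counter := 0
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			lock.Lock()
			counter++ // indivisible operation under exclusive control
			lock.Unlock()
		}()
	}
	wg.Wait()
	fmt.Println("counter:", counter)
}
```

Because tickets are served strictly in increasing order, a process that has drawn a ticket can be overtaken only by the finitely many processes holding smaller tickets, which is the intuition behind the no-indefinite-blocking guarantee.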
Lecture Notes in Computer Science, 2001
Acta Informatica
Resource allocation is the problem of allowing a process to enter a critical section (CS) of its code only when its resource requirements are not in conflict with those of other processes in their critical sections. For each execution of the CS, these requirements are given anew. Within the resource requirements, levels can be distinguished, such as read access or write access. We allow infinitely many processes that communicate by reliable asynchronous messages and have finite memory. A simple starvation-free solution is presented. Processes only wait for one another when they have conflicting resource requirements. The correctness of the solution is argued with invariants and temporal logic, and it has been verified with the proof assistant PVS.
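As a hedged illustration of the conflict notion described above (the Requirement type and Conflicts function are invented for the example, not taken from the paper), one can model a requirement as a map from resources to access levels and let two processes conflict exactly when they request the same resource and at least one of them needs write access:

```go
// Sketch of the conflict relation between resource requirements with
// access levels (read/write). Names and types are illustrative only.
package main

import "fmt"

type Level int

const (
	Read Level = iota
	Write
)

// A Requirement maps a resource name to the requested access level.
type Requirement map[string]Level

// Conflicts reports whether two requirements clash: they share a
// resource and at least one of them wants write access to it.
func Conflicts(a, b Requirement) bool {
	for res, la := range a {
		if lb, ok := b[res]; ok && (la == Write || lb == Write) {
			return true
		}
	}
	return false
}

func main() {
	p := Requirement{"file1": Read, "file2": Write}
	q := Requirement{"file1": Read} // read/read on file1: no conflict
	r := Requirement{"file2": Read} // read/write on file2: conflict
	fmt.Println(Conflicts(p, q), Conflicts(p, r)) // false true
}
```

Under such a relation, only processes whose requirements conflict ever need to wait for one another, which is the property the solution above guarantees.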
Lecture Notes in Computer Science, 2014
In a wait-free model any number of processes may crash. A process runs solo when it computes its local output without receiving any information from other processes, either because they crashed or because they are too slow. While in wait-free shared-memory models at most one process may run solo in an execution, any number of processes may have to run solo in an asynchronous wait-free message-passing model. This paper is on the computability power of models in which several processes may concurrently run solo. It first introduces a family of round-based wait-free models, called the d-solo models, 1 ≤ d ≤ n, where up to d processes may run solo. The paper then gives a characterization of the colorless tasks that can be solved in each d-solo model. It also introduces the (d, ε)-solo approximate agreement task, which generalizes ε-approximate agreement, and proves that (d, ε)-solo approximate agreement can be solved in the d-solo model, but cannot be solved in the (d + 1)-solo model. Finally, the paper studies the relation linking d-set agreement and (d, ε)-solo approximate agreement in asynchronous wait-free message-passing systems.
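For orientation, the classical ε-approximate agreement task that the (d, ε)-solo task generalizes can be stated as follows; the d-parameterized variant itself is defined in the paper and is not reproduced here.

```latex
% Classical \epsilon-approximate agreement: each process p_i has a real
% input x_i and must decide an output y_i subject to
\begin{align*}
\text{Validity:}    \quad & y_i \in \big[\textstyle\min_j x_j,\ \max_j x_j\big] \\
\text{Agreement:}   \quad & |y_i - y_j| \le \epsilon \quad \text{for all deciding } p_i, p_j \\
\text{Termination:} \quad & \text{every correct process eventually decides.}
\end{align*}
```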
1983
With the rapid decrease in the cost of hardware, distributed computing is finding wider application. The parallelism inherent in distributed processing makes it much more difficult to design reliable systems. Many software development techniques, such as hierarchical design and exhaustive testing, used for large sequential programs are no longer adequate because of the high degree of nondeterminism present in parallelism. This thesis addresses the two aspects of correctness and performance in the design of distributed and concurrent systems. In chapters 2 through 5 we consider different temporal logics and their extensions as formal systems for reasoning about concurrent programs. In chapter 2 we investigate the complexity of decision procedures for different versions of Propositional Linear Temporal Logic (PTL). We present a space-efficient decision procedure for the full logic. We also present optimal decision procedures for other restricted versions of this logic. We investigate the...
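As a hedged illustration of the kind of property a PTL decision procedure handles (the formula below is a textbook example, not one taken from the thesis), a typical response requirement reads:

```latex
% "Every request is eventually followed by a grant":
\Box\,(\mathit{request} \rightarrow \Diamond\,\mathit{grant})
```

Deciding the satisfiability of such formulas is the problem whose complexity the decision procedures above address.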
Proceedings of the fifth annual ACM symposium on Principles of distributed computing - PODC '86, 1986
The importance of the notion of knowledge in reasoning about distributed systems has been recently pointed out by several works. It has been argued that a distributed computation can be understood and analyzed by considering how it affects the state of knowledge of the system. We show that there are a variety of definitions which can reasonably be applied to what a process can know about the global state. We also move beyond the semantic definitions, and present the first proof methods for proving knowledge assertions. Both shared memory and message passing models are considered.
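One classical possible-worlds reading of "process i knows φ", stated here only as a baseline and not necessarily any of the exact definitions compared in the paper, is that φ holds at every point process i cannot distinguish from the current one:

```latex
% r_i(t) denotes process i's local state at time t in run r of system R.
(R, r, t) \models K_i \varphi
  \;\iff\;
  \forall (r', t'):\ r'_i(t') = r_i(t) \;\Rightarrow\; (R, r', t') \models \varphi
```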
Proceedings of the twentieth annual ACM symposium on Principles of distributed computing, 2001
We study wait-free computation using (read/write) shared memory under a range of assumptions on the arrival pattern of processes. We distinguish first between bounded and infinite arrival patterns, and further distinguish these models by restricting the number of arrivals minus departures, the concurrency. Under the condition that no process takes infinitely many steps without terminating, for any finite bound k > 0, we show that bounding concurrency reveals a strict hierarchy of computational models: a model in which concurrency is bounded by k + 1 is strictly weaker than the model in which concurrency is bounded by k, for all k > 1. A model in which concurrency is bounded in each run, but for which no bound holds over all runs, is shown to be weaker than a k-bounded model for any k. The unbounded model is shown to be weaker still: in this model, finite prefixes of runs have bounded concurrency, but runs are admitted for which no finite bound holds over all prefixes. Hence, as the concurrency grows, the set of solvable problems strictly shrinks. Nevertheless, on the positive side, we demonstrate that many interesting problems (collect, snapshot, renaming) are solvable even in the infinite-arrival, unbounded-concurrency model. This investigation illuminates relations between notions of wait-free solvability distinguished by arrival pattern, and notions of adaptive, one-shot, and long-lived solvability.
1979
A simple model of concurrent computations is presented in which disjoint instructions (processes) of a program are executed concurrently by processors (in a sufficiently large number) under a shared-memory environment. The semantics of such a program specifies the tree of configuration sequences which are acceptable as possible computations of the program. We do not agree with the existing literature (e.g. [2]) that every way of sharing one processor among processes can be conceived of as concurrency. We claim that the other meaning of concurrency can be defined as well, and the difference between these two meanings turns out to be essential. We do not assume that each configuration is obtained from its predecessor in the computation by exactly one processor performing an atomic step (assignment or test) in a process. On the contrary, we assume that a processor cannot be delayed during its activities. The length of a step is indefinite; it need only be finite. This reflects the various speeds of processors. Hence, for a configuration in which several processors are able to start the execution of their subsequent steps, a maximal number of atomic steps will be started, the choice being nondeterministic. We discuss semantical phenomena of concurrent computations and argue that they can be expressed in the language of an algorithmic logic. The problem of complete axiomatization of the latter remains open. A comparison with another model of concurrency, Petri nets, is given and is, we hope, interesting, for our approach offers a structured (algebraic) restriction of the language of nets and new variants of semantics. From the results obtained in the theory of vector addition systems we learn an important property of concurrent computations: there is no faithful one-processor simulation of them.
Journal of the ACM, 1995
Springer eBooks, 1997
We study systems for which a maximal number of concurrently executing (time-consuming) actions is statically fixed. Two possible mechanisms of execution of actions are studied: either the executions can be interrupted or they cannot. In the former case we show that equivalent processes remain equivalent when the number of resources (processors) they have at their disposal is changed. In the latter case, equivalences parameterized by the number of resources create a strictly decreasing hierarchy.
Communicating Sequential Processes (CSP) is a paradigm for communication and synchronization among distributed processes. The alternative construct is a key feature of CSP that allows nondeterministic selection of one among several possible communicants. A generalized version of Hoare's original alternative construct that allows output commands to be included in guards has been proposed. Previous algorithms for this construct assume a message-passing architecture and are not appropriate for multiprocessor systems that feature shared memory. This paper describes a distributed algorithm for the generalized alternative construct that exploits the capabilities of a parallel computer with shared memory. A correctness proof of the proposed algorithm is presented to show that the algorithm conforms to certain safety and liveness criteria. Extensions to allow termination of processes and to ensure fairness in guard selection are also given.
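Go's select statement, itself a descendant of CSP's alternative construct, gives a compact illustration of the nondeterministic choice among guarded communications described above, including both input and output guards. This is only an analogy, not the shared-memory algorithm proposed in the paper.

```go
// CSP-style alternative construct illustrated with Go's select: whichever
// guarded communication is ready is chosen; if several are ready, one is
// picked nondeterministically. Analogy only; not the paper's algorithm.
package main

import "fmt"

func main() {
	in := make(chan int, 1)
	out := make(chan int, 1)
	in <- 42 // make the receive ("input") guard ready

	select {
	case v := <-in: // input guard
		fmt.Println("received", v)
	case out <- 7: // output guard, allowed in the generalized construct
		fmt.Println("sent 7")
	}
}
```

Here both guards are ready, so either branch may be taken on any given run, mirroring the nondeterministic selection of a communicant.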