2009, The Journal of Object Technology
In this paper, we define a novel "write-key" operational semantics for a kernel language with fork-join parallelism, synchronization and "volatile" fields. We prove that programs that never suffer write-key errors are exactly those that are "data race free" and also those that are "correctly synchronized" in the Java memory model. This 3-way equivalence is proved using Twelf.
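A minimal Java sketch of the distinction at stake (illustrative, not taken from the paper): the plain field below is accessed racily, while making the field volatile turns the same accesses into synchronization actions, so the program becomes correctly synchronized.

    class WriteKeyExample {
        int plain = 0;            // racy: concurrent unsynchronized access
        volatile int guarded = 0; // volatile: accesses are synchronization actions

        void racyWriter() { plain = 1; }      // forms a data race with racyReader
        int  racyReader() { return plain; }
        void safeWriter() { guarded = 1; }    // no race: volatile write
        int  safeReader() { return guarded; } // no race: volatile read
    }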
Electronic Proceedings in Theoretical Computer Science, 2012
We propose a novel, operational framework to formally describe the semantics of concurrent programs running within the context of a relaxed memory model. Our framework features a "temporary store" where the memory operations issued by the threads are recorded, in program order. A memory model then specifies the conditions under which a pending operation from this sequence is allowed to be globally performed, possibly out of order. The memory model also involves a "write grain," accounting for architectures where a thread may read a write that is not yet globally visible. Our formal model is supported by a software simulator, allowing us to run litmus tests in our semantics.
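A store-buffering litmus test is the kind of program such a simulator runs; this Java rendering is illustrative, with the class and field names invented here.

    class StoreBuffering {
        static int x = 0, y = 0;
        static int r1, r2;

        public static void main(String[] args) throws InterruptedException {
            Thread t1 = new Thread(() -> { x = 1; r1 = y; });
            Thread t2 = new Thread(() -> { y = 1; r2 = x; });
            t1.start(); t2.start();
            t1.join();  t2.join();
            // Under sequential consistency, r1 == 0 && r2 == 0 is impossible;
            // a model that buffers pending writes in a temporary store can
            // globally perform them out of order and produce that outcome.
            System.out.println("r1=" + r1 + " r2=" + r2);
        }
    }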
Lecture Notes in Computer Science, 2000
We propose a high-level language based on first-order logic for expressing synchronization in concurrent object-oriented programs. The language allows the programmer to declaratively state the system safety properties as temporal constraints on specific program points of interest. Higher-level synchronization constraints on methods in a class may be defined using these temporal constraints. The constraints are enforced by the run-time environment. We illustrate our language by expressing synchronization of Java programs. However, the general underlying synchronization model we present is language-independent in that it allows the programmer to glue together separate concurrent threads regardless of their implementation language and application code.
Proceedings of the 2001 joint ACM-ISCOPE conference on Java Grande - JGI '01, 2001
Java has integrated multithreading to a far greater extent than most programming languages. It is also one of the few languages that specify and require safety guarantees for improperly synchronized programs. It turns out that understanding these issues is far more subtle and difficult than was previously thought. The existing specification makes guarantees that prohibit standard and proposed compiler optimizations; it also omits guarantees that are necessary for safe execution of much existing code. Some guarantees that are made (e.g., type safety) raise tricky implementation issues when running unsynchronized code on SMPs with weak memory models. This paper reviews those issues. It proposes a new core semantics for Java that allows for aggressive compiler optimization and addresses the safety and multithreading issues. Due to space constraints, certain side issues are addressed only in the full version of the semantics [8].
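One of the idioms at issue is unsafe publication, sketched below in Java (illustrative code, not from the paper; the final-field behavior shown is a guarantee of the kind the revised semantics pins down).

    class UnsafePublication {
        static class Point {
            final int x; int y;
            Point(int x, int y) { this.x = x; this.y = y; }
        }

        static Point shared; // racy publication: no lock, no volatile

        static void publisher() { shared = new Point(1, 2); }

        static void consumer() {
            Point p = shared;   // may see null or a Point reference
            if (p != null) {
                int a = p.x;    // final field: reliably 1 under the revised model
                int b = p.y;    // racy plain field: may still read the default 0
                System.out.println(a + "," + b);
            }
        }
    }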
2016
In this work, we present a family of operational semantics that gradually approximates the realistic program behaviors in the C/C++11 memory model. Each semantics in our framework is built by elaborating and combining two simple ingredients: viewfronts and operation buffers. Viewfronts allow us to express the spatial aspect of thread interaction, i.e., which values a thread can read, while operation buffers enable manipulation of the temporal execution aspect, i.e., determining the order in which the results of certain operations can be observed by concurrently running threads. Starting from a simple abstract state machine, through a series of gradual refinements of the abstract state, we capture such language aspects and synchronization primitives as release/acquire atomics, sequentially consistent and non-atomic memory accesses, also providing a semantics for relaxed atomics, while avoiding the Out-of-Thin-Air problem. To the best of our knowledge, this is the first formal and e...
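The release/acquire fragment such semantics model can be exercised from Java via VarHandle access modes; the message-passing test below is a standard litmus shape, not code from the paper.

    import java.lang.invoke.MethodHandles;
    import java.lang.invoke.VarHandle;

    class MessagePassing {
        static int data = 0;
        static int flag = 0;
        static final VarHandle FLAG;
        static {
            try {
                FLAG = MethodHandles.lookup()
                        .findStaticVarHandle(MessagePassing.class, "flag", int.class);
            } catch (ReflectiveOperationException e) {
                throw new ExceptionInInitializerError(e);
            }
        }

        static void producer() {
            data = 42;          // plain (non-atomic) write
            FLAG.setRelease(1); // release store publishes the write to data
        }

        static void consumer() {
            if ((int) FLAG.getAcquire() == 1) { // acquire load synchronizes
                System.out.println(data);       // must observe 42
            }
        }
    }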
29th Annual IEEE/NASA Software Engineering Workshop, 2005
The safe and reliable use of concurrency in multi-threaded systems has emerged as a fundamental engineering concern. We recently developed a model of synchronization contracts to address this concern in programs written in object-oriented languages. Programs written using our model comprise modules that declare access requirements in module interfaces in lieu of using low-level synchronization primitives in module implementations. At run time, these contracts are negotiated to derive schedules that guarantee freedom from data races while avoiding a large class of deadlock situations.
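The abstract does not show the contract notation, so the annotation-based Java sketch below is hypothetical: a module interface declares which shared data each method reads or writes, and a negotiating runtime would schedule callers so that conflicting requirements never overlap.

    @interface Requires {
        String[] reads()  default {};
        String[] writes() default {};
    }

    interface Account {
        @Requires(reads  = {"balance"})
        int balance();            // shared read access negotiated at run time

        @Requires(writes = {"balance"})
        void deposit(int amount); // exclusive write access negotiated at run time
    }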
2011
In this paper we show how to statically detect memory violations and data races in a concurrent language, using a substructural type system based on linear capabilities. However, in contrast to many similar type-based approaches, our capabilities are not only linear, providing full but unshareable access to a memory location; they can also be read-only, thread-exclusive, and unrestricted, each providing restricted access to memory but extended shareability in the program source. Our language features two new operators, let! and lock, which convert between the various types of capabilities.
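The paper's language is not Java, so the enum below merely tabulates the four capability kinds named in the abstract, with invented identifiers.

    enum Capability {
        LINEAR,           // full read/write access to a location, unshareable
        READ_ONLY,        // shareable, but grants reads only
        THREAD_EXCLUSIVE, // access confined to a single thread
        UNRESTRICTED      // freely shareable, with restricted access
    }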
2003
Effort sponsored in part through the High Dependability Computing Program from NASA Ames cooperative agreement NCC-2-1298, in part by the Defense Advanced Research Projects Agency (DARPA) and Air Force Research Laboratory (AFRL), Air Force Materiel Command, USAF, under agreement number F30602-99-2-0522, and in part through the Carnegie Mellon School of Computer Science Siebel Scholars Program.
Lecture Notes in Computer Science, 2018
Synchronous Programming (SP) is a universal computational principle that provides deterministic concurrency. The same input sequence with the same timing always results in the same externally observable output sequence, even if the internal behaviour generates uncertainty in the scheduling of concurrent memory accesses. Consequently, SP languages have always been strongly founded on mathematical semantics that support formal program analysis. So far, however, communication has been constrained to a set of primitive clock-synchronised shared memory (csm) data types, such as data-flow registers, streams and signals with restricted read and write accesses that limit modularity and behavioural abstractions. This paper proposes an extension to the SP theory which retains the advantages of deterministic concurrency, but allows communication to occur at higher levels of abstraction than currently supported by SP data types. Our approach is as follows. To avoid data races, each csm type publishes a policy interface for specifying the admissibility and precedence of its access methods. Each instance of the csm type has to be policy-coherent, meaning it must behave deterministically under its own policy: a natural requirement if the goal is to build deterministic systems that use these types. In a policy-constructive system, all access methods can be scheduled in a policy-conformant way for all the types without deadlocking. In this paper, we show that a policy-constructive program exhibits deterministic concurrency in the sense that all policy-conformant interleavings produce the same input-output behaviour. Policies are conservative and support the csm types existing in current SP languages. Technically, we introduce a kernel SP language that uses arbitrary policy-driven csm types. A big-step fixed-point semantics for this language is developed for which we prove determinism and termination of constructive programs.
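A hedged Java rendering of the policy idea (the paper works in a synchronous-programming kernel language; all names below are invented): a csm type publishes which pairs of access methods may run concurrently within a tick and which must precede which.

    interface Policy {
        boolean admissible(String method, String other); // may both occur this tick?
        boolean precedes(String method, String other);   // scheduling precedence
    }

    // A data-flow-register-like csm type: writes precede reads within a tick,
    // and concurrent writes are inadmissible, ruling out write/write races.
    class RegisterPolicy implements Policy {
        public boolean admissible(String m, String o) {
            return !(m.equals("write") && o.equals("write"));
        }
        public boolean precedes(String m, String o) {
            return m.equals("write") && o.equals("read");
        }
    }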
ACM Transactions on Programming Languages and Systems, 1990
A concurrent object is a data object shared by concurrent processes. Linearizability is a correctness condition for concurrent objects that exploits the semantics of abstract data types. It permits a high degree of concurrency, yet it permits programmers to specify and reason about concurrent objects using known techniques from the sequential domain. Linearizability provides the illusion that each operation applied by concurrent processes takes effect instantaneously at some point between its invocation and its response, implying that the meaning of a concurrent object's operations can be given by pre- and post-conditions. This paper defines linearizability, compares it to other correctness conditions, presents and demonstrates a method for proving the correctness of implementations, and shows how to reason about concurrent objects, given they are linearizable.
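A standard Java example of the condition (not from the paper): java.util.concurrent atomics are linearizable, so each operation appears to take effect at a single instant between its invocation and its response, and sequential reasoning carries over.

    import java.util.concurrent.atomic.AtomicInteger;

    class LinearizableCounter {
        private final AtomicInteger count = new AtomicInteger();

        // Each increment returns a value strictly greater than that returned
        // by any increment that completed before this one began, exactly as
        // the sequential specification of a counter demands.
        int increment() { return count.incrementAndGet(); }
        int get()       { return count.get(); }
    }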
Information Processing Letters, 1999
ACM Transactions on Computer Systems, 2000
The Java Language Specification (JLS) provides an operational definition for the consistency of shared variables. This definition remains unchanged in the JLS 2nd edition, currently under peer review. The definition, which relies on a specific abstract machine as its underlying model, is very complicated. Several subsequent works have tried to simplify and formalize it. However, these revised definitions are also operational, and thus have failed to highlight the intuition behind the original specification. In this work we provide a complete nonoperational specification for Java and for the JVM, excluding synchronized operations. We provide a simpler definition, in which we clearly distinguish the consistency model that is promised to the programmer from that which should be implemented in the JVM. This distinction, which was implicit in the original definition, is crucial for building the JVM. We find that the programmer model is strictly weaker than that of the JVM, and precisely define their discrepancy. Moreover, our definition is independent of any specific (or even abstract) machine, and can thus be used to verify JVM implementations and compiler optimizations on any platform. Finally, we show the precise range of consistency relaxations obtainable for the Java memory model when a certain compiler optimization, called prescient stores in the JLS, is applicable.
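An illustrative reordering in the spirit of prescient stores (simplified; the JLS definition is more involved): a store may be performed earlier than its program-order position when the stored value does not depend on the intervening read.

    class PrescientStore {
        static int x = 0, y = 0;

        static void original() {
            int r = x; // this read does not feed the store below
            y = 1;     // the stored value is a constant
        }

        static void transformed() {
            y = 1;     // store performed "presciently", before the read
            int r = x;
        }
        // The paper characterizes exactly which consistency relaxations
        // such a transformation induces in the Java memory model.
    }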
Proceedings of the 20th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, 2015
Making threaded programs safe and easy to reason about is one of the chief difficulties in modern programming. This work provides an efficient execution model for SCOOP, a concurrency approach that provides not only data-race freedom but also pre/postcondition reasoning guarantees between threads. The extensions we propose influence the underlying semantics to increase the amount of concurrent execution that is possible, exclude certain classes of deadlocks, and enable greater performance. These extensions are used as the basis of an efficient runtime and optimization pass that improve performance by 15× over a baseline implementation. This new implementation of SCOOP is, on average, also 2× faster than other well-known safe concurrent languages. The measurements are based on both coordination-intensive and data-manipulation-intensive benchmarks designed to offer a mixture of workloads.
Static Analysis, 2017
Data race free (DRF) programs constitute an important class of concurrent programs. In this paper we provide a framework for designing and proving the correctness of data flow analyses that target this class of programs, and which are in the same spirit as the "sync-CFG" analysis originally proposed in [9]. To achieve this, we first propose a novel concrete semantics for DRF programs called L-DRF that is thread-local in nature, with each thread operating on its own copy of the data state. We show that abstractions of our semantics allow us to reduce the analysis of DRF programs to a sequential analysis. This aids in rapidly porting existing sequential analyses to scalable analyses for DRF programs. Next, we parameterize the semantics with a partitioning of the program variables into "regions" which are accessed atomically. Abstractions of the region-parameterized semantics yield more precise analyses for region-race free concurrent programs. We instantiate these abstractions to devise efficient relational analyses for race free programs, which we have implemented in a prototype tool called RATCOP. On the benchmarks, RATCOP was able to prove up to 65% of the assertions, in comparison to 25% proved by a version of the analysis from [9].
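An illustrative DRF program of the kind such analyses target (names invented here): both fields belong to a single lock-protected region, so a relational abstraction can carry the invariant low <= high across threads.

    class Range {
        private int low = 0, high = 0; // one region, guarded by this object's lock

        synchronized void widen(int d) {
            if (d > 0) high += d;      // preserves the invariant low <= high
        }

        synchronized void check() {
            assert low <= high;        // provable by a race-free relational analysis
        }
    }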
Electronic Proceedings in Theoretical Computer Science, 2014
In the past decades, many different programming models for managing concurrency in applications have been proposed, such as the actor model, Communicating Sequential Processes, and Software Transactional Memory. The ubiquity of multi-core processors has made harnessing concurrency even more important. We observe that modern languages, such as Scala, Clojure, or F#, provide not one, but multiple concurrency models that help developers manage concurrency. Large end-user applications are rarely built using just a single concurrency model. Programmers need to manage a responsive UI, deal with file or network I/O, asynchronous workflows, and shared resources. Different concurrency models facilitate different requirements. This raises the issue of how these concurrency models interact, and whether they are composable. After all, combining different concurrency models may lead to subtle bugs or inconsistencies.
We show how to use a wide class of Structural Operational Semantics specifications for Truly Concurrent Semantics, whereas the classical approach is based on interleaving models. The models we consider are Asynchronous Transition Systems, and their behaviors are defined by means of a classical truly concurrent bisimulation: ST-bisimulation. However, the definition of ST-bisimulation over Asynchronous Transition Systems is new and makes this equivalence better understood. Moreover, we explain how our result holds for many other truly concurrent bisimulations of the literature.
Lecture Notes in Computer Science, 2008
Transactional memory (TM) has recently emerged as an effective tool for extracting fine-grain parallelism from declarative critical sections. In order to make STM systems practical, significant effort has been made to integrate transactions into existing programming languages. Unfortunately, existing approaches fail to provide a simple implementation that permits lock-based and transaction-based abstractions to coexist seamlessly. Because of the fundamental semantic differences between locks and transactions, legacy applications or libraries written using locks cannot be transparently used within atomic regions. To address these shortcomings, we implement a uniform transactional execution environment for Java programs in which transactions can be integrated with more traditional concurrency control constructs. Programmers can run arbitrary programs that utilize traditional mutual-exclusion-based programming techniques, execute new programs written with explicit transactional constructs, and freely combine abstractions that use both coding styles.
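Standard Java has no transactional construct, so Transaction.run below is a hypothetical stand-in for an explicit transactional construct of the kind the paper adds; the point is that lock-based legacy code is reused unchanged inside a transactional region.

    class UniformExecution {
        static final Object LOCK = new Object();
        static int balance = 0;

        static void legacyDeposit(int n) {   // lock-based legacy code
            synchronized (LOCK) { balance += n; }
        }

        static void transactionalDeposit(int n) {
            Transaction.run(() -> {          // hypothetical transactional region
                legacyDeposit(n);            // lock-based abstraction reused inside
            });
        }
    }

    class Transaction {                      // placeholder for a TM runtime
        static void run(Runnable body) { body.run(); }
    }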
2010
The most intuitive memory model for shared-memory multithreaded programming is sequential consistency (SC), but it disallows the use of many compiler and hardware optimizations, thereby impacting performance. Data-race-free (DRF) models, such as the proposed C++0x memory model, guarantee SC execution for data-race-free programs. But these models provide no guarantee at all for racy programs, compromising the safety and debuggability of such programs. To address the safety issue, the Java memory model, which is also based on the DRF model, provides a weak semantics for racy executions. However, this semantics is subtle and complex, making it difficult for programmers to reason about their programs and for compiler writers to ensure the correctness of compiler optimizations.
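A standard load-buffering shape (illustrative, not from this paper's text): under SC the outcome r1 == 1 and r2 == 1 is impossible, yet reordering each thread's independent read and write permits it. DRF models leave such racy programs entirely unconstrained, which is precisely the safety gap the Java model tries to close.

    class LoadBuffering {
        static int x = 0, y = 0;

        static int thread1() { int r1 = x; y = 1; return r1; }
        static int thread2() { int r2 = y; x = 1; return r2; }
        // SC forbids both reads returning 1; a model that reorders each
        // thread's read past its write would allow it.
    }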