1993
Building the hardware for a high-performance distributed computer system is a lot easier than building its software. In this paper we describe a model for programming distributed systems based on abstract data types that can be replicated on all machines that need them. Read operations are done locally, without requiring network traffic. Writes can be done using a reliable broadcast algorithm if the hardware supports broadcasting; otherwise, a point-to-point protocol is used. We have built such a system based on the Amoeba microkernel, and implemented a language, Orca, on top of it. For Orca applications that have a high ratio of reads to writes, we have measured good speedups on a system with 16 processors.
Computer, 2000
Parallel computers come in two varieties: those with shared memory and those without. The former are hard to build; the latter are hard to program. In this paper we propose a hybrid form that combines the best properties of each. The basic idea is to allow programmers to define objects upon which user-defined operations are performed, in effect, abstract data types. Each object is replicated on each machine that needs it. Reads are done locally, with no network traffic. Writes are done by a reliable broadcast algorithm. A language for parallel programming, Orca, based on distributed shared objects has been designed, implemented, and used for some applications. Its implementation uses the reliable broadcast mechanism. For applications with a high read/write ratio to the shared objects, we show that our approach can frequently achieve close to linear speedup with up to 16 processors.
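The read-local/write-broadcast split described above can be illustrated with a small sketch. The following Java program is illustrative only, not the Orca implementation, and all names are invented: a shared counter is replicated across 16 "processors", reads hit the local copy, and a write is applied to every replica under a single lock that stands in for a reliable, totally ordered broadcast.

    // Sketch only: simulates replicated shared objects inside one process.
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.atomic.AtomicInteger;

    class ReplicatedCounter {
        // One replica per processor; writes reach every replica in the same
        // order. The lock models a reliable, totally ordered broadcast.
        private final List<AtomicInteger> replicas = new ArrayList<>();
        private final Object broadcastOrder = new Object();

        ReplicatedCounter(int processors) {
            for (int i = 0; i < processors; i++) replicas.add(new AtomicInteger());
        }

        // Read: served from the local copy, no "network traffic".
        int read(int processor) {
            return replicas.get(processor).get();
        }

        // Write: "broadcast" the update to all replicas in one total order.
        void increment() {
            synchronized (broadcastOrder) {
                for (AtomicInteger r : replicas) r.incrementAndGet();
            }
        }
    }

    public class ReplicatedReadDemo {
        public static void main(String[] args) {
            ReplicatedCounter c = new ReplicatedCounter(16);
            c.increment();
            c.increment();
            System.out.println(c.read(0) + " == " + c.read(15)); // 2 == 2
        }
    }

Because a read touches only local state, its cost is independent of the number of processors; only the (rare) writes pay for the broadcast, which is why a high read/write ratio is the favourable case.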
Computer Networks, Architecture and Applications, 1995
Computation-intensive programs can utilise idle workstations in a cluster by exploiting the parallelism inherent in the problem being solved. A programming language for distributed computing offers advantages such as early detection of type mismatches in communication, structured mechanisms to specify possible overlap between communication and computation, and exception handling for catching run-time errors. The success of a language depends on its ease of use, its expressiveness, and the efficient implementation of its constructs. EC is a superset of C supporting process creation, a message-passing mechanism, and exception handling. The pipelined communication constructs and multiple process instances help in expressing concurrency between computation and communication. Data-driven activation of EC processes is used for scheduling. EC has been implemented on a Sun-3 workstation cluster. An inter-node message-passing mechanism has been built on top of the socket interface using the TCP protocol, and intra-node message passing is done by passing pointers to improve efficiency. However, message_type variables hide the implementation details, improving the type safety and location transparency of a program.
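As a rough analogue of EC's typed message variables, the sketch below (in Java rather than EC's C superset; the interface and class names are hypothetical) hides the transport behind a typed channel: the intra-node implementation passes references through an in-memory queue, and an inter-node variant would serialize messages over TCP sockets behind the same interface.

    // Sketch: a typed channel that hides whether delivery is local or remote.
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    interface Channel<T> {
        void send(T msg) throws InterruptedException;
        T receive() throws InterruptedException;
    }

    // Intra-node case: messages are passed by reference, no copying or I/O.
    // An inter-node implementation of the same interface would serialize
    // over a TCP socket, invisibly to the caller.
    class LocalChannel<T> implements Channel<T> {
        private final BlockingQueue<T> queue = new LinkedBlockingQueue<>();
        public void send(T msg) throws InterruptedException { queue.put(msg); }
        public T receive() throws InterruptedException { return queue.take(); }
    }

    public class TypedChannelDemo {
        public static void main(String[] args) throws Exception {
            Channel<String> ch = new LocalChannel<>();
            Thread producer = new Thread(() -> {
                try { ch.send("work-item-1"); } catch (InterruptedException ignored) {}
            });
            producer.start();
            // A type mismatch here would be caught at compile time,
            // which is the point of typed message variables.
            System.out.println(ch.receive());
            producer.join();
        }
    }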
IEEE Transactions on Software Engineering, 1992
Orca is a language for implementing parallel applications on loosely coupled distributed systems. Unlike most languages for distributed programming, it allows processes on different machines to share data. Such data are encapsulated in data-objects, which are instances of user-defined abstract data types. The implementation of Orca takes care of the physical distribution of objects among the local memories of the processors. In particular, an implementation may replicate and/or migrate objects in order to decrease access times to objects and increase parallelism. This paper gives a detailed description of the Orca language design and motivates the design choices. Orca is intended for applications programmers rather than systems programmers. This is reflected in its design goals to provide a simple, easy to use language that is type-secure and provides clean semantics. The paper discusses three example parallel applications in Orca, one of which is described in detail. It also describes one of the existing implementations, which is based on reliable broadcasting. Performance measurements of this system are given for three parallel applications. The measurements show that significant speedups can be obtained for all three applications. Finally, the paper compares Orca with several related languages and systems.
Computing and Informatics / Computers and Artificial Intelligence, 1998
NUTS, an extensible object-oriented platform for distributed computing, is described. It is based on the object-oriented programming environment NUT, is built on top of the Parallel Virtual Machine (PVM), and hides all low-level features of the latter. The language of NUTS is a concurrent object-oriented programming language with coarse-grained parallelism and a distributed shared memory communication model implemented on ...
Proceedings. 1990 International Conference on Computer Languages, 1990
Orca is a language for programming parallel applications on distributed computing systems. Although processors in such systems communicate only through message passing and not through shared memory, Orca provides a communication model based on logically shared data. Programmers can define abstract data types and create instances (objects) of these types, which may be shared among processes. All operations on shared objects are executed atomically. Orca's shared objects are implemented by replicating them in the local memories of the processors. Read operations use the local copies of the object, without doing any interprocess communication. Write operations update all copies using an efficient reliable broadcast protocol. In this paper, we briefly describe the language and its implementation and then report on our experiences in using Orca for three parallel applications: the Traveling Salesman Problem, the All-pairs Shortest Paths problem, and Successive Overrelaxation. These applications have different needs for shared data: TSP greatly benefits from the support for shared data; ASP benefits from the use of broadcast communication, even though it is hidden in the implementation; SOR merely requires point-to-point communication, but still can be implemented in the language by simulating message passing. We discuss how these applications are programmed in Orca and we give the most interesting portions of the Orca code. Also, we include performance measurements for these programs on a distributed system consisting of 10 MC68020s connected by an Ethernet. These measurements show that significant speedups are obtained for all three programs. * We will sometimes use the term "object" as a shorthand notation for data-objects. Note, however, that unlike in most parallel object-based systems, objects in our model are purely passive.
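To illustrate why TSP benefits from shared data, here is a minimal Java sketch (invented names, not the paper's Orca code) of a shared global bound: workers read it frequently to prune search branches and update it rarely, and the synchronized method plays the role of a single atomic Orca operation on a shared data-object.

    // Sketch: a TSP-style shared bound with frequent reads, rare writes.
    class SharedBound {
        private int best = Integer.MAX_VALUE;

        // Atomic read-modify-write, like one Orca operation.
        synchronized boolean update(int candidate) {
            if (candidate < best) { best = candidate; return true; }
            return false;
        }

        synchronized int value() { return best; }
    }

    public class TspSketch {
        public static void main(String[] args) throws Exception {
            SharedBound bound = new SharedBound();
            Runnable worker = () -> {
                // Stand-in for branch-and-bound: tour lengths are made up.
                for (int tour = 100; tour > 40; tour -= 7) {
                    if (tour >= bound.value()) continue; // prune: cheap read
                    bound.update(tour);                  // rare write
                }
            };
            Thread a = new Thread(worker), b = new Thread(worker);
            a.start(); b.start(); a.join(); b.join();
            System.out.println("best = " + bound.value());
        }
    }

In a replicated implementation the pruning read costs nothing on the network, which is exactly the access pattern the paper reports speedups for.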
1996
Orca is an object-based distributed shared memory system that is designed for writing portable and efficient parallel programs. Orca hides the communication substrate from the programmer by providing an abstract communication model based on shared objects. Mutual exclusion and condition synchronization are cleanly integrated in the model. Orca has been implemented using a layered system, consisting of a compiler, a runtime system, and a virtual machine (Panda). To implement shared objects efficiently on a distributed-memory machine, the Orca compiler generates regular expressions describing how shared objects are accessed. The runtime system uses this information together with runtime statistics to decide which objects to replicate and where to store nonreplicated objects. The Orca system has been implemented on a range of platforms (including Solaris, Amoeba, Parix, and the CM-5). Measurements of several benchmarks and applications across four platforms show that the new Orca system achieves portability with good performance. In addition, the measurements show that performance of the new system is as good as the previous implementation that was specialized for Amoeba.
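The kind of placement decision described above can be caricatured in a few lines. In the Java sketch below, the threshold, field names, and decision rule are invented for illustration and are not taken from the Orca runtime; it merely shows how compiler-derived access summaries plus runtime counters could drive a replicate-or-not choice.

    // Sketch: choose object placement from observed access statistics.
    class ObjectStats {
        long reads, writes;       // runtime counters (hypothetical)
        int heaviestWriterCpu;    // from compiler/runtime info (hypothetical)
    }

    public class PlacementPolicy {
        // Replicate when reads dominate: local reads then cost nothing and
        // only the occasional write pays for the update broadcast.
        static String decide(ObjectStats s) {
            double ratio = s.writes == 0 ? Double.POSITIVE_INFINITY
                                         : (double) s.reads / s.writes;
            return ratio >= 10.0                      // invented threshold
                 ? "replicate on all CPUs"
                 : "single copy on CPU " + s.heaviestWriterCpu;
        }

        public static void main(String[] args) {
            ObjectStats bound = new ObjectStats();
            bound.reads = 100_000; bound.writes = 120; bound.heaviestWriterCpu = 3;
            System.out.println(decide(bound)); // replicate on all CPUs
        }
    }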
Computer Languages, 1991
Until recently, at least one thing was clear about parallel programming: tightly coupled (shared memory) machines were programmed in a language based on shared variables and loosely coupled (distributed) systems were programmed using message passing. The explosive growth of research on distributed systems and their languages, however, has led to several new methodologies that blur this simple distinction. Operating system primitives (e.g., problem-oriented shared memory, Shared Virtual Memory, the Agora shared memory) and languages (e.g., Concurrent Prolog, Linda, Emerald) for programming distributed systems have been proposed that support the shared variable paradigm without the presence of physical shared memory. In this paper we will look at the reasons for this evolution, the resemblances and differences among these new proposals, and the key issues in their design and implementation. It turns out that many implementations are based on replication of data. We take this idea one step further, and discuss how automatic replication (initiated by the run time system) can be used as a basis for a new model, called the shared data-object model, whose semantics are similar to the shared variable model. Finally, we discuss the design of a new language for distributed programming, Orca, based on the shared data-object model.
ACM SIGOPS Operating Systems Review, 1981
As hardware prices continue to drop rapidly, building large computer systems by interconnecting substantial numbers of microcomputers becomes increasingly attractive. Many techniques for interconnecting the hardware, such as Ethernet [Metcalfe and Boggs, 1976], ring nets [Farber and Larson, 1972], packet switching, and shared memory are well understood, but the corresponding software techniques are poorly understood. The design of general purpose distributed operating systems is one of the key research issues for the 1980s.
Communications of the ACM, 1979
Programming for distributed and other loosely coupled systems is a problem of growing interest. This paper describes an approach to distributed computing at the level of general purpose programming languages. Based on primitive notions of module, message, and transaction key, the methodology is shown to be independent of particular languages and machines. It appears to be useful for programming a wide range of tasks. This is part of an ambitious program of development in advanced programming languages, and relations with other aspects of the project are also discussed.
This paper describes the architecture and implementation of MIKE, a version of the IK distributed persistent object-oriented programming platform built on top of the Mach microkernel. MIKE's primary goal is to offer a single object-oriented programming paradigm for writing distributed applications. In MIKE an application programmer can use C++ almost as he would in a non-distributed system. The platform supports fine-grained objects which can be invoked in a location-transparent way and whose references can be exchanged freely as invocation parameters. These objects are potentially persistent. MIKE supports the abstraction of a one-level store: persistent objects are transparently loaded on demand when first invoked and saved to disk when the application terminates. Class objects are special persistent objects which are dynamically linked when needed. The platform also offers distributed garbage collection of non-persistent objects. This paper discusses how MIKE makes use of Mach's features to offer the functionality described above and the techniques used to achieve good performance. MIKE is compared with the Unix versions of IK to evaluate the benefits of using Mach abstractions.
1992
This development raises the question of what kind of software will be needed for these new systems. To answer this question, a group under the direction of Prof. Andrew S. Tanenbaum at the Vrije Universiteit (VU) in Amsterdam (The Netherlands) has been doing research since 1980 in the area of distributed computer systems. This research, partly done in cooperation with the Centrum voor Wiskunde en Informatica (CWI), has resulted in the development of a new distributed operating system, called Amoeba, designed for an environment consisting of a large number of computers.
ACM Computing Surveys, 1989
When distributed systems first appeared, they were programmed in traditional sequential languages, usually with the addition of a few library procedures for sending and receiving messages. As distributed applications became more commonplace and more sophisticated, this ad hoc approach became less satisfactory. Researchers all over the world began designing new programming languages specifically for implementing distributed applications. These languages and their history, their underlying principles, their design, and their use are the subject of this paper. We begin by giving our view of what a distributed system is, illustrating with examples to avoid confusion on this important and controversial point. We then describe the three main characteristics that distinguish distributed programming languages from traditional sequential languages, namely, how they deal with parallelism, communication, and partial failures. Finally, we discuss 15 representative distributed languages to give ...
Proceedings of the 2nd Annual ASCI Conference, 1996
Abstract. Current paradigms for interprocess communication are not sufficient to describe the exchange of information at an adequate level of abstraction. They are either too low-level, or their implementations cannot meet performance requirements. As an alternative, we propose distributed shared objects as a unifying concept. These objects offer user-defined operations on shared state, but allow for efficient implementations through replication and distribution of state. In contrast to other object-based models, these implementation ...
Computing and control engineering journal, 1993
The computer is a powerful machine for assisting human beings in many different applications. Programming languages are the tools that make these computers usable. Every programming language, whether old or modern, high- or low-level, has its own weaknesses and strengths. The choice of programming language for building distributed systems depends on the sort of program being written and the kind of computer it is to run on. The paper presents how the best programming language is selected for distributed systems via the following criteria for language selection: scalability, concurrency, reliability, security, performance, flexibility, portability, high integrity, and ease of use. The paper concentrates on comparing the Java and C++ languages with respect to programming distributed systems and also presents the main criteria for choosing the proper languages for the right system.
1997
Orca is an object-based distributed shared memory system that is designed for writing portable and efficient parallel programs. Orca hides the communication substrate from the programmer by providing an abstract communication model based on shared objects. The paper describes a new, portable implementation of Orca, using a layered system that consists of a compiler, a runtime system, and a virtual machine (Panda). The Orca system has been implemented on a range of platforms (including Solaris, Amoeba, Parix, and the CM-5). Measurements of several benchmarks and applications across four platforms show that the Orca system achieves portability with good performance. In addition, the measurements show that performance of the system is as good as a previous implementation of Orca that was specialized for Amoeba.
Object-Oriented Technologys, 1998
In this paper we study how the potential advantages of distributed shared memory (DSM) techniques can be applied to concurrent object-oriented languages. We assume a DSM scheme based on an entry consistency memory model and propose an object model that can incorporate that DSM scheme. The object model is characterized by the requirement of explicitly enclosing object invocations between acquire and release operations, and by the distinction between command and query operations. Details of a thread-based implementation are discussed, and results show that significant speed-ups can be obtained. We also conclude that using kernel-level threads can lead to better performance, and that the overhead versus user-level threads is negligible.
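A minimal sketch of the acquire/release discipline, assuming an entry-consistency-style model and using Java's ReadWriteLock (the class and method names are illustrative, not the paper's API): queries acquire the object's guard in shared mode, commands in exclusive mode, so every invocation is bracketed by an acquire and a release.

    // Sketch: command/query split with explicit acquire/release brackets.
    import java.util.concurrent.locks.ReadWriteLock;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    class GuardedObject {
        private final ReadWriteLock guard = new ReentrantReadWriteLock();
        private int state;

        // Command: exclusive acquire, mutate, release. Under entry
        // consistency the acquire is also where remote updates to this
        // object must become visible.
        void command(int newState) {
            guard.writeLock().lock();
            try { state = newState; }
            finally { guard.writeLock().unlock(); }
        }

        // Query: a shared acquire suffices, so queries run concurrently.
        int query() {
            guard.readLock().lock();
            try { return state; }
            finally { guard.readLock().unlock(); }
        }
    }

    public class EntryConsistencyDemo {
        public static void main(String[] args) {
            GuardedObject o = new GuardedObject();
            o.command(42);
            System.out.println(o.query()); // 42
        }
    }

The command/query distinction is what lets an implementation move or update object state only at acquire points, rather than on every memory reference.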
1998
Much progress has been made in distributed computing in the areas of distribution structure, open computing, fault tolerance, and security. Yet, writing distributed applications remains difficult because the programmer has to manage models of these areas explicitly. A major challenge is to integrate the four models into a coherent development platform. Such a platform should make it possible to cleanly separate an application’s functionality from the other four concerns. Concurrent constraint programming, an evolution of concurrent logic programming, has both the expressiveness and the formal foundation needed to attempt this integration. As a first step, we have designed and built a platform that separates an application’s functionality from its distribution structure. We have prototyped several collaborative tools with this platform, including a shared graphic editor whose design is presented in detail. The platform efficiently implements Distributed Oz, which extends the Oz langu...
Modern computer interconnections typically include a wide variety of systems. They may have different hardware architectures and operating system structures. Normally, they are underutilized, resulting in significant latent power. ARCADE is new architectural basis for distributed computing systems intended to exploit this power. It defines machine-independent abstractions and services which can be used to build distributed applications on such interconnections. They have been successfully implemented as both a stand-alone microkernel and as a set of add-in services for conventional operating systems. The abstractions are based on high-level language models which makes them easily accessible and quite efficient. The services can transparently cross even heterogeneous machine boundaries. The architecture proposes a new approach to distributed shared memory and to global resource identification. The resulting structure has proven to be a powerful set of building blocks for a variety of system services and applications.
Journal of Systems Architecture, 2000
Early distributed shared memory systems used the shared virtual memory approach with fixed-size pages, usually 1-8 KB. As this does not match the variable granularity of sharing of most programs, recently the emphasis has shifted to distributed object-oriented systems. With small object sizes, the overhead of inter-process communication could be large enough to make a distributed program too inefficient for practical use. To support research in this area, we have implemented a user-level distributed programming testbed, DIPC, that provides shared memory, semaphores and barriers. We develop a computationally efficient model of distributed shared memory using approximate queueing network techniques. The model can accommodate several algorithms including central server, migration and read-replication. These models have been carefully validated against measurements on our distributed shared memory testbed. Results indicate that for large granularities of sharing and small access bursts, central server performs better than both migration and read-replication algorithms. Read-replication performs better than migration for small and moderate object sizes for applications with a high degree of read-sharing, and migration performs better than read-replication for large object sizes for applications having a moderate degree of read-sharing.
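The trade-off that the queueing models quantify can be previewed with back-of-envelope arithmetic. In the Java sketch below all cost numbers are invented placeholders, not measurements from the paper: a central server pays a remote access on every operation, while read-replication pays a cheap local read for reads and an expensive update for writes, so which scheme wins depends on the read fraction.

    // Sketch: crude per-access cost comparison, placeholder numbers only.
    public class DsmCostSketch {
        public static void main(String[] args) {
            double remote = 100.0;  // assumed cost of a remote access
            double local  = 1.0;    // assumed cost of a local access
            double update = 150.0;  // assumed cost of updating all replicas
            for (int r = 5; r <= 10; r++) {
                double readFrac = r / 10.0;
                double central = remote; // every access visits the server
                double replicated = readFrac * local + (1 - readFrac) * update;
                System.out.printf("read fraction %.1f: central=%.0f replicated=%.1f%n",
                                  readFrac, central, replicated);
            }
        }
    }

Even this toy model reproduces the qualitative conclusion: replication only pays off once reads sufficiently outnumber writes.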
IEEE Transactions on Software Engineering, 1989
This paper presents a retrospective view of the Charlotte distributed operating system, a testbed for developing techniques and tools to solve computation-intensive problems with large-grain parallelism. The final version of Charlotte runs on the Crystal multicomputer, a collection of VAX-11/750 computers connected by a local-area network. The kernel/process interface is unique in its support for symmetric, bidirectional communication paths (called links), and synchronous nonblocking communication. Our experience indicates that the goals of simplicity and function are not easily achieved. Simplicity in particular has dimensions that conflict with one another. Although our design decisions produced a high-quality environment for research in distributed applications, they also led to unexpected implementation costs and required high-level language support. We learned several lessons from implementing Charlotte. Links have proven to be a useful abstraction, but our primitives do not seem to be at quite the right level of abstraction. Our implementation employed finite-state machines and a multitask kernel, both of which worked well. It also maintains absolute distributed information, which is more expensive than using hints. The development of high-level tools, particularly the Lynx distributed programming language, has simplified the use of kernel primitives and helps to manage concurrency at the process level.
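As an illustration of the link abstraction described above (a model only, not Charlotte's actual kernel interface), the Java sketch below builds a symmetric, bidirectional path with exactly two endpoints, each of which can both send and receive; a send starts the transfer without blocking, and receipt is awaited separately.

    // Sketch: a Charlotte-style "link" with two symmetric endpoints.
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    class Link {
        final End a, b;
        Link() {
            BlockingQueue<String> toB = new LinkedBlockingQueue<>();
            BlockingQueue<String> toA = new LinkedBlockingQueue<>();
            a = new End(toB, toA);
            b = new End(toA, toB);
        }
        // Each endpoint can both send and receive: the path is symmetric.
        static class End {
            private final BlockingQueue<String> out, in;
            End(BlockingQueue<String> out, BlockingQueue<String> in) {
                this.out = out; this.in = in;
            }
            void send(String m) { out.offer(m); }  // starts transfer, no block
            String awaitReceive() throws InterruptedException { return in.take(); }
        }
    }

    public class LinkDemo {
        public static void main(String[] args) throws InterruptedException {
            Link link = new Link();
            link.a.send("ping");
            System.out.println(link.b.awaitReceive()); // ping
            link.b.send("pong");
            System.out.println(link.a.awaitReceive()); // pong
        }
    }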