1983, Sigplan Notices
This paper presents features of the NIL programming language which support the construction of distributed software systems: (1) a process model in which no pointers or shared data are visible, (2) interprocess communication via synchronous and asynchronous message passing, (3) compile-time typestate checking, guaranteeing module isolation and correct finalization of data, (4) dynamic binding of statically typed ports under the
Lecture Notes in Computer Science, 1985
This paper is a summary of ongoing research activities related to the programming language NIL, a high-level language for concurrent and distributed systems developed at IBM Yorktown. We first present a short summary of the major features of NIL. These include the NIL system model, which is a dynamically evolving network of loosely coupled processes, communicating by message passing; the
IBM Systems Journal, 2000
Network Implementation Language (NIL) is a high-level programming language currently being used for the implementation of prototype communication systems. NIL is designed for writing executable architecture which can be compiled into efficient code for the different machines and run-time environments of a family of communicating products. NIL's distinctive features include (1) high-level primitive type families supporting constructs needed for concurrent systems, (2) facilities for decomposition of a system into modules which can be dynamically installed and interconnected, (3) compile-time typestate checking, a mechanism for enhancing language security without incurring large execution-time overhead.
1999
Linux is an easily available and powerful operating system, but it is based on a 1970s design, making the need for more modern concepts apparent. This article lists the main characteristics of Distributed Inter-Process Communication (DIPC), a relatively simple system software that provides users of the Linux operating system with both the distributed shared memory and the message passing paradigms of distributed programming. Distributed programs using DIPC can be independent of any physical network characteristics, and there is no need for any link libraries or special compilers. DIPC works by making UNIX System V IPC mechanisms (shared memory segments, message queues, and semaphores) network transparent, thus integrating neatly with the rest of the system. The underlying network protocol used is TCP/IP. DIPC can work in Wide Area Networks (WANs) and in heterogeneous environments.
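As a rough illustration of the interface DIPC preserves, the sketch below is ordinary single-machine System V shared-memory code in C; DIPC's claim is that calls like these become network transparent without modification. The key value 0x1234 is an arbitrary illustrative choice.

```c
/* Minimal System V shared-memory example. Under DIPC the same calls
 * are network transparent; nothing here is DIPC-specific. */
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    /* Create (or find) a 4 KiB segment identified by a well-known key. */
    int shmid = shmget(0x1234, 4096, IPC_CREAT | 0666);
    if (shmid == -1) { perror("shmget"); return 1; }

    /* Attach it into our address space. */
    char *mem = shmat(shmid, NULL, 0);
    if (mem == (char *)-1) { perror("shmat"); return 1; }

    strcpy(mem, "hello from one process");   /* visible to all attachers */
    printf("segment says: %s\n", mem);

    shmdt(mem);                              /* detach; segment persists */
    return 0;
}
```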
1997
Two of the most important issues in distributed systems are the synchronization of concurrent threads and the application-level data transfers between execution spaces. At the design level, addressing these issues typically requires analyzing the components under a different perspective than is required to analyze the functionality. Very often, it also involves analyzing several components at the same time, because of the way those two
IEEE Transactions on Software Engineering, 1980
A new language concept, the communication port (CP), is introduced for programming on distributed processor networks. Such a network can contain an arbitrary number of processors, each with its own private storage but with no memory sharing. The processors must communicate via explicit message passing. The communication port is an encapsulation of two language properties: "communication nondeterminism" and "communication disconnect time." It provides a tool for programmers to write well-structured, modular, and efficient concurrent programs. A number of examples are given in the paper to demonstrate the power of the new concepts.
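Since CP is a language-level concept, the sketch below only approximates "communication nondeterminism" in plain C: a process waits on two channels with poll() and services whichever delivers a message first, and a closed channel stands in for "communication disconnect time." It is an analogy under those assumptions, not CP syntax.

```c
/* Sketch of communication nondeterminism: wait on two channels and
 * service whichever delivers first, approximated with poll() over
 * two pipe descriptors standing in for ports. */
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

void serve(int fd_a, int fd_b)
{
    struct pollfd ports[2] = {
        { .fd = fd_a, .events = POLLIN },
        { .fd = fd_b, .events = POLLIN },
    };

    for (;;) {
        if (poll(ports, 2, -1) < 0) { perror("poll"); return; }
        for (int i = 0; i < 2; i++) {
            if (ports[i].revents & POLLIN) {
                char buf[128];
                ssize_t n = read(ports[i].fd, buf, sizeof buf);
                if (n <= 0) return;   /* channel closed: "disconnect time" */
                printf("port %d: %.*s\n", i, (int)n, buf);
            }
        }
    }
}
```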
1997
The remote service request, a form of remote procedure call, and the global pointer, a global naming mechanism, are two features at the heart of Nexus, a library for building distributed systems. NeXeme is an extension of Scheme that fully integrates both concepts in a mostly-functional framework, hence providing an expressive language for distributed computing. This paper presents a semantics for this Scheme extension, and also describes a NeXeme implementation, including its distributed garbage collector.
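The sketch below is one plausible C layout for the two concepts named above, assembled only from this description: a global pointer pairs a machine-wide name with a machine-local address, and a remote service request ships a handler invocation to wherever the pointer points. The field names are invented for illustration and do not reflect Nexus's actual representation.

```c
/* Illustrative (invented) layout of a Nexus-style global pointer and
 * remote service request; not the library's real structs. */
#include <stdint.h>

typedef struct {
    uint32_t node_id;     /* which execution space owns the datum  */
    uint64_t local_addr;  /* address meaningful only on that node  */
} global_ptr;

typedef struct {
    global_ptr  gp;       /* where the handler should run          */
    uint32_t    handler;  /* registered remote procedure identifier */
    const void *args;     /* marshalled argument buffer             */
    uint64_t    args_len;
} remote_service_request;
```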
1997
Distributed Inter-Process Communication (DIPC) provides the programmers of the Linux operating system with distributed programming facilities, including Distributed Shared Memory (DSM). It works by making UNIX System V IPC mechanisms (shared memory, message queues and semaphores) network transparent, thus integrating neatly with the rest of the system. The underlying network protocol used is TCP/IP. DIPC is targeted to work on WANs (Wide Area Networks) and in heterogeneous environments.
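As a companion to the shared-memory sketch above, here is a standard System V message-queue round trip in C; under DIPC these same calls are meant to work transparently across machines. The key 0x42 is an arbitrary illustrative value.

```c
/* Standard System V message-queue send and receive, the second of
 * the IPC mechanisms DIPC makes network transparent. */
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct msgbuf_txt { long mtype; char mtext[64]; };

int main(void)
{
    int qid = msgget(0x42, IPC_CREAT | 0666);
    if (qid == -1) { perror("msgget"); return 1; }

    struct msgbuf_txt out = { .mtype = 1 };
    strcpy(out.mtext, "ping");
    if (msgsnd(qid, &out, sizeof out.mtext, 0) == -1) { perror("msgsnd"); return 1; }

    struct msgbuf_txt in;
    if (msgrcv(qid, &in, sizeof in.mtext, 1, 0) == -1) { perror("msgrcv"); return 1; }
    printf("received: %s\n", in.mtext);
    return 0;
}
```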
Computer Languages, 1991
Until recently, at least one thing was clear about parallel programming: tightly coupled (shared memory) machines were programmed in a language based on shared variables and loosely coupled (distributed) systems were programmed using message passing. The explosive growth of research on distributed systems and their languages, however, has led to several new methodologies that blur this simple distinction. Operating system primitives (e.g., problem-oriented shared memory, Shared Virtual Memory, the Agora shared memory) and languages (e.g., Concurrent Prolog, Linda, Emerald) for programming distributed systems have been proposed that support the shared variable paradigm without the presence of physical shared memory. In this paper we will look at the reasons for this evolution, the resemblances and differences among these new proposals, and the key issues in their design and implementation. It turns out that many implementations are based on replication of data. We take this idea one step further, and discuss how automatic replication (initiated by the run time system) can be used as a basis for a new model, called the shared data-object model, whose semantics are similar to the shared variable model. Finally, we discuss the design of a new language for distributed programming, Orca, based on the shared data-object model.
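The toy C sketch below illustrates the replication idea behind the shared data-object model: read operations hit a local replica with no communication, while write operations update every replica. The "nodes" are simulated by an array in one process, and all names are invented; a runtime like Orca's would instead broadcast the write over the network.

```c
/* Toy write-update replication behind a shared data-object.
 * Illustrative only; a real runtime broadcasts writes to remote
 * replicas rather than looping over a local array. */
#define NUM_NODES 3

static int replica[NUM_NODES];   /* one copy of the object per "node" */

int object_read(int node)        /* read: purely local, no communication */
{
    return replica[node];
}

void object_write(int value)     /* write: update all replicas */
{
    for (int node = 0; node < NUM_NODES; node++)
        replica[node] = value;   /* stands in for a broadcast message */
}
```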
Network-Based Parallel Computing. Communication, Architecture, and Applications, 1999
Implicitly parallel programming languages place the burden of exploiting and managing parallelism upon the compiler and runtime system, rather than on the programmer. This paper describes the design of NIP, a runtime system for supporting implicit parallelism in languages which combine both functional and object-oriented programming. NIP is designed for scaleable distributed memory systems including networks of workstations and custom parallel machines. The key components of NIP are: a parallel task execution unit ...
Distributed Systems, 1985
Computer Networks, Architecture and Applications, 1995
Computation intensive programs can utilise idle workstations in a cluster by exploiting the parallelism inherent in the problem being solved. A programming language for distributed computing offers advantages such as early detection of type mismatches in communication, structured mechanisms to specify possible overlap between communication and computation, and exception handling for catching run-time errors. Success of a language depends on its ease of use, expressibility and efficient implementation of its constructs. EC is a superset of C supporting process creation, a message passing mechanism, and exception handling. The pipelined communication constructs and multiple process instances help in expressing concurrency between computation and communication. Data driven activation of EC processes is used for scheduling. EC has been implemented in a Sun-3 workstation cluster. An inter-node message passing mechanism has been built on top of the socket interface using the TCP protocol, and intra-node message passing is done by passing pointers to improve efficiency. However, message_type variables hide the implementation details and improve the type safety and location transparency of a program.
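The sketch below shows, in plain C, the socket-and-TCP substrate the abstract describes EC as building on: a length-prefixed message sent over a TCP connection. It is not EC syntax, and the host and port parameters are illustrative; EC's message_type variables would hide exactly these details.

```c
/* Length-prefixed message over a TCP socket: the kind of substrate an
 * inter-node message passing layer can be built on. */
#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int send_message(const char *host, uint16_t port, const void *msg, uint32_t len)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;

    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(port) };
    if (inet_pton(AF_INET, host, &addr.sin_addr) != 1 ||
        connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        close(fd);
        return -1;
    }

    uint32_t netlen = htonl(len);        /* length prefix in network order */
    if (send(fd, &netlen, sizeof netlen, 0) < 0 ||
        send(fd, msg, len, 0) < 0) {
        close(fd);
        return -1;
    }
    return close(fd);
}
```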
The Computer Journal, 1987
A new language concept for high-level distributed programming is proposed. Programs are organised as a collection of concurrently executing processes. Some of these processes, referred to as liaison processes, have a monitor-like structure and contain ports which may be invoked by other processes for the purposes of synchronisation and communication. Synchronisation is achieved by conditional activation of ports and also through port control constructs which may directly specify the execution ordering of ports. These constructs implement a path-expression-like mechanism for synchronisation and are also equipped with options to provide conditional, non-deterministic and priority ordering of ports. The usefulness and expressive power of the proposed concepts are illustrated through solutions of several representative programming problems. Some implementation issues are also considered.
Computing and Informatics / Computers and Artificial Intelligence, 1998
An extensible object-oriented platform NUTS for distributed computing is described which is based on an object-oriented programming environment NUT, is built on top of the Parallel Virtual Machine (PVM), and hides all low-level features of the latter. The language of NUTS is a concurrent object-oriented programming language with coarse-grained parallelism and distributed shared memory communication model implemented on
1992
A strategy for simplifying the programming of heterogeneous distributed systems is presented. The approach used is based on integrating a high-level distributed programming model, the process model, directly into programming languages. Distributed applications written in such languages are portable across different environments, are shorter, and are simpler to develop than similar applications developed using conventional approaches. The process model is discussed, and Hermes and Concert/C, two languages that implement this model, are described. Hermes is a secure, representation-independent language designed explicitly around the process model. Concert/C is the C language augmented with a small set of extensions to support the process model while allowing reuse of existing C code. Hermes has been prototyped; an implementation of Concert/C is in development.
Proceedings of IADIS International …
This paper presents an Asynchronous Message Passing System (AMPS) that was designed and implemented for a distributed programming language called LIPS. AMPS does not use any message buffers, unlike other systems, and is based on a simple architecture and interfaces. Experiments with AMPS have shown that the system is robust and deadlocks are avoidable. In terms of reliability, there is no loss of data, and the delays due to the lack of message buffers are minimal and acceptable. Even though it was originally intended for LIPS, AMPS can be customised to go with other message passing languages.
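One way to pass messages with no buffers at all is a rendezvous handoff in which the sender blocks until a receiver takes the value directly, so nothing is ever stored in transit. The pthreads sketch below illustrates that bufferless idea; it is a reading of the design goal stated above, not AMPS's actual architecture.

```c
/* Bufferless message passing via a rendezvous slot: the sender blocks
 * until a receiver has consumed the value, so no message is queued. */
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int  slot;
static bool full = false;

void rendezvous_send(int value)
{
    pthread_mutex_lock(&lock);
    while (full)                      /* wait until the slot is free    */
        pthread_cond_wait(&cond, &lock);
    slot = value;
    full = true;
    pthread_cond_broadcast(&cond);    /* wake a waiting receiver        */
    while (full)                      /* block until the value is taken */
        pthread_cond_wait(&cond, &lock);
    pthread_mutex_unlock(&lock);
}

int rendezvous_receive(void)
{
    pthread_mutex_lock(&lock);
    while (!full)
        pthread_cond_wait(&cond, &lock);
    int value = slot;
    full = false;
    pthread_cond_broadcast(&cond);    /* release the blocked sender     */
    pthread_mutex_unlock(&lock);
    return value;
}
```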
ACM Computing Surveys, 1989
When distributed systems first appeared, they were programmed in traditional sequential languages, usually with the addition of a few library procedures for sending and receiving messages. As distributed applications became more commonplace and more sophisticated, this ad hoc approach became less satisfactory. Researchers all over the world began designing new programming languages specifically for implementing distributed applications. These languages and their history, their underlying principles, their design, and their use are the subject of this paper. We begin by giving our view of what a distributed system is, illustrating with examples to avoid confusion on this important and controversial point. We then describe the three main characteristics that distinguish distributed programming languages from traditional sequential languages, namely, how they deal with parallelism, communication, and partial failures. Finally, we discuss 15 representative distributed languages to give ...
1997
Distributed Inter-Process Communication (DIPC) is a software-only solution to enable people to build and program multi-computers. Among other things, DIPC provides the programmers with transparent Distributed Shared Memory. DIPC is not concerned with the incompatible executable code problem. It also does not change users' data, but DIPC has to understand data produced by its own peer processes in different machines. This paper briefly describes the efforts made to make it a heterogeneous system by enabling it to function under the Linux operating system running on both Intel x86 and Motorola 680x0 processors. Here, different parts of the same application could execute in machines with different architectures.
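The heterogeneity issue here is concrete: Intel x86 is little-endian and Motorola 680x0 is big-endian, so multi-byte values exchanged between peer processes must travel in an agreed byte order. The sketch below shows the standard C idiom for this (network byte order), not DIPC's internal protocol.

```c
/* The classic x86 vs. 680x0 portability hazard: multi-byte integers
 * must cross the wire in an agreed byte order. */
#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t host_value = 0x11223344;
    uint32_t wire_value = htonl(host_value);  /* to network (big-endian) order */
    uint32_t back       = ntohl(wire_value);  /* back to host order on arrival */

    printf("host 0x%08x -> wire 0x%08x -> host 0x%08x\n",
           host_value, wire_value, back);
    return 0;
}
```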
New Generation Computing, 1998
Much progress has been made in distributed computing in the areas of distribution structure, open computing, fault tolerance, and security. Yet, writing distributed applications remains difficult because the programmer has to manage models of these areas explicitly. A major challenge is to integrate the four models into a coherent development platform. Such a platform should make it possible to cleanly separate an application’s functionality from the other four concerns. Concurrent constraint programming, an evolution of concurrent logic programming, has both the expressiveness and the formal foundation needed to attempt this integration. As a first step, we have designed and built a platform that separates an application’s functionality from its distribution structure. We have prototyped several collaborative tools with this platform, including a shared graphic editor whose design is presented in detail. The platform efficiently implements Distributed Oz, which extends the Oz language with constructs to express the distribution structure and with basic primitives for open computing, failure detection and handling, and resource control. Oz appears to the programmer as a concurrent object-oriented language with dataflow synchronization. Oz is based on a higher-order, state-aware, concurrent constraint computation model.
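Oz's dataflow synchronization means that a thread reading an unbound variable blocks until another thread binds it. The pthreads sketch below models that behavior with a single-assignment variable; it illustrates the concept only and is not Oz's implementation.

```c
/* Single-assignment "dataflow" variable: readers block until bound. */
#include <pthread.h>
#include <stdbool.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  bound_cond;
    bool            bound;
    int             value;
} dataflow_var;

void df_init(dataflow_var *v)
{
    pthread_mutex_init(&v->lock, NULL);
    pthread_cond_init(&v->bound_cond, NULL);
    v->bound = false;
}

void df_bind(dataflow_var *v, int value)   /* bind once; wake all readers */
{
    pthread_mutex_lock(&v->lock);
    if (!v->bound) {
        v->value = value;
        v->bound = true;
        pthread_cond_broadcast(&v->bound_cond);
    }
    pthread_mutex_unlock(&v->lock);
}

int df_read(dataflow_var *v)               /* block until bound, then read */
{
    pthread_mutex_lock(&v->lock);
    while (!v->bound)
        pthread_cond_wait(&v->bound_cond, &v->lock);
    int value = v->value;
    pthread_mutex_unlock(&v->lock);
    return value;
}
```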
Communications of the ACM, 1979
Programming for distributed and other loosely coupled systems is a problem of growing interest. This paper describes an approach to distributed computing at the level of general purpose programming languages. Based on primitive notions of module, message, and transaction key, the methodology is shown to be independent of particular languages and machines. It appears to be useful for programming a wide range of tasks. This is part of an ambitious program of development in advanced programming languages, and relations with other aspects of the project are also discussed.