2013, Advances in Parallel Computing
Nowadays the concepts and infrastructures of cloud computing are becoming a standard for many applications. Scalability is no longer just a buzzword; it is being used effectively. However, despite the economic advantages of virtualization and scalability, factors such as latency, bandwidth, and processor sharing can be a problem for doing parallel computing on the cloud.
Programming models for cloud computing have recently become a research focus. Cloud computing promises on-demand and flexible IT services, which goes beyond traditional programming models and calls for new ones. Some progress has been made on cloud programming models for large-scale data processing, but little has been done on models with predictable performance. With its advantages of predictable performance, ease of programming, and deadlock avoidance, the BSP model has been widely applied in parallel databases, search engines, and scientific computing. This paper aims to adapt the BSP model to the cloud environment. The scheduling of computing tasks and the allocation of cloud resources are integrated into the BSP model, and a BSPCloud programming model with predictable performance is proposed.
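The predictability that the abstract attributes to BSP comes from its analytic cost model: a superstep's cost is the sum of the maximum local computation, the communication volume times a bandwidth parameter, and a barrier latency. The following is a minimal sketch of that standard cost formula (an illustration of BSP in general, not of the paper's BSPCloud implementation; the function names are our own):

```python
# Standard BSP cost model: a superstep costs w + g*h + l, where
#   w = maximum local computation by any processor,
#   h = maximum number of messages sent or received by any processor,
#   g = per-message bandwidth cost, l = barrier synchronization latency.

def superstep_cost(w, h, g, l):
    """Predicted cost of one BSP superstep."""
    return w + g * h + l

def program_cost(supersteps, g, l):
    """Total predicted cost of a program given its (w, h) pair per superstep."""
    return sum(superstep_cost(w, h, g, l) for w, h in supersteps)

# Example: two supersteps on a machine with g = 4 and l = 100.
print(program_cost([(1000, 50), (500, 20)], g=4, l=100))  # 1980
```

Because g and l can be benchmarked for a given machine (or cloud allocation), the runtime of a BSP program can be predicted before execution, which is the property BSPCloud builds on.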
Lecture Notes in Computer Science, 2001
This paper surveys and places into perspective a number of results concerning the D-BSP (Decomposable Bulk Synchronous Parallel) model of computation, a variant of the popular BSP model proposed by Valiant in the early nineties. D-BSP captures part of the proximity structure of the computing platform, modeling it by suitable decompositions into clusters, each characterized by its own bandwidth and latency parameters. Quantitative evidence is provided that, when modeling realistic parallel architectures, D-BSP achieves higher effectiveness and portability than BSP, without significantly affecting the ease of use. It is also shown that D-BSP avoids some of the shortcomings of BSP which motivated the definition of other variants of the model. Finally, the paper discusses how the aspects of network proximity incorporated in the model allow for a better management of network congestion and bank contention, when supporting a shared-memory abstraction in a distributed-memory environment.
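The key refinement in D-BSP is that communication confined to a cluster is charged that cluster's own bandwidth and latency parameters rather than the global machine-wide ones. A small sketch of this idea, under our own simplified two-level decomposition (the formal model in the paper is more general):

```python
# D-BSP refinement of the BSP cost model: processors are recursively
# decomposed into clusters, and a superstep executed within level-i
# clusters is charged that level's parameters (g_i, l_i). Deeper
# (smaller) clusters typically have cheaper communication.

def dbsp_superstep_cost(w, h, level, g, l):
    """Cost of a superstep confined to clusters at the given level.

    g and l are lists of per-level parameters, index 0 = whole machine.
    """
    return w + g[level] * h + l[level]

# Two-level example: level 0 = whole machine, level 1 = sub-clusters.
g = [8, 2]    # global messages cost 4x more than intra-cluster ones
l = [200, 50]

print(dbsp_superstep_cost(1000, 50, level=0, g=g, l=l))  # 1600
print(dbsp_superstep_cost(1000, 50, level=1, g=g, l=l))  # 1150
```

An algorithm that keeps most of its communication inside small clusters is therefore charged less under D-BSP than under flat BSP, which is the quantitative sense in which the model rewards exploiting network proximity.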
2014
Abstract—Cloud platforms allow programmers to write applications that run in the cloud, use services from the cloud, or both, while abstracting away the essence of scalability and distributed processing. With the emergence of clouds as a nascent architecture, we need abstractions that support emerging programming models. In recent years, cloud computing has led to the design and development of diverse programming models for massive data processing and compute-intensive applications. We survey different programming models for the cloud, including the popular MapReduce model and others that improve upon its shortcomings. Further, we look at models that are promising alternatives to these parallel programming models.
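For readers unfamiliar with the MapReduce model that this survey centers on, the canonical word-count example can be sketched in a few lines. This is an in-memory toy (real systems such as Hadoop distribute the map, shuffle, and reduce phases across a cluster):

```python
# Minimal in-memory sketch of the MapReduce programming model:
# a map function emits key-value pairs, the framework groups pairs
# by key (shuffle), and a reduce function aggregates each group.
from collections import defaultdict

def map_phase(documents):
    """Map: emit (word, 1) for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield word, 1

def reduce_phase(pairs):
    """Shuffle + reduce: group pairs by key and sum the counts."""
    groups = defaultdict(int)
    for word, count in pairs:
        groups[word] += count
    return dict(groups)

docs = ["the cloud", "the grid and the cloud"]
print(reduce_phase(map_phase(docs)))
# {'the': 3, 'cloud': 2, 'grid': 1, 'and': 1}
```

The programmer writes only the two pure functions; fault tolerance, data partitioning, and scheduling are the framework's job, which is both the model's appeal and the source of the shortcomings (e.g. poor fit for iterative computations) that the surveyed alternatives address.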
Owing to the rapid development of computer and network technologies, parallel, distributed, and cloud computing have been widely used to solve problems that require high-performance computing power. Developments in parallel computing and their applications are summarized and briefly explained. Research on distributed computing has increased because of advances in network technologies and in the data transfer rate per unit time (network bandwidth). In recent years, research has focused on cloud computing using many computers. In this study, parallel, distributed, and cloud computing are examined with respect to various parameters.
Proceedings of the 3rd international workshop on Middleware for grid computing - MGC '05, 2005
InteGrade is an object-oriented grid middleware infrastructure whose goal is to leverage existing computational resources in organizations. Rather than relying on dedicated hardware such as reserved clusters, InteGrade focuses on using desktops in users' offices, machines in computer laboratories, shared workstations, as well as dedicated clusters. In this paper, we describe the support for the execution of highly coupled parallel applications on top of InteGrade. The paper describes the implementation of the middleware to support BSP parallel applications (with global synchronization points), and presents experimental results.
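The defining structure of the BSP applications InteGrade supports is the superstep: local computation, communication, and then a global synchronization point after which all messages become visible. A toy illustration of that structure using Python threads in place of grid nodes (this is not InteGrade's BSPlib-style API; the variable names are our own):

```python
# Toy BSP superstep structure: compute locally, "send" a value,
# then cross a global barrier; messages sent before the barrier
# are guaranteed visible to every processor after it.
import threading

P = 4                     # number of simulated processors
barrier = threading.Barrier(P)
inbox = [0] * P           # communication buffer, one slot per processor
totals = [0] * P

def worker(pid):
    # Superstep 1: local computation, then send to the next processor.
    value = pid * pid
    inbox[(pid + 1) % P] = value
    barrier.wait()        # global synchronization point ends the superstep
    # Superstep 2: the message delivered at the barrier is now visible.
    totals[pid] = inbox[pid]

threads = [threading.Thread(target=worker, args=(i,)) for i in range(P)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(totals)  # [9, 0, 1, 4]
```

The barrier is what makes such applications "highly coupled": every node must reach the synchronization point before any node may proceed, which is why supporting them on opportunistically shared desktops is harder than running independent bag-of-tasks jobs.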
Grid computing is a form of networked computing in which every machine's resources are shared with every other machine. The goal is to create the illusion of a single (yet huge and powerful) self-managing virtual system out of a large collection of connected heterogeneous systems sharing various combinations of resources. Standardization of communications among heterogeneous systems enabled the explosion of the Internet. Emerging standards for resource sharing, together with the availability of higher bandwidth, are driving a potentially similar evolutionary step. Over the past few years there has been a rapid, exponential increase in processing power, data storage, and communication capacity. Nevertheless, many difficult and computationally intensive problems remain that cannot be solved by mainframes alone; such problems can only be tackled with a large variety of heterogeneous resources. The popularity of the Internet and the accessibility of high-speed networks have progressively transformed the way we compute. Grid computing is the new technique by which shared resources can be used to solve large-scale problems. This paper describes the concepts underlying grid computing.
This paper studies a communication model that aims at extending the scope of computational grids by allowing the execution of parallel and/or distributed applications without imposing any programming constraints or the use of a particular communication layer. Such a model leads to the design of a communication framework for grids that allows the use of the middleware appropriate for the application rather than the one dictated by the available resources. Such a framework is able to handle any communication middleware (even several at the same time) on any kind of networking technology. Our proposed dual-abstraction (parallel and distributed) model is organized into three layers, highlighted in the paper: arbitration, abstraction, and personalities. The performance obtained with PadicoTM, our available open-source implementation of the proposed framework, shows that this functionality can be provided while still delivering very high performance.
Concurrency and Computation: Practice and Experience, 2010
In recent decades we have witnessed a major revolution in the computer field. The major challenges posed by applications in fields such as bioinformatics, earth sciences, and weather forecasting have caused the proliferation of complex solutions, such as grid, cloud, and high-performance computing. The common objective of all these disciplines is the sharing of hardware and software resources to provide an infrastructure on which these applications can run efficiently. In particular, grid computing has been one of the most important computing topics of recent years. Within this context, the GADA workshop arose in 2004 as a forum for researchers in grid computing and its application to data analysis. From then until 2008, GADA became a reference conference for grid researchers, also covering a broader set of disciplines, although grid computing continued to play a key role among the main topics of the conference.
2010
Infrastructure services (Infrastructure-as-a-service), provided by cloud vendors, allow any user to provision a large number of compute instances fairly easily. Whether leased from public clouds or allocated from private clouds, utilizing these virtual resources to perform data/compute intensive analyses requires employing different parallel runtimes to implement such applications.
International Journal of INTELLIGENT SYSTEMS AND APPLICATIONS IN ENGINEERING, 2024
The popularity of cloud computing and large-scale distributed systems is rapidly increasing because of the variety of service models and advantages they offer, the need of individuals and organizations to access their resources easily and efficiently, and the demand for more reliable and robust systems. For these reasons, many distributed algorithms have been designed to facilitate the coordination and interconnection of distributed computational elements so that they work together in parallel toward a common goal. These algorithms address various aspects such as consensus, load balancing, scheduling, communication, leader election, and fault tolerance. Much research has been carried out to investigate and improve the performance of these distributed algorithms. Therefore, this paper studies and compares a variety of research works on distributed algorithms for large-scale cloud computing.
2020
Cloud computing is one of the most important subjects to which many researchers have applied a wide range of algorithms and methods. Some of these methods were used to enhance performance, speed, and task-level parallelism, and others to deal with big data and scheduling. Many others decrease the amount of computation during execution, especially the memory space required. Parallel data processing is one of the common applications of Infrastructure-as-a-Service in cloud computing. The purpose of this paper is to review parallel processing in the cloud. The results and methods in the literature are inconsistent; scheduling concepts, however, provide an easy way to use the resources, process data in parallel, and decrease the overall execution time of processing algorithms. Overall, this review opens new doors for choosing a suitable technique in the field of parallel data processing, and our results show, according to several factors, which strategies are better.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2006
The Internet computing model, with its ubiquitous networking and computing infrastructure, is driving a new class of interoperable applications that benefit both from high computing power and from multiple Internet connections. In this context, grids are promising computing platforms that allow the aggregation of distributed resources such as workstations and clusters to solve large-scale problems. However, because most parallel programming tools were primarily developed for MPP and cluster computing, exploiting the new environment requires higher-level abstractions and cooperative interfaces. Rocmeμ is a platform originally designed to support the operation of multi-SAN clusters that integrates application modeling and resource allocation. In this paper we show how the underlying resource-oriented computation model provides the necessary abstractions to accommodate the migration from cluster computing to multi-cluster grid-enabled computing.
Future Generation Computer Systems, 2008
This special issue of Future Generation Computer Systems is devoted to modern applications of distributed and grid computing. A relatively small number of papers were selected in order to cover some important areas of distributed and grid computing. At the same time, we do not claim that all important areas of this fast-developing scientific field are covered.
Distributed computing uses many systems to solve a large-scale problem. The growth of high-speed broadband networks in developed and developing countries, the continual increase in computing power, and the rapid growth of the Internet have changed the way society manages information and information services. Historically, the state of computing has gone through a series of platform and environmental changes. Distributed computing holds great promise for using computer systems effectively. As a result, supercomputer sites and data centers have shifted from providing high-performance floating-point computing capabilities to concurrently servicing huge numbers of requests from billions of users. A distributed computing system uses multiple computers to solve large-scale problems over the Internet, becoming data-intensive and network-centric. The applications of distributed computing have become increasingly widespread.
Cloud computing platforms have the potential to benefit scientific projects in all fields of knowledge. Their virtualized resources and large storage capacity enable any scientist to access high-performance computing platforms at low cost. In this paper, we present the MyCloud project, a cloud computing infrastructure for the Technological University of Paraná campuses, in southern Brazil. We show how to build and instantiate virtual machine templates for BSP/CGM applications that can be used on private cloud computing platforms.
Cloud computing is a distributed computing technology, a combination of hardware and software delivered as a service to store, manage, and process data. A new system is proposed to allocate resources dynamically for task scheduling and execution. Virtual machines are introduced in the proposed architecture for efficient parallel data processing in the cloud, and are automatically instantiated and terminated during job execution. An extended evaluation of MapReduce is also used in this approach.
Computer Communications and Networks, 2010
Since the appearance of distributed computing technology, there has been significant effort in designing and building the infrastructure needed to tackle the challenges raised by complex scientific applications, which require massive computational resources. This has increased awareness of the potential to harness the power and flexibility of clouds, which have emerged recently as an alternative to data centers or private clusters. We describe in this chapter an efficient high-level grid and cloud framework that allows a smooth transition from clusters and grids to clouds. The main lever is the ability to move infrastructure-specific application information away from the code and manage it in a deployment file. An application can thus easily run on a cluster, a grid, a cloud, or any mixture of them, without modification.
2013
From the 1990s to 2012 the Internet changed the world of computing drastically. Computing started its journey with parallel computing, advanced to distributed computing and further to grid computing, and in the present scenario has created a new world known as cloud computing [1]. These three terms have different meanings. Cloud computing is based on earlier computing schemes such as cluster computing, distributed computing, grid computing, and utility computing. The basic concept of cloud computing is virtualization: it provides virtual hardware and software resources to requesting programs. This paper gives a detailed description of cluster computing, grid computing, and cloud computing, and offers an insight into some implementations of each. We try to list the motivations for the advent of all these technologies. We also account for some current shortcomings of grid computing and discuss new cloud computing projects for education being managed by the Government of India. The paper also reviews the existing work and covers, analytically and to some extent, some innovative ideas that can be implemented.