2010 IEEE International Conference on Cluster Computing Workshops and Posters (CLUSTER WORKSHOPS), 2010
Heterogeneous computing, which includes mixed architectures with multi-core CPUs as well as hardware accelerators such as GPUs, is needed to satisfy future computational and energy requirements. Cloud computing currently offers users whose computational needs vary greatly over time a cost-effective way to gain access to resources. While the current form of cloud-based systems is suitable for many scenarios, their evolution into truly heterogeneous computational environments is still not complete. This paper describes THOR (Transparent Heterogeneous Open Resources), our framework for providing seamless access to HPC systems composed of heterogeneous resources. Our work focuses on the core module, in particular the policy engine. To validate our approach, THOR has been implemented on a scaled-down heterogeneous cluster within a cloud-based computational environment. Our testing includes an OpenCL encryption/decryption algorithm evaluated over several use cases. The corresponding computational benchmarks are provided to validate our approach and to gain valuable knowledge for the policy database.
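The abstract describes a policy engine that uses benchmark results to steer work onto heterogeneous resources. THOR's actual policy database is not shown here, so the following is only a minimal sketch of the general idea, with all names and throughput numbers as illustrative assumptions:

```python
# Hypothetical sketch of a policy-engine style device selection.
# The benchmark table plays the role of a policy database populated
# from measured kernel runs (e.g., an OpenCL encryption benchmark).

def choose_resource(task_size, benchmarks):
    """Pick the device whose benchmarked throughput (items/s)
    gives the lowest estimated runtime for this task size."""
    best_device, best_time = None, float("inf")
    for device, throughput in benchmarks.items():
        estimate = task_size / throughput
        if estimate < best_time:
            best_device, best_time = device, estimate
    return best_device, best_time

# Assumed throughput figures, purely for illustration.
benchmarks = {"cpu": 2.0e6, "gpu": 5.0e7}

device, est = choose_resource(1.0e8, benchmarks)
```

For a large task the GPU wins under these assumed numbers; a small task could flip the decision once fixed transfer costs were modelled, which is the kind of knowledge the paper's benchmarks feed back into the policy database.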
Heterogeneous hardware acceleration architectures are becoming an important tool for achieving high performance in cloud computing environments. Both FPGAs and GPUs provide substantial computing capability over large amounts of data. At the same time, energy efficiency is a major concern in today's computing systems. In this paper we propose a novel architecture aimed at integrating hardware accelerators into a cloud computing infrastructure and making them available as a service. The proposal harnesses advances in FPGA dynamic reconfiguration and efficient virtualization technology to accelerate the execution of certain types of tasks. In particular, Big Data applications would greatly benefit from the proposed architecture.
2011
Current cloud computing infrastructure typically assumes a homogeneous collection of commodity hardware, with details about hardware variation intentionally hidden from users. In this paper, we present our approach for extending the traditional notions of cloud computing to provide a cloud-based access model to clusters that contain heterogeneous architectures and accelerators. We describe our ongoing work extending the OpenStack cloud computing stack to support heterogeneous architectures and accelerators, and our experiences running OpenStack on our local heterogeneous cluster testbed.
Future Generation Computer Systems, 2013
Clouds have provided on-demand, scalable and affordable High Performance Computing (HPC) resources to discipline scientists (e.g., in Biology, Medicine, and Chemistry). However, the steep learning curve of preparing an HPC cloud and deploying HPC applications has hindered many scientists from achieving innovative discoveries that depend on HPC resources. With the world moving to web-based tools, scientists are also seeking more web-based technologies to support their research. Unfortunately, the discipline problems of high-performance computational research are both unique and complex, which makes the development of web-based tools for this research difficult. This paper presents our work on developing a unified cloud framework that allows discipline users to easily deploy and expose HPC applications in public clouds as services. As a proof of concept, we have implemented a prototype of the cloud framework by integrating three components: (i) the Amazon EC2 public cloud for providing HPC infrastructure, (ii) an HPC service software library for accessing HPC resources, and (iii) the Galaxy web-based platform for exposing and accessing HPC application services. This new approach can reduce the time and money needed to deploy, expose and access discipline HPC applications in clouds.
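The framework above translates a web-level request into a cloud HPC job. The abstract does not show the actual EC2/Galaxy integration, so the builder below is a purely hypothetical sketch of that translation step; the application name, input files, and instance type are invented for illustration:

```python
# Hypothetical sketch: map a web-tool request onto a cloud HPC job
# description, as a unified framework like the one described might do.

def make_job_spec(app, inputs, instance_type="c5.4xlarge", nodes=2):
    """Build a job description for running an HPC application
    on cloud infrastructure from a web-level service request."""
    return {
        "application": app,
        "inputs": list(inputs),
        "cloud": {
            "provider": "ec2",          # assumed provider
            "instance_type": instance_type,
            "nodes": nodes,
        },
        "command": [app] + list(inputs),
    }

spec = make_job_spec("blastn", ["query.fa", "db.fa"])
```

The point of such a layer is that the discipline user supplies only the application name and inputs; provisioning details stay behind the service boundary.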
Lecture Notes in Computer Science, 2014
In spite of the rapid growth of Infrastructure-as-a-Service offerings, support for running data-intensive and scientific applications at large scale is still limited. On the user side, existing features and programming models are insufficiently developed to express an application in such a way that it can benefit from an elastic infrastructure that dynamically adapts to its requirements, which often leads to unnecessary over-provisioning and extra costs. On the provider side, key performance and scalability issues arise when dealing with the large groups of tightly coupled virtualized resources needed by such applications; this is especially challenging in the multi-tenant dimension, where sharing of physical resources introduces interference both inside and across large virtual machine deployments. This paper contributes a holistic vision of a tight integration between programming models, runtime middleware and the virtualization infrastructure in order to provide a framework that transparently handles allocation and utilization of heterogeneous resources while dealing with performance and elasticity issues.
International Journal of Networking and Computing
The ever-growing demand for compute resources has reached a wide range of application domains and, with that, has created a larger audience for compute-intensive tasks. In this paper, we present the CloudCL framework, which empowers users to run compute-intensive tasks without having to bear the total cost of ownership of operating an extensive high-performance compute infrastructure. CloudCL enables developers to tap the ubiquitous availability of cloud-based heterogeneous resources using a single-paradigm compute framework, without having to consider dynamic resource management and inter-node communication. In an extensive performance evaluation, we demonstrate the feasibility of the framework, yielding close-to-linear scale-out capabilities for certain workloads.
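Close-to-linear scale-out of the kind described rests on splitting a data-parallel workload evenly across nodes so no worker idles. CloudCL's internals are not described in the abstract, so the partitioner below is only a generic illustration of that idea, not the framework's implementation:

```python
# Generic sketch: round-robin split of a flat work list into
# near-equal chunks, one per cloud worker node. With balanced
# chunks and independent items, runtime shrinks roughly as 1/n.

def partition(items, n_workers):
    """Distribute items across n_workers chunks, round-robin."""
    chunks = [[] for _ in range(n_workers)]
    for i, item in enumerate(items):
        chunks[i % n_workers].append(item)
    return chunks

chunks = partition(list(range(10)), 3)
```

Frameworks of this kind hide the subsequent inter-node communication and result gathering from the developer, which is exactly the burden the abstract says users should not have to carry.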
2012 IEEE 26th International Parallel and Distributed Processing Symposium Workshops & PhD Forum, 2012
Heterogeneity in the secondary characteristics of different HPC target platforms is the focus of this paper. Clusters, grids, and (IaaS) clouds may appear straightforward to configure interchangeably, but our experiences with mainstream parallel codes for CFD demonstrate that secondary attributes (support software, interconnect type, availability, access, and cost) expose heterogeneous aspects that impact the overall effectiveness of application execution. The emergence of clouds as alternatives to grids and local resources for parallel HPC codes portends "computing as a utility" in science and engineering domains. Our experiences provide preliminary insights into characterizing these different types of platforms to which users typically have access, and show where the tradeoffs can be in terms of deployment effort, actual and nominal costs, application performance, and availability (both in terms of resource size and time to gain access). For our test application, we report that each of the platforms to which we had access had its particular benefits and drawbacks in terms of the above attributes. More generally, our experiences may provide a preview of what developers and users can expect when selecting a "utility provider", and a specific instance thereof, for a particular run of their application.
CERN (European Organization for Nuclear Research), Zenodo, 2022
An increasing interest can be observed in making a diversity of geographically spread compute and storage resources available in a federated manner. A common services layer can facilitate easier access, more elasticity, lower response times, and improved utilisation of the underlying resources. In this white paper, current trends are analysed from both an infrastructure provider and an end-user perspective. The focus here is on federated e-infrastructures that, among others, include high-performance computing (HPC) systems as compute resources. Two initiatives, namely Fenix and GAIA-X, are presented as illustrative examples. Based on a more detailed exploration of selected topical areas related to federated e-infrastructures, various R&I challenges are identified and recommendations for further efforts are formulated.
Electronic Workshops in Computing, 2010
In this paper, we present a vision of the future of heterogeneous cloud computing. Ours is a clean-slate approach, sweeping away decades of accreted system software. We believe the advent of the latest technology discontinuity, the move to the virtual cloud, makes this a necessary step to take, but one capable of delivering significant benefits in the security, reliability and efficiency of our digital infrastructure at all scales. We motivate this vision by presenting two challenges arising in different fields yet sharing fundamental commonalities best addressed by a unifying software platform supporting devices ranging from virtual servers in the cloud, through desktops, to mobile smartphones. After drawing out this common ground, we describe our solution and its benefits. We then describe the initial steps we have taken toward our solution, the Mirage framework, as well as ongoing future work.
Concurrency and Computation: Practice and Experience
Future Generation Computer Systems, 2011
EPJ Web of Conferences
Journal of Physics: Conference Series, 2017
2013 IEEE 5th International Conference on Cloud Computing Technology and Science, 2013
Concurrency and Computation: Practice and Experience, 2009
2020 Design, Automation & Test in Europe Conference & Exhibition (DATE), 2020
Proceedings of International Symposium on Grids and Clouds 2018 in conjunction with Frontiers in Computational Drug Discovery — PoS(ISGC 2018 & FCDD)