2013, IT Professional
Cloud computing revitalizes the concept of time-sharing by providing unprecedented efficiencies and collaboration opportunities through extensive connectivity. Integrating with technologies such as the Internet of Things (IoT), mobile applications, and big data, cloud systems are evolving into cyberphysical systems (CPSs), enhancing their functionality and usefulness. The paper discusses the implications of this integration for systems interoperability and the potential for applying graph theory in solving complex problems in networked environments.
2011
There are gaps in implementing DOE-specific accounting, allocation, and security policies in current cloud software stacks. Cloud software solutions will need customization to handle site-specific policies related to resource allocation, security, accounting, and monitoring. DOE resource providers should develop or partner with the developer communities of private cloud software stacks to support site-specific customization.

7. User-created virtual images are powerful. However, there is also a need for a standard set of base images and simple tools to reduce the entry barrier for scientists. Additionally, scientists often require pre-tuned libraries that need expertise from supercomputing center staff. DOE resource providers should consider providing reference images and tools that simplify using virtualized environments.

8. The cloud exposes a new usage model that necessitates additional investments in training end-users in the use of these resources and tools. Additionally, the new model necessitates a new user-support and collaboration model where trained personnel can help end-users with the additional programming and system administration burden created by these new technologies. DOE resource providers should carefully consider user support challenges before broadly supporting these new models.

Science Groups. Cloud computing promises to be useful to scientific applications due to advantages such as on-demand access to resources and control over the user environment. However, cloud computing also has a significant impact on application design and development due to challenges related to performance and reliability, programming model, designing and managing images, distributing work across compute resources, and managing data. We make specific recommendations to science groups that might want to consider cloud technologies or models for their applications.

1. Infrastructure as a Service provides an easy path for scientific groups to harness cloud resources while leveraging much of their existing application infrastructure. However, virtualized cloud systems provide various options for instance types and storage classes (local vs. block store vs. object store) that have different performance and associated price points. Science groups need to carefully benchmark applications with the different options to find the best performance-cost ratio.

2. Cloud systems give application developers the ability to completely control their software environments. However, there is currently a limited choice of tools available for workflow and data management in these environments. Scientific groups should consider using standard tools to manage these environments rather than developing custom scripts. Scientists should work with tool developers to ensure that their requirements and workflows are sufficiently captured and understood.

3. Application developers should consider the potential for variability and failures in their design and implementation. While this is good practice in general, it is even more critical for applications running in the cloud, since they experience significant failures and performance variations.

4. Cloud technologies allow user groups to manage their own machine images, enabling groups with complex software dependencies to achieve portability. This flexibility comes with the challenge for these groups to ensure they have addressed security concerns. Science groups should attempt to use standardized secure images to prevent security and other configuration problems with their images. Science groups will also need an action plan for securing the images and keeping them up to date with security patches.

5. Cloud technologies such as message queues, tabular storage, and blob or object stores provide a number of key features that enable applications to scale without synchronization where it is not necessary. These technologies fundamentally change the application execution model. Scientific users should evaluate technologies such as message queues, tabular storage, and object storage during the application design phase (a queue-driven worker sketch appears after these recommendations).

Tools development and research. A number of site-level and user-side tools for managing scientific environments on HPC and cloud systems have evolved in the last few years. However, there are significant gaps and challenges in this space. We identify some key areas for tool development and research related to the adoption of cloud models and technologies for science.

1. Virtual machines or provisioned bare-metal hardware are useful to many application groups. However, scientists need to handle the management of these provisioned resources, including the software stack and job and data coordination. Tool developers should provide tools that simplify and smooth workflow and data management on provisioned resources.

2. Investments are needed in tools that enable automated mechanisms to update images with appropriate patches as they become available, as well as a simple way to organize, share, and find these images across user groups and communities. Tool developers should consider developing services that enable the organization and sharing of images.

3. Virtualized cloud environments are limited by the networking and I/O options available in the virtual machines. Access to high-performance parallel file systems such as GPFS and Lustre, and to low-latency, high-bandwidth interconnects such as InfiniBand, within a virtual machine would enable more scientific applications to benefit from virtual environments without sacrificing performance or ease of use. System software developers should explore methods to provide HPC capabilities in virtualized environments.

4. There is a need for new methods to monitor and secure private clouds, and further research is required in this area. Sites typically rely on OS-level controls to implement many security policies; most of these controls must be shifted into the hypervisor, or alternative approaches must be developed. Security developers should explore new methods to secure these environments and, ideally, leverage the advanced mechanisms that virtualization provides.

5. MapReduce can be useful for addressing data-intensive scientific applications, but there is a need for MapReduce implementations that account for the characteristics of scientific data and analysis methods. Computer science researchers and developers should explore modifications or extensions to frameworks like Hadoop that would enable them to understand and exploit the data models of the formats typically used in scientific domains.

6. A number of cloud computing concepts and technologies (e.g., MapReduce, schema-less databases) have evolved around the idea of managing "big data" and associated metadata. Cloud technologies address the issues of automatic scaling, fault tolerance, and data locality, all key to the success of large-scale systems.
There is a need to investigate the use of cloud technologies and ideas to manage scientific data and metadata.
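The queue-driven scaling model mentioned in the science-group recommendations (replacing explicit synchronization with a message queue that independent workers pull from) can be sketched briefly. The following is a minimal sketch, assuming an SQS-style queue accessed through boto3; the queue URL and the process_task body are hypothetical placeholders for whatever a science group's workflow actually produces.

```python
# Minimal sketch of a queue-driven worker for embarrassingly parallel tasks.
# Assumes an SQS-style queue (boto3); the queue URL and process_task() are
# illustrative placeholders, not a prescribed design.
import json
import boto3

QUEUE_URL = "https://sqs.example.amazonaws.com/123456789012/science-tasks"  # hypothetical

sqs = boto3.client("sqs")

def process_task(task: dict) -> None:
    """Placeholder for one unit of analysis (e.g., processing one input file)."""
    print(f"processing {task['input']}")

def worker_loop() -> None:
    while True:
        # Long-poll the queue; an empty response simply means no work right now.
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20
        )
        for msg in resp.get("Messages", []):
            task = json.loads(msg["Body"])
            try:
                process_task(task)
            except Exception:
                # Leave the message on the queue; after the visibility timeout
                # it becomes visible again and another worker can retry it.
                continue
            # Acknowledge (delete) only after the task has succeeded.
            sqs.delete_message(
                QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"]
            )

if __name__ == "__main__":
    worker_loop()
```

Because each worker pulls tasks independently and deletes a message only after its task succeeds, failed or slow workers simply leave work visible for others to retry, which matches the failure and performance variability noted in the recommendations, without any explicit synchronization between workers.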
2013
CLOUD COMPUTING 2013 continued a series of events intended to prospect the applications supported by the new paradigm and to validate the techniques and mechanisms. A complementary goal was to identify the open issues and the challenges in addressing them, especially around security, privacy, and inter- and intra-cloud protocols. Cloud computing is a natural evolution of distributed computing combined with service-oriented architecture, leveraging most of the features of grid computing and the merits of virtualization. The technological foundations of cloud computing led to a new approach that reuses what was achieved in grid computing, with support from virtualization. We take this opportunity to warmly thank all the members of the CLOUD COMPUTING 2013 Technical Program Committee, as well as the numerous reviewers. The creation of such a broad and high-quality conference program would not have been possible without their involvement. We also kindly thank all the authors who dedicated much of their time and effort to contributing to CLOUD COMPUTING 2013.
Proceedings of the 2017 Symposium on Cloud Computing
Distributed computing remains inaccessible to a large number of users, in spite of many open source platforms and extensive commercial offerings. While distributed computation frameworks have moved beyond a simple map-reduce model, many users are still left to struggle with complex cluster management and configuration tools, even for running simple embarrassingly parallel jobs. We argue that stateless functions represent a viable platform for these users, eliminating cluster management overhead and fulfilling the promise of elasticity. Furthermore, using our prototype implementation, PyWren, we show that this model is general enough to implement a number of distributed computing models, such as BSP, efficiently. Extrapolating from recent trends in network bandwidth and the advent of disaggregated storage, we suggest that stateless functions are a natural fit for data processing in future computing environments. CCS CONCEPTS • Computer systems organization → Cloud computing; • Computing methodologies → Distributed programming languages;
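As a rough illustration of the programming model, the sketch below runs an embarrassingly parallel job over stateless functions using the executor-style interface PyWren describes. The exact names (default_executor, map, get_all_results) and the Monte Carlo task are assumptions for illustration, not a definitive API reference.

```python
# Sketch of an embarrassingly parallel job over stateless cloud functions,
# in the style of the PyWren executor interface (names approximate).
import numpy as np
import pywren

def simulate(seed):
    # One independent unit of work: a small Monte Carlo estimate of pi.
    rng = np.random.RandomState(seed)
    xy = rng.rand(1_000_000, 2)
    return 4.0 * np.mean((xy ** 2).sum(axis=1) < 1.0)

pwex = pywren.default_executor()          # no cluster to provision or manage
futures = pwex.map(simulate, range(100))  # each call runs in its own function
results = pywren.get_all_results(futures)
print("pi estimate:", sum(results) / len(results))
```

There is no cluster to provision, size, or tear down; elasticity comes from the provider running as many concurrent function invocations as the map requests.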
Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 2013
Clouds have become the modern realization of utility computing: not only over the web, but in general. As such, they appear to be the solution for all kinds of computing and storage problems, ranging from simple database servers to high-performance computing. However, clouds have specific characteristics, and hence design specifics, that constrain the range of use cases they can support. This paper shows which subset of computing cases actually fits the cloud paradigm and what is needed to move further applications into the cloud.
Highlights:
• Distributed cloud infrastructure will make use of the network edge in the future.
• Two-tier applications will be replaced by new multi-tier cloud architectures.
• Next generation cloud computing impacts both societal and scientific avenues.
• A new marketplace will need to be developed for resources at the network edge.
• Security and sustainability are key to architecting future cloud systems.

Abstract: The landscape of cloud computing has significantly changed over the last decade. Not only have more providers and service offerings crowded the space, but cloud infrastructure that was traditionally limited to single-provider data centers is now evolving. In this paper, we first discuss the changing cloud infrastructure and consider the use of infrastructure from multiple providers and the benefit of decentralising computing away from data centers. These trends have resulted in the need for a variety of new computing architectures that will be offered by future cloud infrastructure. These architectures are anticipated to impact areas such as connecting people and devices, data-intensive computing, the service space, and self-learning systems. Finally, we lay out a roadmap of challenges that will need to be addressed to realise the potential of next generation cloud systems.
Credits for the screenshot images throughout the book are as follows: Screenshots from Amazon.com, Cloudwatch © Amazon.com, Inc.; Screenshots of Nimsoft © CA Technologies; Screenshots of Gomez © Compuware Corp.; Screenshots from Facebook.com © Facebook, Inc.; Screenshots of Google App Engine, Google Docs © Google, Inc.; Screenshots of HP CloudSystem, Cells-as-a-Service, OpenCirrus © Hewlett-Packard Company; Screenshots of Windows Azure © Microsoft Corporation; Screenshots of Gluster © Red Hat, Inc.; Screenshots from Force.com, Salesforce.com © Salesforce.com, Inc.; Screenshots of Netcharts © Visual Mining, Inc.; Screenshots of Yahoo! Pipes, YQL © Yahoo! Inc.
2014
CLOUD COMPUTING 2014 was intended as an event to prospect the applications supported by the new paradigm and to validate the techniques and mechanisms. A complementary goal was to identify the open issues and the challenges in addressing them, especially around security, privacy, and inter- and intra-cloud protocols. We take this opportunity to warmly thank all the members of the CLOUD COMPUTING 2014 Technical Program Committee, as well as all of the reviewers. The creation of such a high-quality conference program would not have been possible without their involvement. We also kindly thank all the authors who dedicated much of their time and effort to contributing to CLOUD COMPUTING 2014. We truly believe that, thanks to all these efforts, the final conference program consisted of top-quality contributions. Also, this event could not have been a reality without the support of many individuals, organizations, and sponsors. We are grateful to the members of the CLOUD COMPUTING 2014 organizing committee for th...
Cloud computing and big data are arguably the two most disruptive forces rapidly shaping today's information technology sphere. With cloud computing providing an elastic infrastructure for rigorous data analytics, the two revolutionary technologies coexist to change the very essence of computing and how we interact with data. However, as the world is now on the verge of transitioning to the cloud, it is important to explore the key issues behind the two disruptive technologies. This essay first discusses how cloud computing and big data are the two most significant forces in today's IT world. It then explores key issues behind cloud computing and finally analyses the future of cloud computing and big data.