
Computer Communications and Networks

Zaigham Mahmood
Saqib Saeed Editors

Software
Engineering
Frameworks for the
Cloud Computing
Paradigm
Computer Communications and Networks

For further volumes:


http://www.springer.com/series/4198
The Computer Communications and Networks series is a range of textbooks, monographs
and handbooks. It sets out to provide students, researchers and non-specialists alike with
a sure grounding in current knowledge, together with comprehensible access to the latest
developments in computer communications and networking.

Emphasis is placed on clear and explanatory styles that support a tutorial approach, so that
even the most complex of topics is presented in a lucid and intelligible manner.
Zaigham Mahmood • Saqib Saeed
Editors

Software Engineering
Frameworks for the Cloud
Computing Paradigm
Editors
Zaigham Mahmood
School of Computing and Mathematics
University of Derby
Derby, UK

Saqib Saeed
Department of Computer Sciences
Bahria University
Islamabad, Pakistan

Series Editor
A.J. Sammes
Centre for Forensic Computing
Cranfield University
Shrivenham Campus, Swindon, UK

ISSN 1617-7975
ISBN 978-1-4471-5030-5 ISBN 978-1-4471-5031-2 (eBook)
DOI 10.1007/978-1-4471-5031-2
Springer London Heidelberg New York Dordrecht

Library of Congress Control Number: 2013936798

© Springer-Verlag London 2013


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of
the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other physical way, and transmission or information
storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology
now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection
with reviews or scholarly analysis or material supplied specifically for the purpose of being entered and
executed on a computer system, for exclusive use by the purchaser of the work. Duplication of this
publication or parts thereof is permitted only under the provisions of the Copyright Law of the Publisher’s
location, in its current version, and permission for use must always be obtained from Springer. Permissions
for use may be obtained through RightsLink at the Copyright Clearance Center. Violations are liable to
prosecution under the respective Copyright Law.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
While the advice and information in this book are believed to be true and accurate at the date of
publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for
any errors or omissions that may be made. The publisher makes no warranty, express or implied, with
respect to the material contained herein.

Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.springer.com)


To
Rehana, Zoya, Imran, Hanya and Ozair
For their love and support
Editors

Dr Zaigham Mahmood

Dr Mahmood is a researcher and author. He has an M.Sc. in Mathematics, an M.Sc.
in Computer Science and a Ph.D. in Modelling of Phase Equilibria. Dr Mahmood is a
Senior Technology Consultant at Debesis Education UK, a researcher at the
University of Derby UK and Professor Extraordinaire at the North West University
in South Africa. He is, currently, also a Foreign Professor at NUST Islamabad
Pakistan.
Professor Mahmood has published over 100 articles in international journals and
conference proceedings in addition to two reference texts on e-government and
three books on cloud computing viz Cloud Computing: Concepts, Technology and
Design; Cloud Computing for Enterprise Architectures and Cloud Computing:
Methods and Practical Approaches. He is also the Editor-in-Chief of the Journal of
E-Government Studies and Best Practices.
Dr Mahmood is an active researcher; he also serves as editorial board member of
several journals, books and conferences; guest editor for journal special issues;
organiser and chair of conference tracks and workshops; and keynote speaker at
conferences. His research interests encompass subject areas including software
engineering, project management, enterprise computing, e-government studies and
cloud computing.
Professor Mahmood can be reached at [email protected]. He welcomes
your views and comments.

Dr Saqib Saeed

Dr Saqib Saeed is an Assistant Professor in the Computer Science department at
Bahria University, Islamabad, Pakistan. He holds a Ph.D. in Information Systems
from the University of Siegen, Germany, and a master’s degree in Software
Technology from Stuttgart University of Applied Sciences, Germany.

Dr Saeed is also a certified Software Quality Engineer from the American Society
for Quality. He is a member of advisory boards of several international journals
besides being guest editor or co-editor of several special issues. Dr Saeed’s research
interests lie in the areas of software engineering, human-centred computing and
computer-supported cooperative work.
Preface

Overview

Software engineering is a well-established discipline for the design and development
of large-scale software systems. It is a staged process that follows a software
development life cycle (SDLC) consisting of requirements, design and development
phases. Many methodologies and frameworks also exist for such developments,
and, depending on the application domain, there are proven function-oriented,
object-oriented and component-based methodologies as well as service-oriented
and agile frameworks. With the emergence of cloud computing, however, there is a
need for the traditional approaches to software construction to be adapted to take
full advantage of the cloud technologies.
Cloud computing is an attractive paradigm for business organisations due to the
enormous benefits it promises, including savings on capital expenditure and avail-
ability of cloud-based services on demand and in real time. Organisations can
self-provision software development platforms, together with infrastructure if so
required, to develop and deploy applications much more speedily. Since the cloud
environment is dynamic, virtualised, distributed and multi-tenant, necessary charac-
teristics that cloud-enabled software must exhibit need to be inherently built into
the software systems. This is especially so if the software is to be deployed in the
cloud environment or made available for access by multiple cloud consumers. In
this context, it is imperative to recognise that cloud SDLC is an accelerated process
and that software development needs to be more iterative and incremental. Also, the
application architecture must provide characteristics to leverage cloud infrastruc-
ture capabilities such as storage connectivity and communications. It is important
that the chosen frameworks are suitable for fast cycle deployments. Methodologies
must also ensure satisfaction of consumer demands of performance, availability,
security, privacy, reliability and, above all, scalability and multi-tenancy. All this
suggests that software architects require a shift in mindset and need to adapt to new
approaches to design and deployment so that software systems are appropriate for
cloud environments.


This book, Software Engineering Frameworks for the Cloud Computing Paradigm,
aims to capture the state of the art in this context and present discussion and guidance
on the relevant software engineering approaches and frameworks. Twenty-six
researchers and practitioners from around the world have presented their works,
case studies and suggestions for engineering software suitable for deployment in the
cloud environment.

Objectives

The aim of this book is to present current research and development in the field of
software engineering as relevant to the cloud paradigm. The key objectives for this
book include:
• Capturing the state of the art in software engineering approaches for developing
cloud-suitable applications
• Providing relevant theoretical frameworks, practical approaches and current and
future research directions
• Providing guidance and best practices for students and practitioners of cloud-based
application architecture
• Advancing the understanding of the field of software engineering as relevant to
the emerging new paradigm of cloud computing

Organisation

There are 15 chapters in Software Engineering Frameworks for the Cloud Computing
Paradigm. These are organised in four parts:
• Part I: Impact of Cloud Paradigm on Software Engineering. This section focuses
on the cloud computing paradigm as relevant to the discipline of software engineering.
There are three chapters that look at the impact of the Semantic Web, discuss cloud-
induced transformation and highlight issues and challenges inherent in cloud-
based software development.
• Part II: Software Development Life Cycle for Cloud Platform. This comprises
five chapters that consider stages of the software development life cycle, in particular
the requirements engineering and testing of cloud-based applications. The chapters
also discuss the design and development of software with virtualisation and
multi-tenant distributed environment in mind.
• Part III: Software Design Strategies for Cloud Adoption. There are five chapters
in this part that focus on feature-driven and cloud-aided software design and
present strategies for cloud adoption and migration. Development of applications
in the hybrid cloud environment and architectural patterns for migration of legacy
systems are also discussed.

• Part IV: Performance of Cloud Based Software Applications. This section consists
of two chapters that focus on efficiency and performance of cloud-based applications.
One chapter discusses the effective practices for cloud-based software engineering,
and the other chapter presents a framework for identifying relationships between
application performance factors.

Target Audience

Software Engineering Frameworks for the Cloud Computing Paradigm has been developed
to support a number of potential audiences, including the following:
• Software engineers and application developers who wish to adapt to newer
approaches to building software that is more suitable for virtualised and multi-
tenant distributed environments
• IT infrastructure managers and project leaders who need to clearly understand
the requirement for newer methodologies in the context of cloud paradigm and
appreciate the issues of developing cloud-based applications
• Students and university lecturers of software engineering who have an interest in
further developing their expertise and enhancing their knowledge of the cloud-
relevant tools and techniques to architect cloud-friendly software
• Researchers in the fields of software engineering and cloud computing who wish
to further increase their understanding of the current practices, methodologies
and frameworks

Suggested Uses

Software Engineering Frameworks for the Cloud Computing Paradigm can be used
as a primer and textbook on university courses on cloud computing and software
engineering. It can also be used as a reference text by practitioners in the field of
software engineering.
For adoption as a course text, we suggest the following programme of study for
a 12-week teaching semester format:
• Weeks 1–3: Part I
• Weeks 3–7: Part II
• Weeks 7–11: Part III
• Weeks 11–12: Part IV
Acknowledgements

The editors acknowledge the help and support of the following colleagues during
the review and editing phases of this book:
• Dr Wasif Afzal, Bahria University, Islamabad, Pakistan
• Dr Daniel Crichton, Jet Propulsion Laboratory, California Institute of Technology, USA
• Dr Ashraf Darwish, Helwan University, Cairo, Egypt
• Dr Shehzad Khalid, Bahria University, Islamabad, Pakistan
• Prof Francisco Milton Mendes, Rural Federal University of the Semi-Arid, Brazil
• Prof Mahmood Niazi, King Fahd University of Petroleum and Minerals, Dhahran
• Dr S. Parthasarathy, Thiagarajar College of Engineering, Madurai, India
• Dr Pethuru Raj, Wipro Technologies, Bangalore, India
• Dr Muthu Ramachandran, Leeds Metropolitan University, Leeds, UK
• Dr C. R. Rene Robin, Jerusalem College of Engineering, Chennai, India
• Dr Lucio Agostinho Rocha, State University of Campinas, Brazil
• Dr Lloyd G. Waller, University of the West Indies, Kingston, Jamaica
• Dr Fareeha Zafar, GC University, Lahore, Pakistan
The editors would also like to thank the contributors of this book; the 26 authors
and co-authors, from academia as well as the industry from around the world, who
collectively submitted 15 chapters. Without their efforts in developing quality
contributions, conforming to the guidelines and often meeting strict deadlines,
this text would not have been possible.
Grateful thanks are also due to our family members for their support and
understanding.

Zaigham Mahmood
University of Derby, UK
January 2013

Contents

Part I Impact of Cloud Paradigm on Software Engineering

1 Impact of Semantic Web and Cloud Computing Platform
on Software Engineering ........................................................................ 3
Radha Guha
2 Envisioning the Cloud-Induced Transformations
in the Software Engineering Discipline ................................................. 25
Pethuru Raj, Veeramuthu Venkatesh, and Rengarajan Amirtharajan
3 Limitations and Challenges in Cloud-Based
Applications Development ...................................................................... 55
N. Pramod, Anil Kumar Muppalla, and K.G. Srinivasa

Part II Software Development Life Cycle for Cloud Platform

4 Impact of Cloud Services on Software Development Life Cycle........... 79
Radha Krishna and R. Jayakrishnan
5 Cloud-Based Development Using Classic Life Cycle Model ............... 101
Suchitra Ravi Balasubramanyam
6 Business Requirements Engineering for Developing
Cloud Computing Services ..................................................................... 123
Muthu Ramachandran
7 Testing Perspectives for Cloud-Based Applications ............................. 145
Inderveer Chana and Priyanka Chawla
8 Testing in the Cloud: Strategies, Risks and Benefits............................ 165
Olumide Akerele, Muthu Ramachandran, and Mark Dixon


Part III Software Design Strategies for Cloud Adoption

9 Feature-Driven Design of SaaS Architectures ...................................... 189
Bedir Tekinerdogan and Karahan Öztürk
10 Impact of Cloud Adoption on Agile Software Development ............... 213
Sowmya Karunakaran
11 Technical Strategies and Architectural Patterns
for Migrating Legacy Systems to the Cloud ......................................... 235
Sidharth Subhash Ghag and Rahul Bandopadhyaya
12 Cloud-Aided Software Engineering: Evolving Viable
Software Systems Through a Web of Views ......................................... 255
Colin Atkinson and Dirk Draheim
13 Development of Cloud Applications in Hybrid Clouds
with Support for Multi-scheduling ........................................................ 283
Lucio Agostinho Rocha

Part IV Performance of Cloud Based Software Applications

14 Efficient Practices and Frameworks for Cloud-Based
Application Development ....................................................................... 305
Anil Kumar Muppalla, N. Pramod, and K.G. Srinivasa
15 A Methodology for Identifying the Relationships Between
Performance Factors for Cloud Computing Applications..................... 331
Luis Eduardo Bautista Villalpando, Alain April, and Alain Abran

Index ................................................................................................................. 359


Contributors

Alain Abran Department of Software Engineering and Information Technology,
ETS – University of Quebec, Montreal, Canada
Olumide Akerele School of Computing and Creative Technologies, Faculty of Arts,
Environment and Technology, Leeds Metropolitan University, Leeds, UK
Rengarajan Amirtharajan School of Electrical and Electronics Engineering,
SASTRA University, Thanjavur, Tamil Nadu, India
Alain April Department of Software Engineering and Information Technology,
ETS – University of Quebec, Montreal, Canada
Colin Atkinson Software Engineering Group, University of Mannheim, Mannheim,
Germany
Suchitra Ravi Balasubramanyam Education and Research Unit, Infosys Limited,
Mysore, India
Rahul Bandopadhyaya Infosys Labs, Infosys Limited, Bangalore, India
Inderveer Chana Computer Science and Engineering Department, Thapar University,
Patiala, India
Priyanka Chawla Computer Science and Engineering Department, Thapar
University, Patiala, India
Mark Dixon School of Computing and Creative Technologies, Faculty of Arts,
Environment and Technology, Leeds Metropolitan University, Leeds, UK
Dirk Draheim IT Service Management Division, University of Innsbruck,
Innsbruck, Austria
Sidharth Subhash Ghag Infosys Labs, Infosys Limited, Pune, India
Radha Guha ECE Department, PESIT, Bangalore, India
R. Jayakrishnan Infosys Ltd., Bangalore, India

Sowmya Karunakaran Department of Management Studies, Indian Institute of
Technology (IIT), Madras, India
Radha Krishna Infosys Ltd., Bangalore, India
Anil Kumar Muppalla High Performance Computing Laboratory, Department
of Computer Science and Engineering, M S Ramaiah Institute of Technology,
Bangalore, India
Karahan Öztürk British Sky Broadcasting, London, UK
N. Pramod High Performance Computing Laboratory, Department of Computer
Science and Engineering, M S Ramaiah Institute of Technology, Bangalore, India
Pethuru Raj Wipro Technologies, Bangalore, India
Muthu Ramachandran School of Computing and Creative Technologies, Faculty
of Arts, Environment and Technology, Leeds Metropolitan University, Leeds, UK
Lucio Agostinho Rocha Department of Computer Engineering and Industrial
Automation (DCA) at the School of Electrical and Computer Engineering (FEEC),
State University of Campinas, São Paulo, Brazil
K.G. Srinivasa High Performance Computing Laboratory, Department of Computer
Science and Engineering, M S Ramaiah Institute of Technology, Bangalore, India
Bedir Tekinerdogan Department of Computer Engineering, Bilkent University,
Ankara, Turkey
Veeramuthu Venkatesh School of Electrical and Electronics Engineering,
SASTRA University, Thanjavur, Tamil Nadu, India
Luis Eduardo Bautista Villalpando Department of Electronic Systems, Autonomous
University of Aguascalientes, Aguascalientes, Mexico
Department of Software Engineering and Information Technology, ETS – University
of Quebec, Montreal, Canada
Part I
Impact of Cloud Paradigm on Software
Engineering
Chapter 1
Impact of Semantic Web and Cloud Computing
Platform on Software Engineering

Radha Guha

Abstract  Tim Berners-Lee’s vision of the Semantic Web or Web 3.0 is to trans-
form the World Wide Web into an intelligent Web system of structured, linked data
which can be queried and inferred as a whole by the computers themselves. This
grand vision of the Web is materializing many innovative uses of the Web. New
business models like interoperable applications hosted on the Web as services are
getting implemented. These Web services are designed to be automatically discov-
ered by software agents and exchange data among themselves. Another business
model is the cloud computing platform, where hardware, software, tools, and appli-
cations will be leased out as services to tenants across the globe over the Internet.
There are many advantages of this business model, like no capital expenditure,
speed of application deployment, shorter time to market, lower cost of operation,
and easier maintenance of resources, for the tenants. Because of these advantages,
cloud computing may be the prevalent computing platform of the future. To realize
all the advantages of these new business models of distributed, shared, and self-
provisioning environment of Web services and cloud computing resources, the tradi-
tional way of software engineering has to change as well. This chapter analyzes how
cloud computing, on the background of Semantic Web, is going to impact on the
software engineering processes to develop quality software. The need for changes in
the software development and deployment framework activities is also analyzed to
facilitate adoption of cloud computing platform.

Keywords  Software engineering • Semantic Web • Cloud computing platform
• Agile process model • Extreme Cloud Programming

R. Guha (*)
ECE Department, PESIT, Feet Ring Road, BSK III Stage,
560085, Bangalore, India
e-mail: [email protected]

Z. Mahmood and S. Saeed (eds.), Software Engineering Frameworks for the Cloud
Computing Paradigm, Computer Communications and Networks,
DOI 10.1007/978-1-4471-5031-2_1, © Springer-Verlag London 2013

1.1  Introduction

Since the inception of the World Wide Web (WWW) in 1990 by Tim Berners-Lee,
there has been a large warehouse of documents on the WWW, and the number of
documents is growing very rapidly. But, unless the information from these docu-
ments can be aggregated and inferred quickly, they do not have much use. Human
readers cannot read and make decisions quickly from a large number of mostly irrel-
evant documents retrieved by the old search engines based on keyword searches.
Thus, Tim Berners-Lee’s vision is to transform this World Wide Web into an intel-
ligent Web system or Semantic Web [1–8] which will allow concept searches rather
than keyword searches. First, Semantic Web or Web 3.0 technologies will transform
disconnected text documents on the Web into a global database of structured, linked
data. These large volumes of linked data in global databases will no longer be only
for human consumption but for quick machine processing. Just like a relational
database system can answer a query by filtering out unnecessary data, Semantic
Web technologies will similarly filter out information from the global database.
This capability requires assigning globally accepted explicitly defined semantics to
the data in the Web for linking. Then these linked data in the global database will
collectively produce intelligent information by software agents on behalf of the
human users, and the full potential of the Web can be realized.
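To make the "global database" analogy concrete, the filtering of linked data can be illustrated with a toy subject-predicate-object store in Python. All names and triples below are invented for the example; a real Semantic Web stack would use RDF triples and a SPARQL query engine rather than this in-memory sketch:

```python
# Toy triple store: statements as (subject, predicate, object) tuples,
# filtered the way a relational engine filters rows. Invented data.
triples = [
    ("TimBL", "invented", "WWW"),
    ("WWW", "evolvedInto", "SemanticWeb"),
    ("SemanticWeb", "enables", "conceptSearch"),
    ("oldSearch", "basedOn", "keywordSearch"),
]

def query(subject=None, predicate=None, obj=None):
    """Return every triple matching the pattern; None acts as a wildcard."""
    return [
        (s, p, o)
        for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

def related_to(subject):
    """Transitively follow links from `subject` -- a crude 'concept search'."""
    seen, frontier = set(), [subject]
    while frontier:
        node = frontier.pop()
        for _, _, o in query(subject=node):
            if o not in seen:
                seen.add(o)
                frontier.append(o)
    return seen
```

Following the links in `related_to` rather than matching keywords is, in miniature, the difference between concept search and the keyword search of old engines.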
Anticipating this transition of the Web where data integration, inference, and
data exchange between heterogeneous applications will be possible, new business
models of application deployment and delivery over the Internet have been concep-
tualized. Applications can be hosted on the Web and accessed via the Internet by
geographically dispersed clients. These XML (eXtensible Markup Language)-
based, interoperable applications are called Web services which can publish their
location, functions, messages containing the parameter list to execute the functions,
and communication protocols for accessing the service using it correctly by all. As
the same service will be catered to multiple clients, they can even be customized
according to clients’ likes. Application architecture and delivery architecture will be
two separate layers for these Web applications for providing this flexibility. XML-
based Web 2.0 and Web 3.0 protocols like Service-Oriented Architecture (SOA),
Simple Object Access Protocol (SOAP), Web Service Description Language
(WSDL), and Universal Description, Discovery and Integration (UDDI) registry are
designed to discover Web services on the fly and to integrate applications developed
on heterogeneous computing platforms, operating systems, and with varieties of
programming languages. Applications like Hadoop and Mashup [9, 10] can com-
bine data and functionalities from multiple external sources hosted as Web services
and are producing valuable aggregate new information and creating new Web
services. Hadoop and Mashup can support high-performance computing involving
distributed file system with petabytes of data and parallel processing on more than
hundreds to thousands of computers.
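The data-parallel model that Hadoop distributes across thousands of machines can be sketched in a single process: a map phase emits key-value pairs and a reduce phase aggregates them by key. The Python sketch below uses generic function names, not Hadoop's actual API, and a classic word-count task as the workload:

```python
from collections import defaultdict

def map_phase(document):
    """Emit (word, 1) pairs; Hadoop runs many such mappers in parallel."""
    return [(word.lower(), 1) for word in document.split()]

def reduce_phase(pairs):
    """Sum counts per key; Hadoop shuffles each key to one reducer."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

documents = ["cloud computing", "cloud services on the cloud"]
pairs = [pair for doc in documents for pair in map_phase(doc)]
word_counts = reduce_phase(pairs)  # e.g. word_counts["cloud"] is 3
```

Because mappers are independent and reducers only see grouped keys, the same two functions scale from this toy loop to a petabyte-scale distributed file system.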
In another business model, the application development infrastructure like
processors, storage, memory, operating system, and application development tools

and software can all be delivered as utility to the clients over the Internet. This is
what is dubbed as cloud computing where a huge pool of physical resources hosted
on the Web will be shared by multiple clients as and when required. Because of
the many benefits of this business model like no capital expenditure, speed of
application deployment, shorter time to market, lower cost of operation, and easier
maintenance of resources for the clients, cloud computing may be the prevalent
computing platform of the future.
On the other hand, economies of all developed countries depend on quality software,
and software cost is more than hardware cost. Moreover, because of the involvement
of many parties, software development is inherently a complex process, and most of
the software projects fail because of lack of communication and coordination
between all the parties involved. Knowledge management in software engineering
has always been an issue which affects better software development and its mainte-
nance. There is always some gap in understanding about what the business partners
and stakeholders want, how software designers and managers design the modules,
and how software developers implement the design. As the time passes, this gap in
understanding increases due to the increased complexity of the involvement of
many parties and continuously changing requirements of the software. This is more
so at the later stage when the software has to be maintained and no one has the
comprehensive knowledge about the whole system.
Now, with the inclusion of the Free/Libre/Open Source Software (FLOSS) [11]
pieces, Web services, and cloud computing platform, software development com-
plexity is going to increase manifold because of the synchronization needs with
third-party software and the increased communication and coordination complexity
with the cloud providers. The main thesis of this chapter is that the prevalent soft-
ware process models should involve the cloud providers in every step of decision-
making of the software development life cycle to make the software project a success.
Also, the software developers need to change their software artifacts from plain text
documents to machine-readable structured linked data, to make them Semantic Web
ready. With this semantic transformation knowledge, management in software engi-
neering will be much easier, and compliance checking of various requirements
during project planning, design, development, testing, and verification can be
automated. Semantic artifacts will also give their product a competitive edge for auto-
matic discovery and integration with other applications and efficient maintenance
of their artifacts.
This chapter explores how Semantic Web can reduce software development
work with automatic discovery of distributed open source software components.
Also, Semantic Web techniques are explored that need to be incorporated in soft-
ware development artifacts to make them Semantic Web ready. Then, to realize the
many advantages of the cloud computing business model, how the well-established
software engineering process models have to adapt is analyzed. As the cloud pro-
vider is an external entity or third party, how difficult will be the interactions with
them? How to separate the roles of software engineers and cloud providers? As a
whole, cloud computing paradigm on Semantic Web background makes software
development project more complex.

In Sect. 1.2, background literature on transformation to the Semantic Web, cloud
computing platform, and software engineering is surveyed. In Sect. 1.3, first
emphasis is given on the need for producing software artifacts for the Semantic
Web. Secondly, how the software developers are coping with the changing trend of
application development on cloud platform with Web 2.0 and Web 3.0 protocols
and application deployment over the Web is reported. Thirdly, challenges of cloud
computing platform for software engineering are analyzed. In Sect. 1.4, an agile
process model which incorporates interaction with cloud provider is proposed and
analyzed. Section 1.5 concludes the chapter.

1.2  Literature Survey

1.2.1  Transformation to Semantic Web

The World Wide Web was invented in 1990 by Tim Berners-Lee. Since then, the trans-
formation of the Web has been marked by Web 1.0, Web 2.0, and Web 3.0 tech-
nologies. In Web 1.0, the HTML (hypertext markup language) tags were added to
plain text documents for displaying the documents in a specific way on Web brows-
ers. Each document on the Web is a source of knowledge or a resource. In the
World Wide Web, with the hypertext transport protocol (HTTP), if the URL
(Universal Resource Locator) of any Web site (document) is known, then that
resource can be accessed or browsed over the Internet. Domain name service
(DNS) registry was developed to discover a machine on the Internet which hosts a
Web page URL. With this capability, Web 1.0 published information pages which
were static and read only. HTML's anchor (<a href>) tag (a form of metadata) links two docu-
ments for human readers to navigate to related topics. In Web 1.0, for quick search
and retrieval, metadata (data about data) that describes the contents of electronic
documents or resources are added in the document itself, which has the same pur-
pose as indexes in a book or catalogues in a library. Search engines like Google and
Yahoo create metadata databases out of those metadata in Web documents to find
the documents quickly. In Web 1.0, the contents of the Web pages are static and the
meanings of the Web pages are deciphered by the people who read them. Web
contents are developed by HTML and user input is captured in Web forms in the
client machine and sent to remote server via a common gateway interface (CGI) for
further processing.
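The form-to-CGI round trip described above amounts to the browser URL-encoding field names and values, and the server-side program decoding them back into data. A minimal sketch using Python's standard library (the field names are invented for illustration):

```python
from urllib.parse import parse_qs, urlencode

# What the browser sends: form fields URL-encoded into the request.
form_fields = {"name": "Ada Lovelace", "query": "semantic web"}
encoded = urlencode(form_fields)   # "name=Ada+Lovelace&query=semantic+web"

# What the CGI program on the server receives and decodes.
decoded = parse_qs(encoded)        # {"name": ["Ada Lovelace"], ...}
```

Every Web 1.0 CGI script, whatever language it was written in, performed essentially this decoding step before doing its "further processing".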
In Web 2.0, XML (eXtensible Markup Language) was designed to give hierar-
chical structure to the document content, to transform it into data, and to transport
the document as data. Where HTML tags prescribe how to display the Web content
in client computer, the XML tags add another layer of metadata to query the
Web document for specific data. XML documents can be read and processed by
computers (by a parser) automatically and can be exchanged between applications
developed on heterogeneous computing platforms, operating systems, and varieties

of programming languages, once they all know the XML tags used in the documents. For example, in order to use text generated by a word processor together with data from spreadsheets and relational databases, all of it first needs to be transformed into a common XML format. This collaboration of applications is possible in a closed community where all the applications are aware of the common XML tags.
Web 2.0 technologies also enabled pervasive or ubiquitous Web browsing involving
personal computers, mobile phones, and PDA (Personal Digital Assistant) running
different operating systems like Windows, Macintosh, or Linux, connected to the
Internet via wired or wireless connections. Web 2.0 technologies like XML, DHTML, and AJAX (Asynchronous JavaScript and XML) allowed two-way communication with dynamic Web content and created social communities like Facebook, MySpace, and Twitter. Web 2.0 has also seen the revolution of using the Web as a practical medium for conducting business: a growing number of Web-enabled e-commerce applications like eBay and Amazon have emerged to buy and sell products online.
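To make the idea concrete, the following Python sketch parses a small XML record with the standard library's ElementTree module; the tag names and values are invented for the example, but any application that knows them can extract the same data regardless of the platform that produced the document:

```python
import xml.etree.ElementTree as ET

# A hypothetical XML order record; the tags give the content a
# hierarchical structure that a parser can query directly.
order_xml = """
<order id="1001">
  <customer>Acme Corp</customer>
  <item sku="A-17">
    <name>Widget</name>
    <quantity>3</quantity>
    <unitPrice currency="USD">9.50</unitPrice>
  </item>
</order>
"""

root = ET.fromstring(order_xml)

# Any application that knows these tags can extract the same fields,
# which is what makes the document machine-processable data.
customer = root.findtext("customer")
qty = int(root.find("item/quantity").text)
price = float(root.find("item/unitPrice").text)

print(customer, qty * price)  # line-item total for the order
```

The same record could be produced by a word processor export or a database dump; as long as both sides agree on the tags, the data moves between heterogeneous applications unchanged.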
But, for collaboration in the open, ever-expanding World Wide Web by all,
everybody on the Web has to agree on the meaning of the Web contents. XML alone
does not add semantics to the Web content. Thus, in Web 3.0, Resource Description
Framework (RDF) protocol is designed to add another layer of metadata to add
meaning or semantics to the data (text, images, audio, or video) inside the document
with RDF vocabularies understood by machines. As computer memory is not
expensive anymore, this metadata can be verbose even for human understanding
instead of being only for machine understanding. Authors, publishers, and users all
can add metadata about a Web resource in a standardized format. This self-describing data inside the document can be individually addressed by the HTTP URI (Uniform Resource Identifier) mechanism, processed and linked to other data from other documents, and inferred over by machines automatically. The URI is an expansion of the concept of the Uniform Resource Locator (URL) and can be both a name and a location. Search
engines or crawlers will navigate the links and generate query response over the
aggregated linked data. This linked data will encourage reuse of information, reduce
redundancy, and produce more powerful aggregate information.
To this end, we need a standardized knowledge representation system [12, 13].
Modeling a knowledge domain using standard, shared vocabularies will facilitate
interoperability between different applications. Ontology is a formal representation
of knowledge as a set of concepts in a domain. Ontology components are classes,
their attributes, relations, restrictions, rules, and axioms. DublinCore, GFO (General
Formal Ontology), OpenCyc/SUMO (Suggested Upper Merged Ontology), DOLCE
(Descriptive Ontology for Linguistic and Cognitive Engineering), WordNet, FOAF
(Friend of a Friend), SIOC (Semantically Interlinked Online Communities), SKOS
(Simple Knowledge Organization System), DOAP (Description of a Project),
vCard, etc., are widely used, well-known ontology libraries of RDF vocabularies.
For example, implementation of DublinCore makes use of XML and a Resource
Description Framework (RDF).
RDF triples describe any data in the form of subject, predicate, and object.
Subject, predicate, and object all are URIs which can be individually addressed in
8 R. Guha

Fig. 1.1  Semantic Web Wedding Cake [8]
the Web by the HTTP URI mechanism. Subject and object can be URIs from the
same document or from two separate documents or independent data sources linked
by the predicate URI. Object can also be just a string literal or a value. RDF creates
a graph-based data model spanning the entire Web which can be navigated or
crawled, following the links, by software agents. RDF Schema (RDFS), the Web Ontology Language (OWL), and the Simple Knowledge Organization System (SKOS) were developed to write rules and express hierarchical relations and inferences between Web resources. They vary in their expressiveness, logical reasoning, and hierarchical knowledge organization, from the more limited RDFS to more powerful languages like OWL. For querying RDF data written with RDFS, OWL, or SKOS vocabularies, the RDF query language SPARQL has been developed.
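The triple model can be sketched in a few lines of Python. The URIs below are illustrative (loosely modeled on the Dublin Core and FOAF vocabularies mentioned above), and the pattern-matching function only stands in for what a real SPARQL engine does over a full RDF store:

```python
# Conceptual sketch of the RDF data model: each fact is a
# (subject, predicate, object) triple, where subjects and predicates
# are URIs. All URIs here are invented for the example.
triples = {
    ("http://example.org/book/42", "http://purl.org/dc/terms/creator",
     "http://example.org/person/guha"),
    ("http://example.org/book/42", "http://purl.org/dc/terms/title",
     "Semantic Web Basics"),
    ("http://example.org/person/guha", "http://xmlns.com/foaf/0.1/name",
     "R. Guha"),
}

def query(s=None, p=None, o=None):
    """Triple-pattern matching, the primitive underlying SPARQL:
    None acts as a variable; concrete values must match exactly."""
    return [(ts, tp, to) for (ts, tp, to) in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

# "What is the title of book 42?" -- follow the dc:title predicate.
title = query(s="http://example.org/book/42",
              p="http://purl.org/dc/terms/title")[0][2]

# Linked data: the object of one triple (the creator URI) is the
# subject of another, so queries can be chained across data sources.
creator = query(s="http://example.org/book/42",
                p="http://purl.org/dc/terms/creator")[0][2]
name = query(s=creator, p="http://xmlns.com/foaf/0.1/name")[0][2]
```

The chained lookup at the end is the essence of linked data: because the creator URI names a resource in its own right, the second query could just as well be answered by a different document on a different server.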
RDF tags can be added automatically or semiautomatically by tools like RDFizers [7], D2R (Database to RDF), JPEG → RDF, and Email → RDF. Linked data browsers like Disco, Tabulator, and Marbles are being designed to browse the linked data of the Semantic Web. Linked data search engines are also emerging: Falcon and SWSE (Semantic Web Search Engine) are designed for human navigation, while Swoogle and Sindice are designed for applications.
Figure 1.1 shows the Semantic Web protocol stack (the “Wedding Cake”) proposed by Tim Berners-Lee in 2000. The bottom of the Wedding Cake shows standards that are well defined and widely accepted, whereas the higher protocols are yet to be implemented in most Web sites. Unicode is a 16-bit code word, large enough (2^16 code points) to represent characters from any language in the world. The URI (Uniform Resource Identifier) is the W3C’s codification for addressing any object
over the Web. XML is for structuring the documents into data, and RDF is the
mechanism for describing data which can be understood by machines. Ontologies
are vocabularies from specific knowledge domain. Logic refers to making logical
inferences from associated linked data. Proof is keeping track of the steps of logical
inferences. Trust refers to the origin and quality of the data sources. This entire
protocol stack will transform the Web into a Semantic Web global database of
linked data for realizing the full potential of the Web.
Fig. 1.2  Cloud computing platform

1.2.2  Cloud Computing Platform

Cloud computing [14–16] is the most anticipated future trend of computing. Cloud
computing is the idea of renting out server, storage, network, software technologies,
tools, and applications as utility or service over the Internet as and when required in
contrast to owning them permanently. Depending on what resources are shared and delivered to the customers, there are three main service models of cloud computing. In cloud computing terminology, when hardware such as processors, storage, and network is delivered as a service, it is called infrastructure as a service (IaaS); examples are Amazon’s Elastic Compute Cloud (EC2) and Simple Storage Service (S3). When programming platforms and tools like Java, Python, .NET, MySQL, and APIs are delivered as a service, it is called platform as a service (PaaS). When applications are delivered as a service, it is called software as a service (SaaS).
Depending on the amount of self-governance or control on resources by the
tenant, there are three types of cloud like internal or private cloud, external or public
cloud, and hybrid cloud (Fig.  1.2). In private cloud, an enterprise owns all the
resources on-site and shares them between multiple applications. In public cloud,
the enterprise will rent the resources from an off-site cloud provider, and these
resources will be shared between multiple tenants. Hybrid cloud is in the middle
where an enterprise owns some resources and rents some other resources from a
third party.
Cloud computing is based on Service-Oriented Architecture (SOA) of Web 2.0
and Web 3.0 and virtualization [16–18] of hardware and software resources
(Fig. 1.3). Because of virtualization, physical resources can be linked dynamically to different applications running on different operating systems and shared among all users; this efficient resource management provides higher resource utilization and on-demand scalability. Increased resource utilization brings down
Fig. 1.3  Virtual infrastructure [13]

the cost of floor space, power, and cooling. Power savings is the most attractive
feature of cloud computing and is the renewed initiative of environment-friendly green
computing or green IT movement of today. Cloud computing not only reduces cost
of usage of resources but also reduces maintenance cost of resources for the user.
Cloud computing can support on-demand scalability. An application with occa-
sional demand for higher resources will pay for the higher resources only the time
it is used instead of leasing all the resources from the very beginning in anticipation
of future need. This fine-grained (hourly) pay-by-use model of cloud computing
is going to be very attractive to the customers. There are many other benefits of
cloud computing. Cloud infrastructure can support multiple protocols and change
in business model for applications more rapidly. It can also handle increased perfor-
mance requirements like service scaling, response time, and availability of the
application, as the cloud infrastructure is a huge pool of resources like servers,
storage, and network and provides elasticity of growth to the end users.
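The economics of the fine-grained pay-by-use model can be illustrated with a small calculation; all prices and workload figures here are invented for the sketch:

```python
# Illustrative comparison of hourly pay-per-use vs. leasing enough
# capacity for peak demand. All figures below are assumptions.
HOURLY_RATE = 0.10          # $ per server-hour in the cloud (assumed)
PEAK_SERVERS = 20           # capacity needed only during the surge
LEASE_COST_PER_SERVER = 50  # $ per server per month if leased (assumed)

def cloud_cost(hourly_demand):
    """Pay only for the server-hours actually consumed."""
    return sum(servers * HOURLY_RATE for servers in hourly_demand)

# A ~720-hour month of mostly low demand with a short surge to peak.
demand = [2] * 700 + [PEAK_SERVERS] * 20

pay_per_use = cloud_cost(demand)
lease_for_peak = PEAK_SERVERS * LEASE_COST_PER_SERVER

print(pay_per_use, lease_for_peak)
```

Under these assumed numbers, the bursty application pays far less by the hour than it would by leasing peak capacity for the whole month, which is exactly why the fine-grained model is attractive for workloads with occasional demand spikes.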
With this business model of catering to multiple clients with shared resources,
world’s leading IT companies like Microsoft, Google, IBM, SalesForce, HP, and
Amazon are deploying clouds (Fig. 1.2). Web services and applications like Hadoop
and Mashup can run on these clouds. Though the cloud computing platform has many advantages, there are a few challenges regarding the safety and privacy of tenants’ information that can threaten its adoption by the masses. If these challenges can be overcome, then, because of its many advantages, cloud computing may become the prevalent computing model of the future.

1.2.2.1  Safety and Privacy Issues in Cloud Computing Platform

All the resources of the cloud computing platform are shared by multiple tenants
(Fig. 1.4) over the Internet across the globe. In this shared environment, having trust
of data safety and privacy is of utmost importance to customers. Safety of data
means no loss of data pertaining to the owner of the data, and privacy of data means
Fig. 1.4  Shared resources in cloud computing

no unauthorized use of the sensitive data by others. As cloud providers have a greater resource pool, they can easily keep copies of data and ensure the safety of user data. Privacy of data is of more concern in the public cloud than in the private cloud. In a public cloud environment, as data is stored on off-premise machines, users have less control over the use of their data, and this mistrust can threaten the adoption of the cloud computing platform by the masses. Both technology and law enforcement should protect the privacy concerns of cloud customers [19, 20]. Software engineers must build their applications as Web services that lessen this risk of exposure of the sensitive data of cloud customers.
Next, we look into the preexisting software development methodologies to develop
quality software products in traditional environment not involving Web services
and cloud computing platform.

1.2.3  Traditional Software Engineering Process

Here, we delve into the preexisting software development methodologies used to develop quality software products in a traditional environment not involving Web services and the cloud computing platform. Over the last half-century, rapid advances
of hardware technology such as computers, memory, storage, communication networks, mobile devices, and embedded systems are pushing the need for larger and
more complex software. Software development not only involves many different
hardware technologies, it also involves many different parties like customers, stake-
holders, end users, and software developers. That is why software development is
an inherently complex procedure. Since 1968, software developers have had to adopt engineering discipline, i.e., a systematic, disciplined, and quantifiable approach, to make software development more manageable and to produce quality software products. The success or quality of a software project is measured by whether it is
developed within time and budget and by its efficiency, usability, dependability, and
maintainability [21, 22].
Software engineering starts with an explicit process model having framework of
activities which are synchronized in a defined way. This process model describes or
prescribes how to build software with intermediate visible work products (documents)
and the final finished product, i.e., the operating software. The whole development
process of software from its conceptualization to operation and retirement is called
the software development life cycle (SDLC). SDLC goes through several framework
activities like requirements gathering, planning, design, coding, testing, deployment,
maintenance, and retirement. Software requirements are categorized as functional,
contractual, safety, procedural, business, and technical specification. Accuracy of
requirements gathering is very important as errors in requirements gathering will
propagate through all other subsequent activities. Requirements arising from differ-
ent sectors need to be well documented, verified to be in compliance with each
other, optimized, linked, and traced. All software engineering process activities are
synchronized in accordance to the process model adopted for a particular software
development. There are many process models to choose from, like the waterfall model,
rapid application development (RAD) model, and spiral model depending on the
size of the project, delivery time requirement, and type of the project. As an example,
development of an avionic embedded system will adopt a different process model
than development of a Web application. Another criterion for choosing a suitable
process model is its ability to arrest errors in requirements gathering.
Even though software engineering takes engineering approach, success of soft-
ware product is more difficult than products from other engineering domain like
mechanical engineering or civil engineering. This is because software is intangible
during its development. Software project managers use a number of umbrella activi-
ties to monitor software framework activities in a more visible way. These umbrella
activities are software project tracking and control, risk management, quality assurance,
measurements, configuration management, work-product or documents generation,
review, and reusability management. CMMI (Capability Maturity Model Integration)
is a software process improvement model for software development companies by
comparing their process maturity with the best practices in the industry to deliver
quality software products.
Even after taking all these measures for sticking to the plan and giving much importance to document generation for project tracking and control, many software projects have failed. Oftentimes the volume of paper documents is too large for humans to aggregate information from. More than 50 % of software projects fail due to various
Fig. 1.5  Economics of software development

reasons like schedule and budget slippage, a non-user-friendly interface, and inflexibility for maintenance and change of the software. The root of all these problems is a lack of communication and coordination between all the parties involved.
Requirement changes are the major cause of increased complexity and of schedule and budget slippage. Incorporating changes at a later stage of the SDLC increases the cost of the project exponentially (Fig. 1.5). Adding more programmers at a later stage does not solve the schedule problem, as the increased coordination requirement slows down the project further. It is very important that requirements gathering, planning, and design of the software are done involving all the parties from the beginning.
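The exponential cost growth of Fig. 1.5 can be illustrated with per-phase cost-to-fix multipliers; the multiplier values below are assumptions for the sketch, not figures from this chapter:

```python
# Illustrative cost-to-fix multipliers per SDLC phase. The relative
# values are assumed for this sketch; the point is only the roughly
# exponential growth in the cost of a change caught late.
phase_multiplier = {
    "requirements": 1,
    "design": 5,
    "coding": 10,
    "testing": 20,
    "maintenance": 100,
}

def change_cost(base_cost, phase):
    """Estimated cost of incorporating a change at the given phase."""
    return base_cost * phase_multiplier[phase]

early = change_cost(1000, "requirements")  # change caught during requirements
late = change_cost(1000, "maintenance")    # the same change, caught in the field
```

Under these assumed multipliers, the same change costs two orders of magnitude more in maintenance than in requirements gathering, which is the argument for involving all parties early.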
That is the reason why several agile process models like Extreme Programming
(XP) (Fig. 1.6), Scrum, Crystal, and Adaptive have been introduced in mid-1990s to
accommodate continuous changes in requirements during the development of the
software. These agile process models have shorter development cycles where small
pieces of work are “time-boxed,” developed, and released for customer feedback, verification, and validation iteratively. One time-box takes from a few weeks to at most a month. The agile process model is communication intensive, as customer
satisfaction is given the utmost importance. Agile software development is possible
only when the software developers are talented, motivated, and self-organized.
Agile process model eliminates the exponential increase of cost to incorporate
changes as in the waterfall model by keeping the customer involved throughout and
validating small pieces of work by them iteratively. These agile process models
work better for most of the software projects as changes are inevitable, and responding
to the changes is the key to the success of a project.
Fig. 1.6  Extreme Programming process model

Figure 1.6 depicts the steps of agile process model named Extreme Programming
(XP) for a traditional software development where the customer owns the develop-
ing platform or software developers develop in-house and deploy the software to the
customer after it is built. XP has many characteristics, like the user story card and the CRC (class, responsibility, collaboration) card narrated during the requirements gathering stage jointly by the customer and the software engineers. The customer decides the priority of each story card, and only the highest-priority card is considered, or “time-boxed,” for the current iteration of software development. Construction of code is performed by two engineers sitting at the same machine so that there is less scope for errors in the code; this is called pair programming. Code is continuously refactored, or improved, to make it more efficient.
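The story-card selection step can be sketched as a simple priority-driven choice; the story names, priorities, and estimates below are invented:

```python
# Sketch of XP story-card selection: the customer assigns a priority
# to each story card, and only the highest-priority card that fits is
# "time-boxed" for the current iteration. All card data is invented.
story_cards = [
    {"story": "User login",    "priority": 3, "estimate_days": 5},
    {"story": "Shopping cart", "priority": 1, "estimate_days": 8},  # 1 = highest
    {"story": "Order history", "priority": 2, "estimate_days": 4},
]

def next_time_box(cards, iteration_days=20):
    """Pick the highest-priority card whose estimate fits the iteration."""
    for card in sorted(cards, key=lambda c: c["priority"]):
        if card["estimate_days"] <= iteration_days:
            return card
    return None

current = next_time_box(story_cards)
```

With these sample cards, the shopping-cart story is time-boxed first; after each iteration the customer re-ranks the remaining cards, which is how XP absorbs changing requirements.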
In the following sections, analysis for the need for producing software develop-
ment artifacts for the Semantic Web and the challenges of the current business
model of application development and deployment involving Web 2.0 and Web 3.0
technologies and cloud computing platform are reported. Finally, methodologies to
develop quality software that will push forward the advances of the cloud computing
platform have been suggested.

1.3  Need for Modification of Software Engineering: Analysis

1.3.1  Need for Semantic Web-Enabled Software Artifacts

The Semantic Web effort has just started, and not everyone is aware of it, not even all IT professionals. The linked data initiative [7], started in 2007 by a small group of academic researchers from universities, now has as participants a few large organizations like the BBC, Thomson Reuters, and the Library of Congress, who have transformed their
Fig. 1.7  Linking open data cloud diagram giving an overview of published data sets and their
interlinkage relationships [7]

data for the Semantic Web. DBpedia is another community effort, to transform Wikipedia documents for the Semantic Web; sophisticated queries can be run on DBpedia data and linked to other Semantic Web data. Friend of a Friend (FOAF) is another project, to link social Web sites and their people and describe what they create or do. Federal and state governments are also taking initiatives to publish public data online. US Census data is one such semantic data source, which can be queried and linked with other semantic data sources. Unless all government public data are transformed for the Semantic Web, they will not be suitable for interoperable Web applications.
Figure 1.7 shows the current size of the linked data Web as of March 2009. Today
there are 4.7 billion RDF triples which are interlinked by 142 million RDF links.
Anybody can transform their data into linked data standards and link it to the existing linked data Web. In Fig. 1.7, the circles are nodes of independent data sources or
Web sites, and the arcs are their relationship with other data sources. The thicker
links specify more connections between the two data sources, and bidirectional
links mean both data sources are linked to each other.
Once the software engineers grasp the Semantic Web technologies and understand
their capabilities and their many advantages like interoperability, adaptability,
integration ability of open and distributed software components with other applications,
they will make their software artifacts Semantic Web ready. Once the software artifacts are transformed into semantic artifacts, software maintainability will be
Fig. 1.8  Service-Oriented Architecture for interoperability of services

much more efficient and cheaper. All requirements can be optimized, linked, and
traced. Aggregating information from requirements documents will be easy, and impact analysis can be done more accurately before actual changes are made. Increased maintainability of software will also increase its reliability.
Semantic Web services will be easy to discover on the Web, and that will give
a competitive edge to their products. Semantic Web services which can be linked
with other Web services will create new and more powerful software applications,
encourage reuse, and reduce redundancy.

1.3.2  Creating a Web Service

The benefits of Web services [23–26] are code reuse and speedy development of software projects. But in order to use Web services from the Web, an application must create a Web client which can interface with the Web services to request and receive services. Figure 1.8 illustrates the Service-Oriented Architecture (SOA) that has emerged to deliver the software as a service (SaaS) business model.
An application programming interface (API) for a Web service is first created as a WSDL document using XML tags, for advertising it to the world over the Internet. A WSDL document has five major parts: it describes data types, messages, port, operation (class and methods), binding (SOAP message), and location (URL). WSDL documents need not be created manually; there are automatic tools, like Apache Axis [25], which will create the API from Java programming code. Apache Axis is an open source, XML-based Web service framework.
After creating the WSDL document, a Web client to consume the Web service is needed. The Web client is created using SOAP to communicate request and response messages between the two applications. SOAP is an XML messaging format for exchanging structured data (XML documents) over the HTTP transport protocol and can be used for remote procedure calls (RPC). The SOAP structure has three parts: (1) envelope, (2) header, and (3) body; the body defines the message and how to process it. Software engineers have to master XML and other Web technologies like WSDL and SOAP, in addition to knowing a programming language like Java or C++, in order to use or create a Web service.
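The three-part SOAP structure can be made concrete by constructing and parsing a minimal request message. The getQuote operation, its namespace, and the service itself are hypothetical; a real client would POST this XML to the service endpoint over HTTP:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

# Minimal SOAP request for a hypothetical getQuote operation; the
# service namespace and operation name are invented for illustration.
request = f"""
<soap:Envelope xmlns:soap="{SOAP_NS}">
  <soap:Header/>
  <soap:Body>
    <getQuote xmlns="http://example.org/stockservice">
      <symbol>IBM</symbol>
    </getQuote>
  </soap:Body>
</soap:Envelope>
"""

# The receiving side parses the body to find the requested operation
# and its arguments -- the structured exchange the WSDL advertises.
root = ET.fromstring(request)
body = root.find(f"{{{SOAP_NS}}}Body")
operation = body[0]
symbol = operation.find("{http://example.org/stockservice}symbol").text
```

In practice, toolkits generated from the WSDL hide this envelope construction behind ordinary method calls, but the wire format remains this three-part XML structure.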

1.3.3  How SW Engineers Are Coping in Cloud Platform

This section surveys how the software development industry is trying to survive in the era of Web 2.0 and Web 3.0, with Web services and cloud computing. In reference
[27], the authors present framework activities for designing applications based on
discovery of Semantic Web service using software engineering methodologies.
They propose generating semiautomatic semantic description of applications
exploiting the existing methodologies and tools of Web engineering. This increases
design efficiency and reduces manual effort of semantically annotating the new
application composed from Web services of multiple enterprises.
In reference [28], Salesforce.com finds that the agile process model works better on the cloud computing platform. Before cloud computing, releasing the software to the user took time, and getting feedback from the customer took more time, which thwarted the very concept of agile development; now, new releases of the software can be uploaded to the server and used by the users immediately. Basically, what they describe is the benefits of software as a service hosted on the Internet and how it complements the agile methodology. They have not considered the challenges of cloud computing in developing new business software.
Cloud computing being the newest hype of the IT industry, the challenges of
software engineering on cloud computing platform have not been studied yet, and
no software development process model for cloud computing platform has been
suggested yet. We analyze the challenges of the cloud computing platform on
software development process and suggest extending the existing agile process
model, named Extreme Programming, to mitigate all the challenges in Sect. 1.3.4.

1.3.4  Impact of Cloud Computing on Software Engineering

In the rapidly changing computing environment with Web services and cloud
platform, software development is going to be very challenging. Software develop-
ment process will involve heterogeneous platforms, distributed Web services, and
multiple enterprises geographically dispersed all over the world. Existing software
process models and framework activities are not going to be adequate unless inter-
action with cloud providers is included.
Requirements gathering phase so far included customers, users, and software
engineers. Now it has to include the cloud providers as well, as they will be supplying the computing infrastructure and maintaining it too. As only the cloud providers
will know the size, architectural details, virtualization strategy, and resource utilization
percentage of the infrastructure, planning and design phases of software development
also have to include the cloud providers. The cloud providers can help in answering
these questions about (1) how many developers are needed, (2) component reuse,
(3) cost estimation, (4) schedule estimation, (5) risk management, (6) configuration
management, (7) change management, and (8) quality assurance.
Because of the component reuse of Web services, the size of the software in
number of kilo lines of code (KLOC) or number of function points (FP) to be newly
developed by the software engineer will reduce, but complexity of the project will
increase manyfold because of lack of documentations of implementation details
of Web services and their integration requirements. The only description available online will be the metadata information of the Web services, to be processed by computers automatically.
Only the coding and testing phases can be done independently by the software engineers. Coding and testing can be done on the cloud platform, which is a huge benefit, as everybody will have easy access to the software being built. This will reduce the cost and time for testing and validation.
However, software developers need to use the Web services and open source
software freely available from the cloud instead of procuring them. Software
developers should have more expertise in building software from readily available
components than writing it all and building a monolithic application. Refactoring of
existing application is required to best utilize the cloud infrastructure architecture in
a cost-effective way. In the latest hardware technology, the computers are multi-core
and networked, and the software engineers should train themselves in parallel and
distributed computing to complement these advances of hardware and network
technology. Software engineers should train themselves in Internet protocols, XML,
Web service standards and layered separation of concerns of SOA architecture
of Internet, and Semantic Web technologies to leverage all the benefits of Web
2.0. Cloud providers will insist that software be as modular as possible to allow occasional migration from one server to another for load balancing as required [16].
Maintenance phase should also include the cloud providers. There is a complete
shift of responsibility of maintenance of the infrastructure from software developers
to cloud providers. Now because of the involvement of the cloud provider, the
customer has to sign a contract with them as well so that the “Software Engineering
code of ethics” is not violated by the cloud provider. In addition, protection and
security of the data is of utmost importance which is under the jurisdiction of the
cloud provider now.
Also, occasional demands for higher resource usage of CPU time or network from applications may throw the pay-by-use model of cloud computing into jeopardy,
Fig. 1.9  Economics vs. complexity of software
as multiple applications may need higher resource usage at the same time, something the cloud provider did not anticipate in the beginning. Especially when applications are deployed under the “software as a service” (SaaS) model, they may experience occasional workload surges that cannot be anticipated in advance.
The cloud provider uses resource virtualization techniques to cater to many customers on demand in an efficient way. For higher resource utilization, occasional migration of an application from one server to another, or from one storage device to another, may be required by the cloud provider. This may be a conflict of interest with the customer, as they want dedicated resources with high availability and reliability for their applications. To avoid this conflict, cloud providers need to introduce quality of service provisions for higher-priority tenants.
Now we analyze how difficult the interaction between cloud providers and software engineers will be. The amount of interaction between them will depend on the type of cloud involved: public, private, or hybrid. In a private cloud, there is more control, or self-governance, by the customer than in a public cloud. The customer should also consider using a private cloud instead of a public cloud to assure the availability and reliability of their high-priority applications. The benefits of a private cloud are less interaction with the cloud provider, self-governance, high security, and reliability and availability of data (Fig. 1.9). But the cheaper computing of the public cloud will always outweigh the benefit of less complex software development on a private cloud platform and is going to be more attractive.

1.4  Proposed SW Process Model for Cloud Platform

Innovative software engineering is required to leverage all the benefits of cloud computing and mitigate its challenges strategically, to push forward its advances. Here, an extended version of Extreme Programming (XP), an agile process model for the cloud computing platform named Extreme Cloud Programming (Fig. 1.10), is proposed. All the phases, like requirements gathering, planning, design, construction, testing, and deployment, need interaction with representatives from the cloud provider. The roles and activities of the cloud provider and the software developers are separated and listed in Table 1.1. Resource accounting on the cloud platform will be done by the cloud
Fig. 1.10  Extreme Cloud Programming development on cloud computing [29]

Table 1.1  Software engineering-role separation [29]

Activity                 Software developer               Cloud provider
Requirements gathering   Elicitation                      Resource accounting, virtual machine
Analysis                 SW modules                       SW/HW architecture
Design                   Interface design, data types     Component reuse, cost estimation,
                                                          schedule estimation
Construction             Coding, integration of           Implementation details
                         Web services
Testing                  Unit test, integration test      Integration test
Deployment                                                Operation and maintenance

provider in the requirements gathering phase. Software architecture, software architecture to hardware architecture mapping, interface design, data types design,
cost estimation, and schedule estimation of the project all should be done in collabo-
ration with the cloud provider. During the construction phase of the application, if
Web services are integrated where many different enterprises are involved, then error
should be mitigated with the mediation of the cloud provider. Maintenance contract
with cloud provider will be according to the Quality of Service agreement.
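The role split of Table 1.1 can be captured as a simple lookup structure. This is an illustrative sketch only: the function name is not from the chapter, and the assignment of sub-items to the developer and provider columns follows a reconstruction of the printed table.

```python
# The developer/provider role split of Table 1.1 encoded as a
# lookup table. Cell contents follow the table as reconstructed;
# the helper name is illustrative, not from the chapter.

ROLES = {
    "requirements gathering": {
        "developer": ["elicitation"],
        "provider":  ["resource accounting", "virtual machine"],
    },
    "analysis": {
        "developer": ["SW modules"],
        "provider":  ["SW/HW architecture"],
    },
    "design": {
        "developer": ["interface design", "data types"],
        "provider":  ["component reuse", "cost estimation",
                      "schedule estimation"],
    },
    "construction": {
        "developer": ["coding", "integration of Web services"],
        "provider":  ["implementation details"],
    },
    "testing": {
        "developer": ["unit test", "integration test"],
        "provider":  ["integration test"],
    },
    "deployment": {
        "developer": [],
        "provider":  ["operation and maintenance"],
    },
}

def activities_of(phase, role):
    """Return the activities a given role performs in a phase."""
    return ROLES[phase][role]

print(activities_of("design", "provider"))
```

Such an explicit mapping makes the added coordination dimension visible: every phase has a non-empty cloud provider column, which is precisely the interaction overhead the extended process model has to plan for.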
A software metric is required for effort estimation of SW development using the new Extreme Cloud Programming process model. Such a metric is needed because, as the American consultant Tom DeMarco aptly stated in 1997 in his book [30] about
1  Impact of Semantic Web and Cloud Computing Platform on Software Engineering 21

Table 1.2  COCOMO [29]

Software project   a     b      c     d
Organic            2.4   1.05   2.5   0.38
Semidetached       3.0   1.12   2.5   0.35
Embedded           3.6   1.20   2.5   0.32
Cloud computing    4.0   1.20   2.5   0.30

managing risk in software projects that "You cannot control what you cannot measure." The Constructive Cost Model (COCOMO) is the most widely used model for cost estimation of software development projects. In the COCOMO model (Table 1.2), three classes of software projects have been considered so far. These software projects are classified as (1) organic, (2) semidetached, and (3) embedded according to the software team size, the team's experience, and the development (HW, SW, and operations) constraints. We extend [29] this cost estimation model with a new class of software project for the cloud computing platform. In the basic COCOMO model, effort (man-months), development time (months), and the number of people required are given by the following equations.

Effort Applied = a (KLOC)^b  [man-months]

Development Time = c (Effort Applied)^d  [months]

Number of People = Effort Applied / Development Time  [persons]



The typical values of the coefficients a, b, c, and d for the different classes of software projects are listed in Table 1.2. In anticipation of the additional interaction complexity with cloud providers, the coefficient a is increased to 4 for the cloud computing platform. The coefficients a and b for cloud computing are determined so that the effort curve is steeper than those of the other three classes while remaining, like them, near-linear. Similarly, the coefficients c and d for cloud computing are determined so that the development-time curve is less steep than those of the other three classes. The coefficients a, b, c, and d for cloud computing are thus readjusted to 4, 1.2, 2.5, and 0.3.
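The basic COCOMO relations with the coefficient values of Table 1.2 can be sketched in a few lines of Python (a minimal illustration; the class labels are shortened here, and the printed example values come from running the sketch):

```python
# Basic COCOMO estimation using the coefficients of Table 1.2.
# Effort = a * KLOC^b (man-months); Time = c * Effort^d (months);
# People = Effort / Time.

COEFFICIENTS = {
    # class:        (a,   b,    c,   d)
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
    "cloud":        (4.0, 1.20, 2.5, 0.30),
}

def cocomo(kloc, project_class):
    """Return (effort in man-months, time in months, people)."""
    a, b, c, d = COEFFICIENTS[project_class]
    effort = a * kloc ** b
    time = c * effort ** d
    people = effort / time
    return effort, time, people

for cls in COEFFICIENTS:
    e, t, p = cocomo(30, cls)
    print(f"{cls:12s} effort={e:6.1f} mm  time={t:5.1f} months  people={p:4.1f}")
```

Running the sketch for a fixed project size makes the coefficient choice tangible: before any reuse adjustment, the raised coefficient a gives the cloud class the largest raw effort estimate, while its smaller d gives the flattest development-time growth.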
Because of component reuse, software development with cloud computing will reduce the KLOC (kilo lines of code) significantly. We deduce a new KLOC = i * C + KLOC * C, where C is the percentage of component reuse and i is a coefficient adjusting for the new interface design effort.
Figure 1.11 plots the software effort estimates for project sizes varying from 10 to 50 KLOC for all four classes of projects, assuming 30 % component reuse in the cloud computing case. If a higher percentage of component reuse is possible, it will further mitigate the higher interaction complexity captured in the coefficient a and will benefit the cloud computing platform. Figure 1.12 plots the corresponding software development time estimates for all four classes of software projects. With 30 % component reuse, software development on the cloud computing platform will take the least amount of time.
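The reuse adjustment can be combined with the extended COCOMO coefficients in a short sketch. Note the hedge: the chapter gives the formula new KLOC = i * C + KLOC * C but does not state a value for the interface coefficient i, so i = 2.0 below is an assumed illustrative value, not a figure from the text.

```python
# Effort comparison with component reuse on the cloud platform,
# following the chapter's adjustment new_KLOC = i*C + KLOC*C,
# where C is the fraction of component reuse. The interface
# coefficient i is not specified in the text; i = 2.0 is assumed
# here purely for illustration.

def effort(kloc, a, b):
    return a * kloc ** b  # basic COCOMO effort, man-months

def cloud_effort(kloc, reuse_fraction, i=2.0):
    new_kloc = i * reuse_fraction + kloc * reuse_fraction
    return effort(new_kloc, a=4.0, b=1.2)

for kloc in (10, 20, 30, 40, 50):
    organic = effort(kloc, 2.4, 1.05)
    embedded = effort(kloc, 3.6, 1.2)
    cloud = cloud_effort(kloc, reuse_fraction=0.30)
    print(f"{kloc:2d} KLOC  organic={organic:6.1f}  "
          f"embedded={embedded:6.1f}  cloud(30% reuse)={cloud:6.1f}")
```

Under these assumptions the reuse-reduced KLOC more than offsets the higher coefficient a, so the cloud estimate drops below even the organic class, which is consistent with the trend the chapter reads from Figs. 1.11 and 1.12.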

Fig. 1.11  Extended COCOMO for SW effort estimation [29]

Fig. 1.12  Extended COCOMO for SW dev. time [29]

1.5  Conclusion

The development of the Semantic Web, or Web 3.0, can transform the World Wide Web into an intelligent Web system of structured, linked data which can be queried and reasoned over as a whole by computers themselves. This Semantic Web capability is enabling many innovative uses of the Web, such as hosting Web services and the cloud computing platform. Web services and cloud computing are paradigm shifts from the traditional way of developing and deploying software. This will make software engineering more difficult, as software engineers have to master Semantic Web

skills for using open-source software on a distributed computing platform, and they have to interact with a third party, the "cloud provider," in all stages of the software process. Automatic discovery of and integration with Web services will reduce the amount of work, in terms of lines of code (LOC) or function points (FP), required for developing software on the cloud platform, but the added semantic skill requirements and the communication and coordination with the cloud providers make a software development project more complex.
First, the Semantic Web techniques are explored regarding what software developers need to incorporate in their artifacts in order for these to be discovered easily on the Web, giving their product a competitive edge and serving efficient software integration and maintenance. Then, the need for changes in the prevalent software process models is analyzed, suggesting that they should incorporate the new dimension of interaction with cloud providers and the separate roles of software engineers and cloud providers. A new agile process model is proposed in this chapter which includes the anticipated interaction requirements with the cloud provider; this will mitigate the challenges of software development on the cloud computing platform and make it more advantageous to develop and deploy software there.
With cloud computing being the anticipated future computing platform, more software engineering process models need to be researched that can mitigate its challenges and reap its benefits. Also, the safety and privacy of data on the cloud computing platform need to be considered seriously so that cloud computing is truly accepted by all.

References

1. Berners-Lee, T.: Future of the web. http://dig.csail.mit.edu/2007/03/01 (2007)


2. Guha, R.: Toward the intelligent web systems. In: Proceedings of IEEE CS, First International
Conference on Computational Intelligence, Communication Systems and Network, pp. 459–463.
IEEE, Los Alamitos (2009)
3. Hendler, J., Shadbolt, N., Hall, W., Berners-Lee, T., Weitzner, D.: Web science: an interdisciplinary approach to understanding the web. Commun. ACM 51(7), 60–69 (2008)
4. Chong, F., Carraro, G.: Architecture Strategies for Catching the Long Tail. Microsoft
Corporation, Redmond (2006)
5. Banerjee, J., Aziz, S.: SOA: the missing link between enterprise architecture and solution
architecture. SETLabs Brief. 5(2), 69–80 (2007)
6. Berners-Lee, T.: Linked data. http://www.w3.org/DesignIssues/LinkedData.html (2012)
7. Bizer, C., Heath, T., Berners-Lee, T.: Linked data – the story so far. Special issue on linked data. Int. J. Semant. Web Inf. Syst. (IJSWIS). http://tomheath.com/papers/bizer-heath-berners-lee-ijswis-linked-data.pdf (2012)
8. Niemann, B., et al.: Introducing Semantic Technologies and the Vision of the Semantic Web,
SICoP White Paper (2005)
9. HADOOP: http://en.wikipedia.org/wiki/Hadoop (2010)
10. Taft, D.: IBM’s M2 Project Taps Hadoop for Massive Mashups. www.eweek.com (2010)
11. Wikipedia: Free and open source software. http://en.wikipedia.org/wiki/Free_and_open-source_software. Accessed July 2012

12. Wikipedia: Web Ontology Language. http://en.wikipedia.org/wiki/Web_Ontology_Language. Accessed July 2012
13. Code{4}lib: Library Ontology. http://wiki.code4lib.org/index.php/Library_Ontology. Accessed July 2012
14. Sun Microsystem: Introduction to Cloud Computing Architecture, White Paper, 1st edn. (2009)
15. Sun Microsystem: Open Source & Cloud Computing: On-Demand, Innovative IT on a Massive
Scale (2012)
16. Singh, A., Korupolu, M., Mahapatra, D.: Server-storage virtualization: integration and load
balancing in data centers. In: IEEE/ACM Supercomputing (SC) Conference. IEEE Press,
Piscataway (2008)
17. VMWARE: Virtualization overview. www.vmware.com (2012)
18. Reservoir Consortium: Resources and Services Virtualization Without Barriers. Scientific
Report (2009)
19. Pearson, S.: Taking Account of Privacy when Designing Cloud Computing Services. HP Labs,
Bristol (2009)
20. Jansen, W.A.: Cloud Hooks: Security and Privacy Issues in Cloud Computing. NIST
21. Pressman, R.: Software Engineering: A Practitioner’s Approach, 7th edn. McGraw-Hill Higher
Education, New York (2009)
22. Sommerville, I.: Software Engineering, 8th edn. Pearson Education, Harlow (2006)
23. Cavanaugh, E.: Web services: benefits, challenges, and a unique, visual development solution.
www.altova.com (2006)
24. Nickull, D., et al.: Service Oriented Architecture (SOA) and Specialized Messaging Patterns (2007)
25. Web services-Axis: axis.apache.org/axis (2012)
26. W3C: Web services Description Language (WSDL) Version 2.0 (2012)
27. Brambilla, M. et  al.: A Software Engineering Approach to Design and Development of
Semantic Web Service Applications (2006)
28. Salesforce.com: Agile Development Meets Cloud Computing for Extraordinary Results.
www.salesforce.com (2009)
29. Guha, R., Al-Dabass, D.: Impact of Web 2.0 and cloud computing platform on software engineer-
ing. In: Proceedings of 1st International Symposium on Electronic System Design (ISED) (2010)
30. DeMarco, T., Lister, T.: Waltzing with Bears: Managing Risk on Software Projects. Dorset
House Publishing Company, Incorporated, New York (2003)
Chapter 2
Envisioning the Cloud-Induced Transformations
in the Software Engineering Discipline

Pethuru Raj, Veeramuthu Venkatesh, and Rengarajan Amirtharajan

Abstract  The software engineering field is on the move. The contributions of software solutions to IT-inspired business automation, acceleration, and augmentation are enormous. The business value is also rapidly growing with the constant and consistent maturing and stabilization of software technologies, processes, infrastructures, frameworks, architectural patterns, and tools. On the other hand, the uncertainty in the global economy has a direct bearing on the IT budgets of organizations worldwide. That is, they expect greater flexibility, responsiveness, and accountability from their IT division, which is chronically touted as a cost center. This demands shorter delivery cycles and low-cost yet high-quality solutions. Cloud computing prescribes a distinguished delivery model that helps IT organizations provide quality solutions efficiently, in a manner that suits evolving business needs. In this chapter, we focus on how software development tasks can be greatly simplified and streamlined with cloud-centric development processes, practices, platforms, and patterns.

Keywords  Cloud computing • Software engineering • Global software development • Model-driven architecture • MDA • Lean methodology • Distributed computing

2.1 Introduction

The number of pioneering discoveries in the Internet space is quite large. In the
recent past, the availability of devices and tools to access online and on-demand
professional and personal services has increased dramatically. Software has been

P. Raj (*)
Wipro Technologies, Bangalore 560035, India
e-mail: [email protected]
V. Venkatesh • R. Amirtharajan
School of Electrical and Electronics Engineering, SASTRA University,
Thanjavur, Tamil Nadu, India

Z. Mahmood and S. Saeed (eds.), Software Engineering Frameworks for the Cloud 25
Computing Paradigm, Computer Communications and Networks,
DOI 10.1007/978-1-4471-5031-2_2, © Springer-Verlag London 2013
26 P. Raj et al.

pervasive and persuasive. It runs on almost all kinds of everyday devices that are
increasingly interconnected as well as Internet-connected. This deeper and extreme
connectivity opens up fresh possibilities and opportunities for students, scholars,
and scientists. The devices at the ground level are seamlessly integrated with cyber
applications at remote, online, on-demand cloud servers. The hardware and software
infrastructure solutions need to be extremely scalable, nimble, available, high-
performing, dynamic, modifiable, real-time, and completely secure. Cloud computing
is changing the total IT landscape by presenting every single and tangible IT resource
as a service over any network. This strategically sound service enablement eliminates all kinds of dependency, portability, and interoperability issues.
Cloud services and applications are becoming very popular and pervasive these days. Increasingly, both business and IT applications are being modernized appropriately and moved to clouds, to be subsequently subscribed to and consumed by user programs and people worldwide, anytime and anywhere, for free or a fee. The aspect of software delivery is hence set for a paradigm shift with the smart leverage of cloud concepts and competencies. There is now a noteworthy trend emerging fast that inspires professionals and professors to pronounce the role and responsibility of clouds in software engineering. That is, not only cloud-based software delivery but also cloud-based software development and debugging are seen as the need of the hour. On carefully considering these happenings, it is no exaggeration to say that end-to-end software production, provision, protection, and preservation are set to happen in virtualized IT environments in a cost-effective, compact, and cognitive fashion. Another interesting and strategic pointer is that the number and types of input/output devices interacting with remote, online, and on-demand clouds are on the climb. Besides fixed and portable computing machines, slim and sleek mobile, implantable, and wearable devices are emerging to access, use, and orchestrate a wide variety of disparate and distributed professional as well as personal cloud services. The urgent task is to embark on modernizing and refining the currently used application development processes and practices in order to make cloud-based software engineering simpler, more successful, and sustainable.
In this chapter, we discuss cloud-sponsored transformations for IT, consider leveraging clouds for global software development, and present a reflection on software engineering. The combination of agility and cloud infrastructure for next-generation software engineering, the convergence of the service and cloud paradigms, the amalgamation of model-driven architecture and the cloud, and various mechanisms for assisting cloud software development are also discussed. At the end, cloud platform solutions for software engineering are presented, along with the software engineering challenges posed by cloud environments.

2.2 Cloud-Sponsored Transformations for IT

The popularity of the cloud paradigm is surging, and it is overwhelmingly accepted as a disruptive, transformative, and innovative technology for the entire IT field. The direct benefits include IT agility through rationalization, simplification,
2 Envisioning the Cloud-Induced Transformations in the Software Engineering… 27

higher utilization, and optimization. This section explores the tectonic and seismic
shifts of IT through the cloud concepts.
• Adaptive IT – There are a number of cloud-inspired innovations in the form of
promising, potential, and powerful deployment; delivery; pricing; and consump-
tion models in order to sustain the IT value for businesses. With IT agility setting
in seamlessly, business agility, autonomy, and adaptivity are being guaranteed
with the adoption and adaption of cloud idea.
• People IT – Clouds support a centralized yet federated working model and operate at a global level. For example, today there are hundreds of thousands of smartphone applications and services accumulated and delivered via mobile clouds. With ultrahigh-broadband communication infrastructures and advanced compute clouds in place, the vision of the Internet of devices, services, and things is set to become a reality. Self-, surroundings-, and situation-aware services will become common, plentiful, and cheap; thereby IT can deal with people's needs precisely and deliver on them directly.
• Green IT – The whole world is becoming conscious of power consumption and the heat dissipated into our living environment. There are calculated campaigns at different levels for arresting climate change and for a sustainable environment through lower greenhouse-gas emissions. IT is being approached for competent green solutions, and grid and cloud computing are the leading concepts for a green environment. Especially the smart energy grid and the Internet of Energy (IoE) disciplines are gaining ground in order to contribute decisively to the global goal of sustainability. The much-publicized cloud paradigm leads to lean compute, communication, and storage infrastructures, which significantly reduce electricity consumption.
• Optimal IT – There are a number of worthwhile optimizations happening in the
business-enabling IT space. “More with less” has become the buzzword for both
business and IT managers. Cloud enablement has become the mandatory thing
for IT divisions as there are several distinct benefits getting accrued out of this
empowerment. Cloud certainly has the wherewithal for the goals behind the IT
optimization drive.
• Next-Generation IT – With a number of delectable advancements in wireless and
wired broadband communication space, the future Internet is being positioned as
the central figure in conceiving and concretizing people-centric discoveries and
inventions. With cloud emerging as the new-generation compute infrastructure,
we will have connected, simplified, and smart IT that offers more influential and
inferential capability to humans.
• Converged, Collaborative, and Shared IT – The cloud idea is fast penetrating into every tangible domain. Cloud platforms are famous not only for software deployment and delivery but also for service design, development, debugging, and management. Further on, clouds, being consolidated, converged, and centralized infrastructure, are being prescribed and presented as the best bet for enabling seamless and spontaneous service integration, orchestration, and

collaboration. With everything (application, platform, and infrastructure) termed and touted as publicly discoverable, network-accessible, self-describing, autonomous, and multitenant services, clouds will soon become the collaboration hub. In particular, business-aware, process-centric, and service-oriented composites can be easily realized with a cloud-based collaboration platform.
• Real-Time IT – Data's variety, volume, and velocity are on the climb. Current IT infrastructures are insufficient for extracting actionable insights from the pouring-in data. Hence, the emerging big data computing and analysis technologies are given due diligence and attention. These fast-maturing technologies can accomplish the real-time transition from data to information and on to knowledge. The cloud is the optimized, automated, and virtualized infrastructure for big data computing and analytics. That is, with infrastructure support from clouds, the big data computing model will see many improvements in the days ahead, so that the ultimate goal of real-time analytics can be realized fluently and flawlessly.

2.3 Leveraging Clouds for Global Software Development (GSD)

Globalization and distribution are two key concepts in the IT field. Software development crosses national boundaries and gravitates toward places where quality software engineers and project managers are available in plenty. On-site, offshoring, and nearshoring are some of the recent buzzwords in IT circles due to these developments. That is, even a single software project is developed in different locations as the project team is distributed across the globe. With the sharp turnarounds in the communication field, tight coordination and collaboration among team members are possible in order to make project implementation successful and sustainable. In-sourcing has paved the way for outsourcing with the maturing of appropriate technologies. As is widely known, software sharply enhances the competitive advantage and edge of businesses. Hence, global software development (GSD) has become mandatory for organizations worldwide. Nevertheless, when embarking on GSD, organizations continue to face challenges in adhering to the development life cycle. The advent of the Internet has supported GSD by bringing new concepts and opportunities, resulting in benefits such as scalability, flexibility, independence, reduced cost, resource pools, and usage tracking. It has also caused the emergence of new challenges in the way software is delivered to stakeholders. Application software and data on the cloud are accessed through services, which follow SOA principles.
GSD is actually the software development process incorporating teams spread across the globe in different locations, countries, and even continents. The driver for this sort of arrangement is the fact that conducting software projects in multiple geographical locations is likely to result in benefits such as cost reduction and

reduced time to market, access to a larger skill pool, proximity to customer, and
24-h development by following the sun. But, at the same time, GSD brings challenges
to distributed software-development activities due to geographic, cultural, linguistic,
and temporal distance between the project development teams.
Because of the distance between software development teams, GSD encounters challenges in terms of collaboration, communication, coordination, culture, management, organization, outsourcing, the development process, development teams, and tools. The real motive for using the cloud to support GSD is that the cloud idea thrives because it is closely related to the service paradigm. That is, services are created, provisioned, and delivered from cloud-based service platforms. Since SOA provides a mechanism for the development and management of distributed dynamic systems, and it evolved from the distributed component-based approach, it is argued that the cloud has the innate potential and strength to successfully cater for the challenges of GSD where a project is developed across different geographical locations. GSD challenges can be overcome through SOA. This will contribute to increased interoperability, diversification, and business-technology alignment. The cloud, as the next-generation centralized and service-oriented infrastructure, is capable of removing all the internal as well as externally imposed challenges.
• Global Software Development (GSD) in Cloud Platforms [1] – Clouds offer instant resource provisioning, flexibility, on-the-fly scaling, and high availability for continuously evolving GSD-related activities. Some of the use cases include the following:
• Development Environments – With clouds, the ability to acquire, deploy, configure, and host development environments becomes "on-demand." The development environments are always on and always available to the implementation teams, with fine-grained access control mechanisms. In addition, the development environments can be purpose-built, with support for application-level tools, source code repositories, and programming tools. After the project is done, they can be archived or destroyed. The other key element of these "on-demand" hosting environments is flexibility through quick prototyping support: new code and ideas can be quickly turned into workable proofs of concept (PoCs) and tested.
• Developer Tools – Hosting developer tools such as IDEs and simple code editors
in the cloud eliminates the need for developers to have local IDEs and other
associated development tools, which are made available across time zones and
places.
• Content Collaboration Spaces – Clouds make collaboration and coordination
practical, intuitive, and flexible through easy enabling of content collaboration
spaces, modeled after the social software domain tools like Facebook, but centering
on project-related information like invoices, statements, RFPs, requirement documents, images, and data sets. These content spaces can automate many project-related tasks, from automatically creating MS Word versions of all imported text documents to running workflows that collate information from several different organizations working in collaboration. Each content space can
be unique, created by composing a set of project requirements. Users can invite

internal and external collaborators into this customized environment, assigning appropriate roles and responsibilities. After the group's work is complete, the content space can be archived or destroyed. These spaces can be designed to support distributed version control systems, enabling social platform conversations and other content management features.
• Continuous Code Integration – Compute clouds let the "compile-test-change" software cycle run on the fly, performing continuous builds and integration checks to meet strict quality criteria and development guidelines. They can also enforce policies for customized builds.
• APIs and Programming Frameworks – Clouds encourage developers to embrace standard programming-model APIs wherever possible and to adhere to style guides, conventions, and coding standards in meeting specific project requirements. They also push developers to embrace new programming models and abstractions such as the .NET Framework, GWT, Django, Rails, and the Spring Framework for significantly increasing overall productivity. One more feature of clouds is that they enforce constraints which push developers to address the critical next-generation programming challenges of multicore computing, parallel programming, and virtualization. As explained earlier in the chapter, global software development is picking up fast, and the emergence of clouds will boost GSD activities further.
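The "compile-test-change" cycle described under Continuous Code Integration above can be sketched as a minimal quality gate that runs build steps in order and stops at the first failure. This is a generic illustration, not tied to any particular cloud CI service; the step names and the simulated failure are hypothetical.

```python
# A minimal continuous-integration gate: run the build steps in
# order and report the first failure, as a cloud build service
# might on every committed change. Steps are plain callables
# here; a real service would invoke compilers and test runners.

def run_ci_cycle(steps):
    """steps: list of (name, callable) pairs; each callable
    returns True on success. Returns (passed, failed_step)."""
    for name, step in steps:
        if not step():
            return False, name  # stop at the first failing step
    return True, None

# Hypothetical pipeline for illustration.
pipeline = [
    ("compile", lambda: True),
    ("unit-tests", lambda: True),
    ("integration-tests", lambda: False),  # simulated failure
]

passed, failed = run_ci_cycle(pipeline)
print("build passed" if passed else f"build failed at: {failed}")
```

Hosting such a gate in the cloud is what makes the policy enforcement mentioned above practical: every change from every distributed team passes through the same sequence of checks before integration.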

2.4 A Reflection on Software Engineering

Radha Guha writes in [2] that over the last half-century there have been robust and resilient advancements in the hardware engineering domain. That is, there have been radical and rapid improvements in computers, memory, storage, communication networks, mobile devices, and embedded systems. This has been incessantly pushing the need for larger and more complex software. Software development not only involves many different hardware elements; it also involves many different parties, such as end users and software engineers. That is why software development has become such an inherently complicated task. Software developers are analyzing, articulating, and adopting proven and prescribed engineering disciplines, that is, leveraging a systematic, disciplined, and quantifiable approach to make software development more manageable and to produce quality software products. The success or quality of a software project is measured by whether it is developed within the stipulated time and agreed budget, and by its throughput, user-friendliness, consumability, dependability, and modifiability.
Typically, a software engineering engagement starts off with an explicit and elegant process model comprising several formally defined and synchronized phases. The whole development process of software, from conceptualization to implementation to operation and retirement, is called the software development life cycle (SDLC). The SDLC goes through several sub-activities, such as requirements gathering, planning, design, coding, testing, deployment, maintenance, and retirement.

These activities are synchronized in accordance with the process model adopted for a particular software development effort. There are many process models to choose from, such as the waterfall model, the rapid application development (RAD) model, and the spiral model, depending on the size of the project, the delivery time requirements, and the type of the project. The development of an avionic embedded system will adopt a different process model from the development of a Web application.
Even though software engineering [3] takes the engineering approach, success with software products is more difficult to achieve than with products from other engineering domains such as mechanical or civil engineering. This is because software is intangible during its development. Software project managers use a number of techniques and tools to monitor software building activities in a more visible way. These activities include software project tracking and control, risk management, quality assurance, measurement, configuration management, work product and document generation, review, and reusability management.
Even after taking all these measures for sticking to the plan and giving much importance to document generation for project tracking and control, many software projects have failed. More than 50 % of software projects fail for various reasons, such as schedule and budget slippage, a non-user-friendly software interface, and inflexibility regarding maintenance and change. Therefore, there is a continued and consistent focus on simplifying and streamlining software implementation. In this chapter, we examine some of the critical improvements in the software engineering process made possible by the availability of cloud infrastructures.
The Evolutions and Revolutions in the Software Engineering Field – There are a number of desirable advancements in the field of software engineering making the tough task of software construction easier and quicker. This section describes the different levels and layers at which the software engineering discipline evolves.
At the building-block level, data, procedures, classes, components, agents, aspects,
events, and services are the key abstraction and encapsulation units for building and
orchestrating software modules into various types of specific and generic software.
Services especially contribute in legacy modernization and migration to open
service-oriented platforms (SOPs) besides facilitating the integration of disparate,
distributed, and decentralized applications. In short, building blocks are the key
ingredient enabling software elegance, excellence, and evolution. In the recent past, formal models in digital format and service composites have been evolving fast in order to further simplify and streamline the tough task of software assembly and implementation. As software complexity rises, the need for fresh thoughts and techniques climbs with it.
On the language level, a bevy of programming languages (open source as well as proprietary) have been produced and promoted by individuals, innovators, and institutions. There are even efforts underway to leverage fit-for-purpose languages to build different parts and portions of software applications. Software libraries are growing in number, and the ideas of the software factory and industrialization have been picking up fast lately. Service registries and repositories are an interesting phenomenon for speeding up software realization and maintenance. Programming languages
32 P. Raj et al.

and approaches thrive as there are different programming paradigms such as object
orientation, event- and model-driven concepts, componentization, and service
orientation. Further on, scripting languages have lately been generating and
getting a lot of attention due to their unique ability to achieve more with less code.
Formal models in digitized format and service composites are turning out to be
a boon for the success and survival of software engineering. There
are domain-specific languages (DSLs) that could cater to the specific demands of
domains quite easily and quickly.
As far as development environments are concerned, there are a number of diverse
application building platforms for halving the software developmental complexity
and cost. That is, there are a slew of integrated development environments (IDEs),
rapid application development (RAD) tools, code generators and cartridges, enabling
CASE tools, compilers, debuggers, profilers, purpose-specific engines, generic and
specific frameworks, best practices, key guidelines, etc. The plug-and-play mechanism
has gained a lot of ground with the overwhelming adoption of the Eclipse IDE for inserting and
instantiating different language compilers and interpreters. The long-standing
objectives of platform portability (Java) and language portability (.NET Framework)
are being achieved at a middleware level. There are standards-compliant toolkits
for process modeling, simulation, improvement, investigation, and mapping. Services
as the well-qualified process elements are being discovered, compared, and orches-
trated for partial or full process automation.
At the process level, waterfall is the earliest methodology, and thereafter came a
number of distinct variations in software-development methodology, with each
one having both pros and cons. Iterations, increments, and integrations are being
touted as the fundamental characteristics for swifter software production. Agile pro-
gramming is gaining a lot of ground as business changes are more frequent than
ever before and software complexity is also growing. Agility and authenticity in
software building are graciously achieved with improved and constant interactions
with customers and with the enhanced visibility and controllability on software
implementation procedures. Agility, being a well-known horizontal technique,
matches, mixes, and merges with other paradigms such as service-oriented program-
ming and model-driven software development to considerably assist in lessening
the workload of software developers and coders. Another noteworthy trend is that,
rather than code-based implementation, configuration-based software production
is catching up fast.
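The configuration-driven style of software production mentioned above can be sketched in a few lines. The component names, the registry, and the pipeline layout below are purely illustrative assumptions, not drawn from any specific product: the point is that the application's shape comes from declarative configuration rather than hand-written wiring code.

```python
# A minimal sketch of configuration-driven assembly: component names and the
# registry below are illustrative, not taken from any specific product.

class Validator:
    def handle(self, msg):
        # Mark valid (non-empty) messages; reject the rest.
        return f"[ok] {msg}" if msg else "[error] empty"

class Logger:
    def handle(self, msg):
        return f"[log] {msg}"

# Registry mapping configuration names to component classes.
COMPONENTS = {"logger": Logger, "validator": Validator}

def assemble(config):
    """Build a processing pipeline purely from a declarative configuration."""
    return [COMPONENTS[name]() for name in config["pipeline"]]

# Changing behavior means editing this configuration, not the code above.
config = {"pipeline": ["validator", "logger"]}
pipeline = assemble(config)

result = "order-42"
for component in pipeline:
    result = component.handle(result)
print(result)  # [log] [ok] order-42
```

Reordering or extending the `pipeline` list reconfigures the application without touching any implementation code, which is the essence of the configuration-over-code trend.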
At the infrastructural level, the cloud idea has brought in innumerable
transformations. The target of IT agility is becoming a reality, and this in turn
could lead to business agility. Technically, cloud-inspired infrastructures are
virtualized, elastic, self-servicing, automated, and shared. Due to the unique capabilities
and competencies of cloud IT infrastructures (in short, clouds), all kinds of enterprise
IT platforms (development, execution, management, governance, and delivery)
are being accordingly manipulated and migrated to be hosted in clouds, which are
extremely converged, optimized, dynamic, lean, and green. Such meteoric movement
decisively empowers application platforms to be multitenant, unified, and central-
ized catering to multiple customers and users with all the enhanced productivity,
2 Envisioning the Cloud-Induced Transformations in the Software Engineering… 33

extensibility, and effectiveness. In other words, cloud platforms are set to rule and
reign the IT world in the days to unfold. Moreover, platforms are getting
service-enabled so that any service (application, platform, and infrastructure) can
discover and use them without any barriers. Service enablement actually expresses
and exposes every IT resource as a service so that all kinds of the resource’s incom-
patibilities are decimated completely. That is, resources readily connect, concur,
compose, and collaborate with one another without any externally or internally
imposed constrictions, contradictions, and confusions. In a nutshell, the unassailable
service science has come as a unifying factor for the dilapidated and divergent
IT world.
In summary, the deeply dissected, discoursed, and deliberated software-
development discipline is going through a number of pioneering and positive
changes as described above.

2.5 Combination of Agility and Cloud Infrastructure for Next-Generation Software Engineering

As indicated previously, there have been many turns and twists in the hot field of
software engineering. It is an unquestionable fact that the cloud paradigm, without
an iota of doubt, has impacted the entire IT elegantly and exceedingly. Besides
presenting a bright future on the aspect of centralized deployment, delivery, and
management of IT resources, the cloud idea has opened up fresh opportunities and
possibilities for cloud-based software design, development, and debugging in a
simplified and systematic fashion. That is, with the overwhelming adoption and
adaption of cloud infrastructures (private, public, community, and hybrid), produc-
ing and preserving enterprise-scale, mission-critical, and value-added software are
going to be definitely distinct. There are four key drivers that together push
software development to be advanced and accomplished in a cloud. These are:
• Time, Cost, and Productivity – The developer community is being mandated to
do more, quicker, and with fewer resources.
• Distributed Complex Sourcing – Due to various reasons, IT project team
members are geographically dispersed.
• Faster Delivery of Innovation – The focus is on enabling architects and developers
to think ingeniously in order to deliver business value.
• Increasing Complexity – In today’s world, an enterprise-scale project easily
runs to several million lines of code, resulting in more complexity.
In order to reduce complexity, resources, cost, and time considerably, profes-
sionals and professors are vigorously and rigorously striving and searching for
incredibly inventive solutions. Newer concepts, process optimization, best practices,
fresh programming models, state-of-the-art platforms, design patterns and metrics,
and advanced tools are being increasingly unearthed and utilized for lessening the
software development workload. Researchers are continuously at work in order to
discover competent and compact methods and mechanisms for simplifying and
streamlining the increasingly multifaceted tasks of constructing and conserving
next-generation software systems. The major benefits of agile methodology over the
traditional methods are:
• Faster time to market
• Quick return on investment
• Shorter release cycles
• Better adaptability and responsiveness to business changing requirements
• Early detection of failure and immediate correction
There are several agile development methods such as Scrum, extreme programming,
test-driven development, and lean software development [4]. With agile models,
business houses expect that services and solutions are being delivered incrementally
earlier rather than later, and the delivery cycle time comes down sharply. That
is, one delivery cycle takes from 2 to 4 weeks. However, in the midst of these
turnarounds, there arise a number of critical challenges, as mentioned below:
• High effort and cost involved in setting up infrastructures
• Lack of skilled resources
• Lack of ability to build applications from multiple places across the globe
There are a few popular cloud platforms available in order to enable software
development in cloud environments. Google App Engine, salesforce.com, cloud-
foundry.org, cloudbees.com, corenttech.com, heroku.com, windowsazure.com, etc.,
are the leading platforms for cloud-based application development, scaling, and
sustainability.
Collabnet (http://www.collab.net/), a product firm for enabling software devel-
opment in cloud-based platforms, expounds and enlightens on the seamless conver-
gence of the agile programming models, application lifecycle management (ALM)
product, and clouds for a precise and decisive answer for the perpetual software
engineering challenges, changes, and concerns. It convincingly argues that cloud
technologies reduce development barriers by providing benefits in the following
critical areas:
• Availability – Code is centralized and infrastructure is scalable and available on
demand.
• Access – Ensures flexible access to test environments and transparency to project
data for the entire team.
• Overhead – Reduced support overhead, no upgrade latency – teams use an on-
demand model to get what they need, quickly and easily.
Agile processes set the strong and stimulating foundation for distributed teams to
work closely together with all the right and relevant stakeholders to better anticipate
and respond to user expectations. Agile teams today are empowered to clearly
communicate with users to act and react expediently to their feedback. That is, they
are able to collaboratively and cleverly iterate toward the desired state and user
satisfaction. Cloud intrinsically facilitates open collaboration across geographies
and time zones with little investment or risk. With more and more development and
test activities moving toward clouds, organizations are able to save time and money
using virtual and shared resources on need basis. Developers could save time
by leaving configuration, upgrades, and maintenance to cloud providers, who usually
employ highly educated and experienced people. Anytime anywhere access is facil-
itated for those with proper authentication and authorization, and assets are
completely centralized and controlled.
Agile and cloud are being positioned together and prescribed as a powerful and
pathbreaking combination for the software-development community. This might
seem counterintuitive to those entrenched in waterfall processes or those comfort-
able with the idea of a daily stand-up and colocated teams. The reality is altogether
different. That is, there are a number of technical and business cases emerging for
using the agile methods in the cloud. The agility concepts make development
teams responsive to the changing needs of businesses and empower them to be
adaptable and flexible. Further on, proven agile processes help to break down all
sorts of barriers and blockages between development and production, allowing
teams to work together to concentrate on meeting stakeholder expectations. The
synchronization of agile and cloud paradigms fully frees up developers from all
kinds of difficulties to achieve more with less, to innovate fast, and to ultimately
bring value to the business.

2.6 Convergence of Service and Cloud Paradigms

The service idea has matured and stabilized as the dominant approach for designing,
developing, and delivering open, sustainable, and interoperable service-oriented
systems for enterprise, Web, embedded, and cloud spaces. Even many of the modules
of packaged business software solutions are modified and presented as services.
Services are publicly discoverable and accessible, reusable, and composable
modules for building distinct and specific applications through configuration and
customization, runtime matching, selection and usage of distributed, disparate
and decentralized services, replacement of existing service components through the
substitution of new advanced service components, and service orchestration.
Services as process elements are supporting and sustaining process-oriented systems,
which are generally more flexible. That is, operation and controlling of software
solutions at process level considerably reduce the software development, management,
and maintenance tasks.
Thus, the process propensity of the service paradigm and cloud-centric service-
oriented infrastructures and platforms bring a number of distinct advantages for
software engineering. Services and cloud computing have garnered much attention
from both industry and academia because they enable the rapid and radical devel-
opment of enterprise-scale, mission-critical, high-performance, dynamic, and dis-
tributed applications. Agility, adaptivity, and affordability, the prime characteristics
of next-generation software systems, can be realized with the smart leverage of
processes, services, and cloud platforms. Precisely speaking, the service paradigm
is to energize futuristic software design, whereas cloud platforms are being tipped
and touted as the next-generation service-centric platforms for service development,
deployment, management, and delivery.
Service-Oriented Software Development – This area is set to see a lot of delectable and
decisive shifts with the adoption of cloud platforms. The smooth and seamless
convergence of services and clouds promises shining days for software-development
community. Of course, there are a few challenges that need utmost attention from
scholars, scientists, and students. Security, visibility, controllability, performance,
availability, usability, etc., need to be obviated in order to fast-track service-based
software implementation in clouds.
As widely pronounced, services are being positioned as the most flexible and
fertile component for software production. That is, software solutions are made of
interoperable services. It is all about the dynamic discovery and purposeful interac-
tions among a variety of services that are local or remote, business or IT-centric, and
owned or subscribed from third-party service providers. Services are standards-
compliant, self-describing, and autonomous entities in order to decimate all kinds
of dependencies and incompatibilities, to promote seamless and spontaneous
collaborations, and to share each of their capability and competency with others
over networks. Process and workflow-based service compositions result in dynamic
applications that are highly portable. XML is the key data representation, exchange,
and persistence mechanism facilitating service interoperability. Policies are being
framed and encouraged in order to achieve automated service finding, binding, usage,
monitoring, and governance. The essence of service governance is to explicitly
establish pragmatic policies and enforce them stringently. With a consistent rise in
automation, there is a possibility for deviation and distraction, and hence the service
governance discipline is gaining a lot of ground these days.
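The policy-driven binding and governance described above can be made concrete with a small sketch. The service name, the policy clauses (`encryption_required`, `max_latency_ms`), and the consumer capabilities are all invented for illustration; real SOA governance suites express such policies in standards like WS-Policy, but the enforcement logic follows the same shape:

```python
# Hypothetical sketch of a governance check performed before a consumer is
# allowed to bind to a service. Policy fields and values are illustrative.

POLICIES = {
    "payment-service": {"encryption_required": True, "max_latency_ms": 200},
}

def may_bind(service_name, consumer_capabilities):
    """Return True only if the consumer satisfies every declared policy clause."""
    policy = POLICIES.get(service_name, {})
    if policy.get("encryption_required") and not consumer_capabilities.get("tls"):
        return False  # policy demands encrypted transport
    if consumer_capabilities.get("observed_latency_ms", 0) > policy.get(
            "max_latency_ms", float("inf")):
        return False  # consumer's link is too slow for the service contract
    return True

print(may_bind("payment-service", {"tls": True, "observed_latency_ms": 150}))   # True
print(may_bind("payment-service", {"tls": False, "observed_latency_ms": 150}))  # False
```

Because the policy is data rather than code, it can be established explicitly and enforced stringently at binding time, which is exactly the essence of service governance noted above.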
As there is a clear distinction between service users and providers, service-level
agreement (SLA) and even operation-level agreement (OLA) are becoming vital for
service-centric business success and survival. Furthermore, there are several
geographically distributed providers offering identical or similar services, and hence the SLA,
which unambiguously describes runtime requirements that govern a service’s
interactions with different users, has come as a deciding factor for service selection
and utilization. A service contract describes its interface and the associated con-
tractual obligations. Using standard protocols and respective interfaces, application
developers can dynamically search, discover, compose, test, verify, and execute
services in their applications at runtime. In a nutshell, SOA-based application devel-
opment is through service registration, discovery, assessment, and composition,
which primarily involves three stakeholders:
• A service provider is one who develops and hosts the service in cloud platforms.
• A service consumer is a person or program that finds and uses a service to build
an application.
• A service broker mediates between service providers and consumers. It is a
program or professional in helping out providers publishing their unique services
and guiding consumers to identify ideal services.
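The interplay of the three stakeholders can be sketched as a tiny in-memory broker: providers publish a service together with advertised SLA figures, and a consumer discovers those providers whose SLA satisfies its requirements. The provider names, service name, and SLA numbers are invented purely for illustration:

```python
# Illustrative broker sketch: several providers register the same logical
# service with different SLA figures, and discovery returns only those that
# meet the consumer's requirements. All names and numbers are invented.

registry = []  # the broker's service registry

def publish(provider, service, availability, response_ms):
    """A service provider registers its service and advertised SLA."""
    registry.append({"provider": provider, "service": service,
                     "availability": availability, "response_ms": response_ms})

def discover(service, min_availability, max_response_ms):
    """A consumer asks the broker for providers whose SLA satisfies its needs."""
    return [entry["provider"] for entry in registry
            if entry["service"] == service
            and entry["availability"] >= min_availability
            and entry["response_ms"] <= max_response_ms]

publish("cloud-a", "currency-conversion", availability=0.999, response_ms=120)
publish("cloud-b", "currency-conversion", availability=0.95,  response_ms=40)

print(discover("currency-conversion", min_availability=0.99, max_response_ms=200))
# ['cloud-a']
```

The SLA acts here, as in the text above, as the deciding factor for service selection: relaxing `min_availability` to 0.9 would make both providers eligible.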
The service science is on the growth trajectory. There are service-oriented
platforms, patterns, procedures, practices, products, and packages. Service
management has become a niche area of study and research. The knowledge-driven service
era is to dawn with the availability of competent service-centric technologies,
infrastructures and processes, toolsets, architectures, and frameworks. Service
engineering is picking up fast with the sufficient tweaking and tuning of software
engineering principles, techniques, and tips. Everything related to IT is being con-
scientiously manipulated and presented as a service for the outside world setting the
context and case for IT as a service (ITaaS). In other words, any service can connect
and cooperate with other services individually or collectively to make bigger and
better things for the total humanity.
The Synchronization Between Service and Cloud Ideas – As explained and
elucidated above, the service and cloud computing models together signal sunny
and shining days ahead for software building. A combined framework comprising
the service and the cloud concepts goes a long way in halving the application devel-
opment drudgery. Cloud-centric application development gets a consolidated, cen-
tralized, virtualized, and shared IT infrastructure for efficiently constructing and
preserving applications. Multitenancy, auto-provisioning, and elasticity features are
the strong business and technical cases for embracing the cloud idea.
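Elasticity, one of the features just listed, can be illustrated with a toy scaling rule: grow or shrink the replica count so that the average load per replica approaches a target utilization. The threshold values and the rule itself are assumptions for illustration, not any provider's actual auto-provisioning policy:

```python
# A toy elasticity controller, included only to make the idea concrete.
# The target utilization and replica bounds are illustrative assumptions.

def scale(replicas, load_per_replica, target=0.7, min_replicas=1, max_replicas=10):
    """Return the replica count that brings average load close to the target."""
    total_load = replicas * load_per_replica
    desired = round(total_load / target)
    # Clamp to the allowed range so the system never scales to zero or runs away.
    return max(min_replicas, min(max_replicas, desired))

print(scale(replicas=2, load_per_replica=0.9))  # overloaded: 1.8/0.7 ≈ 2.6 → 3
print(scale(replicas=4, load_per_replica=0.2))  # underused: 0.8/0.7 ≈ 1.1 → 1
```

Real cloud platforms run such control loops continuously against observed metrics; the economic appeal is that capacity follows demand in both directions.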
Now, with the fast-emerging and evolving concept of the Inter-cloud,
cloud integration and federation aspects are bound to grow significantly. That is,
connected and federated clouds will become the common, casual, and cheap thing
for next-generation enterprise IT. The federation of multiple types of clouds (mobile,
device, sensor, knowledge, information cloud, high-performance cloud, etc.) is to
enable distributed, global, and collaborative software development [5]. The open
and industry-strength interoperability standards of SOA empower service-sponsored
cloud integration and, on the other hand, cloud-hosted service integration. In short,
the cloud grid is not an option but a necessity considering the growing complexity
of IT toward sustaining the business dynamism.
The concept of designing and developing applications using SOA and delivery
through cloud is to explode. Cloud brokerage firms could maintain cloud-hosted
service registry and repository that works out as a single point of contact for global
application developers. The service metadata offers the exact location, interface,
and contract of services being probed for use. Service developers could host their
services in service platforms of worldwide cloud providers, and this enables appli-
cation developers to search and choose right and relevant services based on the
business requirements. Service providers could also host integrated development
environments and rapid application development tools, code generators and car-
tridges, debuggers, simulators, emulators, etc., in their own clouds or in third-party
cloud infrastructures. Furthermore, they could publish software artifacts such as
modifiable and extendible business processes, workflows, application templates,
user interfaces, data schema, and policies to facilitate software development and
generation. Developers can find viable and value-added services from multiple ser-
vice providers and leverage these artifacts in order to come out with service-oriented
applications. The fast-maturing federation science is to dictate the future of
software engineering. In short, there are cloud-based components such as:
• Application development artifacts such as templates, processes, and workflows
• Service development environments and tools
• Service registry repository
• SCA-compliant application implementation platforms with service discovery,
integration, and orchestration features and facilities leveraging the application
artifacts
• Application delivery as a service via the Internet as the cheap and open commu-
nication infrastructure
Service-Based Software Design and Development – Development of service
systems remains quite a big challenge because services are being developed by
different entities and deposited in geographically distributed locations. For an appli-
cation to fructify, diverse services need to be smartly collected and consolidated.
Different services are governed by disparate policies and exhibit varying capabilities.
Also, the application development process is increasingly diversified because
application developers, service brokers, and application service providers are
distributed. Coordination here is very important for SOA-based IT and business
success. Standardized protocols, messaging mechanisms, and interfaces are
essential for services to be linked remotely and resiliently.
Software engineering revolves around two main activities: decomposition and
composition. As the business problem evolves and enlarges, decomposition of the
problem is required because our mental capacity is limited. Once an appropriate
solution for the business problem is designed, its building blocks are identified
and composed to develop the solution.
Similar to other development methodologies, service-oriented software develop-
ment starts with requirements extraction, elucidation, and engineering. During this
phase, the application developer develops a business model; works with the customer
to articulate, analyze, authenticate, and refine requirements; designs a workflow for
the business model; and finally decomposes the requirements into smaller and
manageable modules. Then the application developer sends each of the decomposed
requirements to a service brokerage to find suitable services
that satisfy the enshrined requirements. Once the right services are identified for
each of the requirement parts, the application developer simply composes them into
an application. Service component architecture (SCA) is a recent architectural style
enabling application componentization into service modules that in turn get assem-
bled to form a single entity. There are SCA-compliant IDEs from different product
vendors. In some cases, the right services might not be available, and hence one has to
develop those services from scratch.
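The decompose/discover/compose cycle just described can be sketched end to end. The requirement phrasing, the catalogue of available services, and their names are all invented for illustration; a real brokerage would match on richer metadata than exact strings:

```python
# A hedged sketch of the decompose -> discover -> compose cycle described
# above. The requirement parts and the service catalogue are invented.

CATALOGUE = {
    "authenticate user": "auth-svc",
    "check inventory":   "inventory-svc",
    "take payment":      "payment-svc",
}

def decompose(requirement):
    """Split a coarse requirement into smaller, manageable parts."""
    return [part.strip() for part in requirement.split(",")]

def discover(part):
    """Ask the (here: in-memory) brokerage for a service matching one part."""
    # None signals that no suitable service exists and it must be built from scratch.
    return CATALOGUE.get(part)

def compose(requirement):
    """Assemble an application as a mapping of requirement parts to services."""
    return {part: discover(part) for part in decompose(requirement)}

app = compose("authenticate user, take payment")
print(app)  # {'authenticate user': 'auth-svc', 'take payment': 'payment-svc'}
```

A part with no catalogue match comes back as `None`, mirroring the case noted above where a service has to be developed from scratch before composition can complete.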
Cloud-Based Software Delivery – Software engineering encompasses not only
the software-development processes but also the effective delivery of the developed
software to users, which includes software deployment and maintenance. However,
SOA does not prescribe any specific methods for software deployment, manage-
ment, governance, and enhancement. These can be decided and activated by software
service organizations differently. Clouds as the standardized and smart infrastructure
come to the rescue here by ensuring effective application delivery. Applications can
be affordably deployed and maintained in advanced cloud platforms. Application
capabilities can be provided as a service. All kinds of non-functional (quality of
service (QoS)) attributes are effortlessly accomplished with clouds. Anytime any-
where resource access is being facilitated. Centralized monitoring and management
are remarkably simplified here. That is, clouds as the next-generation service-
oriented infrastructures (SOIs) have emerged in correct time in order to take the
service idea to greater heights. It is therefore no exaggeration to proclaim that
the software engineering field is greatly and grandiosely empowered by evolving
cloud concepts.
Agile Service Networks (ASNs) [6, 7] – Cloud computing’s high flexibility needs
novel software engineering approaches and technologies to deliver agile, flexible,
scalable, yet secure software solutions with full technical and business gains. One
way is to allow applications to do the computing in the cloud, and the other is to allow
users to integrate with the applications. Agile service networks (ASNs) are themselves
an emerging paradigm envisioning collaborative and dynamic service interactions
(network edges) among global service-oriented applications (network nodes). ASNs
can be used as a paradigm for software engineering in the cloud, since they are
indeed able to deliver solutions which are both compliant to the cloud’s needs and
able to harness it, bringing about its full potential.
Context adaptation is used in ASNs to achieve agility. The concept of ASN is
defined as a consequence of “late service binding.” In the context of services’ dyna-
mism, which is achieved through late service binding, ASNs become a perfect
example of how agility can be achieved in SOA systems. Adaptation is presented as
one of the main tenets of SOA. This paradigm regards highly dynamic systems
within a rapidly changing context to which applications must adapt. In this sense,
ASNs are used to exemplify industrial needs for adaptive, context-aware systems.
ASN Key Features – ASNs are dynamic entities. Dynamism is seen as an essential
part of the service interactions within collaborative industries (i.e., industrial value
networks). Dynamism in ASNs is the trigger to service rearrangement and applica-
tion adaptation. For example, an ASN made of collaborative resource brokering
such as distributed stock markets is dynamic in the sense that different partners may
participate actively, others may be dynamically added while brokering is ongoing,
others may retire from the brokering process, and others may dynamically change
their business goals and hence their brokering strategy. ASNs are business-oriented:
ASNs are borne out of business corporative collaborations and represent complex
service applications interacting in a networked business scenario involving multiple
corporations or partners in different sites (i.e., different geo-locations). Within
ASNs, business value can be computed, analyzed, and maximized.
Cloud-Induced Software Engineering Challenges – As widely reported, there
are some important concerns with public clouds. Security, controllability, visibility,
performance, and availability are the major issues. Virtualization, the central
technology for the massive uptake and incontestable success of the cloud idea, has
introduced new security holes. Typically, public clouds accommodate several
customers in order to be economical, and there are real dangers and risks
in a shared environment. If a cloud is not available for a few minutes, the resulting
loss would be enormous, necessitating strong guarantees of cloud availability.
Cloud reliability is another central and crucial factor not to be
sidestepped easily. The security of data in rest or in transit has to be infallibly
secure, and cryptography is the major source of inspiration for data security in a
cloud environment. Identity and access management solutions are being conceived
and concretized for the more open and risky cloud systems. Besides, application
and service security and network and physical security aspects are also critical in a
cloud environment.
Smartphone applications are becoming very popular and very large in number
with the massive production and release of slim and sleek, handy and trendy, yet
multifaceted mobile phones. As there are literally more mobile devices compared to
desktop and other powerful compute machines, application development for the
fastest-growing mobile space is gaining unprecedented importance. Mobile
technologies, application architectures and frameworks, toolsets, service delivery
platforms, hypervisors for mobile devices, unified and integrated application devel-
opment environments, etc., are being produced in plenty by competing parties in
order to score over others in the mind and market shares. There are specific cloud
infrastructures for securely storing a variety of mobile data, content, mails, services,
and applications. Besides cell phones and smartphones, other mobile and portable
devices incessantly capturing the imagination of people are the powerful tablets.
Thus, there are several dimensions and directions in which the nifty and niche
content and application development activities for the mobile landscape are
proceeding.
With cloud emerging as the centralized place for mobile services, the days of
anywhere anytime information and service access and upkeep are bright. In particular,
form builder applications for smartphones are being made available so that users
could creatively produce their own forms in order to indulge in commercial and
financial activities on the move. Hundreds of thousands of smartphone applications
are being built, hosted, and subscribed by various smartphone vendors. Games are
the other prominent and dominant entities for the mobile world. Precisely speaking,
mobiles and clouds are increasingly coming closer for context-aware, customer-
centric, and cognitive applications.
In summary, the penetration of the cloud idea is simply mesmerizing and
momentous. Cloud-based platforms are being positioned as the dynamic, converged,
and fit-for-purpose ones for application engineering not only for enterprise IT
but also for embedded IT, which incidentally includes mobile, wearable, porta-
ble, fixed, nomadic, wireless, implantable, and invisible devices. Extremely and
deeply connected applications and services are bound to rule the IT in the com-
ing days, and the cloud paradigm is the definite and decisive contributor for the
future IT.
Although the service and cloud concepts have great affinity in strengthening
software development and delivery, there are some serious issues to be
addressed urgently in order to eliminate all doubts in the minds of
enterprise executives and to reach the promised land of the cloud-sponsored
service era.
2.7 Amalgamation of Model-Driven Architecture and the Cloud Paradigms

Modeling has been a fundamental and foundational activity for ages. Before a
complex system gets formed, a model of the system is created as it could throw
some light about the system’s final structure and behavior. Models could extract
and expose any kind of hidden risks and lacunae in system functioning and give a
bit of confidence for designers and developers to plan and proceed obviating all
kinds of barriers. Models give an overall understanding about the system to be
built. In short, models decompose the system into a collection of smaller and man-
ageable chunks in order to empower engineers to have a firm grip and grasp of the
system under implementation. Modeling is one of the prominent and dominant
complexity-mitigation techniques as systems across domains are fast-growing in
complexity.
As IT systems are growing in complexity, formal models are presented as the
next-generation abstraction and encapsulation unit for them. In the recent past, models
have been used as building blocks for having portable, sustainable, and flexible IT
systems. Models are created digitally, stored, refined, and revitalized as per the
changing needs. There are formats such as XML Metadata Interchange (XMI) for
exporting models over the network or any other media to other systems as inputs for
further processing. There are unified and visual languages and standardized notations
emerging and energizing compact and formal model representation, persistence,
manipulation, and exchange. Product vendors and open source software developers
have come out with innumerable software tools for facilitating model creation, trans-
formation, verification, validation, and exporting. For object orientation, unified
modeling language (UML) has been the standard one for defining and describing
models for various constructs and activities. For component-based assembly and ser-
vice-orientation programming, UML profiles have been created in order to keep UML
as the modeling language for software engineering. Further on, there are processing
modeling and execution languages such as BPML and BPEL and notations such as
BPMN in order to develop process-centric applications. That is, process models act as
the building blocks for system engineering.
Model-driven architecture (MDA) is the associated application architecture.
Model-driven software engineering (MDSE) is being presented as the most dynamic
and drastic method for application engineering. Emerging and evolving MDSE
techniques can automate the development of new cloud applications program-
matically. Typically, cloud applications are a seamless union of several unique
services running on different IT platforms. That is, for producing competent cloud
applications, all the right and relevant services from diverse and geographically
distributed servers have to be meticulously found, bound, and linked up in order to
build and sustain modular (loosely coupled and highly cohesive) cloud applications.
Precisely speaking, services have been overwhelmingly accepted as the most
productive and pliable building block for realizing adaptive, mission-critical, and
enterprise-scale applications.
42 P. Raj et al.

For building service-oriented cloud applications, there is a need for modernizing
all the legacy software modules into services. Model-driven reverse engineering
techniques are capable of discovering and generating standardized models out of
legacy software modules. The overall idea is to use such techniques and enabling
frameworks, such as the MoDisco framework, to speed up the task of model creation
from legacy modules. These formal models can be subjected to further
transformation to derive corresponding services that can collaborate with other
cloud-based services in order to craft fresh cloud applications quickly. That is,
just as with the software as a service (SaaS) paradigm, the notion of modeling as
a service (MaaS) is set to see brighter days ahead, especially in assisting the
formation of cloud applications out of existing non-cloud applications. As there
are billions of lines of legacy code still contributing extensively to Fortune
corporations across the globe, MaaS is set to grow exponentially. There will be
processes to be defined, frameworks to be produced, cloud platforms to be
immensely utilized, etc. Reverse engineering of application modules into a
platform-independent model (PIM) and then into one or more platform-specific
models (PSMs) to automate service realization out of old software components is
the cleverest and most clear-cut approach for the forthcoming cloud era. It is
keenly anticipated that, similar to SaaS, MaaS will become a pioneering initiative.
Here are some possible applications of MaaS [8]:
• Creation of collaborative and distributed modeling tools to allow the specification
and sharing of software models among team members in real time.
• Definition of modeling mash-ups as a combination of MDSE services from
different vendors.
• Availability of model transformation engines in the cloud to provide platform-
independent model management services.
• Improving Scalability of MDSE – Models of real-life applications (especially
those obtained by reverse engineering of running systems) are usually very large.
Modeling services in the cloud would ensure the scalability of MDSE techniques
in those scenarios.
• Facilitating Model Execution and Evolution – Moving code-generation and
simulation services to the cloud would facilitate the deployment and evolution of
software applications (regardless of whether those applications were implemented
as SaaS) and substantially reduce the time to market. The cloud service providers
(CSPs) with their infrastructure administration experts could set up the relevant
infrastructures to compile and deploy the applications quickly.
• Solving Tool Interoperability Problems – Exchanging data (and metadata) among
MDSE tools is one of the major challenges nowadays. So far, the problem is
being addressed by defining bridges among the tools, but MaaS is to offer a more
transparent and global solution to this problem. For instance, bridges could be
defined as services and executed on demand automatically by other services
when incompatibility issues surface.
• Distributed Global Model Management – Complex MDSE projects involve
several models (possibly conforming to different metamodels), model transfor-
mations, model injectors and projectors, etc. The MaaS paradigm is to facilitate
the manipulation of all these modeling artifacts in a distributed environment.
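The PIM-to-PSM transformation that underpins these MaaS scenarios can be sketched in miniature. In the following Python example, the model structure and the mapping rules are invented purely for illustration and do not come from any real MDA toolchain:

```python
# Toy platform-independent model (PIM): entities with attributes.
# Structure and names are illustrative assumptions, not a real metamodel.
PIM = {
    "entities": [
        {"name": "Customer", "attributes": ["id", "name"]},
        {"name": "Order", "attributes": ["id", "total"]},
    ]
}

def pim_to_rest_psm(pim):
    """Transform the PIM into a REST-flavored platform-specific model (PSM)."""
    resources = []
    for entity in pim["entities"]:
        resources.append({
            "resource": "/" + entity["name"].lower() + "s",
            "methods": ["GET", "POST"],
            "fields": entity["attributes"],
        })
    return {"platform": "REST", "resources": resources}

psm = pim_to_rest_psm(PIM)
for r in psm["resources"]:
    print(r["resource"], r["fields"])
```

A production MDSE engine would work on richer (e.g., XMI-serialized) models and rule sets, but the essence is the same: a mechanical, repeatable mapping from platform-independent elements to platform-specific artifacts, which is exactly what a cloud-hosted transformation service would expose.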
Model-Driven and Cloud-Sponsored Legacy Enablement Toward Mainstream
Computing – Long-living software systems [9] constantly undergo a number of
changes during their lifetime. These are triggered by a changing system context
(system usage and technology stacks) and/or changing system requirements. The
changes include functional and/or non-functional attributes, for example, the capability
and capacity of the system to deal with increasing system workload. The latter is
often a direct consequence of providing the access to existing systems over the
Internet, for example, for the integration of the systems into novel service compositions.
Cloud computing brings a new ray of hope for addressing this issue deftly by
providing an almost unlimited amount of compute and storage resources. In order to
utilize this new offer, long-living software systems have to be migrated to the
cloud. Often this implies major (invasive) changes to the system structure, for
which no systematic engineering process is available today. This vacuum can lead
to high risks or even project failures. There has to be a bridge between
conventional, classic computing architectures and cloud computing architectures.
That is, the age-old architectural styles
and patterns such as three-tier client/server architecture do help in building business
applications. With cloud’s emergence, new-generation architectural styles emerge
for the efficient use of the almost unlimited computational resources in the cloud.
There is a new architectural style (the so-called SPOSAD style: Shared, Polymorphic,
Scalable Application and Data) allowing massive replication of the business logic,
which is enabled by a smart physical data distribution. This evolution in different
directions and dimensions has to be bridged through a systematic engineering
support for facilitating the movement from the old to new architecture. The authors
have focused on supporting performance and scalability predictions.
They have proposed a formal process. First, existing systems have to be reverse-
engineered to obtain a performance prediction model. These models contain both static
as well as dynamic aspects such as contributing components and their interactions.
Second, the software architect has to select a set of potential target architecture styles
or patterns, which have to be appropriately formalized. For example, the architect
plans to evaluate the impact of moving the classical system architecture to
MapReduce or to the SPOSAD style and, thus, automatically adapts the
reverse-engineered performance prediction models to the selected architectural styles.
Third, the performance of the target architectures is evaluated to get a final ranking
and to come to a recommendation for the migration. Finally, based on the analyzed
target architecture, the system’s implementation has to be adapted. The major
foundations for the sketched process are already in place (software architectural
patterns, software performance engineering, architecture evolution, and model
transformations).
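The evaluation and ranking step of this process can be sketched with a toy performance predictor. The architectural styles, replication factors, and latency figures below are invented placeholders, not outputs of a real prediction model:

```python
# Rank candidate target architectures by a toy response-time prediction.
# All numbers are invented; a real model would be reverse-engineered
# from the running system, as described in the text.

def predict_response_time(style, workload):
    """Toy predictor: base latency scaled by load, divided by replicas."""
    base_latency_ms = 200.0
    return base_latency_ms * workload / style["replicas"]

candidates = [
    {"name": "three-tier", "replicas": 1},
    {"name": "MapReduce", "replicas": 8},
    {"name": "SPOSAD", "replicas": 16},
]

workload = 4  # relative load units
ranking = sorted(candidates, key=lambda s: predict_response_time(s, workload))
for style in ranking:
    print(style["name"], predict_response_time(style, workload), "ms")
```

Under this (deliberately naive) predictor, the styles allowing more replication of the business logic rank first, which mirrors the motivation given for SPOSAD-like styles.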

2.8 Mechanisms for Assisting Cloud Software Development

Today, not only development processes but also environments have to be very
agile [10] and anticipative as software development becomes more sophisticated.
Cloud-induced agile IT environments are being presented as the viable and valuable
resources for new-generation software construction. The unique capabilities of
clouds are indicated below:
• On-demand provisioning and de-provisioning of resources in minutes through a
self-service access.
• Non-functional requirements of servers, storage, and network products are
ensured.
• Implicit support for virtual development environments and multi-tier application
architectures.
• Easier migration of the existing virtual-server images and workloads into the cloud.
Clouds can accelerate the development cycle by creating multiple development
environments that enable several software activities to be carried out simultane-
ously. Testing can be accomplished along with development. The unique on-demand
resource provisioning capability of clouds makes this parallelization possible.
Cloud supports different levels of quality of service (QoS). Developers could choose
the appropriate QoS level as per the applications. This means that a higher level
of performance, security, and availability needs to be assigned to a development
environment for performance and scalability testing. In exchange, the hourly cost of
such an environment goes up. The QA process will also benefit from on-demand up
and down scaling of cloud resources, as this finally solves the problem of testing
performance and scalability of applications at a large scale, but without indefinitely
reserving and paying for resources when they are unused.
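The QoS-versus-cost tradeoff described above is easy to make concrete. In the sketch below, the tier names and hourly prices are invented and merely stand in for whatever a real provider's catalog offers:

```python
# Hypothetical on-demand environments at different QoS tiers.
# Tier names and hourly prices are invented for illustration.
QOS_TIERS = {"basic": 0.10, "performance": 0.80}  # $ per hour

class Environment:
    def __init__(self, name, qos):
        self.name, self.qos = name, qos

    def cost(self, hours):
        # Pay only for the hours the environment actually exists.
        return QOS_TIERS[self.qos] * hours

# Development and scalability testing run in parallel, each on the
# QoS tier it actually needs, then both are de-provisioned.
envs = [Environment("dev", "basic"), Environment("load-test", "performance")]
total = sum(e.cost(hours=8) for e in envs)
print(f"8h cost: ${total:.2f}")  # → 8h cost: $7.20
```

The point of the sketch is the billing model: a high-QoS load-testing environment costs more per hour but accrues charges only while it exists, so nothing is reserved or paid for while unused.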
Cloud virtual machines (VMs) support multi-tier application development and
testing. That is, the presentation, business logic, and data tiers are deployed
in different VMs. When development in a virtual cloud environment is finished,
the images of virtual servers can be easily transferred to the production environment.
The advantage is to avoid problems related to configuring a new application for
transfer from the development to the production environment, which again affects
the speed of the application time to market.
The Lean Thinking Principles for Cloud Software Development – There are lean
approaches and principles being sincerely and seriously examined and expounded by
professionals and pundits for optimally implementing a variety of industrial systems.
Software engineers are also vigorously following the same line of thinking for produc-
ing high-quality software solutions for a variety of business and societal problems.
The core elements of the lean principle are “eliminate waste, build quality in, create
knowledge, defer commitment, deliver fast, respect people and optimize the whole.”
This set of well-intended tasks definitely creates a sound case for contemporary
cloud enterprises. As corporates are planning and assimilating cloud technologies
as a part of their business transformation initiative, there are other mandatory things
to be accomplished in parallel in order to reap the envisioned advantages.
Here is what a few software companies have achieved by applying lean principles
to their development process [11]:
• Salesforce.com has improved time to market of major software releases by 61 %
and boosted productivity across their R&D organization by 38 % since adopting
agile development.
• BT Adastral, the largest telecommunications company in the UK, completed its
first major lean software project 50 % sooner than expected and incorporated
many product changes along the way. The product yielded 80 % ROI in the
first year.
• PatientKeeper, specializing in software for the healthcare industry, puts out
weekly maintenance releases, monthly new feature releases, and quarterly new
application releases. This company completes 45 development cycles in the time
it takes their competitors to do 1 cycle.
• Timberline Software (now part of The Sage Group), serving the construction and
real estate market, estimates that improvements in quality, costs, and time to market
were all greater than 25 % as a result of switching to lean software development.
Lean thinking is important for scaling agile in several ways [12]:
• Lean provides an explanation for why many of the agile practices work. For
example, Agile Modeling’s practices of lightweight, initial requirements envi-
sioning followed by iteration modeling and just-in-time (JIT) model storming
work because they reflect deferment of commitment regarding what needs to be
built until it is actually needed, and the practices help eliminate waste because
we are only modeling what needs to be built.
• Lean offers insight into strategies for improving our software process. For exam-
ple, by understanding the source of waste in IT, we can begin to identify it and
then eliminate it.
• Lean principles provide a philosophical foundation for scaling agile approaches.
• It provides techniques for identifying waste. Value stream mapping, a technique
common within the lean community, whereby we model a process and then
identify how much time is spent on value-added work versus wait time, helps
calculate overall time efficiency of what we are doing. Value stream maps are a
straightforward way to illuminate our IT processes, providing insight into where
significant problems exist.
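Value stream mapping reduces to simple arithmetic once the steps are timed: overall time efficiency is the value-added time divided by the total lead time. The step durations below are invented sample data:

```python
# Value stream map of a hypothetical release process: for each step,
# hours of value-added work versus hours spent waiting.
steps = [
    {"name": "write code",  "value_added_h": 6, "wait_h": 0},
    {"name": "code review", "value_added_h": 1, "wait_h": 16},
    {"name": "testing",     "value_added_h": 3, "wait_h": 8},
    {"name": "deployment",  "value_added_h": 1, "wait_h": 24},
]

value_added = sum(s["value_added_h"] for s in steps)
lead_time = value_added + sum(s["wait_h"] for s in steps)
efficiency = value_added / lead_time
print(f"efficiency = {efficiency:.1%}")  # → efficiency = 18.6%
```

A map like this makes the dominant waste obvious at a glance: here the 48 h of wait time dwarfs the 11 h of actual work, so eliminating queues, not speeding up coding, is the lean improvement.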
Lean manufacturing, with its emphasis on eliminating waste and empowering
employees, shook up the automotive industry. Lean principles are revolutionizing
the software development industry as well. Lean developers can build software faster,
better, and cheaper than competitors using traditional bulky and bulging methods.
By adopting agile practices and test-driven development, a software firm can go a
long way toward leaning out its operations and serving its customers better.
Lean Agile Methodologies Accentuate Benefits of Cloud Computing [13] – Lean
and agile are two different production methodologies that are used extensively
in business. The lean approach is derived from the production processes adopted by
Toyota, Japan. It focuses on a demand-driven approach with an emphasis on:
• Building only what is needed
• Eliminating anything that does not add value
• Stopping production if something goes wrong
The agile approach is focused on the notion that software should be developed in
small iterations with frequent releases, because neither the end-user requirements
nor the exact amount of effort can be accurately finalized upfront. Even the end
users themselves cannot fully articulate what they need. Hence, the requirements
must be collaboratively discovered, analyzed, and finalized. Agile processes [14]
involve building software in small segments, testing those segments, and then getting
end-user feedback. The aim is to create a rapid feedback loop between the develop-
ers and the actual users.
Lean agile development methodologies and the cloud model complement each
other very well. Cloud services take pride in meeting user requirements rapidly,
delivering applications whenever and to whatever extent they are needed. Agile
methods give high credence to user collaboration in requirements discovery.
The lean agile system of software development aims to break down project require-
ments into small and achievable segments. This approach guarantees user feedback
on every task of the project. Segments can be planned, developed, and tested
individually to maintain high-quality standards without any major bottlenecks. The
development stage of every component thus becomes a single “iteration” process.
Moreover, lean agile software methods place huge emphasis on developing a
collaborative relationship between application developers and end users. The entire
development process is transparent to the end user and feedback is sought at all
stages of development, and the needed changes are made then and there.
Using lean agile development in conjunction with the cloud paradigm provides a
highly interactive and collaborative environment. The moment developers finalize
a feature, they can push it as a cloud service; users can review it instantly and
provide valuable feedback. Thus, a lengthy feedback cycle can be eliminated
thereby reducing the probability of misstated or misunderstood requirements. This
considerably curtails the time and efforts for the software development organization
while increasing end-user satisfaction. Following the lean agile approach of
demand-driven production, end users’ needs are integrated in a more cohesive and
efficient manner with software delivery as cloud services. This approach stimulates
and sustains a good amount of innovation, requirement discovery, and validation in
cloud computing.

2.9 Cloud Platform Solutions for Software Engineering

Compared to on-premise applications, cloud-based software as a service (SaaS)
applications are delivered through the Web, billed on a subscription basis, and
service providers themselves are responsible for delivering the application at accept-
able service levels. As a consequence, the economics of delivering SaaS is different
from traditional software applications. Companies delivering SaaS/Cloud applica-
tions need to realize economies of scale and keep the application delivery costs
low. These issues have a significant impact on how SaaS applications are archi-
tected, developed, and delivered. For the paradigm of SaaS to succeed, issues like
application scalability, cost of delivery, and application availability had to be
resolved comprehensively. A new set of architectural, development, and delivery
principles have emerged and strengthened the spread of the SaaS model.
In order to achieve acceptable levels of maturity, companies need to address
issues in three core areas [15]:
• They need to build applications that support a multitenant architecture that
enables a single instance of the application to be shared among multiple customers.
Multitenancy has a significant impact on all layers of the application stack and is
challenging to achieve. This architectural principle is a significant contributing
factor in reducing application delivery costs.
• SaaS vendors need to address a significant number of non-functional application
concerns that are essential for the success of the service. For example, traditional
software vendors were not concerned with issues like metadata management,
tenant customization and configuration, scalability, fault tolerance to meet SLAs,
metering, monitoring, robust security in distributed environments, and a host
of other concerns.
• As applications grow and scale, companies need to address automation of
operations and application management. Automation of operations and application
management is among the primary contributing factors in reducing application
delivery costs. Despite emerging automation in areas like the infrastructure
cloud, 75–80 % of the issues arising in operations are best solved at the applica-
tion design and development level. Furthermore, it is difficult and expensive
to achieve operational and administrative automation once the service is designed
and developed. SaaS providers can achieve significant benefits if application
architecture takes automation of operations into account early in the applica-
tion life cycle.
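The multitenancy principle from the first bullet — a single shared application instance with strict per-tenant isolation of data — can be sketched minimally as follows. The in-memory store and tenant identifiers are illustrative assumptions; a real SaaS stack enforces the same tenant filter at the database, cache, and API layers:

```python
# Minimal multitenancy sketch: one shared data store, every access
# scoped by a tenant identifier so tenants never see each other's rows.
class SharedStore:
    def __init__(self):
        self._rows = []  # (tenant_id, record) pairs in one shared table

    def insert(self, tenant_id, record):
        self._rows.append((tenant_id, record))

    def query(self, tenant_id):
        # Tenant isolation: filter on tenant_id for every read.
        return [rec for tid, rec in self._rows if tid == tenant_id]

store = SharedStore()           # a single instance serves all customers
store.insert("acme", {"invoice": 1})
store.insert("globex", {"invoice": 7})
print(store.query("acme"))      # → [{'invoice': 1}]
```

Sharing one instance and one store across tenants is what lowers delivery cost; the scoping discipline shown here is what makes that sharing safe.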
The cloud idea is everywhere, and engineers, executives, exponents, and evangelists
are trying different ways and means of adopting and adapting the cloud concepts
to their organizational needs. Data centers are being pruned and tuned into cloud
centers. Traditional applications are getting modernized and migrated to local as
well as remote cloud environments. Centralized delivery and management of IT
resources are being insisted upon and illustrated. Innovative and disruptive ideas
get quickly concretized by renting the needed compute and storage servers from
public cloud providers. Server systems exclusively for backup and disaster recovery,
which guarantee business continuity, are being subscribed from cost-effective cloud
providers. All kinds of customer-centric applications, such as collaboration
software, are unhesitatingly moved to cloud systems in order to reap their distinct
technical as well as business advantages. In the recent past, the cloud has been
prescribed as the most productive solution for software coding and testing. That is,
platform as a service (PaaS), which had been dormant for quite a long time, is
getting a fresh life with the realization across the globe that cloud-based
platforms are much more effective, simpler, and quicker for software building.
How Azure Helps Cloud Software Development to Be Agile? – Microsoft Azure
is an application platform on the cloud that provides a wide range of core infrastructure
services such as compute and storage along with building blocks that can be con-
sumed for developing high-quality business applications. Azure provides platform
as a service (PaaS) capabilities for assisting application development, hosting,
execution, and management within Microsoft cloud centers. Windows Azure is an
open cloud platform that enables teams to quickly build applications in any language,
tool, or framework. The advantages of Azure cloud are:
• Azure provides staging and production environments on the cloud which provide
resource elasticity on demand, and this agility factor helps any Windows
application development team.
• Only development and unit testing are carried out on on-premise systems.
• The cloud staging environment can be used to create different test environments,
such as integration, system, and UAT.
• Application source code can be maintained in Azure cloud storage.
• Developers test their application with a production-like environment as setting
up a real production environment for testing involves more investment, planning,
time, and resources. That is, all kinds of infrastructure-intensive software testing
can be accomplished in Azure cloud with high dependability cost-effectively due
to the inherent elastic nature of Azure. This enables application providers to
ensure the SLA to their customers and consumers.
• A couple of integrated development environments such as Visual Studio.NET
are provided by Microsoft in order to simplify and speed up cloud application
development activities.
• Source code can be promoted from one environment to another rather seamlessly
without developers having to write verbose deployment scripts or instruction
manuals to set up the application in the target environments.
How Azure Helps Software Delivery to Be Agile? – Delivery is also facilitated by
the Azure cloud. By providing flexible infrastructures just in time, cloud software
delivery is made agile. All fluctuations in infrastructure needs are automatically
taken care of by the Azure cloud. All the plumbing work is delegated to cloud
center experts so that designers, developers, and testers can focus on their core
activities.
As Visual Studio IDE is tightly integrated with the cloud environment, applica-
tion development and deployment happen faster and are hugely simplified. The
cloud provides all the libraries and APIs upfront in order to lessen the developmental
cost and complexity. Further on, in the Azure cloud, deployment and upgrade
processes are completely automated to minimize or eliminate some of the lengthy
and tedious steps while planning and executing the traditionally accomplished
deployment process. Working prototypes built by geographically dispersed devel-
opers and centrally deployed in Azure can be made available and accessible
immediately to prospective customers in order to elicit and extract their feelings and
feedbacks as this arrangement sharply reduces time especially for contemplating
any major or minor corrections to take the products to market quickly.
The Alice Platform [15] – In order to help companies with the challenges of
building and delivering successful SaaS services, the authors have developed
the first open SaaS platform, called Alice. For a company focused on developing
cloud-based SaaS services, it became quite evident that traditional JEE, .NET, and
Ruby on Rails platforms were not designed to address base level architectural
Fig. 2.1 The architectural diagram of the Alice platform

concerns of large and scalable SaaS applications. While building applications for
our clients, developers had to address multitenancy, data management, security,
scalability, caching, and many other features. Many of the most successful SaaS
companies had themselves built their own platforms and frameworks to address
their specific applications and cost needs. Companies like Salesforce and NetSuite,
first and foremost, built platforms to meet their application needs and lower delivery
costs, rather than building them to be sold as a platform as a service (PaaS).
Release of SaaS application platforms by companies like Salesforce has not
made a significant difference in the development and delivery of commercial
SaaS applications. Currently, many PaaS/SaaS platforms on the market are suitable
for development of only small situational applications, rather than commercial busi-
ness applications that are of interest to startups, independent software vendors
(ISVs), and enterprises. These platforms use proprietary languages, are tied to
specific hardware/software infrastructures, and do not provide the right abstractions
for developers. Alice was developed to address the above concerns and provide
a robust and open platform for the rapid development of scalable cloud services
applications. Figure 2.1 illustrates the reference architecture of the Alice Platform
for SaaS application development and delivery.
2.10 Software Engineering Challenges in Cloud Environments

With the coherent participation of cloud service providers, software development
complexity is set to climb further [3]. In the ensuing cloud era, the software
development process will involve heterogeneous platforms, distributed services, and
multiple enterprises geographically dispersed all over the world. Existing software
process models are simply insufficient unless remote interaction with cloud
providers is part and parcel of the whole process. The requirements gathering phase
has so far included customers, end users, and software engineers. Now it has to
include cloud service providers (CSPs) as well, as they will be supplying the
computing infrastructure and the software development, management, and maintenance
platforms. As the cloud providers alone are conversant with the infrastructure
utilization details, their experts can do the capacity planning, risk management,
configuration management, quality assurance, etc., well. Similarly, analysis and
design activities should also include CSPs, who can chip in with decision-enabling
details such as software development cost, schedule, resources, and time.
Development and debugging can be done on cloud platforms. There is a huge
cost benefit for individuals, innovators, and institutions. This will reduce the cost
and time for verification and validation. Software developers should gain more
expertise in building software from readily available components than in writing
them from scratch. Monolithic applications have been shunted out, and modular
applications are the future. Revisiting and refactoring of existing applications is
required to best utilize the cloud paradigm in a cost-effective manner. In the
recent past, computers have been fitted with multicore processors. Another trend is
that computers are interconnected with one another as well as with the Web. Computers are
becoming communicators and vice versa. Computers are multifaceted, networked,
and shrinking in size, whereas the scope of computing is growing. Therefore,
software engineers should train themselves in parallel and distributed computing
to complement the unprecedented and inescapable advances in hardware and
networking. Software engineers should train themselves in Web protocols, XML,
service orientation, etc. Web is on the growing trajectory as it started with a simple
Web (Web 1.0). Today it is the social Web (Web 2.0) and semantic Web (Web 3.0)
attracting the attention of professionals as well as people. Tomorrow definitely it
will be the smart Web (Web 4.0). The cloud proposition is on the fast track, and
there will be a scintillating synchronization between the enlarging Web
concepts and the cloud idea.
Cloud providers also have the appropriate infrastructure and methods in hand for
application maintenance [14]. There is a service-level agreement (SLA) established
as a contract between cloud users (in this case, software engineers) and cloud
providers. In particular, the advanced cloud infrastructure ensures non-functional
requirements (scalability, availability, security, sustainability, etc.). Other
serious challenges confronting the cloud-based software development include
the following. As we see, the development of software is multilateral in a cloud envi-
ronment unlike the collocated and conventional application software development.
The difference between these two radical approaches presents some of the noticeable
challenges to software engineering:
• Software Composition – Traditionally, application software engineers develop a
set of coherent and cohesive modules and assemble them to form an application,
whereas in the fast-expanding cloud landscape, finding and composing third-
party software components is a real challenge.
• Query-Oriented Versus API-Oriented Programming – MapReduce, streaming,
and complex event processing require developers to adopt a more functional
query-oriented style of processing to derive information. Rather than a large sur-
face area of OO APIs, these systems use an extension of SQL-like operations
where clients pass in application specific functions which are executed against
associated data sources. Doing complex join queries or function composition
such as MapReduce is a difficult proposition.
• Availability of Source Code – In the current scene, the full source code is
available. However, in multilateral software development, there is no source
code available because of third-party components. Therefore, the challenge for
software engineers is the complete comprehension of the system.
• Execution Model – Application software is generally executed on a single
machine, whereas multilateral software developed for cloud environments is
often distributed across multiple machines. Therefore, the challenge for
software engineers is tracing the state of each executing entity and
debugging it.
• Application Management – The usual challenges are there whenever there is an
attempt to embrace newer technologies. Application lifecycle management
(ALM) is quite straightforward in the traditional setting, whereas global,
collaborative, and cloud-based application management is beset with definite
concerns and challenges.
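The query-oriented style named in the second challenge can be illustrated with a single-process sketch of the MapReduce programming model: the client supplies application-specific map and reduce functions, and the system executes them against the data. This toy version deliberately ignores distribution, partitioning, and fault tolerance:

```python
# Single-process sketch of the MapReduce programming model.
from functools import reduce

def map_phase(records, mapper):
    """Apply the client's mapper to every record, collecting (key, value) pairs."""
    pairs = []
    for record in records:
        pairs.extend(mapper(record))
    return pairs

def reduce_phase(pairs, reducer):
    """Group pairs by key, then fold each group with the client's reducer."""
    grouped = {}
    for key, value in pairs:
        grouped.setdefault(key, []).append(value)
    return {k: reduce(reducer, vs) for k, vs in grouped.items()}

docs = ["the cloud", "the lean cloud"]
counts = reduce_phase(
    map_phase(docs, lambda line: [(w, 1) for w in line.split()]),
    lambda a, b: a + b,
)
print(counts)  # → {'the': 2, 'cloud': 2, 'lean': 1}
```

Note the inversion the text describes: instead of calling a large object-oriented API surface, the developer hands small functions to the framework, which decides where and how they run.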
The need of the hour, to make the cloud concepts more beneficial to all sections
of the world, is to activate an innovation culture, whereby a stream of inventive
approaches can be unearthed to reinvigorate the sagging and struggling software
engineering domain. Here is one: Radha Guha [2] has come out with an improved
cost estimation model for cloud-based software development.

2.11 Conclusion

Nowadays, for most business systems, software is a key enabler of business
processes. Software availability and stability directly impact a company's
revenue and customer satisfaction, which makes software development a critical
activity. Software development is also undergoing a series of key changes: a
growing number of independent software vendors (ISVs) and system integrators
(SIs) are transforming themselves into service providers, delivering their
customers' and partners' applications as services hosted in the cloud.
52 P. Raj et al.

Cloud technology can reduce the time needed to develop business services and to
take them to market. Each additional month or quarter in which cloud services
are accessible to users directly increases revenues, which in turn affects the
final financial statements. The speed at which software applications can be
developed, tested and brought into production is definitely one of the critical
success factors for many companies. Therefore, any solution accelerating
application time to market has an immediate and measurable impact on return on
investment (ROI).
Application developers are regularly asked to establish special environments
for developing, debugging and compiling the software libraries needed to build
software solutions. Typically, these environments are established for a limited
period of time. On-demand access to appropriately configured development
environments with adequate processing power and storage is therefore crucial
for software engineering: to perform their tasks, programmers should be able to
quickly configure servers, storage and network connections. Herein lies the
significance of cloud environments for taking software to market quickly. In
this chapter, we primarily discussed the pathbreaking contributions of cloud
infrastructures to realising sophisticated and smart services and applications.

References

1. Yara, P., Ramachandran, R., Balasubramanian, G., Muthuswamy, K., Chandrasekar, D.: Global software development with cloud platforms. In: Software Engineering Approaches for Offshore and Outsourced Development. Lecture Notes in Business Information Processing, vol. 35, pp. 81–95. http://link.springer.com/chapter/10.1007/978-3-642-02987-5_10 (2009)
2. Guha, R.: Software engineering on semantic web and cloud computing platform. http://www.cs.pitt.edu/~chang/231/y11/papers/cloudSE (2011). Accessed 24 Oct 2012
3. Chhabra, B., Verma, D., Taneja, B.: Software engineering issues from the cloud application perspective. Int. J. Inf. Technol. Knowl. Manage. 2(2), 669–673 (2010)
4. Kuusela, R., Huomo, T., Korkala, M.: Lean Thinking Principles for Cloud Software Development. A Research Summary of VTT Technical Research Centre of Finland. www.vtt.fi (2010)
5. Hashmi, S.I.: Using the cloud to facilitate global software development challenges. In: Sixth IEEE International Conference on Global Software Engineering Workshops, 15–18 Aug 2011, pp. 70–77. IEEE Xplore Digital Library, IEEE, Piscataway (2011)
6. Tamburri, D.A., Lago, P.: Satisfying cloud computing requirements with agile service networks. In: IEEE World Congress on Services, 4–9 July 2011, pp. 501–506. IEEE Xplore Digital Library, IEEE, Los Alamitos (2011)
7. Carroll, N., et al.: The discovery of agile service networks through the use of social network analysis. In: International Conference on Service Sciences. IEEE Computer Society, IEEE, Washington, DC (2010)
8. Brunelière, H., Cabot, J., Jouault, F.: Combining model-driven engineering and cloud computing. http://jordicabot.com/papers/MDE4Service10.pdf (2010). Accessed 24 Oct 2012
9. Becker, S., Tichy, M.: Towards model-driven evolution of performance critical business information systems to cloud computing architectures. In: MMSM. http://www.cse.chalmers.se/~tichy/2012/MMSM2012.pdf (2012). Accessed 24 Oct 2012
2 Envisioning the Cloud-Induced Transformations in the Software Engineering… 53

10. Dumbre, A., Senthil, S.P., Ghag, S.S.: Practicing Agile Software Development on the Windows Azure Platform. White paper by Infosys Ltd., Bangalore. http://www.infosys.com/cloud/resource-center/documents/practicing-agile-software-development.pdf (2011). Accessed 24 Oct 2012
11. Lean Software Development – Cutting Fat Out of Your Diet. A White Paper by Architech Solutions. http://www.architech.ca/wp-content/uploads/2010/07/Lean-Software-Development-Cutting-Fat-Out-of-Your-Diet.pdf. Accessed 24 Oct 2012
12. Tripathi, N.: Practices of lean software development. http://cswf.wikispaces.com/file/view/Practices+in+Lean+Software+Development.pdf (2011). Accessed 24 Oct 2012
13. Talreja, Y.: Lean Agile methodologies accentuate benefits of cloud computing. http://www.the-technology-gurus.com/yahoo_site_admin/assets/docs/LACC_white_paper_ed_v5.320180428.pdf (2010). Accessed 24 Oct 2012
14. Das, D., Vaidya, K.: An Agile Process Framework for Cloud Application. A White Paper by CSC. http://assets1.csc.com/lef/downloads/CSC_Papers_2011_Agile_Process_Framework.pdf (2011). Accessed 24 Oct 2012
15. Alice Software as a Service (SaaS) Delivery Platform. A Whitepaper by Ekartha, Inc. http://www.ekartha.com/resources/Alice_saas_delivery_platform.pdf. Accessed 24 Oct 2012
Chapter 3
Limitations and Challenges in Cloud-Based
Applications Development

N. Pramod, Anil Kumar Muppalla, and K.G. Srinivasa

Abstract Organisations and enterprise firms, from banks to the social Web, are
considering developing and deploying applications on the cloud due to the
benefits it offers, including cost-effectiveness, scalability and theoretically
unlimited computing resources. Many expert predictions have indicated that
centralising computation and storage by renting them from a third-party
provider is the way of the future. Before jumping to conclusions, however,
engineers and technology officers must assess and weigh the advantages of
cloud applications against the concerns, challenges and limitations of
cloud-based applications. Decisions must also involve choosing the right
service model and knowing the disadvantages and limitations pertaining to that
particular service model. Although cloud applications have benefits galore,
organisations and developers have raised concerns over security and
reliability issues. The idea of handing important data over to another company
certainly raises security and confidentiality worries. This does not imply
that cloud applications are insecure and flawed, but it does mean they require
more attention to cloud-related issues than conventional on-premise
approaches. The objective of this chapter is to introduce the reader to the
challenges of cloud application development and to present ways in which these
challenges can be overcome. The chapter also discusses the issues with respect
to different service models and extends the challenges with reference to the
application developer's perspective.

Keywords Challenges in the cloud • Vendor lock-in • Security in the cloud • SLA
• Cost limitation • Traceability issue • Transparency in the cloud

N. Pramod • A.K. Muppalla • K.G. Srinivasa (*)
High Performance Computing Laboratory, Department of Computer Science
and Engineering, M S Ramaiah Institute of Technology,
Bangalore 560054, India
e-mail: [email protected]; [email protected]; [email protected]

Z. Mahmood and S. Saeed (eds.), Software Engineering Frameworks for the Cloud
Computing Paradigm, Computer Communications and Networks,
DOI 10.1007/978-1-4471-5031-2_3, © Springer-Verlag London 2013

3.1 Introduction

The paradigm of cloud computing changes how the system and data owned by an
organisation are visualised. The system is no longer a group of computing
devices present at one physical location, executing a particular software
program with all the required data and resources at a static physical
location; instead, it is geographically distributed with respect to both
application and data. Researchers and engineers working in the field of cloud
computing define it in many ways. These definitions are usually based on the
application's perspective, that is, the way one is trying to employ cloud
services for a particular application. A few definitions of cloud computing
are shown below:
Cloud computing is a model for enabling convenient, on-demand network access to a
shared pool of configurable computing resources (e.g., networks, servers, storage,
applications, and services) that can be rapidly provisioned and released with minimal
management effort or service provider interaction. [1]

A Cloud is a type of parallel and distributed system consisting of a collection of
interconnected and virtualized computers that are dynamically provisioned and presented
as one or more unified computing resources based on service-level agreements established
through negotiation between the service provider and consumers. [2]

The desired properties of cloud computing can be characterised as technical,
economic and user-experience properties, as in [3].

3.1.1 Characteristics of Cloud Systems

General characteristics of cloud computing are as follows [1]:


On-demand self-service: A consumer can unilaterally provision computing capa-
bilities, such as server time and network storage, as needed automatically without
requiring human interaction with each service provider.
Broad network access: Capabilities are available over the network and accessed
through standard mechanisms that promote use by heterogeneous thin or thick
client platforms (e.g. mobile phones, tablets, laptops and workstations).
Resource pooling: The provider’s computing resources are pooled to serve multiple
consumers using a multi-tenant model, with different physical and virtual
resources dynamically assigned and reassigned according to consumer demand.
There is a sense of location independence in that the customer generally has no
control or knowledge over the exact location of the provided resources but may
be able to specify location at a higher level of abstraction (e.g. country, state or
data centre). Examples of resources include storage, processing, memory and
network bandwidth.
Rapid elasticity: Capabilities can be elastically provisioned and released, in some
cases automatically, to scale rapidly outward and inward commensurate with
demand. To the consumer, the capabilities available for provisioning often appear
to be unlimited and can be appropriated in any quantity at any time.

Measured service: Cloud systems automatically control and optimise resource usage
by leveraging a metering capability at some level of abstraction that is appropriate
to the type of service used (e.g. storage, processing, bandwidth and active user
accounts). Resource usage can be monitored, controlled and reported, providing
transparency for both the provider and consumer of the utilised service.
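The metering idea behind measured service can be illustrated with a small aggregation sketch; the resource names and per-unit rates below are purely hypothetical, as real providers publish their own tariffs:

```python
# Hypothetical per-unit rates; real providers publish their own tariffs.
RATES = {"storage_gb_hours": 0.0001, "cpu_hours": 0.05, "gb_transferred": 0.09}

def bill(samples):
    """Sum metered usage samples per resource type and price the totals
    at the given rates, as a measured-service meter would."""
    usage = {}
    for resource, amount in samples:
        usage[resource] = usage.get(resource, 0.0) + amount
    return {r: round(u * RATES[r], 4) for r, u in usage.items()}

samples = [("cpu_hours", 10), ("gb_transferred", 2), ("cpu_hours", 5)]
charges = bill(samples)
# charges == {"cpu_hours": 0.75, "gb_transferred": 0.18}
```

The same per-resource totals are what give both provider and consumer the transparency the definition mentions: each side can monitor and reconcile the metered figures independently.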

3.1.2 Cloud Service Models

There are three generally agreed cloud service delivery models [4]:
• SaaS – software as a service: Refers to providing on-demand applications over
the Internet.
• PaaS – platform as a service: Refers to providing platform layer resources,
including operating system support and software development frameworks.
• IaaS – infrastructure as a service: Refers to on-demand provisioning of infra-
structural resources, usually in terms of VMs. A cloud owner that offers IaaS is
called an IaaS provider [5].
Newer terminologies such as DaaS (Data as a Service) [6] have also emerged, but
their applicability and use cases remain open questions. In traditional IT
deployment, all resources are under the control of a particular organisation.
This is no longer true in cloud-based development: providers of each cloud
service model offer control over different resources. Figure 3.1 depicts a
generic view of the accessibility and control of resources with respect to the
IaaS, PaaS and SaaS service models.

3.2 Challenges

Cloud computing influences an adopting organisation in a variety of ways. One
is cost reduction through savings on hardware resources, which grow with
increasing computational horsepower, sit unused most of the time, yet are
critical during crunch-time usage. This flexibility in the availability of
hardware resources implies that an application can be highly scalable and
dynamic in its utilisation of them. Amidst all the advantages, the following
are the challenges that restrain an organisation from migrating to cloud
applications (Fig. 3.2).

3.2.1 Security and Confidentiality

All Web service architectures have issues relating to security, and a cloud
application can be viewed as a different Web service model with similar
security loopholes. Organisations which are keen on moving their in-house

Fig. 3.1 Consumer and vendor controls in cloud service models [24]

Fig. 3.2 Technology in charge and security engineers of an organisation must
consider the inherent issues before migrating to cloud (figure labels: public
cloud; IaaS, PaaS, SaaS; security, reliability, control)

applications to the cloud must consider how the application's security behaves
in a cloud environment. Well-known security issues such as data loss and
phishing pose serious threats to an organisation's data and software. In
addition, further security issues arise from the third-party dependency for
services pertaining to cloud application development and deployment. From a
naive point of view, it looks daunting to put an organisation's critical and
confidential data and its software into a third party's CPU and storage. The
multi-tenancy model and the pooled computing resources of cloud computing have
introduced new security challenges that require novel techniques to tackle
them [7].
One of the top cloud application security issues is lack of control over the
computing infrastructure. An enterprise moving a legacy application to a cloud
computing environment gives up control over the networking infrastructure,
including servers, access to logs and incident response. Most applications are
built to run in the context of an enterprise data centre, so the way they store
data, and the way they transmit it to other systems, is assumed to be trusted
or secure. This is no longer true in a cloud environment: components that have
traditionally been highly trusted and assumed to run in a safe environment now
run in an untrusted one. Many more issues, such as the Web interface, data
storage and data transfer, have to be considered whilst making security
assessments. The flexibility, openness and public availability of cloud
computing infrastructures challenge many fundamental assumptions about
application security. The lack of physical control over the networking
infrastructure might mandate the use of encryption in the communication between
servers of an application that processes sensitive data, to ensure its
confidentiality. Risks that a company may have accepted when the application
was in-house must be reconsidered when moving to a cloud environment.

Ex. 1
If an application logs sensitive data to a file on an on-premise server
without encrypting it, a company might accept that risk because it owns the
hardware. This acceptance is no longer safe in a cloud environment, because
there is no static file system where the application log will reside: the
application executes in different virtual machines, which may be on different
physical machines depending on the scale. Logging thus takes place onto some
shared storage array, and hence the need to encrypt it arises. The security
threat model takes a different dimension in the cloud; many vulnerabilities
that were low severity are now high, and they must be fixed.
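As a sketch of keeping sensitive values out of a shared log store, the snippet below replaces them with keyed pseudonyms (HMAC digests) before the record is written; full log encryption would follow the same pattern with a cipher library. The field names and the key are illustrative only:

```python
import hashlib
import hmac
import json

# Illustrative key; in practice it would live in a secret store, not in code.
SECRET_KEY = b"example-key-kept-outside-the-cloud"

def pseudonymise(record, sensitive_fields):
    """Replace sensitive field values with keyed HMAC digests so the
    shared log store never sees them in plaintext."""
    safe = dict(record)
    for field in sensitive_fields:
        if field in safe:
            digest = hmac.new(SECRET_KEY, str(safe[field]).encode(),
                              hashlib.sha256).hexdigest()
            safe[field] = digest[:16]  # truncated keyed pseudonym
    return json.dumps(safe, sort_keys=True)

line = pseudonymise({"user": "[email protected]", "action": "login"}, ["user"])
# `line` carries no plaintext e-mail address, so it is safe to ship to the
# shared storage array the example describes.
```

Because the pseudonym is keyed, the same user still maps to the same token across log lines (useful for debugging), yet the provider cannot invert it without the key.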

Ex. 2
A company hosting an application in its own data centre might ward off a
denial-of-service attack with certain infrastructure, or take actions such as
blocking the attacking IP addresses. In the cloud, if the provider handles the
mitigation of attacks, then the consumer or the organisation hosting the
application needs to re-account for how the risk or attack can be mitigated,
as there is no control or visibility.

3.2.1.1 Overcoming the Challenge

It is important to understand the base security solutions provided by the
service provider, for example, the firewalls and intrusion detection systems
built into the cloud architecture. It is also important to note the assurances
the provider is willing to offer in the case of breaches or loss. These details
will help an organisation make security-related decisions and answer important
questions such as 'Are these solutions and assurances sufficient for the data
which is being put into the cloud?' Employing a strong user authentication
scheme for the cloud service will reduce many security breaches and much data
loss. In the end, an enterprise should ensure that its cloud workloads have at
least the same level of protection as its sensitive on-premise workloads; for
less sensitive workloads, it should avoid paying for excessive security.
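A 'strong user authentication scheme' can take many forms; one widely used building block is the time-based one-time password (TOTP) of RFC 6238, which can be sketched with Python's standard library alone:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    counter = int(at) // step                  # time steps since the epoch
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                 # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and client derive the same short-lived code from a shared secret.
secret = b"12345678901234567890"               # RFC 6238 test secret
code = totp(secret, at=59)                     # → "287082" (RFC test vector)
```

In a real deployment the server compares the submitted code against `totp(secret, time.time())`, usually accepting one adjacent time step on either side to tolerate clock drift.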

3.2.2 Control

The introduction of a third-party service provider decreases an organisation's
control over its software and data. This holds especially in the case of SaaS,
where the SaaS cloud provider may choose to run software from various clients
on a single machine and storage at a given point in time, and the consumer has
no control over that decision. Furthermore, actual control over the software
and service is limited to the conditions mentioned in the policy and user
agreement, and is exercised only via certain provider-defined APIs (and keys).
As an example, a code snippet for authentication against a Rackspace [8]
cloud service (credentials sent as JSON) is shown below:

curl -i \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-d \
'{
"credentials": {
"username": "my_Rackspace_username",
"key": "12005700-0300-0010-1000-654000008923"}
}' \
https://auth.api.rackspacecloud.com/v1.1/auth

where:

username – is the assigned username for which the authentication request is being sent
key – the API key provided to access the cloud service

If, for instance, the consumer wishes to introduce another layer of
authentication, the cloud provider does not allow for this, as the API is not
designed to provide such a facility. This limitation extends beyond
authentication to the entire set of APIs used for various purposes during
cloud application development. It hinders access and rules out any tweaking
that could make the application function better or help the organisation curb
cost [9]. As a security concern, the ability to limit access to certain
confidential data eventually goes in vain, as the data is still available in
some form or other at the service provider and poses a serious threat to
confidentiality.

3.2.2.1 Overcoming the Challenge

Agreements and standardisation are one way to overcome the problem of control
in a cloud environment. Moreover, the cloud paradigm does not make it feasible
for a provider to grant access control beyond a certain limit.

3.2.3 Reliability

For the cloud, reliability is broadly a function of the reliability of three
individual components:
• The hardware and software facilities offered by providers: The hardware
(applicable to the SaaS, PaaS and IaaS models) and software (applicable to the
SaaS and PaaS models) provided by the service provider, though not completely
in the consumer's control, are a major reliability factor, as a low-performing,
low-quality setup can lead to failure; they are also decisive for the
availability of the application. Less hardware failure and faster recovery
from failure ensure that the cloud application is more reliable.
• The provider's personnel and the consumer's personnel: The personnel
interacting with the cloud service and the application may also cause
reliability issues. For example, if an employee accesses resources for
purposes other than those assigned, it could lead to failure during crunch
time. Likewise, if maintenance of the systems is not undertaken regularly or
is ignored, this too could cause failure.
• Connectivity to subscribed services: Network resources connecting the cloud
service provider and the organisation are also accountable for the reliability of
the system.
Many suggestions on how to adopt trust models have been proposed; one such
summary is tabulated in Table 3.1, 'Summary of cloud service providers,
properties, resource access and key challenges over different cloud service
models' [10].

Table 3.1 Summary of cloud service providers, properties, resource access and key challenges
over different cloud service models

SaaS
  Providers: NetSuite (enterprise resource planning SaaS), Taleo (human
  resource SaaS), SalesForce (customer relationship management SaaS), Google
  (Google Docs, online office suite), Microsoft (Office Live, Dynamics Live CRM)
  Properties: Web interface; no installation required; shared software, i.e.
  used by many organisations; ownership is only on data; pay as you use
  Access to resources: SaaS consumers have access only to the software which is
  provided as a service; no control over tuning the software, operating system
  and hardware resources
  Key challenges: Credential management on cloud; usage and accountability;
  traceability of data; data security; protection of API keys

PaaS
  Providers: Google App Engine, Microsoft Windows Azure, Force.com, AT&T Synaptic
  Properties: Platform for developing scalable applications; test, deploy, host
  and maintain in the same environment; easy integration with Web services and
  databases
  Access to resources: PaaS consumers have access to the application
  development environment, e.g. the operating system; tools and libraries can
  be used to build applications over the given platform; no control over
  hardware resources and no control over choice of platform, i.e. tuning and
  changing the operating system
  Key challenges: Privacy control; traceability of both application and data;
  maintenance of audit trail; protecting API keys; application security

IaaS
  Providers: Amazon EC2, IBM, HP, Rackspace, Eucalyptus, Cisco, Joyent
  Properties: Virtual machines are offered to consumers; freedom of choice of
  operating system
  Access to resources: IaaS consumers have access to the virtual machine
  instance, which can be configured to suit the operating system instance or
  image and the application running over it; no control over the hardware
  resources, i.e. the physical resources such as the choice of processor on
  each machine and the size and capacity of memory on each machine
  Key challenges: Governance; data encryption, especially in case of storage
  service; API key protection

3.2.3.1 Overcoming the Challenge

In the cloud, control of physical resources rests with the cloud provider, and
hence the responsibility for workload management, uptime and persistence also
falls on the provider. It is therefore important to understand the provider's
infrastructure, architecture and the policies and processes governing them.
Assurances about uptime and availability must be considered whilst choosing a
provider. The compensation and backup measures that will be in place in case
of failure of any kind must also be part of the agreement, thus taking the
reliability factors into account.
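On the consumer side, tolerance of transient provider or network failures is commonly improved with retries and exponential backoff; the sketch below is generic, and the retry count and delays are illustrative only:

```python
import time

def call_with_retries(operation, attempts=4, base_delay=0.5, sleep=time.sleep):
    """Retry a call prone to transient failures, doubling the wait
    between attempts (exponential backoff)."""
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise                          # exhausted: surface the failure
            sleep(base_delay * (2 ** attempt))

# Simulated flaky cloud call: fails twice with a transient error, then works.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient outage")
    return "ok"

result = call_with_retries(flaky, sleep=lambda s: None)  # → "ok"
```

Injecting the `sleep` function keeps the sketch testable; a production version would also cap the total delay and add jitter so that many clients do not retry in lockstep.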

3.2.4 Transparency

As discussed earlier, security issues due to third-party involvement give rise
to a subsidiary issue of trust and transparency. The problem of transparency
relates to the accountability of data usage, the traceability of files and
services on the cloud, the maintenance of audit trails, etc., at both the
cloud provider and cloud consumer ends. According to the Cloud Security
Alliance (CSA), secrecy is not the only way to build effective security
measures; their emphasis is on adopting and adhering to best practices and
standards that create a more transparent and secure environment. The CSA
reaches out to purveyors of cloud services with STAR [11], which is open to
all cloud providers and allows them to submit self-assessment reports
documenting compliance with CSA-published best practices. The searchable
registry allows potential cloud customers to review the security practices of
providers, accelerating their due diligence and leading to higher-quality
procurement experiences. CSA STAR represents a major leap forward in industry
transparency, encouraging providers to make security capabilities a market
differentiator.
The software used to monitor the audit trail and to track files on the cloud
must be capable of tracking all activities irrespective of the type of
architecture, that is, multi-tenant or single tenant. Such software can be
used by both the consumer and the provider, and the two records can be tallied
as a test of a common audit trail. Transparency becomes a challenging task for
a multi-tenant SaaS provider, as the application data is present on multiple
machines alongside other applications (which may or may not contain
vulnerabilities).
The transparency issue arises mainly from the paradigm change in the cloud: a
shift from a focus on systems to a focus on data. Because current logging and
other mechanisms cannot cope with these tracing issues, researchers have
explored newer methods that work on a cloud setup. The existing logging
mechanisms are mainly system-centric, built for debugging or monitoring system
health; they were not built for tracing data created within and across
machines. Furthermore, current logging mechanisms only monitor the virtual
machine layer, without paying attention to the physical machines hosting them.
Additionally, whilst file-intrusion detection and prevention tools such as
TripWire [12, 13] exist, they merely compare key signature changes and do not record

and track the history and evolution of data in the cloud. Research personnel at HP
are working on TrustCloud [14], a project launched to increase trust in cloud com-
puting via detective, file-centric approaches that increase data traceability, account-
ability and transparency in the cloud. With accurate audit trail and a transparent
view of data flow and history on cloud, the cloud services are bound to become
more reliable and the consumer has fairly more control over things which over-
comes a lot of potential challenges that hinders growth and migration towards cloud.

3.2.4.1 Overcoming the Challenge

Trust and following best practices are one way to overcome this challenge.
Trust is developed over time by a provider maintaining a clean track record in
terms of the characteristics of a particular cloud service. An organisation
must look into the following aspects before choosing a service provider:
• The history of the service provider
• The operational aspects beyond those mentioned in the service brochure, for
example, 'Where are the data centres located?' and 'Is the hardware
maintenance outsourced?'
• Additional tools, services and freedom offered to improve visibility and
traceability in the cloud environment
For example, users of IBM's cloud services can use the Tivoli management
system to manage their cloud and data centre services. TrustCloud is another
example of a tool which can be used to increase transparency.
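A simple building block for a tamper-evident audit trail, far more modest than TrustCloud but in the same file-centric spirit, is a hash chain in which each entry commits to the one before it:

```python
import hashlib
import json

def append_entry(chain, event):
    """Append an event whose hash covers the previous entry's hash,
    so any later tampering breaks verification."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    chain.append({"event": event, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})
    return chain

def verify(chain):
    """Recompute every hash in order; any edit to an earlier entry fails."""
    prev_hash = "0" * 64
    for entry in chain:
        body = json.dumps({"event": entry["event"], "prev": prev_hash},
                          sort_keys=True)
        if entry["prev"] != prev_hash or \
           entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

trail = []
for e in ["file created", "file read", "file deleted"]:
    append_entry(trail, e)
assert verify(trail)
trail[1]["event"] = "file copied"   # tampering is now detectable
assert not verify(trail)
```

Consumer and provider can each keep such a chain and tally the head hashes, which is one concrete way to implement the common-audit-trail check described above.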

3.2.5 Latency

Where the data and other resources are situated matters a great deal for
computation. In conventional client-server architecture, the application
server is located as close to the client as possible by means of data centres
and CDNs (content delivery networks). On a similar note, it matters where the
cloud is situated: a cloud provider may have plenty of Web bandwidth from a
given data centre, but if that data centre is thousands of miles away,
developers will need to accommodate and program for significant latency.
Latency is generally measured as the round-trip time it takes for a packet to
reach a given destination and come back, usually measured using the standard
Linux program ping. If, for example, the cloud application is an email server,
it is better to have the cloud situated nearby. The multimedia content in the
application can be handled by the services provided by CDNs, which invisibly
bring this content closer to the client.
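ping measures ICMP round trips and may need privileges or be blocked; an application-level approximation is to time a TCP handshake to the service endpoint. The sketch below runs against a throwaway local listener so that it is self-contained; in practice the host and port would be the provider's endpoint:

```python
import socket
import time

def tcp_rtt(host, port, timeout=2.0):
    """Estimate round-trip latency as the time to complete a TCP
    handshake (akin to ping, but needing no special privileges)."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        return time.perf_counter() - start

# Demo against a throwaway local listener, purely so the sketch is runnable.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
rtt = tcp_rtt("127.0.0.1", server.getsockname()[1])
server.close()
# rtt is in seconds; multiply by 1000 for the millisecond figures usually
# quoted when comparing data-centre locations.
```

Sampling this figure from each candidate region, at different times of day, gives the latency data needed to decide where a latency-sensitive application should be deployed.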
Irrespective of the type of cloud service deployed, all cloud computing initiatives
have one thing in common, that is, data is centralised, whilst users are distributed.

This means that if deployment is not planned carefully, there can be significant
issues due to the increased latency between the end users and their application servers.
All cloud services inherently use shared WANs, making packet delivery – specifi-
cally dropped or out of order IP packets during peak congestion – a constant prob-
lem in these environments. This results in packet retransmissions which, particularly
when compounded by increased latency, lower effective throughput and perceived
application performance.
Fortunately, in parallel with the cloud trend, WAN optimisation technology has
been evolving to overcome these challenges. WAN optimisation helps “clean up”
the cloud in real time by rebuilding lost packets and ensuring they are delivered in
the correct order, prioritising traffic whilst guaranteeing the necessary bandwidth,
using network acceleration to mitigate latency in long-distance environments and
de-duplicating data to avoid repetition. So with WAN optimisation, it is possible to
move the vast majority of applications into the cloud without having to worry about
geographic considerations [15].

3.2.5.1 Overcoming the Challenge

Organisations moving latency-sensitive applications should consider
negotiating with the service provider for possible support to reduce latency
and increase end-to-end performance. A few service providers offer such
facilities, but these are mostly customised and configured for a specific
consumer's needs, usually combined with custom network configurations and a
private cloud. Care should also be taken to maintain the quality of normal
services amidst all the tweaks made to reduce latency.

3.2.6 Costing Model

It becomes important to differentiate between the cloud provider, consumer and the
actual customer who uses the application. The consumer is a person or an organisa-
tion that has access to cloud resources (depending on the service model, agreement
and the application type). Now this organisation must analyse and consider the
trade-offs amongst computation, communication and integration. Cloud appli-
cations can significantly reduce infrastructure cost, but they use more network
resources (data usage, bandwidth) and hence raise the cost of data communication.
The cost per unit of computing resource used is likely to be higher as more resources
are used during the data exchange between the cloud service and the organisation.
This problem is particularly prominent if the consumer uses the hybrid cloud
deployment model where the organisation’s data is distributed amongst a number of
public/private (in-house IT infrastructure)/community clouds. Notable and com-
monly used pricing models in third-party systems are pay as you go and subscription
pricing. In the former, billing is based on usage statistics; in the latter, on fixed,
agreed-upon prices.
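To make the pay-as-you-go versus subscription trade-off concrete, a simple break-even calculation can be sketched as follows. The hourly rate and flat fee are hypothetical figures for illustration, not any provider's actual prices:

```python
def monthly_cost_payg(hours_used, rate_per_hour):
    """Pay as you go: billed only for actual usage."""
    return hours_used * rate_per_hour

def monthly_cost_subscription(flat_fee):
    """Subscription: fixed, agreed-upon price regardless of usage."""
    return flat_fee

def break_even_hours(rate_per_hour, flat_fee):
    """Usage level at which the two pricing models cost the same."""
    return flat_fee / rate_per_hour

# Hypothetical figures: $0.25 per VM-hour on demand vs. a $50 flat monthly fee.
rate, fee = 0.25, 50.0
print(break_even_hours(rate, fee))       # 200.0 hours/month
print(monthly_cost_payg(120, rate))      # 30.0: light usage favours pay as you go
print(monthly_cost_subscription(fee))    # 50.0: flat cost favours heavy, steady usage
```

Below roughly 200 hours of monthly usage in this example, pay as you go is cheaper; above it, the subscription wins, which matches the guidance that subscriptions suit long-term, well-defined requirements.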
66 N. Pramod et al.

3.2.6.1 Overcoming the Challenge

Developers and architects should analyse the cloud provider's costing model and
choose the most suitable option for their requirements. This decision includes
understanding the trade-offs that each costing model entails; for example, in an
IaaS adoption scenario, a hybrid infrastructure may be considered, wherein the
sensitive and frequently used large data or applications form part of a private cloud
and the rest is served by a third-party service. Every approach has pros and cons,
and the costing decision must exploit the market options and the requirements
while noting this trade-off. Pay as you go can be useful when the requirements are
not well defined and the budget is limited, whereas subscription pricing is useful
when the requirements are long term and well defined.

3.2.7 Charging Model

The data usage charges in conventional models are fairly straightforward, being
based on bandwidth and online space consumption. In the cloud, however, the same
does not hold, as the resources used differ at different points in time owing to the
scalable nature of the application. Hence, with a pool of resources available, the
cost analysis is a lot more complicated. The cost estimate is now in terms of the
number of instantiated virtual machines rather than physical servers; that is, the
instantiated VM has become the unit of cost. This resource pool
and its usage vary from service model to service model. For SaaS cloud providers,
the cost of building scalability or multi-tenancy into their offering can be very
substantial. Such costs include the alteration or redesign and redevelopment of
software initially built for a conventional model, performance and security enhance-
ments for concurrent user access (similar to the classic synchronisation and reader–
writer problems) and dealing with the complexities induced by these changes. On
the other hand, SaaS providers need to consider the trade-off
between the provision of multi-tenancy and the cost savings yielded by multi-
tenancy such as reduced overhead through amortisation and reduced number of
on-site software licences. Therefore, the charging model must be tailored strategi-
cally for SaaS provider in order to increase profitability and sustainability of SaaS
cloud providers [7].

3.2.7.1 Overcoming the Challenge

A provider with better billing models and frameworks which determine usage of a
cloud service appropriately and accurately should be given preference over the rest.
For example, Rackspace has a billing model which is efficient and at the same time
well represented and easy to understand, with a well-defined set of information on
the cloud admin dashboard. The cloud infrastructure has become more efficient and
mature over the years, and quite a lot of measures have been taken to overcome these
problems, including better tracking software and billing systems.

3.2.8 Service-Level Agreement (SLA)

Although cloud consumers do not have control over the underlying computing
resources, they do need to ensure the quality, availability, reliability and perfor-
mance of these resources when consumers have moved their core business functions
onto their entrusted cloud. In other words, it is vital for consumers to obtain guaran-
tees from providers on service delivery. Typically, these are provided through
service-level agreements (SLAs) negotiated between the providers and consumers.
The very first issue is the definition of SLA specifications at an appropriate level of
granularity, namely, balancing the trade-off between expressiveness and compli-
catedness, so that they cover most consumer expectations yet remain relatively
simple to weight, verify, evaluate and enforce through the resource
allocation mechanism on the cloud. In addition, different cloud offerings (IaaS,
PaaS and SaaS) will need to define different SLA meta-specifications. This also
raises a number of implementation problems for the cloud providers. Furthermore,
advanced SLA mechanisms need to constantly incorporate user feedback and cus-
tomisation features into the SLA evaluation framework [16].
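As a sketch of what a machine-verifiable SLA specification might look like, the fragment below defines a few measurable terms and checks observed metrics against them. The field names and thresholds are illustrative assumptions, not any provider's real SLA schema:

```python
# Hypothetical SLA specification: a small set of measurable terms keeps the
# trade-off between expressiveness and complicatedness manageable.
sla = {
    "availability_pct": 99.9,   # minimum monthly availability
    "max_latency_ms": 200,      # maximum average response time
    "max_recovery_min": 30,     # maximum time to recover from an outage
}

def evaluate_sla(sla, observed):
    """Return the list of SLA terms that the observed metrics violate."""
    violations = []
    if observed["availability_pct"] < sla["availability_pct"]:
        violations.append("availability_pct")
    if observed["avg_latency_ms"] > sla["max_latency_ms"]:
        violations.append("max_latency_ms")
    if observed["recovery_min"] > sla["max_recovery_min"]:
        violations.append("max_recovery_min")
    return violations

observed = {"availability_pct": 99.95, "avg_latency_ms": 250, "recovery_min": 12}
print(evaluate_sla(sla, observed))   # ['max_latency_ms']
```

Keeping the specification to a handful of measurable terms is precisely the granularity trade-off described above: expressive enough to cover key expectations, simple enough to be evaluated automatically.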

3.2.9 Vendor Lock-In

The issue of vendor lock-in is a rising concern due to the rapid development of cloud
technology. Currently, each cloud offering has its own way for cloud consumers,
applications and users to interact with the cloud. This severely hinders the develop-
ment of cloud ecosystems by forcing vendor lock-in, which prevents cloud con-
sumers from choosing alternative vendors/offerings simultaneously or from moving
from one vendor to another (migration) in order to optimise resources at different
levels within an organisation. More importantly, proprietary or vendor-specific
cloud APIs make it very difficult to integrate cloud services with an organisation's
own existing legacy systems. The primary goal of interoperability is to realise the
seamless flow of data across clouds and between cloud and local applications.
Interoperability is essential for various reasons. Many of the IT compo-
nents of a company are routine and static applications which need to handle numbers
and for which cloud service can be adopted. These applications vary from being
storage based to computation based. An organisation would prefer two different
vendors to achieve cost efficiency and performance enhancement via the respective
services. But eventually these separate applications need to interact with the core
IT assets of the company, and, hence, there must exist some common way to interact
with these various cloud applications spread over different vendors. Standardisation
appears to be a good solution to address the interoperability issue. However, as
cloud computing is still a spreading wildfire, the interoperability problem has not
appeared on the pressing agenda of major industry cloud vendors [7].

Fig. 3.3 The result of a survey conducted by CSA

3.2.9.1 Overcoming the Challenge

Choosing a vendor wisely is currently the main way to mitigate this issue. There
are as yet no standards governing cloud application platforms and services, and this
remains a significant challenge to overcome in the coming years. However, steps
have been taken recently to manage the problem. The Federal Risk and Authorization
Management Program (FedRAMP) [17] is a government-wide program that pro-
vides a standardised approach to security assessment, authorisation and continuous
monitoring for cloud products and services. Cloud service providers are now required
to follow this standard, and hopefully it can be extended to cover many migration
and interoperability issues.
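Until such standards mature, a common in-house mitigation is to hide each vendor's proprietary API behind an abstraction layer so that application code never depends on one provider directly. The sketch below illustrates the idea with in-memory stand-ins; `VendorAStorage` and `VendorBStorage` are hypothetical adapters, not real provider SDKs:

```python
from abc import ABC, abstractmethod

class CloudStorage(ABC):
    """Vendor-neutral interface that the application codes against."""
    @abstractmethod
    def put(self, key, data): ...
    @abstractmethod
    def get(self, key): ...

class VendorAStorage(CloudStorage):
    # In reality this adapter would wrap vendor A's proprietary API calls.
    def __init__(self): self._objects = {}
    def put(self, key, data): self._objects[key] = data
    def get(self, key): return self._objects[key]

class VendorBStorage(CloudStorage):
    # A second vendor with a different native API, hidden behind the same interface.
    def __init__(self): self._blobs = {}
    def put(self, key, data): self._blobs[key] = data
    def get(self, key): return self._blobs[key]

def archive(report, storage: CloudStorage):
    storage.put("report.txt", report)   # works unchanged with any vendor adapter

for backend in (VendorAStorage(), VendorBStorage()):
    archive(b"quarterly figures", backend)
    print(backend.get("report.txt"))    # b'quarterly figures'
```

With this design, migrating or splitting workloads across vendors means writing one new adapter rather than rewriting application code, which directly reduces the migration cost that lock-in imposes.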
Amongst these generic issues, a few are of more serious concern than the rest, and
a few have not yet seen broad daylight owing to the infancy of cloud computing.
A survey conducted by CSA involving over 240 organisations found that security is
one of the biggest issues, with 87.5 % of respondents voting for it, followed by
performance, cost, etc. Figure 3.3 presents the survey statistics for this question
(i.e. rate the challenges/issues of the cloud/on-demand model) across the various issues.

3.3 Security Challenges in Cloud Application Development

In a cloud environment, an enterprise cannot necessarily use the same tools and
services they deployed internally for security, such as a Web application firewall.
For example, a company that deployed a Web application firewall (WAF) as an
extra layer of security when exposing a legacy app to the Web no longer has that
option, as the ownership and control of infrastructure at various levels change in
the cloud. The CSA's cloud application security guidance noted that IaaS
vendors have started to offer cloud application security tools and services, including
WAFs, Web application security scanning and source code analysis. These tools
are specific to either the provider or a third party, the report noted. It is wise to
explore all available APIs that provide strong logging, which in turn can be lever-
aged for security-related activity [18].
Having seen the issues in general, we now look at security in particular from the
service-model point of view, that is, at the issues which are inherent to and cut
across the various service models.

3.3.1 Challenges in Case of PaaS

3.3.1.1 Privacy Control

This is the first step in securing private data before sending it to the cloud. Cyber
laws and policies currently exist which restrict, or disallow outright, the sending of
private data to third-party systems. A cloud service provider is just another example
of a third-party system, and organisations must apply the same rules for handling
third-party systems in this case.
tions are concerned at the prospect of private data going to the cloud. The cloud
service providers themselves recommend that if private data is sent onto their sys-
tems, it must be encrypted, removed or redacted. The question then arises “How can
the private data be automatically encrypted, removed, or redacted before sending it
up to the cloud service provider?”; that is, “How can the whole process be auto-
mated?”. It is known that encryption, in particular, is a CPU-intensive process which
threatens to add significant latency to the process.
Any solution implemented should broker the connection to the cloud service and
automatically encrypt any information an organisation does not want to share via a
third party. For example, this could include private or sensitive employee or cus-
tomer data such as home addresses or social security numbers, or patient data in a
medical context. Security engineers should look to provide for on-the-fly data pro-
tection by detecting private or sensitive data within the message being sent up to the
cloud service provider and encrypting it such that only the originating organisation
can decrypt it later. Depending on the policy, the private data could also be removed
or redacted from the originating data but then reinserted when the data is requested
back from the cloud service provider.
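The on-the-fly protection described above can be sketched as a simple broker that tokenises sensitive fields (here, US social security numbers) before a message leaves the organisation and reinserts them on the way back. A production broker would use strong encryption rather than an in-memory vault; this illustrative version uses only the standard library:

```python
import re
import secrets

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")   # pattern for the sensitive field

class PrivacyBroker:
    """Brokers the connection to the cloud: redacts private data on the way
    out and reinserts it on the way back (a real broker would encrypt)."""
    def __init__(self):
        self._vault = {}   # token -> original value, kept on-premise

    def redact(self, message):
        def repl(match):
            token = f"TOKEN-{secrets.token_hex(4)}"
            self._vault[token] = match.group(0)
            return token
        return SSN.sub(repl, message)

    def restore(self, message):
        for token, original in self._vault.items():
            message = message.replace(token, original)
        return message

broker = PrivacyBroker()
outbound = broker.redact("Patient 123-45-6789 discharged.")
assert "123-45-6789" not in outbound          # safe to send to the cloud provider
assert broker.restore(outbound) == "Patient 123-45-6789 discharged."
```

The mapping between tokens and originals never leaves the organisation, so the cloud provider only ever sees redacted data, addressing the automation question posed above.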

3.3.1.2 Traceability and Audit

As an organisational requirement, in order to monitor the financial consumption of
a rented or paid-for technology or service, the finance department needs to keep
track of the units of usage and audit trail. The cloud service providers themselves
provide this information on most occasions, but in the case of a dispute, it is impor-
tant to have an independent audit trail. Audit trails provide valuable information
about how an organisation’s employees are interacting with specific cloud services,
legitimately or otherwise.
The end-user organisation could consider a cloud service broker (CSB) solution
(such as CloudKick, CloudSwitch, Eucalyptus), as a means to create an indepen-
dent audit trail of its cloud service consumption. Once armed with his/her own
records of cloud service activity, the security engineer can confidently address any
concerns over billing or to verify employee activity. A CSB should provide report-
ing tools to allow organisations to actively monitor how services are being used.
There are multiple reasons why an organisation may want a record of cloud activity,
which leads us to discuss the issue of governance [19].
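An independent audit trail can be kept by wrapping every call to the provider in an organisation-side logger, as in the sketch below. `FakeCloudService` stands in for a real provider SDK, and the record format is an assumption:

```python
import json
import time

class AuditedCloudClient:
    """Wraps calls to a cloud service and keeps an independent, append-only
    record of every call, usable later to verify the provider's bill."""
    def __init__(self, service):
        self._service = service
        self.trail = []   # in practice, an append-only file or log service

    def call(self, operation, user, **kwargs):
        entry = {"ts": time.time(), "user": user, "op": operation, "args": kwargs}
        self.trail.append(json.dumps(entry))      # record before invoking
        return getattr(self._service, operation)(**kwargs)

class FakeCloudService:                           # stand-in for the real provider SDK
    def start_vm(self, size):
        return f"vm-{size}-started"

client = AuditedCloudClient(FakeCloudService())
client.call("start_vm", user="alice", size="small")
print(len(client.trail))                          # 1: one independently logged event
```

Because the trail is written before the provider is invoked and stored on the consumer's side, it can be compared against the provider's bill or used to verify employee activity in a dispute.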

3.3.2 Challenges in Case of SaaS

3.3.2.1 Governance: Applying Restrictions and Exit Strategy

Being a third-party service, cloud resources need to have controlled and accounted
access. Governance in cloud computing concerns an organisation's ability to prevent
rogue (or unauthorised) employees from misusing a service. For example, the organ-
isation may want to ensure that a user working in the marketing part of the application
can only access specific leads and cannot reach other restricted areas.
Another example is that an organisation may wish to control how many virtual
machines can be spun up by employees, and, indeed, that those same machines are
spun down later when they are no longer needed. So-called rogue cloud usage must
also be detected, so that the employees setting up their own accounts for using a
cloud service are detected and brought under an appropriate governance umbrella.
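A governance rule such as a per-employee VM quota can be enforced on the organisation side before any request reaches the provider. The sketch below is a minimal, hypothetical policy check, not a real governance product:

```python
class GovernancePolicy:
    """Organisation-side check applied before any provisioning request is
    forwarded to the cloud provider (quota and user list are made up)."""
    def __init__(self, max_vms_per_user, approved_users):
        self.max_vms = max_vms_per_user
        self.approved = set(approved_users)
        self.running = {}          # user -> number of VMs currently spun up

    def request_vm(self, user):
        if user not in self.approved:
            return "denied: rogue usage - account not registered"
        if self.running.get(user, 0) >= self.max_vms:
            return "denied: quota exceeded"
        self.running[user] = self.running.get(user, 0) + 1
        return "approved"

policy = GovernancePolicy(max_vms_per_user=2, approved_users={"bob"})
print(policy.request_vm("bob"))     # approved
print(policy.request_vm("bob"))     # approved
print(policy.request_vm("bob"))     # denied: quota exceeded
print(policy.request_vm("eve"))     # denied: rogue usage - account not registered
```

The same check point is also a natural place to record which employees attempted what, feeding the audit trail discussed earlier.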
Whilst cloud service providers offer varying degrees of cloud service monitor-
ing, an organisation should consider implementing its own cloud service gover-
nance framework. The need for this independent control is of particular benefit
when an organisation is using multiple SaaS providers, that is, HR services, ERP
and CRM systems. However, in such a scenario, the security engineers also need to
be aware that different cloud providers have different methods of accessing infor-
mation. They also have different security models on top of that.
That points to the solution provided by a cloud broker, which brokers the differ-
ent connections and essentially smooths over the differences between them. This
means organisations can use various services together but only have to interact
with a perfectly configured CSB application. In situations where a service is
relatively commoditised, like storage as a service, providers can be used
interchangeably.


This solves the issue of what to do if a cloud provider becomes unreliable or goes
down and means the organisation can spread the usage across different providers.
In fact, organisations should not have to get into the technical weeds of under-
standing or mediating between different interfaces. They should be able to move up
a level where they use the cloud for the benefit of saving money.

3.3.2.2 Data Encryption

As discussed earlier, when moving data onto a third-party infrastructure, secrecy
can be one of the factors for security. This applies to storage infrastructure services
as well. Most companies are now moving to cloud-based storage solutions, and this
calls for an important aspect of secrecy: encryption. Encryption can be handled in
many ways. It must also be noted that encrypting data is a CPU-intensive process.
Many organisations prefer to handle encryption in-house; that is, they prefer to
generate their own keys and decide on a particular encryption algorithm to further
increase confidentiality. Cloud storage providers also provide encryption facilities
at the consumer end, with unique and dynamically generated consumer-specific
encryption keys. The latest trends suggest that organisations are making use of
CSBs to accomplish this task. It is interesting to note that many organisations prefer
providers whose data centres are accessible and offer better traceability over those
where it is difficult to track the data being sent onto the cloud.

3.3.3 Challenges Relating to SaaS, PaaS, IaaS

3.3.3.1 Using API Keys

Many cloud services are accessed using simple REST [20] Web services interfaces.
These are commonly called “APIs”, since they are similar in concept to the more
heavyweight C++ or Java APIs used by programmers, though they are much easier
to leverage from a Web page or from a mobile phone, hence their increasing ubiquity.
In order to access these services, an API key is used. These are similar in some ways
to passwords. They allow organisations to access the cloud provider. For example, if
an organisation is using a SaaS offering, it will often be provided with an API key.
This is one security measure employed by the provider to increase accountability;
that is, if something goes wrong, it can be easily tracked, as every running applica-
tion instance would have a unique API key (associated with a particular user cre-
dential) and the source application that caused the mistake would also bear an API
key. Hence, a legitimate application can only be misused through misuse of its API
keys, and it becomes important to protect them.
Consider the example of Google Apps. If an organisation wishes to enable single
sign-on to their Google Apps (so that their users can access their email without
having to log in a second time), then this access is via API keys. If these keys were
to be stolen, then an attacker would have access to the email of every person in that
organisation.
The casual use and sharing of API keys is an accident waiting to happen. Protection
of API keys can be performed by encrypting them when they are stored on the file
system, by storing them within a hardware security module (HSM) or by employing
more sophisticated security systems such as Kerberos [21] to monitor single sign-on.
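A minimal precaution is to keep API keys out of source code and logs altogether. The sketch below loads the key from an environment variable, fails fast when it is absent and masks it before display; the variable name and key value are made up for the example:

```python
import os

def load_api_key(var="CLOUD_API_KEY"):
    """Fetch the key from the environment so it never appears in source
    code or version control; fail fast if it is missing."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"{var} is not set; refusing to start")
    return key

def masked(key):
    """Never write the full key to logs; show only the last 4 characters."""
    return "*" * (len(key) - 4) + key[-4:]

os.environ["CLOUD_API_KEY"] = "sk-test-abcd1234"   # simulated for the example
key = load_api_key()
print(masked(key))                                  # ************1234
```

This complements, rather than replaces, the stronger measures mentioned above, such as encrypting stored keys or using an HSM.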

3.4 Challenges for Application Developers

An application developer comes into the picture in service models where the organ-
isation has control over applications and computing resources. Hence, this perspec-
tive mainly applies to PaaS, where application development takes place on a particular
third-party cloud platform, and to IaaS, where the choice of platform lies with the
organisation and the developer writes applications on the chosen platform. The following
are a few challenges currently faced by programmers and application developers in
developing applications on cloud platforms:

3.4.1 Lack of Standardisation

Cloud is still in its very early stages of development. There has been a surge in
enterprises adopting cloud technologies, but on the other hand, the technology has
not emerged enough to handle issues with this surge. The growth in different indus-
tries has been very self-centred, that is, the cloud providers have developed their
own APIs, virtualisation techniques, management techniques, etc. From a develop-
er’s perspective, every cloud provider supports different programming language and
syntax requirement, though most of them expose hash-based data interfaces or more
commonly JSON or Xml. This needs immediate attention, and steps must be taken
to standardise interfaces and programming methods. In case of conventional coun-
terpart, an application developed in PERL or PHP works fine when the application
is moved from one host to another or when there is a change in operations system.
Considerable developmental efforts are required in order to move from one cloud
provider to another which in turn implies that the cost of migration is significantly
high. History has shown us that languages like SQL and C were standardised to stop
undesired versions and proliferation.

3.4.2 Lack of Additional Programming Support

One of the key characteristics of good Web applications is that they are highly avail-
able. For this to be possible in a cloud application, it must be able to replicate and
mirror itself dynamically on machines across the cloud with ease. Once this is done,
the load-balancing servers can serve these applications on demand, increasing
availability and reducing delays, that is, decreasing latency. As most of
the cloud platform providers employ a multi-tenancy model, servicing hundreds of
applications forces them to automate the task of mirroring and replication. To
achieve this seamlessly, the application must use very little or no state information.
State variables include transactional or server variables, static variables and vari-
ables present in the framework of the whole application. These variables are always
available in a traditional environment, as there is a static application server and
memory where they can be stored and accessed, but they are very hard to come by
in a cloud environment. One way of handling this situation is to use a datastore or
cache store. Restrictions on installing third-party libraries and limited or no write
access to file systems hinder an application's ability to store state information and
hence force an organisation to use the provider's datastore service, which comes at
a price.
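Designing for little or no in-process state can be illustrated as follows: session data lives in a shared datastore keyed by session id, so any replica can serve any request. The dictionary here stands in for a provider's datastore or cache service:

```python
# State lives in a shared datastore, not in the application instance, so any
# replica behind the load balancer can serve any request.
shared_store = {}    # stand-in for the provider's datastore/cache service

class StatelessHandler:
    """No instance variables hold session data; every request reads and
    writes the external store keyed by session id."""
    def handle(self, session_id, item):
        cart = shared_store.get(session_id, [])
        cart.append(item)
        shared_store[session_id] = cart
        return cart

# Two replicas, as the cloud might spin up; either can continue the session.
replica_a, replica_b = StatelessHandler(), StatelessHandler()
replica_a.handle("sess-42", "book")
print(replica_b.handle("sess-42", "pen"))   # ['book', 'pen']
```

Because neither replica holds the cart itself, the platform can mirror, replace or scale the handlers freely without losing session data, which is exactly what automated replication requires.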

3.4.3 Metrics and Best Practices

Cloud follows a pay-as-you-use policy; hence, consumers pay for almost every bit
of CPU usage. This requires the provider to present appropriate metrics on proces-
sor and memory usage. A profile of the application, with the skeleton of classes or
functions and their corresponding execution time, memory used and processing
power utilised, will help the developer tune the code to optimise the use of available
processing power, for example, by choosing a different data structure or algorithm
with lower time and space complexity.
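In the absence of provider-supplied metrics, a developer can approximate such a profile with a simple timing decorator, as sketched below; real tuning would also track memory, which this example omits:

```python
import time
from functools import wraps

profile = {}   # function name -> (cumulative seconds, call count)

def metered(fn):
    """Record execution time per function so the developer can see which
    code paths dominate a usage-based bill."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            secs, calls = profile.get(fn.__name__, (0.0, 0))
            profile[fn.__name__] = (secs + time.perf_counter() - start, calls + 1)
    return wrapper

@metered
def linear_lookup(data, target):          # O(n): a candidate for tuning
    return target in data

@metered
def set_lookup(data, target):             # O(1): a cheaper data structure
    return target in data

items = list(range(100_000))
linear_lookup(items, 99_999)
set_lookup(set(items), 99_999)
for name, (secs, calls) in profile.items():
    print(f"{name}: {calls} call(s), {secs:.6f}s")
```

Comparing the two entries in the resulting profile makes the data-structure choice above measurable in the same units the provider bills for: processing time.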
One solution to this concern can be provided by the cloud host: abstracting fre-
quently used code patterns into optimal default libraries, as the cloud provider can
easily employ optimisation techniques suited to the underlying hardware and oper-
ating system. This assures the developer that a piece of code employs optimal
techniques to produce the desired effect. As an example, Apache Pig [22] gives a
scripting-like interface to Apache Hadoop's [23] HDFS for analysing large-scale
datasets.
In the end, the summary of cloud service models and their providers, properties,
access to resources and key challenges can be tabulated as in Table 3.1.

3.5 Conclusion

Cloud applications certainly have taken the IT industry to a new high, but like every
other technology, they fall short in a few respects. In the quest to exploit the bene-
fits of cloud applications, an inevitable trail of challenges has followed them all
along. The challenges in employing cloud services are discussed in this chapter.
The security challenges which are specific to a type of service, that is, the type
of service model, are also described. With emerging trends in cloud-based applica-
tions development, the time has come to actually take a look at the pitfalls and
address them. The chapter has given an insight into how these challenges can be
overcome.
The biggest concern of all turns out to be security, which needs serious attention.
The overall conclusion is that cloud computing is in general prepared to success-
fully host most typical Web applications with added benefits such as cost savings,
but applications with the following properties need more careful study before their
deployment:
• Have strict latency or other network performance requirements.
• Require working with large datasets.
• Have critical availability needs.
As a developer, one would like to see much advancement in terms of the devel-
opmental tool kit and the standardisation of APIs across various cloud development
platforms in the near future. This would also ease the transition from traditional
applications to the cloud-based environment, as the intellectual investment required
to bring about the transition is lower, and more developers can move from tradi-
tional application development to the cloud.

References

1. Mell, P., Grance, T.: The NIST Definition of Cloud Computing. NIST Special Publication 800-145,
September 2011
2. Buyya, R., Yeo, C.S., Venugopal, S.: Market-oriented cloud computing: vision, hype, and
reality for delivering it services as computing utilities. In: High Performance Computing
and Communications, 2008, HPCC ’08, Dalian, China. 10th IEEE International Conference,
pp. 5–13 (2008)
3. Gong, C., et al.: The characteristics of cloud computing. In: 2010 39th International Conference
on Parallel Processing Workshops, San Diego
4. Zhang, Q., Cheng, L., Boutaba, R.: Cloud computing: State-of-the-art and research challenges.
J. Internet Serv. Appl. 1(1), 7–18 (2010)
5. Cloud Computing: What is infrastructure as a service. http://technet.microsoft.com/en-us/
magazine/hh509051.aspx
6. Wang, L., Tao, J., Kunze, M., Castellanos, A.C., Kramer, D., Karl, W.: Scientific cloud com-
puting: early definition and experience. 10th IEEE Int. Conf. High Perform. Comput. Commun.
9(3), 825–830 (2008)
7. Ramgovind, S., Eloff, M.M., Smith, E.: The management of security in cloud computing. In:
PROC 2010 I.E. International Conference on Cloud Computing, Indianapolis, USA (2010)
8. API and usage documentation for developer using Rackspace service. http://docs.rackspace.com
9. Wu, R., Ahn, G., Hongxin Hu, Singhal M.: Information flow control in cloud computing. In:
Collaborative Computing: Networking, Applications and Worksharing (CollaborateCom),
2010 6th International Conference, Brisbane, Australia, pp. 17. IEEE (2010)
10. Shimba, F.: Cloud computing: strategies for cloud computing adoption. Masters Dissertation,
Dublin Institute of Technology (2010)
11. About STAR: https://cloudsecurityalliance.org/star/faq/
12. Description of standard service by TripWire. http://www.tripwire.com/services/standard/
13. Description of Custom service by TripWire. http://www.tripwire.com/services/custom/
14. Ko, R.K.L., Jagadpramana, P., Mowbray, M., Pearson, S., Kirchberg, M., Liang, Lee, B.S., HP
Laboratories: TrustCloud: a framework for accountability and trust in cloud computing. http://
www.hpl.hp.com/techreports/2011/HPL-2011-38.pdf
15. Minnear, R.: Latency: The Achilles Heel of cloud computing, 9 March 2011. Cloud Expo:
Article, Cloud Comput. J. http://cloudcomputing.sys-con.com/node/1745523 (2011)
16. Kuyoro, S.O., Ibikunle, F., Awodele, O.: Cloud computing security issues and challenges. Int.
J. Comput. Netw. 3(5) (2011)
17. FedRAMP: U.S General Services Administration Initiative. http://www.gsa.gov/portal/
category/102371
18. Security Guidance for Critical Areas of Focus in Cloud Computing V2.1, Prepared by CSA
2009. https://cloudsecurityalliance.org/csaguide.pdf
19. Weixiang, S., et al.: Cloud service broker, March 2012. http://tools.ietf.org/pdf/draft-shao-
opsawg-cloud-service-broker-03.pdf (2012)
20. Tyagi, S.: RESTful web service, August 2006. http://www.oracle.com/technetwork/articles/
javase/index-137171.html (2006)
21. Kerberos in the Cloud: Use Case Scenarios. https://www.oasis-open.org/committees/down-
load.php/38245/Kerberos-Cloud-use-cases-11june2010.pdf
22. Apache PIG: http://pig.apache.org/
23. Apache Hadoop: http://hadoop.apache.org/
24. SAAS, PAAS and IAAS – Making Cloud Computing Less Cloudy. http://cioresearchcenter.
com/2010/12/107/
Part II
Software Development Life Cycle
for Cloud Platform
Chapter 4
Impact of Cloud Services on Software
Development Life Cycle

Radha Krishna and R. Jayakrishnan

Abstract Cloud computing provides a natural extension to service-oriented
architecture (SOA) and the World Wide Web. It leads to a complete paradigm shift
in a number of areas such as software development, deployment, IT usage, and
software services industry. Among these areas, the impact on software development
life cycle needs special attention as they form a pivotal part in the cloud assessment
and migration. In this context, some key aspects include (a) implications of cloud-
based (public cloud based) solution on the privacy requirements, (b) implications of
cloud-based solution on testing services and project testing methodology, and (c)
implications of cloud-based solution of configuration management. In this chapter,
we propose to address the impacts, strategies, and best practices to minimize the
negative effects of these implications. The chapter discusses variations to software
development life cycle and related processes with respect to private cloud, public
cloud, and hybrid cloud models. These variations are analyzed based on the usage
pattern of each cloud-based solution, especially with respect to requirement analy-
sis, architecture and design, software construction, testing, and rollout. Relevant
processes such as project management, configuration management, and release
management are also discussed. The chapter concludes with a summary of various
cloud usage patterns and their impact on each of the software development life cycle
stages. These usage patterns and the impacts are generalized and can form the back-
bone of an enterprise cloud application development methodology.

Keywords Software development life cycle • Usage patterns • Design for failure
• Design for parallelism • Information architecture • Private cloud • Public cloud

R. Krishna (*) • R. Jayakrishnan
Infosys Ltd., 4252, 65th Cross, Kumaraswamy Layout II Stage, Bangalore 560078, India
e-mail: [email protected]; [email protected]

Z. Mahmood and S. Saeed (eds.), Software Engineering Frameworks for the Cloud
Computing Paradigm, Computer Communications and Networks,
DOI 10.1007/978-1-4471-5031-2_4, © Springer-Verlag London 2013

4.1 Introduction

It is generally agreed that the evolution of a new paradigm requires adaptation in usage
patterns and associated functional areas to fully benefit from the paradigm shift [1].
Likewise, to leverage the benefits of the cloud paradigm shift in the software segment,
the software development life cycle (SDLC) must continuously adopt new changes if it is
to remain the guideline for the development and implementation of cloud-based projects.
User communities, such as management professionals, academicians and researchers, and
software engineers, are keen to understand the current state of the SDLC and the changes
it undergoes while adapting to this paradigm shift. This chapter mainly describes the
changes required in the SDLC (as part of the software engineering process) when
adopting the cloud computing environment. An SDLC
typically comprises the following phases:
• Requirements
• Architecture
• Design
• Implementation
• Testing
• Production
• Support and Maintenance
To truly benefit from the cloud environment, software development teams should
treat cloud computing as a new development paradigm and leverage it to deliver
differentiated value. The rest of the chapter explains how to position the
application development process to take advantage of the distributed nature of
the cloud environment.

4.2 Requirement Analysis

The industry, in general, tends to think of cloud as an enabler, or rather a solution,
and hence believes that it has no bearing on requirements. The truth is that cloud is
more of a choice at the enterprise level, and the fitness of that choice is an important
aspect of the analysis phase. Along with the choice, guidelines and checklists that aid
requirement analysis are also needed if applications moving to the cloud are to be
successful. Requirement analysis needs to address this assessment; the relevant
requirements are mostly non-functional in nature.
This implies the following additional tasks that need to be planned as part of
requirement analysis:
• Cloud assessment
• Cloud usage pattern identification and capturing data points to support requirement
analysis based on usage patterns
4 Impact of Cloud Services on Software Development Life Cycle 81

4.3 Cloud Assessment

A cloud readiness assessment helps to evaluate cloud readiness and applicability
for an enterprise. It also helps to determine the business case and the return on
investment. Typical assessment questions are listed below for reference; note that
this list is not exhaustive:
• Does the cloud architecture fit the requirements of the application?
• How interconnected is this application with other applications in the enterprise? For
a public cloud, can these interfaces be exposed for access from external networks?
• Is the enterprise comfortable with a public cloud, or should it focus only on the
private cloud option?
• Which cloud service provider (IaaS/PaaS [2]) and which specific vendor in that
category are suitable?
• What is the strategy for adopting cloud in future projects?
• What is the cost of using cloud (private or public)? Compare the capital expense
of the hosted option with the running cost of the cloud option.
• How will applications be monitored after they are hosted on a public cloud?
It is important to note that cloud assessment guidelines are defined at the enterprise
level. The enterprise can optionally create tools to help projects and new initiatives
perform cloud assessment.
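Such enterprise-level assessment tooling can be as simple as a weighted questionnaire. The sketch below is illustrative only: the questions are condensed from the list above, while the weights and the readiness threshold are assumptions, not part of any standard.

```python
# Illustrative cloud-readiness scoring sketch. The questions, weights,
# and the 0.6 threshold are assumptions for demonstration only.

ASSESSMENT = [
    ("Does cloud architecture fit the application's requirements?", 3),
    ("Can interfaces to other enterprise applications be exposed externally?", 2),
    ("Is the enterprise comfortable with a public cloud option?", 2),
    ("Is the running cost of cloud lower than the hosted capital expense?", 3),
    ("Can the application be monitored once hosted on a public cloud?", 2),
]

def readiness_score(answers):
    """answers: dict mapping question text -> bool. Returns a score in [0, 1]."""
    total = sum(weight for _, weight in ASSESSMENT)
    earned = sum(weight for question, weight in ASSESSMENT if answers.get(question))
    return earned / total

def is_cloud_ready(answers, threshold=0.6):
    return readiness_score(answers) >= threshold

if __name__ == "__main__":
    answers = {q: True for q, _ in ASSESSMENT[:4]}  # first four answered "yes"
    print(round(readiness_score(answers), 2))       # 10 of 12 points -> 0.83
    print(is_cloud_ready(answers))                  # True
```

In practice the question list and weights would be maintained at the enterprise level, so that every project runs the same assessment.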

4.4 Usage Patterns and Requirements Capture

Below we present a list of common usage patterns [3] and a corresponding
requirement-capturing questionnaire that helps to arrive at the workload of an
application and decide on its readiness for a cloud-based architecture.

4.4.1 Constant Usage of Cloud Resources over Time

This pattern applies to both internal and external applications (e.g., Web sites)
that are constantly used by enterprise or external users, with little variance in
load and usage. Requirement analysis should detail the following information:
• Availability of the application at regular intervals
• The strategy for application downtime and uptime
• The scripts needed to make the application available at the required points in time
• The acceptable limit of data loss in case of an application crash
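The downtime/uptime strategy and the availability scripts mentioned above often reduce to a scripted health probe with retries. A minimal sketch follows; the probe interface and retry policy are illustrative assumptions, not a vendor API:

```python
import time

def is_available(probe, retries=3, delay=0.0):
    """Return True if the application responds to `probe` (a callable
    returning True/False) within the allowed number of retries.
    In practice `probe` would wrap an HTTP health-check endpoint;
    here it is injected so the logic stays testable."""
    for attempt in range(retries):
        try:
            if probe():
                return True
        except Exception:
            pass  # treat probe errors as "not available yet"
        time.sleep(delay)
    return False
```

Such a script would typically run on a schedule matching the agreed availability intervals and trigger an alert or restart when `is_available` returns False.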

4.4.2 Cyclic Internal Load

This pattern applies to recurrent business functionality such as batch jobs that
execute at end of day and data processing applications.
• Detail the I/O volume required to satisfy the business process (the costing of a
cloud solution is very I/O sensitive).

4.4.3 Cyclic External Load

This pattern includes applications developed to serve a particular demand, such as
publishing examination results, election campaigns, or entertainment-related sites.
• Detail the level of concurrency required across time periods and hence the amount
of parallelism that can be applied to improve the performance of the system.

4.4.4 Spiked Internal Load

This pattern applies to one-time jobs executed for processing at a given point in time.
• Identify the number of concurrent users accessing the system.
• Identify the volume of data required to process the business functionality.
• Detail the network bandwidth and the expected delay in response while processing
heavy-load business functionality.
• Analyze the variety of data used in day-to-day business.
• Define the set of business functionalities and business components that can execute
side by side.
• Identify the reusability of components.
• Identify different failure scenarios and their respective handling mechanisms.
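A common handling mechanism for such failure scenarios, in the spirit of the "design for failure" theme of this chapter, is retry with exponential backoff. A minimal sketch follows; the attempt count and delay values are illustrative assumptions:

```python
import time

def with_retry(operation, max_attempts=4, base_delay=0.01):
    """Call `operation`; on failure wait base_delay * 2**attempt and retry.
    Raises the last exception once max_attempts is exhausted."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

Wrapping calls to flaky downstream services this way keeps a spiked load from cascading one transient failure into a full job failure.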

4.4.5 Spiked External Load

This pattern applies to applications that should be able to handle a sudden load
coming from external sources, for example, customers, vendors, or public users.
• Define the limits of independent access to the application.
• Identify and analyze country-level regulations relevant to handling the load.
• Identify industry-specific regulations for handling the load.
• Identify institution-specific fragility and capacity challenges.

4.4.6 Steady Growth over Time

This pattern usually applies to a mature application or Web site in which growth and
resources are tracked as additional users are added.
• Cost of maintaining the application on the cloud
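The usage patterns above can also be detected from historical load samples when capturing requirements. A rough sketch follows, using the coefficient of variation and a simple trend check; the thresholds are assumptions for illustration, not established cutoffs:

```python
def classify_load(samples):
    """Tag a series of load samples as 'constant', 'cyclic/spiked',
    or 'growing'. Thresholds are illustrative assumptions only."""
    mean = sum(samples) / len(samples)
    variance = sum((s - mean) ** 2 for s in samples) / len(samples)
    cv = (variance ** 0.5) / mean if mean else 0.0  # coefficient of variation
    # Simple trend: compare averages of the first and second halves.
    half = len(samples) // 2
    trend = (sum(samples[half:]) / (len(samples) - half)
             - sum(samples[:half]) / half)
    if cv < 0.1:
        return "constant"
    if trend > mean * 0.25:
        return "growing"
    return "cyclic/spiked"
```

For example, `classify_load([100] * 8)` tags a flat series as "constant", while an alternating series such as `[10, 100] * 4` is tagged "cyclic/spiked". A real assessment would use finer-grained metrics (concurrency, I/O volume, seasonality), but even this rough tagging supports the questionnaire above.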

4.5 Architecture

In general, software that is to be deployed in cloud environments should be
architected differently from on-premise hosted/deployed applications. Cloud
computing as a development environment for the distributed model has led to the
emergence of a variety of design and architecture principles. The new architecture
paradigm requires a shift in thinking toward horizontally scaled-out architectures,
built from a large number of smaller components that are loosely coupled and easy
to deploy in distributed environments.
Cloud computing solutions should operate on a network capable of handling massive
data transactions. Software development teams should be aware that, apart from
general architecture and design principles, special skills are needed to handle
solutions with high I/O volume and velocity, and architects should develop strategic
and competitive skills to leverage the services provided by distributed environment
vendors.
With every increase in clients' demand for quality software, enterprises must
produce software that can be adapted to new environments without degrading the
existing quality-of-service parameters of the application. To take advantage of the
distributed environment while developing cloud-based applications, a couple of
changes and additions have been identified that are critical in determining whether
the architecture can scale to exploit the scalable infrastructure available in a
distributed environment. For example, architects should start architecting and
designing applications that support multi-tenancy, concurrency management,
de-normalized, partitioned, and shared-nothing data, asynchronous and parallel
processing, service-oriented composition supporting RESTful services [15], etc.
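The "design for parallelism" point can be sketched concretely: independent, shared-nothing units of work fanned out over a worker pool. This is a simplified illustration; in a real cloud architecture the units would be distributed across nodes or services rather than local threads:

```python
from concurrent.futures import ThreadPoolExecutor

def process_record(record):
    """Stand-in for an independent, shared-nothing unit of work."""
    return record * record

def process_in_parallel(records, workers=4):
    # Each record is handled independently, with no shared mutable state,
    # so the work scales horizontally -- the same property a cloud
    # architecture needs when adding instances under load.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_record, records))
```

The design choice that matters here is not the thread pool itself but the shared-nothing shape of `process_record`: any unit of work with that shape can be relocated to another node without coordination.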

4.6 Information Architecture

As the world grows more connected every day, data plays a vital role in software
applications. The key to building the information architecture is to align
information closely with the business process while availing the features of the
cloud environment. This enables all stakeholders, such as business leaders, vendors,
consumers, service providers, and users, to evaluate, reconcile, and prioritize the
information vision and the related road map. The information architecture should
ensure that great care is taken in defining the development approach, so that the
right decisions are made in the development and execution of an application.
Understanding the key considerations of data architecture in a distributed
environment, and the trade-offs of the technology and architecture choices made in
cloud environments, is essential for good information architecture. For example,
decisions such as data sharing are crucial when defining data services. This topic
mainly describes the different varieties of data (relational, geospatial,
unstructured, etc.) and the different classifications and compliance requirements
of data (internal and external).
The information architecture provides relevant concepts, frameworks, and services
to access information in a unique, consistent, and integrated way, adopting new
cutting-edge technology and guaranteeing responsive and trustworthy information.
The following are the core decision points of information architecture:
• Access Information: Information services should provide unconstrained access
to the right users at the right time.
• Reusable Services: Facilitate the discovery, selection, and reuse of services and
encourage uniformity of services.
• Information Governance: Provide proper utilities to support an efficient
information governance strategy.
• Standards: Define a set of standards for information where technology will support
process simplification.

4.7 Information Security

Security is one of the most important non-functional requirements demanded by clients.
Information security plays a vital role in distributed environments when defining
the information architecture. The level of security applied depends on the type of
information (Fig. 4.1).
In general, information is classified into four main categories, as defined in
Table 4.1.
Authentication, authorization, and data protection are the different mechanisms for
implementing security that every system should adopt. These security mechanisms need
to be applied to information in its various forms and states, such as:
• Information at Rest
• Information in Transit
• Transient Information
• Information CRUD
Table 4.2 provides information on strategies for the various information
categorizations and security options.
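One way to make such a strategy table executable is a simple policy lookup. The mapping below is a hypothetical stand-in in the spirit of Tables 4.2 and 4.3, not the authors' actual tables:

```python
# Hypothetical mapping of information category to required security
# controls. The exact policy is an assumption for illustration only.
POLICY = {
    "public":       set(),
    "private":      {"authentication"},
    "confidential": {"authentication", "authorization"},
    "secret":       {"authentication", "authorization", "encryption"},
}

def required_controls(category, in_transit=False):
    """Return the set of controls a system must apply to information of
    the given category, optionally adding protection for data in transit."""
    controls = set(POLICY[category])
    if in_transit and category != "public":
        controls.add("transport encryption")  # protect information in transit
    return controls
```

Encoding the policy this way lets the same table drive enforcement checks at every layer instead of living only in a design document.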
Information architecture should help the system segregate information into the
above-mentioned categories, and each category poses its own classification
challenges for the different security mechanisms, as defined in Table 4.3.

Fig. 4.1 Levels of information security
Table 4.1 Information categories

Public: Data available for the general public.
  For example: Intranet information; org chart; list of employees; source code
  for in-house/utility modules; annual reports; share price.
Private: Data private to the organization.
  For example: Customer information; organization policy and would-be changes;
  source code for business-critical modules.
Confidential: Data to be disclosed on a need-to-know basis after approval from
the owner.
  For example: Password; PIN number; SSN; credit card number, authorization code,
  and expiry date combination; account number, last 4 digits of SSN, and birth
  date combination.
Secret: Data never disclosed; can be seen only by the owner of the data.
  For example: Source code for decryption.

Table 4.3 should be understood with the following in mind:


• Authenticating the access to public information is optional.
• Authenticating the access to information that is private to the organization is
mandatory, and