Software Engineering Frameworks for the Cloud Computing Paradigm
Zaigham Mahmood, Saqib Saeed (Editors)
Computer Communications and Networks
Emphasis is placed on clear and explanatory styles that support a tutorial approach, so that
even the most complex of topics is presented in a lucid and intelligible manner.
Editors
Zaigham Mahmood, School of Computing and Mathematics, University of Derby, Derby, UK
Saqib Saeed, Department of Computer Sciences, Bahria University, Islamabad, Pakistan
Series Editor
A.J. Sammes
Centre for Forensic Computing
Cranfield University
Shrivenham Campus, Swindon, UK
ISSN 1617-7975
ISBN 978-1-4471-5030-5
ISBN 978-1-4471-5031-2 (eBook)
DOI 10.1007/978-1-4471-5031-2
Springer London Heidelberg New York Dordrecht
Dr Zaigham Mahmood
Dr Saqib Saeed
Overview
Objectives
The aim of this book is to present current research and development in the field of
software engineering as relevant to the cloud paradigm. The key objectives for this
book include:
• Capturing the state of the art in software engineering approaches for developing
cloud-suitable applications
• Providing relevant theoretical frameworks, practical approaches and current and
future research directions
• Providing guidance and best practices for students and practitioners of cloud-based
application architecture
• Advancing the understanding of the field of software engineering as relevant to
the emerging new paradigm of cloud computing
Organisation
• Part IV: Performance of Cloud-Based Software Applications. This section consists of two chapters that focus on the efficiency and performance of cloud-based applications. One chapter discusses effective practices for cloud-based software engineering, and the other presents a framework for identifying relationships between application performance factors.
Target Audience
Software Engineering Frameworks for the Cloud Computing Paradigm has been developed
to support a number of potential audiences, including the following:
• Software engineers and application developers who wish to adapt to newer
approaches to building software that is more suitable for virtualised and multi-
tenant distributed environments
• IT infrastructure managers and project leaders who need to clearly understand the requirement for newer methodologies in the context of the cloud paradigm and appreciate the issues of developing cloud-based applications
• Students and university lecturers of software engineering who have an interest in
further developing their expertise and enhancing their knowledge of the cloud-
relevant tools and techniques to architect cloud-friendly software
• Researchers in the fields of software engineering and cloud computing who wish
to further increase their understanding of the current practices, methodologies
and frameworks
Suggested Uses
Acknowledgements
The editors acknowledge the help and support of the following colleagues during
the review and editing phases of this book:
• Dr Wasif Afzal, Bahria University, Islamabad, Pakistan
• Dr Daniel Crichton, Jet Propulsion Laboratory, California Institute of Technology, USA
• Dr Ashraf Darwish, Helwan University, Cairo, Egypt
• Dr Shehzad Khalid, Bahria University, Islamabad, Pakistan
• Prof Francisco Milton Mendes, Rural Federal University of the Semi-Arid, Brazil
• Prof Mahmood Niazi, King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia
• Dr S. Parthasarathy, Thiagarajar College of Engineering, Madurai, India
• Dr Pethuru Raj, Wipro Technologies, Bangalore, India
• Dr Muthu Ramachandran, Leeds Metropolitan University, Leeds, UK
• Dr C. R. Rene Robin, Jerusalem College of Engineering, Chennai, India
• Dr Lucio Agostinho Rocha, State University of Campinas, Brazil
• Dr Lloyd G. Waller, University of the West Indies, Kingston, Jamaica
• Dr Fareeha Zafar, GC University, Lahore, Pakistan
The editors would also like to thank the contributors to this book: the 26 authors
and co-authors, from academia as well as industry around the world, who
collectively submitted 15 chapters. Without their efforts in developing quality
contributions, conforming to the guidelines and often meeting strict deadlines,
this text would not have been possible.
Grateful thanks are also due to our family members for their support and
understanding.
1 Impact of Semantic Web and Cloud Computing Platform on Software Engineering
Radha Guha
Abstract Tim Berners-Lee's vision of the Semantic Web, or Web 3.0, is to transform the World Wide Web into an intelligent Web system of structured, linked data which can be queried and inferred over as a whole by computers themselves. This grand vision of the Web is giving rise to many innovative uses of the Web. New business models, such as interoperable applications hosted on the Web as services, are being implemented. These Web services are designed to be automatically discovered by software agents and to exchange data among themselves. Another business model is the cloud computing platform, where hardware, software, tools, and applications are leased out as services to tenants across the globe over the Internet. This business model offers tenants many advantages, such as no capital expenditure, speed of application deployment, shorter time to market, lower cost of operation, and easier maintenance of resources. Because of these advantages, cloud computing may become the prevalent computing platform of the future. To realize all the advantages of these new business models of a distributed, shared, and self-provisioning environment of Web services and cloud computing resources, the traditional way of software engineering has to change as well. This chapter analyzes how cloud computing, against the background of the Semantic Web, is going to impact the software engineering processes used to develop quality software. The changes needed in software development and deployment framework activities to facilitate adoption of the cloud computing platform are also analyzed.
R. Guha (*)
ECE Department, PESIT, Feet Ring Road, BSK III Stage,
560085, Bangalore, India
e-mail: [email protected]
Z. Mahmood and S. Saeed (eds.), Software Engineering Frameworks for the Cloud
Computing Paradigm, Computer Communications and Networks,
DOI 10.1007/978-1-4471-5031-2_1, © Springer-Verlag London 2013
1.1 Introduction
Since the inception of the World Wide Web (WWW) in 1990 by Tim Berners-Lee, a large warehouse of documents has accumulated on the WWW, and the number of documents is growing very rapidly. But unless the information from these documents can be aggregated and inferred over quickly, they do not have much use. Human readers cannot read and make decisions quickly from the large number of mostly irrelevant documents retrieved by the old search engines based on keyword searches. Thus, Tim Berners-Lee's vision is to transform this World Wide Web into an intelligent Web system, or Semantic Web [1–8], which will allow concept searches rather than keyword searches. First, Semantic Web or Web 3.0 technologies will transform disconnected text documents on the Web into a global database of structured, linked data. These large volumes of linked data in global databases will no longer be only for human consumption but for quick machine processing. Just as a relational database system can answer a query by filtering out unnecessary data, Semantic Web technologies will filter out information from the global database. This capability requires assigning globally accepted, explicitly defined semantics to the data in the Web for linking. These linked data in the global database will then allow software agents to collectively produce intelligent information on behalf of human users, and the full potential of the Web can be realized.
Anticipating this transition of the Web, where data integration, inference, and data exchange between heterogeneous applications will be possible, new business models of application deployment and delivery over the Internet have been conceptualized. Applications can be hosted on the Web and accessed via the Internet by geographically dispersed clients. These XML (eXtensible Markup Language)-based, interoperable applications are called Web services; they can publish their location, functions, messages containing the parameter lists to execute the functions, and the communication protocols for accessing the service, so that the service can be used correctly by all. As the same service will be catered to multiple clients, it can even be customized according to clients' preferences. Application architecture and delivery architecture will be two separate layers for these Web applications, to provide this flexibility. XML-based Web 2.0 and Web 3.0 standards like Service-Oriented Architecture (SOA), Simple Object Access Protocol (SOAP), Web Service Description Language (WSDL), and the Universal Description, Discovery and Integration (UDDI) registry are designed to discover Web services on the fly and to integrate applications developed on heterogeneous computing platforms and operating systems and with a variety of programming languages. Applications like Hadoop and Mashup [9, 10] can combine data and functionality from multiple external sources hosted as Web services, producing valuable new aggregate information and creating new Web services. Hadoop and Mashup can support high-performance computing involving a distributed file system with petabytes of data and parallel processing on hundreds to thousands of computers.
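The divide-and-aggregate processing style that Hadoop popularized can be illustrated with a minimal word-count sketch. This is plain Python, not the Hadoop API; the function names, the sample chunks, and the sequential execution are illustrative only:

```python
from collections import defaultdict

def map_phase(document):
    # Emit a (word, 1) pair for every word in one document chunk.
    return [(word.lower(), 1) for word in document.split()]

def reduce_phase(pairs):
    # Aggregate counts for each word across all mapped chunks.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

# In Hadoop, each chunk would be mapped on a different node in parallel;
# here the chunks are processed sequentially for illustration.
chunks = ["the web of data", "the semantic web"]
mapped = [pair for chunk in chunks for pair in map_phase(chunk)]
word_counts = reduce_phase(mapped)
print(word_counts["web"])  # → 2
```

The point of the split is that the map phase is embarrassingly parallel over chunks of a distributed file, while the reduce phase aggregates the intermediate pairs; this is what lets such systems scale to thousands of machines.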
In another business model, the application development infrastructure, such as processors, storage, memory, operating systems, and application development tools and software, can all be delivered as a utility to clients over the Internet. This is what is dubbed cloud computing, where a huge pool of physical resources hosted on the Web is shared by multiple clients as and when required. Because of the many benefits of this business model for clients, like no capital expenditure, speed of application deployment, shorter time to market, lower cost of operation, and easier maintenance of resources, cloud computing may be the prevalent computing platform of the future.
On the other hand, the economies of all developed countries depend on quality software, and software cost is now greater than hardware cost. Moreover, because of the involvement of many parties, software development is inherently a complex process, and most software projects fail because of a lack of communication and coordination between the parties involved. Knowledge management in software engineering has always been an issue that affects software development and maintenance. There is always some gap in understanding between what the business partners and stakeholders want, how software designers and managers design the modules, and how software developers implement the design. As time passes, this gap in understanding increases due to the complexity introduced by the involvement of many parties and the continuously changing requirements of the software. This is more so at the later stage, when the software has to be maintained and no one has comprehensive knowledge of the whole system.
Now, with the inclusion of Free/Libre/Open Source Software (FLOSS) [11] components, Web services, and the cloud computing platform, software development complexity is going to increase manifold because of the need for synchronization with third-party software and the increased communication and coordination complexity with cloud providers. The main thesis of this chapter is that the prevalent software process models should involve the cloud providers in every step of decision-making in the software development life cycle to make a software project a success. Also, software developers need to change their software artifacts from plain text documents to machine-readable structured linked data, to make them Semantic Web ready. With this semantic transformation, knowledge management in software engineering will be much easier, and compliance checking of various requirements during project planning, design, development, testing, and verification can be automated. Semantic artifacts will also give their product a competitive edge for automatic discovery and integration with other applications and for efficient maintenance of the artifacts.
This chapter explores how the Semantic Web can reduce software development work through automatic discovery of distributed open source software components. The Semantic Web techniques that need to be incorporated in software development artifacts to make them Semantic Web ready are also explored. Then, the chapter analyzes how the well-established software engineering process models have to adapt in order to realize the many advantages of the cloud computing business model. As the cloud provider is an external entity or third party, how difficult will the interactions with them be? How should the roles of software engineers and cloud providers be separated? On the whole, the cloud computing paradigm, against the Semantic Web background, makes software development projects more complex.
The World Wide Web was invented in 1990 by Tim Berners-Lee. Since then, the transformation of the Web has been marked by Web 1.0, Web 2.0, and Web 3.0 technologies. In Web 1.0, HTML (hypertext markup language) tags were added to plain text documents to display the documents in a specific way in Web browsers. Each document on the Web is a source of knowledge, or a resource. In the World Wide Web, with the hypertext transfer protocol (HTTP), if the URL (Uniform Resource Locator) of any Web site (document) is known, then that resource can be accessed or browsed over the Internet. The domain name service (DNS) registry was developed to discover the machine on the Internet which hosts a given Web page URL. With this capability, Web 1.0 published information pages which were static and read-only. HTML's href attribute (a form of metadata) links two documents, allowing human readers to navigate to related topics. In Web 1.0, for quick search and retrieval, metadata (data about data) that describes the contents of electronic documents or resources is added in the document itself, serving the same purpose as the index of a book or the catalogue of a library. Search engines like Google and Yahoo create metadata databases out of the metadata in Web documents to find the documents quickly. In Web 1.0, the contents of Web pages are static, and the meanings of the Web pages are deciphered by the people who read them. Web content is written in HTML, and user input is captured in Web forms on the client machine and sent to a remote server via a common gateway interface (CGI) for further processing.
In Web 2.0, XML (eXtensible Markup Language) was designed to give hierarchical structure to document content, to transform it into data, and to transport the document as data. Whereas HTML tags prescribe how to display Web content on the client computer, XML tags add another layer of metadata that allows the Web document to be queried for specific data. XML documents can be read and processed by computers (by a parser) automatically and can be exchanged between applications developed on heterogeneous computing platforms and operating systems and with a variety of programming languages, once they all know the XML tags used in the documents. For example, in order to use text generated by a word processor together with data from spreadsheets and relational databases, they all need to be transformed into a common XML format first. This collaboration of applications is possible in a closed community where all the applications are aware of the common XML tags. Web 2.0 technologies also enabled pervasive or ubiquitous Web browsing involving personal computers, mobile phones, and PDAs (Personal Digital Assistants) running different operating systems like Windows, Macintosh, or Linux, connected to the Internet via wired or wireless connections. Web 2.0 technologies like XML, DHTML, and AJAX (Asynchronous JavaScript and XML) allowed two-way communication with dynamic Web content and created social communities like Facebook, MySpace, and Twitter. Web 2.0 has also seen the revolution of using the Web as a practical medium for conducting business. An increasing number of Web-enabled e-commerce applications like eBay and Amazon have emerged in this trend to buy and sell products online.
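The idea of transforming heterogeneous data into a common XML format before exchange can be sketched with Python's standard library. The element names (`records`, `record`) are invented for illustration, not any standard schema:

```python
import xml.etree.ElementTree as ET

def rows_to_xml(rows):
    # Wrap tabular data (e.g. exported from a spreadsheet or a relational
    # database) in an agreed-upon XML vocabulary, so that any party that
    # knows the tags can recover the data with a standard XML parser.
    root = ET.Element("records")
    for row in rows:
        rec = ET.SubElement(root, "record")
        for field, value in row.items():
            ET.SubElement(rec, field).text = str(value)
    return ET.tostring(root, encoding="unicode")

xml_doc = rows_to_xml([{"name": "Ada", "city": "London"}])
# A receiving application, on any platform, can parse the structure back:
parsed = ET.fromstring(xml_doc)
print(parsed.find("record/name").text)  # → Ada
```

This is exactly the "closed community" collaboration described above: the exchange works because both sides agreed on the tag vocabulary in advance.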
But for collaboration in the open, ever-expanding World Wide Web by all, everybody on the Web has to agree on the meaning of the Web contents. XML alone does not add semantics to Web content. Thus, in Web 3.0, the Resource Description Framework (RDF) protocol is designed to add another layer of metadata that adds meaning, or semantics, to the data (text, images, audio, or video) inside the document, using RDF vocabularies understood by machines. As computer memory is not expensive anymore, this metadata can be verbose enough for human understanding as well, instead of being only for machine understanding. Authors, publishers, and users can all add metadata about a Web resource in a standardized format. This self-describing data inside the document can be individually addressed by the HTTP URI (Uniform Resource Identifier) mechanism, processed and linked to other data from other documents, and inferred over by machines automatically. The URI is a generalization of the concept of the Uniform Resource Locator, or URL, and can be both a name and a location. Search engines or crawlers will navigate the links and generate query responses over the aggregated linked data. This linked data will encourage reuse of information, reduce redundancy, and produce more powerful aggregate information.
To this end, we need a standardized knowledge representation system [12, 13]. Modeling a knowledge domain using standard, shared vocabularies will facilitate interoperability between different applications. An ontology is a formal representation of knowledge as a set of concepts in a domain. Ontology components are classes, their attributes, relations, restrictions, rules, and axioms. DublinCore, GFO (General Formal Ontology), OpenCyc/SUMO (Suggested Upper Merged Ontology), DOLCE (Descriptive Ontology for Linguistic and Cognitive Engineering), WordNet, FOAF (Friend of a Friend), SIOC (Semantically Interlinked Online Communities), SKOS (Simple Knowledge Organization System), DOAP (Description of a Project), vCard, etc., are among the much used, well-known ontology libraries of RDF vocabularies. For example, the implementation of DublinCore makes use of XML and the Resource Description Framework (RDF).
RDF triples describe any data in the form of subject, predicate, and object. Subject, predicate, and object are all URIs which can be individually addressed on the Web by the HTTP URI mechanism. The subject and object can be URIs from the same document or from two separate documents or independent data sources, linked by the predicate URI. The object can also be just a string literal or a value. RDF creates a graph-based data model spanning the entire Web which can be navigated, or crawled by following the links, by software agents. RDF Schema (RDFS), the Web Ontology Language (OWL), and the Simple Knowledge Organization System (SKOS) have been developed to write rules and express hierarchical relations and inference between Web resources. They vary in their expressiveness, reasoning support, and hierarchical knowledge organization, ranging from the more limited RDFS to the more powerful OWL and SKOS. For querying RDF data written with RDFS, OWL, or SKOS, the RDF query language SPARQL has been developed.
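The subject–predicate–object model can be mimicked in a few lines of plain Python. This is a toy illustration of the data model only, not an RDF implementation (real systems use RDF libraries and SPARQL); the `ex:` and `foaf:` prefixes and the resource names are invented for the example:

```python
# Each triple links a subject to an object via a predicate; in RDF all
# three would be URIs (the object may also be a literal, like "Alice").
triples = {
    ("ex:Alice", "foaf:knows", "ex:Bob"),
    ("ex:Bob",   "foaf:knows", "ex:Carol"),
    ("ex:Alice", "foaf:name",  "Alice"),
}

def match(s=None, p=None, o=None):
    # A SPARQL-like pattern query over the graph: None acts as a variable.
    return [(ts, tp, to) for ts, tp, to in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

# "Whom does ex:Alice know?" -- analogous to
# SELECT ?o WHERE { ex:Alice foaf:knows ?o }
print([o for _, _, o in match(s="ex:Alice", p="foaf:knows")])  # → ['ex:Bob']
```

Because every triple is individually addressable and triples from different sources share URIs, independent graphs can be merged and queried as one, which is the essence of the linked data idea.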
RDF tags can be added automatically or semiautomatically by tools like RDFizers [7], D2R (Database to RDF), JPEG → RDF, and Email → RDF. Linked data browsers like Disco, Tabulator, and Marbles are being designed to browse the linked data Semantic Web. Linked data search engines like Falcon and SWSE (Semantic Web search engine) are being designed for human navigation, while Swoogle and Sindice are being designed for applications.
Figure 1.1 shows the Semantic Web protocol stack (the "Wedding Cake") proposed by Tim Berners-Lee in 2000. The bottom of the Wedding Cake shows standards that are well defined and widely accepted, whereas the other protocols are yet to be implemented in most Web sites. Unicode is a 16-bit code word, which is large enough (2^16 code points) to represent the characters of any language in the world. The URI (Uniform Resource Identifier) is the W3C's codification for addressing any object over the Web. XML is for structuring documents into data, and RDF is the mechanism for describing data in a way that machines can understand. Ontologies are vocabularies from specific knowledge domains. Logic refers to making logical inferences from associated linked data. Proof is keeping track of the steps of logical inferences. Trust refers to the origin and quality of the data sources. This entire protocol stack will transform the Web into a Semantic Web global database of linked data, realizing the full potential of the Web.
Cloud computing [14–16] is the most anticipated future trend of computing. Cloud computing is the idea of renting out servers, storage, networks, software technologies, tools, and applications as a utility or service over the Internet as and when required, in contrast to owning them permanently. Depending on which resources are shared and delivered to customers, there are three types of cloud computing. In cloud computing terminology, when hardware such as processors, storage, and networks is delivered as a service, it is called infrastructure as a service (IaaS). Examples of IaaS are Amazon's Elastic Compute Cloud (EC2) and Simple Storage Service (S3). When programming platforms and tools like Java, Python, .Net, MySQL, and APIs are delivered as a service, it is called platform as a service (PaaS). When applications are delivered as a service, it is called software as a service (SaaS).
Depending on the amount of self-governance or control over resources by the tenant, there are three types of cloud: internal or private cloud, external or public cloud, and hybrid cloud (Fig. 1.2). In a private cloud, an enterprise owns all the resources on-site and shares them between multiple applications. In a public cloud, the enterprise rents the resources from an off-site cloud provider, and these resources are shared between multiple tenants. A hybrid cloud is in the middle, where an enterprise owns some resources and rents others from a third party.
Cloud computing is based on the Service-Oriented Architecture (SOA) of Web 2.0 and Web 3.0 and on virtualization [16–18] of hardware and software resources (Fig. 1.3). Because of the virtualization technique, physical resources can be linked dynamically to different applications running on different operating systems, and they can be shared among all users, giving efficient resource management with higher resource utilization and on-demand scalability. Increased resource utilization brings down the cost of floor space, power, and cooling. The resulting power savings are among the most attractive features of cloud computing and align with the renewed initiative of environment-friendly green computing, the green IT movement of today. Cloud computing not only reduces the cost of using resources but also reduces the cost of maintaining them for the user.
Cloud computing can support on-demand scalability. An application with occasional demand for higher resources pays for the extra resources only for the time they are used, instead of leasing all the resources from the very beginning in anticipation of future need. This fine-grained (hourly) pay-by-use model of cloud computing is going to be very attractive to customers. There are many other benefits of cloud computing. Cloud infrastructure can support multiple protocols and accommodate changes in an application's business model more rapidly. It can also handle increased performance requirements like service scaling, response time, and availability of the application, as the cloud infrastructure is a huge pool of resources like servers, storage, and networks and provides elasticity of growth to end users.
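A back-of-the-envelope comparison shows why the hourly pay-by-use model appeals to bursty workloads. The prices and workload figures below are invented purely for illustration:

```python
# Hypothetical figures: a workload needs 2 extra servers for 40 hours per
# month of peak load; a rented server costs $0.50/hour, while owning one
# costs a flat $200/month regardless of how much it is actually used.
burst_servers, burst_hours = 2, 40
cloud_cost = burst_servers * burst_hours * 0.50   # pay only while used
owned_cost = burst_servers * 200                  # provisioned for the peak

print(cloud_cost, owned_cost)  # → 40.0 400
```

The gap widens as the workload becomes burstier: owned capacity must be sized for the peak, while rented capacity is billed only for the hours of actual demand.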
With this business model of catering to multiple clients with shared resources, the world's leading IT companies, like Microsoft, Google, IBM, SalesForce, HP, and Amazon, are deploying clouds (Fig. 1.2). Web services and applications like Hadoop and Mashup can run on these clouds. Though the cloud computing platform has many advantages, there are a few challenges regarding the safety and privacy of tenants' information which can threaten its adoption by the masses. If these few challenges can be overcome, then, because of its many advantages, the cloud computing model may become the prevalent computing model of the future.
All the resources of the cloud computing platform are shared by multiple tenants (Fig. 1.4) over the Internet across the globe. In this shared environment, trust in data safety and privacy is of utmost importance to customers. Safety of data means no loss of data pertaining to the owner of the data, and privacy of data means no unauthorized use of sensitive data by others. As cloud providers have greater resource pools, they can easily keep copies of data and ensure the safety of user data. Privacy of data is of more concern in the public cloud than in the private cloud. In the public cloud environment, as data is stored on off-premise machines, users have less control over the use of their data, and this mistrust can threaten the adoption of the cloud computing platform by the masses. Technology and law enforcement should both protect the privacy concerns of cloud customers [19, 20]. Software engineers must build their applications as Web services that lessen this risk of exposing cloud customers' sensitive data.
Next, we look into the preexisting software development methodologies used to develop quality software products in a traditional environment not involving Web services and the cloud computing platform.
Figure 1.6 depicts the steps of the agile process model named Extreme Programming (XP) for traditional software development, where the customer owns the development platform or the software developers develop in-house and deploy the software to the customer after it is built. XP has many characteristic practices, like the user story card and the CRC (class, responsibility, collaboration) card narrated during the requirements gathering stage jointly by the customer and the software engineers. The customer decides the priority of each story card, and only the highest-priority card is considered, or "time-boxed", for the current iteration of software development. Construction of code is performed by two engineers sitting at the same machine so that there is less scope for errors in the code; this is called pair programming. Code is continuously refactored, or improved, to make it more efficient.
In the following sections, the need to produce software development artifacts for the Semantic Web is analyzed, along with the challenges of the current business model of application development and deployment involving Web 2.0 and Web 3.0 technologies and the cloud computing platform. Finally, methodologies to develop quality software that will push forward the advances of the cloud computing platform are suggested.
The Semantic Web effort has just started, and not everyone is aware of it, even among IT professionals. The linked data initiative [7], started in 2007 by a small group of academic researchers from universities, now includes as participants a few large organizations, like the BBC, Thomson Reuters, and the Library of Congress, who have transformed their data for the Semantic Web. DBpedia is another community effort, to transform Wikipedia documents for the Semantic Web. Sophisticated queries can be run on DBpedia data and linked to other Semantic Web data. Friend of a Friend (FOAF) is another project, to link social Web sites and their people and describe what they create or do. Federal and state governments are also taking initiatives to publish public data online. US Census data is one such semantic data source which can be queried and linked with other semantic data sources. Unless all government public data can be transformed for the Semantic Web, it will not be suitable for interoperable Web applications.
Fig. 1.7 Linking open data cloud diagram giving an overview of published data sets and their interlinkage relationships [7]
Figure 1.7 shows the size of the linked data Web as of March 2009. At that time there were 4.7 billion RDF triples, interlinked by 142 million RDF links. Anybody can transform their data into the linked data standards and link it to the existing linked data Web. In Fig. 1.7, the circles are nodes of independent data sources or Web sites, and the arcs are their relationships with other data sources. Thicker links indicate more connections between two data sources, and bidirectional links mean the two data sources are linked to each other.
Once software engineers grasp the Semantic Web technologies and understand their capabilities and their many advantages, like interoperability, adaptability, and the ability to integrate open and distributed software components with other applications, they will make their software artifacts Semantic Web ready. Once the software artifacts are transformed into semantic artifacts, software maintenance will be much more efficient and cheaper. All requirements can be optimized, linked, and traced. Aggregating information from the requirements document will be easy, and impact analysis can be done more accurately before actual changes are made. Increased maintainability of software will also increase its reliability. Semantic Web services will be easy to discover on the Web, and that will give their owners' products a competitive edge. Semantic Web services which can be linked with other Web services will create new and more powerful software applications, encourage reuse, and reduce redundancy.
The benefits of Web services [23–26] are code reuse and speedy development of software projects. But in order to use Web services from the Web, the application must create a Web client which can interface with the Web services to request and receive services. Figure 1.8 illustrates the Service-Oriented Architecture (SOA) that has emerged to deliver the software as a service (SaaS) business model.
An application programming interface (API) of a Web service is first created as a WSDL document using XML tags, for advertising to the world over the Internet. A WSDL document has several major parts: it describes the data types, messages, ports, operations (classes and methods), bindings (SOAP messages), and location (URL) of the service. WSDL documents need not be created manually; automatic tools like Apache Axis [25] can create the API from Java programming code. Apache Axis is an open source, XML-based Web service framework.
After creating the WSDL document, a Web client to consume the Web service is needed. The Web client is created using SOAP to communicate request and response messages between the two applications. SOAP is an XML messaging format for exchanging structured data (XML documents) over the HTTP transport protocol and can be used for remote procedure calls (RPC). The SOAP structure has three parts: (1) the envelope, (2) the header, and (3) the body. The body defines the message and how to process it.
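The three-part envelope/header/body structure can be sketched with Python's standard XML library. This is a hand-rolled illustration only; a real Web client would normally use a SOAP toolkit, and the method name `getQuote` and its parameter are invented for the example:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_request(method, params):
    # The Envelope wraps the whole message; the Header carries optional
    # metadata; the Body holds the remote procedure call and its parameters.
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    ET.SubElement(env, f"{{{SOAP_NS}}}Header")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, method)
    for name, value in params.items():
        ET.SubElement(call, name).text = str(value)
    return ET.tostring(env, encoding="unicode")

msg = build_request("getQuote", {"symbol": "IBM"})
print("getQuote" in msg and "IBM" in msg)  # → True
```

The resulting XML document would be sent over HTTP to the service endpoint named in the WSDL document, and the response would arrive as another envelope of the same shape.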
Software engineers have to master XML language and other Web technologies
like WSDL and SOAP in addition to knowing a programming language like Java or
C++ in order to use or create a Web service.
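As a rough illustration of the SOAP side, the following Python sketch assembles a request envelope with the three parts named above and prepares (but does not send) an HTTP POST to a hypothetical endpoint. The endpoint URL, operation name, and SOAPAction value are all invented for illustration.

```python
import urllib.request

# Hypothetical service endpoint (invented for illustration).
ENDPOINT = "http://example.com/quote"

# The three parts of a SOAP message: envelope, header, and body.
soap_request = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header/>
  <soap:Body>
    <GetQuote xmlns="http://example.com/quote">
      <symbol>IBM</symbol>
    </GetQuote>
  </soap:Body>
</soap:Envelope>"""

req = urllib.request.Request(
    ENDPOINT,
    data=soap_request.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "http://example.com/quote/GetQuote"},
)
# urllib.request.urlopen(req) would POST the envelope over HTTP and return
# the SOAP response; it is not executed here since there is no live endpoint.
print(req.get_method())  # -> POST (a Request carrying data defaults to POST)
```

In practice a toolkit generates this plumbing from the WSDL, but the sketch shows what travels over the wire: an XML envelope posted to the service URL, with the response coming back as another envelope.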
This section surveys how the software development industry is trying to survive in
the era of Web 2.0 and Web 3.0 with Web services and cloud computing. In [27],
the authors present framework activities for designing applications based on the
discovery of Semantic Web services using software engineering methodologies.
They propose generating semiautomatic semantic descriptions of applications by
exploiting the existing methodologies and tools of Web engineering. This increases
design efficiency and reduces the manual effort of semantically annotating a new
application composed from the Web services of multiple enterprises.
In [28], Salesforce.com reports that the agile process model works better on the
cloud computing platform. Before cloud computing, releasing software to users took
time, and getting feedback from customers took even longer, which thwarted the very
concept of agile development. Now, new releases of the software can be uploaded to
the server and used by the users immediately. Essentially, that chapter describes
the benefits of software as a service hosted on the Internet and how it complements
the agile methodology; it does not consider the challenges of cloud computing in
developing new business software.
Cloud computing being the newest hype of the IT industry, the challenges of
software engineering on the cloud computing platform have not yet been studied,
and no software development process model for the cloud computing platform has
yet been suggested. In Sect. 1.3.4, we analyze the challenges that the cloud
computing platform poses for the software development process and suggest
extending an existing agile process model, Extreme Programming, to mitigate
these challenges.
In the rapidly changing computing environment with Web services and cloud
platform, software development is going to be very challenging. Software develop-
ment process will involve heterogeneous platforms, distributed Web services, and
18 R. Guha
multiple enterprises geographically dispersed all over the world. Existing software
process models and framework activities are not going to be adequate unless inter-
action with cloud providers is included.
The requirements gathering phase has so far included customers, users, and software
engineers. Now it has to include the cloud providers as well, since they supply the
computing infrastructure and maintain it too. As only the cloud providers know the
size, architectural details, virtualization strategy, and resource utilization of
the infrastructure, the planning and design phases of software development also
have to include them. The cloud providers can help answer questions about (1) how
many developers are needed, (2) component reuse, (3) cost estimation, (4) schedule
estimation, (5) risk management, (6) configuration management, (7) change
management, and (8) quality assurance.
Because of the component reuse enabled by Web services, the size of the software,
in kilo lines of code (KLOC) or function points (FP), to be newly developed by the
software engineer will be reduced, but the complexity of the project will increase
manyfold because of the lack of documentation of the implementation details of
Web services and of their integration requirements. The only description available
online is the metadata of the Web services, intended to be processed by computers
automatically.
Only coding and testing phases can be done independently by the software
engineers. Coding and testing can be done on the cloud platform which is a huge
benefit as everybody will have easy access to the software being built. This will
reduce the cost and time for testing and validation.
However, software developers need to use the Web services and open source
software freely available from the cloud instead of procuring them. Software
developers should have more expertise in building software from readily available
components than writing it all and building a monolithic application. Refactoring of
existing application is required to best utilize the cloud infrastructure architecture in
a cost-effective way. As the latest hardware is multi-core and networked, software
engineers should train themselves in parallel and distributed computing to
complement these advances in hardware and network technology. They should also
train themselves in Internet protocols, XML, Web service standards, the layered
separation of concerns of SOA, and Semantic Web technologies to leverage all the
benefits of Web 2.0. Cloud providers will insist that software be as modular as
possible to allow occasional migration from one server to another for load
balancing [16].
The maintenance phase should also include the cloud providers, as there is a
complete shift of responsibility for maintaining the infrastructure from software
developers to cloud providers. Because of the involvement of the cloud provider,
the customer now has to sign a contract with them as well, so that the "Software
Engineering Code of Ethics" is not violated by the cloud provider. In addition,
the protection and security of the data, which is now under the jurisdiction of
the cloud provider, is of utmost importance.
Also, occasional demands from applications for more CPU time or network bandwidth
may put the pay-by-use model of cloud computing in jeopardy, as multiple
applications may need higher resource usage at the same time, something the cloud
provider did not anticipate at the outset. Applications deployed under the
"software as a service" (SaaS) model, especially, may face occasional workload
surges that cannot be anticipated in advance.
The cloud provider uses resource virtualization techniques to cater to many
customers on demand in an efficient way. For higher resource utilization, the
cloud provider may occasionally need to migrate an application from one server to
another, or from one storage unit to another. This may conflict with the interests
of customers, who want dedicated resources with high availability and reliability
for their applications. To avoid this conflict, cloud providers need to introduce
quality-of-service provisions for higher-priority tenants.
Now we analyze how difficult the interaction between cloud providers and software
engineers will be. The amount of interaction will depend on the type of cloud
involved: public, private, or hybrid. In a private cloud, the customer has more
control and self-governance than in a public cloud. Customers should therefore
consider using a private cloud instead of a public cloud to assure the availability
and reliability of their high-priority applications. The benefits of a private
cloud are less interaction with the cloud provider, self-governance, high security,
and reliability and availability of data (Fig. 1.9). However, the cheaper computing
of the public cloud will always outweigh the lower complexity of software
development on a private cloud platform and is going to be more attractive.
managing risk in software projects that "You cannot control what you cannot
measure." The Constructive Cost Model (COCOMO) is the most widely used model for
cost estimation of software development projects. In the COCOMO model (Table 1.2),
three classes of software projects have been considered so far. These projects are
classified as (1) organic, (2) semidetached, and (3) embedded, according to the
team size, the team's experience, and the development (hardware, software, and
operational) constraints. We extend [29] this cost estimation model with a new
class of software project for the cloud computing platform. In the basic COCOMO
model, the effort (in person-months), the development time (in months), and the
number of people required are given by the following equations.
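The basic COCOMO relations can be sketched as follows: effort E = a·KLOC^b (person-months), development time D = c·E^d (months), and average team size P = E/D. The sketch below uses the commonly published coefficients for the three classic project classes only; the cloud-specific fourth class proposed in [29] is not reproduced here, so this is a sketch of the standard model, not of the chapter's extension.

```python
# Basic COCOMO: effort E = a * KLOC**b (person-months),
# development time D = c * E**d (months), people P = E / D.
# Coefficients are the commonly published values for the three
# classic project classes (organic, semidetached, embedded).
COEFFS = {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc: float, project_class: str):
    a, b, c, d = COEFFS[project_class]
    effort = a * kloc ** b      # person-months
    time = c * effort ** d      # months
    people = effort / time      # average team size
    return effort, time, people

for cls in COEFFS:
    e, t, p = basic_cocomo(32, cls)
    print(f"{cls:13s} effort={e:6.1f} pm  time={t:4.1f} mo  people={p:4.1f}")
```

Under this model, the chapter's observation that Web-service reuse shrinks the KLOC to be newly written translates directly into lower estimated effort, which is exactly the lever a cloud-specific project class would need to retune.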
1.5 Conclusion
The development of the Semantic Web, or Web 3.0, can transform the World Wide Web
into an intelligent Web system of structured, linked data which can be queried and
inferred over as a whole by computers themselves. This Semantic Web capability is
enabling many innovative uses of the Web, such as hosting Web services and the
cloud computing platform. Web services and cloud computing are paradigm shifts
from the traditional way of developing and deploying software. They will make
software engineering more difficult, as software engineers have to master Semantic
Web skills for using open source software on a distributed computing platform, and
they have to interact with a third party, the "cloud provider", at all stages of
the software process. Automatic discovery of and integration with Web services
will reduce the amount of work, in lines of code (LOC) or function points (FP),
required for developing software on the cloud platform, but there will be added
semantic skill requirements, as well as communication and coordination
requirements with the cloud providers, which make a software development project
more complex.
First, this chapter explored the Semantic Web techniques that software developers
need to incorporate into their artifacts so that these can be discovered easily on
the Web, giving their product a competitive edge and supporting efficient software
integration and maintenance. Then, the need for changes in the prevalent software
process models was analyzed, suggesting that they should incorporate the new
dimension of interaction with the cloud providers and the separate roles of
software engineers and cloud providers. A new agile process model was proposed in
this chapter which includes the anticipated interaction requirements with the
cloud provider; it will mitigate the challenges of software development on the
cloud computing platform and make it more advantageous to develop and deploy
software there.
Cloud computing being the anticipated future computing platform, more software
engineering process models need to be researched which can mitigate its challenges
and reap its benefits. The safety and privacy of data on the cloud computing
platform also need to be considered seriously so that cloud computing is truly
accepted by all.
References
Abstract The software engineering field is on the move. The contributions of software
solutions for IT-inspired business automation, acceleration, and augmentation are enor-
mous. The business values are also rapidly growing with the constant and consistent
maturity and stability of software technologies, processes, infrastructures, frameworks,
architectural patterns, and tools. On the other hand, the uncertainty in the global
economy has a direct bearing on the IT budgets of organizations worldwide. That is,
they expect greater flexibility, responsiveness, and accountability from their IT
division, which is chronically touted as a cost center. This demands shorter
delivery cycles and the delivery of low-cost yet high-quality solutions. Cloud
computing prescribes a distinguished delivery model that helps IT organizations
provide quality solutions efficiently, in a manner that suits evolving business
needs. In this chapter, we focus on how software development tasks can be greatly
simplified and streamlined with cloud-centric development processes, practices,
platforms, and patterns.
2.1 Introduction
The number of pioneering discoveries in the Internet space is quite large. In the
recent past, the availability of devices and tools to access online and on-demand
professional and personal services has increased dramatically. Software has been
P. Raj (*)
Wipro Technologies, Bangalore 560035, India
e-mail: [email protected]
V. Venkatesh • R. Amirtharajan
School of Electrical and Electronics Engineering, SASTRA University,
Thanjavur, Tamil Nadu, India
Z. Mahmood and S. Saeed (eds.), Software Engineering Frameworks for the Cloud 25
Computing Paradigm, Computer Communications and Networks,
DOI 10.1007/978-1-4471-5031-2_2, © Springer-Verlag London 2013
26 P. Raj et al.
pervasive and persuasive. It runs on almost all kinds of everyday devices that are
increasingly interconnected as well as Internet-connected. This deeper and extreme
connectivity opens up fresh possibilities and opportunities for students, scholars,
and scientists. The devices at the ground level are seamlessly integrated with cyber
applications at remote, online, on-demand cloud servers. The hardware and software
infrastructure solutions need to be extremely scalable, nimble, available, high-
performing, dynamic, modifiable, real-time, and completely secure. Cloud computing
is changing the total IT landscape by presenting every tangible IT resource as a
service over any network. This strategically sound service enablement eliminates
all kinds of dependency, portability, and interoperability issues.
Cloud services and applications are becoming very popular and penetrative these
days. Increasingly, both business and IT applications are being modernized appro-
priately and moved to clouds to be subsequently subscribed and consumed by global
user programs and people directly, anytime and anywhere, for free or for a fee.
The aspect of software delivery is henceforth headed for a paradigm shift with the
smart leverage of cloud concepts and competencies. A noteworthy trend is now
emerging fast that inspires professionals and professors alike to pronounce the
role and responsibility of clouds in software engineering. That is, not only
cloud-based software delivery but also cloud-based software development and
debugging are insisted upon as the need of the hour. On carefully considering
these developments, it is no exaggeration to say that end-to-end software
production, provision, protection, and preservation are to happen in virtualized
IT environments in a cost-effective, compact, and cognitive fashion. Another
interesting and strategic pointer is that the number and type of input/output
devices interacting with remote, online, and on-demand clouds are on the rise.
Besides fixed and portable computing machines, slim and sleek mobile, implantable,
and wearable devices are emerging to access, use, and orchestrate a wider variety
of disparate and distributed professional as well as personal cloud services. The
urgent task is to embark on modernizing and refining the currently used
application development processes and practices in order to make cloud-based
software engineering simpler, more successful, and sustainable.
In this chapter, we discuss cloud-sponsored transformations for IT and leveraging
clouds for global software development and present a reflection on software
engineering. The combination of agility and cloud infrastructure for next-generation
software engineering, the convergence of service and cloud paradigms, the amalga-
mation of model-driven architecture, and the cloud and various mechanisms for
assisting cloud software development are also discussed. At the end, cloud platform
solutions for software engineering are discussed, and software engineering challenges
with respect to cloud environments are also presented.
higher utilization, and optimization. This section explores the tectonic and seismic
shifts of IT through the cloud concepts.
• Adaptive IT – There are a number of cloud-inspired innovations in the form of
promising, potent, and powerful deployment, delivery, pricing, and consumption
models to sustain the IT value for businesses. With IT agility setting in
seamlessly, business agility, autonomy, and adaptivity are being guaranteed
with the adoption and adaptation of the cloud idea.
• People IT – Clouds support a centralized yet federated working model that
operates at a global level. For example, today there are hundreds of thousands
of smartphone applications and services accumulated and delivered via
mobile clouds. With ultrahigh broadband communication infrastructures
and advanced compute clouds in place, the vision of the Internet of
devices, services, and things will become a reality. Self-, surroundings-,
and situation-aware services will become common, plentiful, and cheap;
thereby, IT can promptly deal with people's needs precisely and deliver on
them directly.
• Green IT – The whole world is becoming conscious of power and energy
consumption and of the heat dissipated into our living environment. There
are calculated campaigns at different levels for arresting climate change and for
a sustainable environment through lower greenhouse-gas emissions. IT is being
approached for competent green solutions, with grid and cloud computing as the
leading concepts for a green environment. In particular, the smart energy grid
and the Internet of Energy (IoE) disciplines are gaining a lot of ground in
order to contribute decisively to the global goal of sustainability. The
much-publicized and proclaimed cloud paradigm leads to lean compute,
communication, and storage infrastructures, which significantly reduce
electricity consumption.
• Optimal IT – There are a number of worthwhile optimizations happening in the
business-enabling IT space. “More with less” has become the buzzword for both
business and IT managers. Cloud enablement has become mandatory for IT divisions,
as several distinct benefits accrue from this empowerment. Cloud certainly has
the wherewithal to meet the goals behind the IT optimization drive.
• Next-Generation IT – With a number of delectable advancements in wireless and
wired broadband communication space, the future Internet is being positioned as
the central figure in conceiving and concretizing people-centric discoveries and
inventions. With cloud emerging as the new-generation compute infrastructure,
we will have connected, simplified, and smart IT that offers more influential and
inferential capability to humans.
• Converged, Collaborative, and Shared IT – The cloud idea is fast penetrating
every tangible domain. Cloud platforms are famous not only for software
deployment and delivery but also for service design, development, debugging,
and management. Further on, clouds, being consolidated, converged, and
centralized infrastructures, are being prescribed and presented as the best bet
for enabling seamless and spontaneous service integration, orchestration, and
collaboration.
Globalization and distribution are two key concepts in the IT field. Software
development crosses national boundaries and gravitates toward places where quality
software engineers and project managers are available in plenty. On-site,
offshoring, nearshoring, etc., are some of the recent buzzwords in IT circles due
to these developments. That is, a single software project may be developed in
different locations as the project team is distributed across the globe. With the
sharp turnarounds in the communications field, tighter coordination and
collaboration among team members are possible, making project implementation
successful and sustainable. In-sourcing has paved the way for outsourcing with the
maturity of appropriate technologies. As is widely known, software sharply
enhances the competitive advantage and edge of businesses. Hence, global software
development (GSD) has become mandatory for organizations worldwide. Nevertheless,
when embarking on GSD, organizations continue to face challenges in adhering to
the development life cycle. The advent of the Internet has supported GSD by
bringing new concepts and opportunities resulting in benefits such as scalability,
flexibility, independence, reduced cost, resource pools, and usage tracking. It has
also caused the emergence of new challenges in the way software is being delivered
to stakeholders. Application software and data on the cloud are accessed through
services, which follow SOA principles.
GSD is actually the software development process incorporating teams spread
across the globe in different locations, countries, and even continents. The
driver for this sort of arrangement is the fact that conducting software projects
in multiple geographical locations is likely to result in benefits such as cost
reduction and
2 Envisioning the Cloud-Induced Transformations in the Software Engineering… 29
reduced time to market, access to a larger skill pool, proximity to customers, and
24-hour development by "following the sun". At the same time, however, GSD brings
challenges to distributed software development activities due to the geographic,
cultural, linguistic, and temporal distance between the project development teams.
Because of the distance between the software development teams, GSD encounters
certain challenges in terms of collaboration, communication, coordination,
culture, management, organization, outsourcing, the development process,
development teams, and tools. The real motive for using the cloud to support GSD
is that the cloud idea thrives because it is closely related to the service
paradigm. That is, services are created, provisioned, and delivered from
cloud-based service platforms. Since SOA provides a mechanism for the development
and management of distributed dynamic systems, and since it evolved from the
distributed component-based approach, it is argued that the cloud has the innate
potential and strength to successfully cater for the challenges of GSD, where a
project is developed across different geographical locations. GSD challenges can
be overcome through SOA. This will contribute to increased interoperability,
diversification, and business and technology alignment. The cloud, as the
next-generation centralized and service-oriented infrastructure, is capable of
addressing all internally as well as externally imposed challenges.
• Global Software Development (GSD) in Cloud Platforms [1] – Clouds offer
instant resource provisioning, flexibility, on-the-fly scaling, and high availability
for continuously evolving GSD-related activities. Some of the use cases are as follows:
• Development Environments – With clouds, the ability to acquire, deploy,
configure, and host development environments becomes "on-demand". The development
environments are always on and always available to the implementation teams, with
fine-grained access control mechanisms. In addition, the development environments
can be purpose-built, with support for application-level tools, source code
repositories, and programming tools. After the project is done, they can be
archived or destroyed. The other key element of these "on-demand" hosting
environments is the flexibility provided through quick prototyping support:
new code and ideas can be quickly turned into workable proofs of concept (PoCs)
and tested.
• Developer Tools – Hosting developer tools such as IDEs and simple code editors
in the cloud eliminates the need for developers to have local IDEs and other
associated development tools, which are made available across time zones and
places.
• Content Collaboration Spaces – Clouds make collaboration and coordination
practical, intuitive, and flexible through the easy enabling of content
collaboration spaces, modeled after social software tools like Facebook but
centered on project-related information like invoices, statements, RFPs,
requirement documents, images, and data sets. These content spaces can automate
many project-related tasks, from automatically creating MS Word versions of all
imported text documents to running workflows that collate information from
several different organizations working in collaboration. Each content space can
be unique, created by composing a set of project requirements. Users can invite
Radha Guha writes in [2] that, over the last half-century, there have been robust
and resilient advancements in the hardware engineering domain. That is, there have
been radical and rapid improvements in computers, memory, storage, communication
networks, mobile devices, and embedded systems. This has been incessantly pushing
the need for larger and more complex software. Software development not only
involves many different hardware elements; it also involves many different
parties, like end users and software engineers. That is why software development
has become such an inherently complicated task. Software developers are analyzing,
articulating, and adopting the proven and prescribed engineering disciplines, that
is, leveraging a systematic, disciplined, and quantifiable approach to make
software development more manageable and to produce quality software products.
The success or quality of a software project is measured by whether it is
developed within the stipulated time and agreed budget, and by its throughput,
user-friendliness, consumability, dependability, and modifiability.
Typically, a software engineering engagement starts off with an explicit and
elegant process model comprising several formally defined and synchronized
phases. The whole development process of software from its conceptualization to
implementation to operation and retirement is called the software-development
life cycle (SDLC). The SDLC goes through several sub-activities, like requirements
gathering, planning, design, coding, testing, deployment, maintenance, and
retirement. These activities are synchronized in accordance with the process model
adopted for a particular software development. There are many process models to
choose from, such as the waterfall model, the rapid application development (RAD)
model, and the spiral model, depending on the size of the project, the delivery
time requirements, and the type of project. The development of an avionic embedded
system will adopt a different process model from the development of a Web
application.
Even though software engineering [3] takes an engineering approach, succeeding
with software products is more difficult than with products from other engineering
domains like mechanical or civil engineering. This is because software is
intangible during its development. Software project managers use a number of
techniques and tools to monitor software building activities in a more visible
way. These activities include software project tracking and control, risk
management, quality assurance, measurement, configuration management, work product
or document generation, review, and reusability management.
Even after taking all these measures for sticking to the plan, and despite the
importance given to document generation for project tracking and control, many
software projects have failed. More than 50 % of software projects fail for
various reasons, such as schedule and budget slippage, non-user-friendly software
interfaces, and inflexibility with respect to maintenance and change. Therefore,
there is a continued and consistent focus on simplifying and streamlining software
implementation. In this chapter, we examine some of the critical and crucial
improvements in the software engineering process enabled by the availability of
cloud infrastructures.
The Evolutions and Revolutions in the Software Engineering Field – There are a
number of desirable and delectable advancements in the field of software engineering
in order to make the tough task of software construction easier and quicker. This
section describes the different levels and layers in which the software engineering
discipline and domain evolve.
At the building-block level, data, procedures, classes, components, agents, aspects,
events, and services are the key abstraction and encapsulation units for building and
orchestrating software modules into various types of specific and generic software.
Services especially contribute in legacy modernization and migration to open
service-oriented platforms (SOPs) besides facilitating the integration of disparate,
distributed, and decentralized applications. In short, building blocks are the key
ingredient enabling software elegance, excellence, and evolution. In the recent past,
formal models in digital format and service composites are evolving fast in order
to further simplify and streamline the tough task of software assembly and
implementation. As software complexity rises, the need for fresh thinking and
techniques grows too.
At the language level, a bevy of programming languages (open source as well as
proprietary) have been produced and promoted by individuals, innovators, and
institutions. There are even efforts underway to leverage fit-for-purpose
languages to build different parts and portions of software applications. Software
libraries are growing in number, and the ideas of software factories and
industrialization have been picking up fast lately. Service registries and
repositories are an interesting phenomenon for speeding up software realization
and maintenance. Programming languages and approaches thrive, as there are
different programming paradigms such as object orientation, event- and
model-driven concepts, componentization, and service orientation. Further,
scripting languages have recently been attracting a lot of attention due to their
unique ability to achieve more with less code. Formal models in digital format
and service composites are turning out to be a blessing in disguise for the
success and survival of software engineering. There are domain-specific languages
(DSLs) that can cater to the specific demands of a domain quite easily and
quickly.
As far as development environments are concerned, there are a number of diverse
application building platforms for halving the software developmental complexity
and cost. That is, there are a slew of integrated development environments (IDEs),
rapid application development (RAD) tools, code generators and cartridges, enabling
CASE tools, compilers, debuggers, profilers, purpose-specific engines, generic and
specific frameworks, best practices, key guidelines, etc. The plug-and-play
mechanism has gained much ground with the overwhelming adoption of the Eclipse IDE
for inserting and instantiating different language compilers and interpreters. The
long-standing objectives of platform portability (Java) and language portability
(.NET Framework) are being achieved at the middleware level. There are
standards-compliant toolkits
for process modeling, simulation, improvement, investigation, and mapping. Services
as the well-qualified process elements are being discovered, compared, and orches-
trated for partial or full process automation.
At the process level, waterfall is the earliest model, and thereafter came a
number of variations in software development methodology, each with its own pros
and cons. Iterations, increments, and integrations are being
touted as the fundamental characteristics for swifter software production. Agile pro-
gramming is gaining a lot of ground as business changes are more frequent than
ever before and software complexity is also growing. Agility and authenticity in
software building are graciously achieved with improved and constant interactions
with customers and with the enhanced visibility and controllability on software
implementation procedures. Agility, being a well-known horizontal technique,
matches, mixes, and merges with other paradigms such as service-oriented program-
ming and model-driven software development to considerably assist in lessening
the workload of software developers and coders. Another noteworthy trend is that,
rather than code-based implementation, configuration-based software production is
catching up fast.
At the infrastructural level, the cloud idea has brought in innumerable
transformations. The target of IT agility is becoming a reality, and this in turn
could lead to business agility. Technically, cloud-inspired infrastructures are
virtualized, elastic, self-servicing, automated, and shared. Due to the unique capabilities
and competencies of cloud IT infrastructures (in short, clouds), all kinds of enterprise
IT platforms (development, execution, management, governance, and delivery)
are being accordingly manipulated and migrated to be hosted in clouds, which are
extremely converged, optimized, dynamic, lean, and green. Such meteoric movement
decisively empowers application platforms to be multitenant, unified, and central-
ized catering to multiple customers and users with all the enhanced productivity,
2 Envisioning the Cloud-Induced Transformations in the Software Engineering… 33
extensibility, and effectiveness. In other words, cloud platforms are set to rule
the IT world in the days to come. Furthermore, platforms are getting
service-enabled so that any service (application, platform, and infrastructure) can
discover and use them without any barriers. Service enablement expresses and
exposes every IT resource as a service so that the resource's incompatibilities
are eliminated completely. That is, resources readily connect, concur, compose,
and collaborate with one another without externally or internally imposed
constrictions, contradictions, and confusions. In a nutshell, service science has
come as a unifying factor for a fragmented and divergent IT world.
In summary, the much-dissected and much-debated software-development discipline
is going through a number of pioneering and positive changes, as described above.
As indicated previously, there have been many twists and turns in the field of
software engineering. It is an unquestionable fact that the cloud paradigm has
impacted the entire IT landscape deeply and extensively. Besides presenting a
bright future for the centralized deployment, delivery, and management of IT
resources, the cloud idea has opened up fresh opportunities and possibilities for
cloud-based software design, development, and debugging in a simplified and
systematic fashion. That is, with the overwhelming adoption and adaptation of
cloud infrastructures (private, public, community, and hybrid), producing and
preserving enterprise-scale, mission-critical, and value-added software is going
to be distinctly different. There are four key drivers that collectively push
software development toward being advanced and accomplished in a cloud. These are:
• Time, Cost, and Productivity – The developer community is being mandated to
do more, quicker, and with fewer resources.
• Distributed Complex Sourcing – Due to various reasons, IT project team
members are geographically dispersed.
• Faster Delivery of Innovation – The focus is on enabling architects and developers
to think ingeniously in order to deliver business value.
• Increasing Complexity – In today's world, an enterprise-scale project easily
runs to several million lines of code, resulting in more complexity.
In order to considerably reduce complexity, resources, cost, and time, professionals
and academics are rigorously searching for inventive solutions. Newer concepts,
process optimizations, best practices, fresh programming models, state-of-the-art
platforms, design patterns and metrics, and advanced tools are being increasingly
unearthed and utilized to lessen the software-development workload. Researchers
are continuously at work to
34 P. Raj et al.
discover competent and compact methods and mechanisms for simplifying and
streamlining the increasingly multifaceted tasks of constructing and maintaining
next-generation software systems. The major benefits of agile methodology over
traditional methods are:
• Faster time to market
• Quick return on investment
• Shorter release cycles
• Better adaptability and responsiveness to changing business requirements
• Early detection of failure and immediate correction
There are several agile development methods, such as Scrum, extreme programming,
test-driven development, and lean software development [4]. With agile models,
businesses expect services and solutions to be delivered incrementally, earlier
rather than later, and delivery cycle times to come down sharply; a single
delivery cycle typically takes 2 to 4 weeks. However, in the midst of these
turnarounds, a number of critical challenges arise, as mentioned below:
• High effort and cost involved in setting up infrastructures
• Lack of skilled resources
• Lack of ability to build applications from multiple places across the globe
There are a few popular cloud platforms available in order to enable software
development in cloud environments. Google App Engine, salesforce.com, cloud-
foundry.org, cloudbees.com, corenttech.com, heroku.com, windowsazure.com, etc.,
are the leading platforms for cloud-based application development, scaling, and
sustainability.
CollabNet (http://www.collab.net/), a product firm enabling software development
on cloud-based platforms, expounds on the seamless convergence of agile
programming models, application lifecycle management (ALM) products, and clouds
as a precise and decisive answer to perpetual software engineering challenges,
changes, and concerns. It convincingly argues that cloud technologies reduce
development barriers by providing benefits in the following critical areas:
• Availability – Code is centralized and infrastructure is scalable and available on
demand.
• Access – Ensures flexible access to test environments and transparency to project
data for the entire team.
• Overhead – Reduced support overhead, no upgrade latency – teams use an on-
demand model to get what they need, quickly and easily.
Agile processes set the strong and stimulating foundation for distributed teams to
work closely together with all the right and relevant stakeholders to better anticipate
and respond to user expectations. Agile teams today are empowered to clearly
communicate with users to act and react expediently to their feedback. That is, they
are able to collaboratively and cleverly iterate toward the desired state and user
satisfaction. Cloud intrinsically facilitates open collaboration across geographies
and time zones with little investment or risk. With more and more development and
test activities moving toward clouds, organizations are able to save time and money
by using virtual and shared resources on an as-needed basis. Developers can save
time by leaving configuration, upgrades, and maintenance to cloud providers, who
usually employ highly educated and experienced people. Anytime, anywhere access is
facilitated for those with proper authentication and authorization, and assets are
completely centralized and controlled.
Agile and cloud are being positioned together and prescribed as a powerful and
pathbreaking combination for the software-development community. This might
seem counterintuitive to those entrenched in waterfall processes or those comfort-
able with the idea of a daily stand-up and colocated teams. The reality is altogether
different. That is, there are a number of technical and business cases emerging for
using the agile methods in the cloud. The agility concepts make development
teams responsive to the changing needs of businesses and empower them to be
adaptable and flexible. Further on, proven agile processes help to break down all
sorts of barriers and blockages between development and production, allowing
teams to work together to concentrate on meeting stakeholder expectations. The
synchronization of the agile and cloud paradigms frees developers from all kinds
of difficulties, enabling them to achieve more with less, to innovate fast, and to
ultimately bring value to the business.
The service idea has matured and stabilized as the dominant approach for designing,
developing, and delivering open, sustainable, and interoperable service-oriented
systems for enterprise, Web, embedded, and cloud spaces. Even many of the modules
of packaged business software solutions are being modified and presented as services.
Services are publicly discoverable and accessible, reusable, and composable
modules for building distinct and specific applications through configuration and
customization; through runtime matching, selection, and usage of distributed,
disparate, and decentralized services; through the replacement of existing service
components with new, advanced ones; and through service orchestration. Services as
process elements support and sustain process-oriented systems, which are generally
more flexible. That is, operating and controlling software solutions at the
process level considerably reduces the software development, management, and
maintenance tasks.
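As an illustration of building an application by configuring and composing
replaceable service components, consider the following Python sketch. All class
and function names here are invented for illustration and do not belong to any
real service platform.

```python
# A minimal sketch of service composition: an application is assembled
# from interchangeable service components that share a common interface,
# so one implementation can be substituted for another at runtime.

class TaxService:
    """A locally owned service component."""
    def apply(self, amount):
        return amount * 1.10  # flat 10% tax

class DiscountService:
    """A third-party service subscribed from a provider."""
    def apply(self, amount):
        return amount * 0.95  # 5% discount

def compose(*services):
    """Orchestrate services into a simple linear pipeline."""
    def application(amount):
        for service in services:
            amount = service.apply(amount)
        return amount
    return application

# Build an application purely by configuration (the ordering of services),
# not by writing new code; swapping a component needs no code change either.
checkout = compose(DiscountService(), TaxService())
print(round(checkout(100.0), 2))  # discounted, then taxed
```

Because every component honors the same `apply` interface, substituting an
advanced service component is a configuration change rather than a code change,
which is the essence of the composition-and-customization style described above.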
Thus, the process propensity of the service paradigm and cloud-centric service-
oriented infrastructures and platforms bring a number of distinct advantages for
software engineering. Services and cloud computing have garnered much attention
from both industry and academia because they enable the rapid and radical devel-
opment of enterprise-scale, mission-critical, high-performance, dynamic, and dis-
tributed applications. Agility, adaptivity, and affordability, the prime characteristics
of next-generation software systems, can be realized with the smart leverage of
processes, services, and cloud platforms. Precisely speaking, the service paradigm
is to energize futuristic software design, whereas cloud platforms are being tipped
and touted as the next-generation service-centric platforms for service development,
deployment, management, and delivery.
Service-Oriented Software Development – This is set to see a number of decisive
shifts with the adoption of cloud platforms. The smooth and seamless convergence
of services and clouds promises bright days for the software-development
community. Of course, there are a few challenges that need utmost attention from
scholars, scientists, and students. Security, visibility, controllability,
performance, availability, usability, etc., need to be addressed in order to
fast-track service-based software implementation in clouds.
As widely pronounced, services are being positioned as the most flexible and
fertile components for software production. That is, software solutions are made
of interoperable services. It is all about the dynamic discovery of, and purposeful
interactions among, a variety of services that are local or remote, business- or
IT-centric, and owned or subscribed from third-party service providers. Services
are standards-compliant, self-describing, and autonomous entities, designed to
eliminate all kinds of dependencies and incompatibilities, to promote seamless and
spontaneous collaborations, and to share their capabilities and competencies with others
over networks. Process and workflow-based service compositions result in dynamic
applications that are highly portable. XML is the key data representation, exchange,
and persistence mechanism facilitating service interoperability. Policies are being
framed and encouraged in order to achieve automated service finding, binding, usage,
monitoring, and governance. The essence of service governance is to explicitly
establish pragmatic policies and enforce them stringently. With a consistent rise in
automation, there is a possibility for deviation and distraction, and hence the service
governance discipline is gaining a lot of ground these days.
As there is a clear distinction between service users and providers, service-level
agreement (SLA) and even operation-level agreement (OLA) are becoming vital for
service-centric business success and survival. Furthermore, there are several
geographically distributed providers offering identical or similar services, and
hence the SLA, which unambiguously describes the runtime requirements that govern
a service's interactions with different users, has become a deciding factor for service selection
and utilization. A service contract describes its interface and the associated con-
tractual obligations. Using standard protocols and respective interfaces, application
developers can dynamically search, discover, compose, test, verify, and execute
services in their applications at runtime. In a nutshell, SOA-based application devel-
opment is through service registration, discovery, assessment, and composition,
which primarily involves three stakeholders:
• A service provider is one who develops and hosts the service in cloud platforms.
• A service consumer is a person or program that finds and uses a service to build
an application.
• A service broker mediates between service providers and consumers. It is a
program or professional that helps providers publish their unique services and
guides consumers in identifying ideal services.
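The three stakeholder roles above can be sketched in a few lines of Python. The
registry structure, the SLA fields (`latency_ms`, `price`), and the provider names
are purely illustrative assumptions, not any real broker's API.

```python
# A toy service broker: providers register services with SLA metadata,
# and consumers ask the broker for the best matching provider.

class Broker:
    def __init__(self):
        self.registry = {}  # capability -> list of (provider, sla)

    def publish(self, capability, provider, sla):
        """Called by a service provider to register its offering."""
        self.registry.setdefault(capability, []).append((provider, sla))

    def discover(self, capability, max_latency_ms):
        """Called by a consumer: select the cheapest provider whose
        advertised SLA satisfies the latency requirement."""
        candidates = [
            (provider, sla)
            for provider, sla in self.registry.get(capability, [])
            if sla["latency_ms"] <= max_latency_ms
        ]
        if not candidates:
            return None
        return min(candidates, key=lambda c: c[1]["price"])[0]

broker = Broker()
broker.publish("payment", "FastPay", {"latency_ms": 50, "price": 0.05})
broker.publish("payment", "CheapPay", {"latency_ms": 200, "price": 0.01})

print(broker.discover("payment", max_latency_ms=100))  # FastPay
print(broker.discover("payment", max_latency_ms=500))  # CheapPay
```

The sketch shows why the SLA becomes the deciding factor for service selection:
with several providers offering the same capability, the consumer's non-functional
requirement, not the functional interface, determines which provider is bound.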
come to the rescue here by ensuring effective application delivery. Applications
can be affordably deployed and maintained in advanced cloud platforms. Application
capabilities can be provided as a service. All kinds of non-functional (quality of
service (QoS)) attributes are effortlessly accomplished with clouds. Anytime,
anywhere resource access is facilitated. Centralized monitoring and management are
remarkably simplified. That is, clouds, as the next-generation service-oriented
infrastructures (SOIs), have emerged at the right time to take the service idea to
greater heights. It is therefore no exaggeration to proclaim that the software
engineering field is greatly empowered by evolving cloud concepts.
Agile Service Networks (ASNs) [6, 7] – Cloud computing’s high flexibility needs
novel software engineering approaches and technologies to deliver agile, flexible,
scalable, yet secure software solutions with full technical and business gains. One
way is to allow applications to do the computing in cloud, and the other is to allow
users to integrate with the applications. Agile service networks (ASNs) are themselves
an emerging paradigm envisioning collaborative and dynamic service interactions
(network edges) among global service-oriented applications (network nodes). ASNs
can be used as a paradigm for software engineering in the cloud, since they are
indeed able to deliver solutions which are both compliant to the cloud’s needs and
able to harness it, bringing about its full potential.
Context adaptation is used in ASNs to achieve agility. The concept of ASN is
defined as a consequence of “late service binding.” In the context of services’ dyna-
mism, which is achieved through late service binding, ASNs become a perfect
example of how agility can be achieved in SOA systems. Adaptation is presented as
one of the main tenets of SOA. This paradigm regards highly dynamic systems
within a rapidly changing context to which applications must adapt. In this sense,
ASNs are used to exemplify industrial needs for adaptive, context-aware systems.
ASN Key Features – ASNs are dynamic entities. Dynamism is seen as an essential
part of the service interactions within collaborative industries (i.e., industrial value
networks). Dynamism in ASNs is the trigger to service rearrangement and applica-
tion adaptation. For example, an ASN made of collaborative resource brokering
such as distributed stock markets is dynamic in the sense that different partners may
participate actively, others may be dynamically added while brokering is ongoing,
others may retire from the brokering process, and others may dynamically change
their business goals and hence their brokering strategy. ASNs are business-oriented:
they are born out of corporate business collaborations and represent complex
service applications interacting in a networked business scenario involving multiple
corporations or partners at different sites (i.e., different geo-locations). Within
ASNs, business value can be computed, analyzed, and maximized.
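The dynamism just described — partners joining, retiring, and changing strategy
while brokering is ongoing — can be sketched in a toy Python model. Everything
here (partner names, bid strategies) is invented for illustration and is not a
real ASN implementation.

```python
# A toy agile service network: partner nodes can join, retire, and change
# strategy between brokering rounds, and the network adapts each round.

class Partner:
    def __init__(self, name, bid_fn):
        self.name = name
        self.bid = bid_fn  # the partner's current brokering strategy

class ServiceNetwork:
    def __init__(self):
        self.partners = []

    def join(self, partner):
        self.partners.append(partner)

    def retire(self, name):
        self.partners = [p for p in self.partners if p.name != name]

    def broker(self, item):
        """One brokering round over whoever is currently in the network."""
        if not self.partners:
            return None
        best = max(self.partners, key=lambda p: p.bid(item))
        return best.name

net = ServiceNetwork()
net.join(Partner("A", lambda item: 10))
net.join(Partner("B", lambda item: 12))
print(net.broker("stock-X"))   # B wins this round

net.retire("B")                # B leaves while brokering is ongoing
net.join(Partner("C", lambda item: 11))
print(net.broker("stock-X"))   # the network adapts: C wins now
```

The point of the sketch is that no application code changes when the network's
membership changes; the adaptation happens in the interaction topology, which is
exactly where ASNs locate agility.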
Cloud-Induced Software Engineering Challenges – As widely reported, there
are some important concerns with public clouds. Security, controllability, visibility,
performance, and availability are the major issues. Virtualization, the central
technology for the massive uptake and incontestable success of the cloud idea, has
introduced new security holes. Typically, public clouds accommodate several
customers in order to be economical, and there are real dangers and risks in a
shared environment. If a cloud is not available for a few minutes, the resulting
Modeling has been a fundamental and foundational activity for ages. Before a
complex system is built, a model of the system is created, as it can throw some
light on the system's final structure and behavior. Models can extract and expose
hidden risks and lacunae in system functioning and give designers and developers
the confidence to plan and proceed, obviating all kinds of barriers. Models give
an overall understanding of the system to be
built. In short, models decompose the system into a collection of smaller and man-
ageable chunks in order to empower engineers to have a firm grip and grasp of the
system under implementation. Modeling is one of the prominent and dominant
complexity-mitigation techniques as systems across domains are fast-growing in
complexity.
As IT systems grow in complexity, formal models are presented as the next-
generation abstraction and encapsulation unit for them. In the recent past, models
have been used as building blocks for constructing portable, sustainable, and
flexible IT systems. Models are created digitally, stored, refined, and revitalized
as per the changing needs. There are formats such as XML Metadata Interchange (XMI) for
changing needs. There are formats such as XML Metadata Interchange (XMI) for
exporting models over the network or any other media to other systems as inputs for
further processing. There are unified and visual languages and standardized notations
emerging and energizing compact and formal model representation, persistence,
manipulation, and exchange. Product vendors and open source software developers
have come out with innumerable software tools for facilitating model creation,
transformation, verification, validation, and exporting. For object orientation,
the unified modeling language (UML) has been the standard for defining and
describing models for various constructs and activities. For component-based
assembly and service-oriented programming, UML profiles have been created in order
to keep UML as the modeling language for software engineering. Further on, there
are process modeling and execution languages such as BPML and BPEL, and notations
such as BPMN, for developing process-centric applications. That is, process models
act as the building blocks for system engineering.
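To make the model-driven idea concrete, here is a deliberately tiny sketch of
generating executable code from a declarative model. The model schema (a plain
dictionary standing in for an XMI/UML model) and the generator are invented for
illustration and are not part of any real MDSE toolchain.

```python
# A minimal model-to-code transformation in the spirit of MDSE:
# a platform-independent model is transformed into platform-specific
# Python source, so the model (not hand-written code) is the primary artifact.

model = {
    "entity": "Customer",
    "attributes": [("name", "str"), ("email", "str"), ("credit", "float")],
}

def generate_class(model):
    """Transform the model into Python class source code."""
    lines = [f"class {model['entity']}:"]
    params = ", ".join(f"{a}: {t}" for a, t in model["attributes"])
    lines.append(f"    def __init__(self, {params}):")
    for attr, _ in model["attributes"]:
        lines.append(f"        self.{attr} = {attr}")
    return "\n".join(lines)

source = generate_class(model)
print(source)

# The generated source is executable immediately.
namespace = {}
exec(source, namespace)
customer = namespace["Customer"]("Ada", "[email protected]", 100.0)
print(customer.name)  # Ada
```

Changing the model (e.g., adding an attribute) regenerates the code, which is the
productivity argument behind model-driven software engineering: evolution happens
at the model level, and the transformation keeps the implementation in sync.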
Model-driven architecture (MDA) is the associated application architecture.
Model-driven software engineering (MDSE) is being presented as a dynamic and
promising method for application engineering. Emerging and evolving MDSE
techniques can automate the development of new cloud applications program-
matically. Typically, cloud applications are a seamless union of several unique
services running on different IT platforms. That is, for producing competent cloud
applications, all the right and relevant services from diverse and geographically
distributed servers have to be meticulously found, bound, and linked up in order to
build and sustain modular (loosely coupled and highly cohesive) cloud applications.
Precisely speaking, services have been overwhelmingly accepted as the most
productive and pliable building block for realizing adaptive, mission-critical, and
enterprise-scale applications.
Today, not only development processes but also environments have to be very
agile [10] and anticipative as software development becomes more sophisticated.
Cloud-induced agile IT environments are being presented as the viable and valuable
nor the exact amount of effort can be accurately finalized upfront. Even the end
users themselves cannot fully articulate what they need. Hence, the requirements
must be collaboratively discovered, analyzed, and finalized. Agile processes [14]
involve building software in small segments, testing those segments, and then
getting end-user feedback. The aim is to create a rapid feedback loop between the
developers and the actual users.
Lean agile development methodologies and the cloud model complement each
other very well. Cloud services take pride in meeting user requirements rapidly,
delivering applications whenever and to whatever extent they are needed. Agile
methods give high credence to user collaboration in requirements discovery.
The lean agile system of software development aims to break down project require-
ments into small and achievable segments. This approach guarantees user feedback
on every task of the project. Segments can be planned, developed, and tested
individually to maintain high-quality standards without any major bottlenecks. The
development stage of every component thus becomes a single “iteration” process.
Moreover, lean agile software methods place huge emphasis on developing a
collaborative relationship between application developers and end users. The entire
development process is transparent to the end user, feedback is sought at all
stages of development, and the needed changes are made accordingly then and there.
Using lean agile development in conjunction with the cloud paradigm provides a
highly interactive and collaborative environment. The moment developers finalize
a feature, they can push it as a cloud service; users can review it instantly and
provide valuable feedback. Thus, a lengthy feedback cycle can be eliminated
thereby reducing the probability of misstated or misunderstood requirements. This
considerably curtails the time and efforts for the software development organization
while increasing end-user satisfaction. Following the lean agile approach of
demand-driven production, end users’ needs are integrated in a more cohesive and
efficient manner with software delivery as cloud services. This approach stimulates
and sustains a good amount of innovation, requirement discovery, and validation in
cloud computing.
concerns of large and scalable SaaS applications. While building applications for
our clients, developers had to address multitenancy, data management, security,
scalability, caching, and many other features. Many of the most successful SaaS
companies had themselves built their own platforms and frameworks to address
their specific applications and cost needs. Companies like Salesforce and NetSuite,
first and foremost, built platforms to meet their application needs and lower delivery
costs, rather than building them to be sold as a platform as a service (PaaS).
Release of SaaS application platforms by companies like Salesforce has not
made a significant difference in the development and delivery of commercial
SaaS applications. Currently, many PaaS/SaaS platforms on the market are suitable
for the development of only small situational applications, rather than the
commercial business applications that are of interest to startups, independent
software vendors (ISVs), and enterprises. These platforms use proprietary
languages, are tied to specific hardware/software infrastructures, and do not
provide the right abstractions for developers. Alice was developed to address the
above concerns and provide
a robust and open platform for the rapid development of scalable cloud services
applications. Figure 2.1 illustrates the reference architecture of the Alice Platform
for SaaS application development and delivery.
With the coherent participation of cloud service providers, software development
complexity is set to climb further [3]. In the ensuing cloud era, the software
development process will start to involve heterogeneous platforms, distributed
services, and multiple enterprises geographically dispersed all over the world.
Existing software process models are simply insufficient unless remote interaction
with cloud providers is part and parcel of the whole process. The requirements gathering phase
so far included customers, end users, and software engineers. Now it has to include
cloud service providers (CSPs) as well, as they will be supplying the computing
infrastructure, software development, management, maintenance platforms, etc.
As the cloud providers are only conversant with the infrastructure utilization details,
their experts can do the capacity planning, risk management, configuration manage-
ment, quality assurance, etc., well. Similarly, analysis and design activities should
also include CSPs, who can chip in with some decision-enabling details such as
software-development cost, schedule, resource, and time.
Development and debugging can be done on cloud platforms. There is a huge
cost benefit for individuals, innovators, and institutions. This will reduce the cost
and time for verification and validation. Software developers should gain more of
the right and relevant expertise in building software from readily available
components rather than writing them from scratch. Monolithic applications have
been shunted out, and modular applications are the future. Revisiting and
refactoring existing applications is required to utilize the cloud paradigm in a
cost-effective manner. In the recent past, computers have been fitted with
multicore processors. Another trend is that computers are interconnected with one
another as well as with the Web. Computers are becoming communicators and vice
versa. Computers are multifaceted, networked, and shrinking in size, whereas the
scope of computing is growing. Therefore,
software engineers should train themselves in parallel and distributed computing
to complement the unprecedented and inescapable advances in hardware and
networking. Software engineers should train themselves in Web protocols, XML,
service orientation, etc. The Web is on a growth trajectory: it started as the
simple Web (Web 1.0); today it is the social Web (Web 2.0) and the semantic Web
(Web 3.0) attracting the attention of professionals as well as the public; tomorrow
it will definitely be the smart Web (Web 4.0). The cloud proposition is on the fast
track, and thereby there will be a close synchronization between the enlarging Web
concepts and the cloud idea.
Cloud providers also have the appropriate infrastructure and methods in hand for
application maintenance [14]. A service-level agreement (SLA) is established as a
contract between cloud users (in this case, software engineers) and cloud
providers. In particular, advanced cloud infrastructures ensure non-functional
(scalability, availability, security, sustainability, etc.) requirements. Other
serious challenges also confront cloud-based software development. As we see, the
development of software in a cloud environment is multilateral, unlike collocated
and conventional application software development.
The difference between these two radical approaches presents some of the noticeable
challenges to software engineering:
• Software Composition – Traditionally, application software engineers develop a
set of coherent and cohesive modules and assemble them to form an application,
whereas in the fast-expanding cloud landscape, finding and composing third-
party software components is a real challenge.
• Query-Oriented Versus API-Oriented Programming – MapReduce, streaming, and
complex event processing require developers to adopt a more functional,
query-oriented style of processing to derive information. Rather than a large
surface area of OO APIs, these systems use an extension of SQL-like operations
where clients pass in application-specific functions that are executed against
the associated data sources. Doing complex join queries or function composition,
as in MapReduce, is a difficult proposition.
• Availability of Source Code – In the traditional scene, the full source code is
available. However, in multilateral software development, source code is
unavailable for third-party components. Therefore, the challenge for software
engineers is the complete comprehension of the system.
• Execution Model – Application software is generally executed on a single
machine, whereas the multilateral software developed for a cloud environment is
often distributed across multiple machines. Therefore, the challenge for software
engineers is the traceability of the state of the executing entity and debugging.
• Application Management – The usual challenges arise whenever there is an attempt
to embrace newer technologies. Application lifecycle management (ALM) is quite
straightforward in the traditional setting, whereas globally collaborative and
cloud-based application management is beset with definite concerns and challenges.
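The query-oriented, MapReduce-style programming mentioned in the list above can be
illustrated with a minimal in-memory word count. This is only a sketch of the
programming model: real frameworks such as Hadoop distribute the map and reduce
phases across machines.

```python
# A minimal in-memory illustration of the MapReduce programming style:
# the client supplies application-specific map and reduce functions,
# and the framework (here, a few lines of Python) drives the phases.

from collections import defaultdict

def map_reduce(records, mapper, reducer):
    # Map phase: emit (key, value) pairs from each input record.
    groups = defaultdict(list)
    for record in records:
        for key, value in mapper(record):
            groups[key].append(value)
    # Reduce phase: fold each key's values into a single result.
    return {key: reducer(key, values) for key, values in groups.items()}

# Application-specific functions passed in by the client.
def word_mapper(line):
    return [(word, 1) for word in line.split()]

def count_reducer(word, counts):
    return sum(counts)

lines = ["the cloud", "the agile cloud"]
print(map_reduce(lines, word_mapper, count_reducer))
# {'the': 2, 'cloud': 2, 'agile': 1}
```

Note the inversion relative to API-oriented programming: the developer writes
small pure functions over the data and hands them to the system, rather than
calling a large object-oriented API, which is exactly the stylistic shift the
bullet above describes.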
The need of the hour, to make cloud concepts beneficial to all sections of the
world, is to activate a culture of innovation, so that a stream of inventive
approaches can be unearthed to reinvigorate the struggling software engineering
domain. Here is one: Radha Guha [2] has come out with an improved cost estimation
model for cloud-based software development.
2.11 Conclusion
Nowadays, for most business systems, software is a key enabler of their business
processes. The software availability and stability directly impact the company’s
revenue and customer satisfaction. Software development is therefore a critical
activity. Software development is undergoing a series of key changes. A growing
number of independent software vendors (ISVs) and system integrators (SIs) trans-
form themselves into service providers delivering their customers’ and partners’
applications in the form of services hosted in the cloud.
Cloud technology can reduce the time needed to develop business services and to
take them to market. Each additional month or quarter in which the cloud services
are accessible to users has a direct impact on increasing revenues, which affects
the final financial statements. The speed at which software applications can be
developed, tested, and brought into production is definitely one of the critical
success factors for many companies. Therefore, any solution accelerating the
application's time to market has an immediate and measurable impact on return on
investment (ROI).
Application developers are regularly confronted with a request to establish
special environments for developing, debugging, and compiling appropriate soft-
ware libraries for making software solutions. Typically, these environments are
established for a limited period of time. Accessing appropriately configured
development environments with an adequate processing power and storage space
on demand is very crucial for software engineering. To perform their tasks, the
programmers should be able to quickly configure servers, storage, and network
connections. Here comes the significance of cloud environments for taking soft-
ware to market quickly. In this chapter, we primarily discussed the pathbreaking
contributions of cloud infrastructures for realizing sophisticated and smart services
and applications.
References
1. Yara, P., Ramachandran, R., Balasubramanian, G., Muthuswamy, K., Chandrasekar, D.: Global
software development with cloud platforms. In: Software Engineering Approaches for
Offshore and Outsourced Development. Lecture Notes in Business Information Processing,
vol. 35, pp. 81–95. http://link.springer.com/chapter/10.1007/978-3-642-02987-5_10 (2009)
2. Guha, R.: Software engineering on semantic web and cloud computing platform. http://www.
cs.pitt.edu/~chang/231/y11/papers/cloudSE (2011). Accessed 24 Oct 2012
3. Chhabra, B., Verma, D., Taneja, B.: Software engineering issues from the cloud application
perspective. Int. J. Inf. Technol. Knowl. Manage. 2(2), 669–673 (2010)
4. Kuusela, R., Huomo, T., Korkala, M.: Lean Thinking Principles for Cloud Software
Development. VTT www.vtt.fi. A Research Summary of VTT Technical Research Centre of
Finland (2010)
5. Hashmi, S.I.: Using the cloud to facilitate global software development challenges. In: Sixth
IEEE International Conference on Global Software Engineering Workshops, 15–18 Aug 2011,
pp. 70–77. IEEE XPlore Digital Library, IEEE, Piscataway (2011)
6. Tamburri, D.A., Lago, P.: Satisfying cloud computing requirements with agile service networks.
In: IEEE World Congress on Services, 4–9 July 2011, pp. 501–506. IEEE XPlore Digital
Library, IEEE, Los Alamitos (2011)
7. Carroll, N., et al.: The discovery of agile service networks through the use of social network
analysis. In: International Conference on Service Sciences. IEEE Computer Society, IEEE,
Washington, DC (2010)
8. Bruneli’ere, H., Cabot, J., Jouault, F.: Combining model-driven engineering and cloud com-
puting. http://jordicabot.com/papers/MDE4Service10.pdf (2010). Accessed 24 Oct 2012
9. Becker, S., Tichy, M.: Towards model-driven evolution of performance critical business infor-
mation systems to cloud computing architectures. In: MMSM. http://www.cse.chalmers.
se/~tichy/2012/MMSM2012.pdf (2012). Accessed 24 Oct 2012
2 Envisioning the Cloud-Induced Transformations in the Software Engineering… 53
10. Dumbre, A., Senthil, S.P., Ghag, S.S.: Practicing Agile Software Development on the Windows
Azure Platform. White paper by Infosys Ltd., Bangalore. http://www.infosys.com/cloud/
resource-center/documents/practicing-agile-software-development.pdf (2011) Accessed 24
Oct 2012
11. Lean Software Development – Cutting Fat Out of Your Diet. A White Paper by Architech solutions.
http://www.architech.ca/wp-content/uploads/2010/07/Lean-Software-Development-Cutting-
Fat-Out-of-Your-Diet.pdf. Accessed 24 Oct 2012
12. Tripathi, N.: Practices of lean software development. http://cswf.wikispaces.com/file/view/Pra
ctices+in+Lean+Software+Development.pdf (2011). Accessed 24 Oct 2012
13. Talreja, Y.: Lean Agile methodologies accentuate benefits of cloud computing. http://www.
the-technology-gurus.com/yahoo_site_admin/assets/docs/LACC_white_paper_ed_
v5.320180428.pdf (2010). Accessed 24 Oct 2012
14. Das, D., Vaidya, K.: An Agile Process Framework for Cloud Application. A White Paper
by CSC. http://assets1.csc.com/lef/downloads/CSC_Papers_2011_Agile_Process_Framework.
pdf (2011). Accessed 24 Oct 2012
15. Alice Software as a Service(SaaS) Delivery Platform. A Whitepaper by Ekartha, Inc.
http://www.ekartha.com/resources/Alice_saas_delivery_platform.pdf. Accessed 24 Oct 2012
Chapter 3
Limitations and Challenges in Cloud-Based
Applications Development
Abstract Organisations and enterprise firms, from banks to the social Web, are
considering developing and deploying applications on the cloud because of the
benefits it offers, including cost effectiveness, scalability and theoretically
unlimited computing resources. Many experts have predicted that centralising
computation and storage by renting them from a third-party provider is the way
of the future. Before jumping to conclusions, however, engineers and technology
officers must weigh the advantages of cloud-based applications against their
concerns, challenges and limitations. Decisions must also involve choosing the
right service model and knowing the disadvantages and limitations pertaining to
that particular service model. Although cloud applications offer benefits galore,
organisations and developers have raised concerns over security and reliability.
The idea of handing important data over to another company certainly raises
security and confidentiality worries. This does not imply that cloud applications
are insecure and flawed, but rather that they require more attention to
cloud-related issues than conventional on-premise approaches. The objective of
this chapter is to introduce the reader to the challenges of cloud application
development and to present ways in which these challenges can be overcome. The
chapter also discusses the issues with respect to different service models and
examines the challenges from the application developer's perspective.
Keywords Challenges in the cloud • Vendor lock-in • Security in the cloud • SLA
• Cost limitation • Traceability issue • Transparency in the cloud
Z. Mahmood and S. Saeed (eds.), Software Engineering Frameworks for the Cloud 55
Computing Paradigm, Computer Communications and Networks,
DOI 10.1007/978-1-4471-5031-2_3, © Springer-Verlag London 2013
56 N. Pramod et al.
3.1 Introduction
Measured service: Cloud systems automatically control and optimise resource usage
by leveraging a metering capability at some level of abstraction that is appropriate
to the type of service used (e.g. storage, processing, bandwidth and active user
accounts). Resource usage can be monitored, controlled and reported, providing
transparency for both the provider and consumer of the utilised service.
There are three generally agreed cloud service delivery models [4]:
• SaaS – software as a service: Refers to providing on-demand applications over
the Internet.
• PaaS – platform as a service: Refers to providing platform layer resources,
including operating system support and software development frameworks.
• IaaS – infrastructure as a service: Refers to on-demand provisioning of infra-
structural resources, usually in terms of VMs. A cloud owner that offers IaaS is
called an IaaS provider [5].
Newer terminologies such as DaaS (Data as a Service) [6] have also emerged, but
their applicability and use cases remain open questions. In a traditional IT
deployment, all resources are under the control of a particular organisation.
This is no longer true in cloud-based development: each cloud service model
offers the consumer control over a different set of resources. Figure 3.1 depicts
a generic view of the accessibility and control of resources under the IaaS,
PaaS and SaaS service models.
3.2 Challenges
All Web service architectures have security issues, and a cloud application can
be viewed as another form of Web service with similar security loopholes.
Organisations that are keen on moving their in-house
Fig. 3.1 Consumer and vendor controls in cloud service models [24]
applications to the cloud must consider how application security behaves in a
cloud environment. Well-known security issues such as data loss and phishing
pose serious threats to an organisation's data and software. In addition, other
security issues arise from the third-party dependency inherent in cloud
application development and deployment. Even from a naive point of view, it
looks daunting to put an organisation's critical and confidential data and
software onto a third party's CPUs and storage. The multi-tenancy model and the
pooled computing resources of cloud computing have introduced new security
challenges that require novel techniques to tackle [7].
One of the top cloud application security issues is lack of control over the
computing infrastructure. An enterprise moving a legacy application to a cloud
computing environment gives up control over the networking infrastructure,
including servers, access to logs and incident response. Most applications are
built to run in the context of an enterprise data centre, so the way they store
data, and the way they transmit it to other systems, is assumed to be trusted
and secure. This is no longer true in a cloud environment: components that were
traditionally trusted and assumed to run in a safe environment now run in an
untrusted one. Many more issues, such as the Web interface, data storage and
data transfer, must be considered when making security assessments. The
flexibility, openness and public availability of cloud computing infrastructures
challenge many fundamental assumptions about application security. The lack of
physical control over the networking infrastructure might mandate encryption of
the communication between the servers of an application that processes sensitive
data, to ensure its confidentiality. Risks that a company may have accepted when
the application was in-house must be reconsidered when moving to a cloud
environment.
Ex. 1
If an application logs sensitive data to a file on an on-premise server without
encrypting it, a company might accept that risk because it owns the hardware.
That acceptance is no longer safe in a cloud environment: there is no fixed file
system on which the application log will reside, because the application
executes on different virtual machines, potentially on different physical
machines, depending on scale. Logging therefore goes to some shared storage
array, and hence the need to encrypt it arises. The security threat model takes
a different shape on the cloud; many vulnerabilities that were previously rated
low are now high and must be fixed.
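The pattern described above can be sketched in code. The following is a minimal,
illustrative Python logging handler that encrypts every record before it is
written, so that nothing reaches the shared storage array in plaintext. The
cipher shown is a toy HMAC-based keystream, a stand-in only; a real deployment
would use an authenticated cipher such as AES-GCM, with keys held on-premise or
in a key management service. All names here are hypothetical.

```python
import hmac
import hashlib
import logging

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Toy stream cipher: an HMAC-SHA256 keystream XORed with the data.
    # Stand-in only -- production systems should use AES-GCM or similar,
    # not this sketch. XOR is symmetric, so the same call decrypts.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hmac.new(key, nonce + counter.to_bytes(8, "big"),
                         hashlib.sha256).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

class EncryptingHandler(logging.Handler):
    """Encrypts every formatted log record before it is written, so the
    shared storage array never sees the sensitive data in the clear."""
    def __init__(self, stream, key: bytes):
        super().__init__()
        self.stream = stream
        self.key = key
        self.seq = 0  # per-record nonce counter

    def emit(self, record):
        nonce = self.seq.to_bytes(8, "big")
        self.seq += 1
        ciphertext = keystream_xor(self.key, nonce,
                                   self.format(record).encode())
        # Store nonce and ciphertext as hex; only the key holder can decrypt.
        self.stream.write(nonce.hex() + ":" + ciphertext.hex() + "\n")
```

Attaching this handler to an application logger is a one-line change, which is
what makes the fix tractable when the threat model shifts on the cloud.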
Ex. 2
A company hosting an application in its own data centre might ward off a
denial-of-service attack with dedicated infrastructure, or take actions such as
blocking the attacking IP addresses. In the cloud, mitigation of attacks is
handled by the provider, so the consumer or the organisation hosting the
application must reassess how such risks and attacks are mitigated, since it has
neither control nor visibility.
3.2.2 Control
curl -i \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -d '{
        "credentials": {
          "username": "my_Rackspace_username",
          "key": "12005700-0300-0010-1000-654000008923"
        }
      }' \
  https://auth.api.rackspacecloud.com/v1.1/auth
where:
username – the assigned username for which the authentication request is being sent
key – the API key provided to access the cloud service
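The same request can of course be issued programmatically rather than through
curl. The sketch below, using only the Python standard library, builds (but does
not send) the authentication request shown above; the endpoint URL and field
names are taken directly from the example, while the function name is our own.

```python
import json
from urllib import request

# Legacy Rackspace v1.1 authentication endpoint, as in the curl example.
AUTH_URL = "https://auth.api.rackspacecloud.com/v1.1/auth"

def build_auth_request(username: str, api_key: str) -> request.Request:
    """Builds the same authentication request as the curl example:
    a POST with a JSON body carrying the username and API key."""
    body = json.dumps({"credentials": {"username": username,
                                       "key": api_key}})
    return request.Request(
        AUTH_URL,
        data=body.encode(),
        headers={"Content-Type": "application/json",
                 "Accept": "application/json"},
        method="POST",
    )

# Sending it is then one line -- urllib.request.urlopen(req) -- and the
# response carries the token used in subsequent API calls.
```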
If, for instance, the consumer wishes to introduce another layer of
authentication, the cloud provider cannot accommodate it, because the API is not
designed to provide such a facility. This extends beyond authentication to all
the APIs used for various purposes during cloud application development. It
hinders access and limits any tweaking that could make the application function
better or help the organisation curb cost [9]. As a further security concern,
the ability to limit access to certain confidential data is eventually lost,
since the data is still available in some form or other at the service provider,
posing a serious threat to confidentiality.
3.2.3 Reliability
For the cloud, reliability is broadly a function of the reliability of three individual
components:
• The hardware and software facilities offered by providers: The hardware
(applicable to the SaaS, PaaS and IaaS models) and software (applicable to the
SaaS and PaaS models) provided by the service provider, though not completely in
the consumer's control, are a major reliability factor, since a low-performing,
low-quality setup can lead to failure. This also determines the availability of
the application: less hardware failure and faster recovery from failure make the
cloud application more reliable.
• The provider's personnel and the consumer's personnel: The personnel
interacting with the cloud service and the application may also cause
reliability issues. For example, if an employee accesses resources for purposes
other than those assigned, this can lead to failure at crunch time; likewise, if
maintenance of the systems is not undertaken regularly or is ignored, failure
can result.
• Connectivity to subscribed services: The network resources connecting the
cloud service provider and the organisation are also accountable for the
reliability of the system.
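The connectivity component above deserves particular attention on the consumer
side, because network failures are often transient. A common defensive pattern
is to retry a cloud call with exponential backoff and jitter; the sketch below
illustrates it under the assumption (ours, for this example) that transient
errors surface as ConnectionError.

```python
import time
import random

def call_with_retries(operation, attempts=4, base_delay=0.5, sleep=time.sleep):
    """Retries a flaky cloud call with exponential backoff and jitter.
    `operation` is any zero-argument callable; transient errors are
    assumed to raise ConnectionError (an assumption of this sketch)."""
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # give up: the failure is evidently not transient
            # Back off 0.5 s, 1 s, 2 s, ... plus jitter so that many
            # clients retrying at once do not stampede the service.
            sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```

The `sleep` parameter is injected only so the behaviour can be exercised without
real delays; in production the default is used.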
Many suggestions on how to adopt trust models have been proposed; one such is
tabulated in Table 3.1: "Summary of cloud service providers, properties,
resource access and key challenges over different cloud service models" [10].
Table 3.1 Summary of cloud service providers, properties, resource access and
key challenges over different cloud service models

SaaS
• Providers: NetSuite (enterprise resource planning, ERP), Taleo (human
resources), SalesForce (customer relationship management, CRM), Google (Google
Docs, online office suite), Microsoft (Office Live, Dynamics Live CRM)
• Properties: Web interface; no installation required; shared software, i.e.
used by many organisations; ownership is only of data; pay as you use
• Access to resources: SaaS consumers have access only to the software provided
as a service; no control over tuning the software, operating system or hardware
resources
• Key challenges: credential management on the cloud; usage and accountability;
traceability of data; data security; protection of API keys

PaaS
• Providers: Google App Engine, Microsoft Windows Azure, Force.com, AT&T
Synaptic
• Properties: platform for developing scalable applications; test, deploy, host
and maintain in the same environment; easy integration with Web services and
databases
• Access to resources: PaaS consumers have access to the application development
environment, e.g. the operating system; tools and libraries can be used to build
applications over the given platform; no control over hardware resources and no
choice of platform, i.e. no tuning or changing of the operating system
• Key challenges: privacy control; traceability of both application and data;
maintenance of audit trail; protecting API keys; application security

IaaS
• Providers: Amazon EC2, IBM, HP, Rackspace, Eucalyptus, Cisco, Joyent
• Properties: virtual machines are offered to consumers; freedom of choice of
operating system
• Access to resources: IaaS consumers have access to the virtual machine
instance, which can be configured to suit the operating system image and the
applications running over it; no control over the hardware resources, i.e. the
physical resources such as the choice of processor and the size and capacity of
memory on each machine
• Key challenges: governance; data encryption, especially in the case of storage
services; API key protection
In the cloud, control of the physical resources lies with the cloud provider,
and, hence, the responsibility for workload management, uptime and persistence
also falls on the provider. It is therefore important to understand the
provider's infrastructure and architecture, and the policies and processes
governing them. The assurances of uptime and availability must be considered
when choosing a provider. The compensation and backup measures that will apply
in case of failure of any kind must also be part of the agreement, thus taking
the reliability factors into account.
3.2.4 Transparency
and track the history and evolution of data in the cloud. Researchers at HP are
working on TrustCloud [14], a project launched to increase trust in cloud
computing via detective, file-centric approaches that increase data
traceability, accountability and transparency in the cloud. With an accurate
audit trail and a transparent view of data flow and history on the cloud, cloud
services are bound to become more reliable, and the consumer gains considerably
more control, which overcomes many of the potential challenges that hinder
growth and migration towards the cloud. Trust and adherence to best practices
are one way to overcome this challenge. Trust is developed over time by the
provider maintaining a clean track record with respect to the characteristics of
a particular cloud service. An organisation must look at the following aspects
before choosing a service provider:
• The history of the service provider
• The operational aspects apart from the ones mentioned in the service brochure,
for example, ‘Where are the data centres located?’ ‘Is the hardware maintenance
outsourced?’
• Additional tools, services and freedom offered to improve visibility and trace-
ability in the cloud environment
For example, users of IBM's cloud services can use the Tivoli management system
to manage their cloud and data centre services. TrustCloud is another example of
a tool which can be used to increase transparency.
3.2.5 Latency
In a stand-alone system, it matters a great deal where the data and other
resources are situated for computation. In conventional client–server
architecture, the application server is located as close to the client as
possible by means of data centres and CDNs (content delivery networks). On a
similar note, it matters where the cloud is situated: a cloud provider may have
plenty of Web bandwidth from a given data centre, but if that data centre is
thousands of miles away, developers will need to accommodate and program for
significant latency. Latency is generally measured as the round-trip time it
takes for a packet to reach a given destination and come back, usually using the
standard Linux program "ping". As an example, if the cloud application is an
email server, it is better to have the cloud situated nearby. Multimedia content
in the application can be handled by the services of CDNs, which invisibly bring
that content closer to the client.
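Because round-trip times fluctuate, a single ping is a poor basis for capacity
decisions; developers usually summarise a sample of measurements. The sketch
below (function name and percentile choice are ours, purely illustrative)
condenses a list of RTT samples into the kind of summary tools such as ping
report.

```python
import statistics

def summarise_rtt(samples_ms):
    """Summarises round-trip-time samples (in milliseconds) into
    min / median / p95 / max -- the tail (p95) often matters more for
    perceived application performance than the average."""
    ordered = sorted(samples_ms)
    # Nearest-rank 95th percentile; clamp for very small samples.
    p95_index = max(0, int(round(0.95 * len(ordered))) - 1)
    return {
        "min": ordered[0],
        "median": statistics.median(ordered),
        "p95": ordered[p95_index],
        "max": ordered[-1],
    }
```

A distant data centre typically shows up as a high median; congestion on a
shared WAN shows up as a long tail (p95 far above the median).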
Irrespective of the type of cloud service deployed, all cloud computing
initiatives have one thing in common: data is centralised whilst users are
distributed.
This means that if deployment is not planned carefully, there can be significant
issues due to the increased latency between the end users and their application servers.
All cloud services inherently use shared WANs, making packet delivery – specifi-
cally dropped or out of order IP packets during peak congestion – a constant prob-
lem in these environments. This results in packet retransmissions which, particularly
when compounded by increased latency, lower effective throughput and perceived
application performance.
Fortunately, in parallel with the cloud trend, WAN optimisation technology has
been evolving to overcome these challenges. WAN optimisation helps “clean up”
the cloud in real time by rebuilding lost packets and ensuring they are delivered in
the correct order, prioritising traffic whilst guaranteeing the necessary bandwidth,
using network acceleration to mitigate latency in long-distance environments and
de-duplicating data to avoid repetition. So with WAN optimisation, it is possible to
move the vast majority of applications into the cloud without having to worry about
geographic considerations [15].
It is important to differentiate between the cloud provider, the consumer and
the actual customer who uses the application. The consumer is a person or an
organisation that has access to cloud resources (depending on the service model,
the agreement and the application type). This organisation must analyse and
consider the trade-offs amongst computation, communication and integration.
Cloud applications can significantly reduce infrastructure cost, but they use
more network resources (data usage, bandwidth) and hence raise the cost of data
communication. The cost per unit of computing resource used is likely to be
higher, as more resources are consumed during the data exchange between the
cloud service and the organisation. This problem is particularly prominent if
the consumer uses the hybrid cloud deployment model, where the organisation's
data is distributed amongst a number of public, private (in-house IT
infrastructure) and community clouds. Notable and commonly used pricing models
in third-party systems are pay as you go and subscription pricing. In the
former, billing is based on usage statistics; in the latter, it is based on
fixed, agreed-upon prices.
Developers and architects should analyse the cloud provider's costing model and
choose the most suitable model according to the requirements. This includes
understanding the trade-offs each costing model entails; for example, in an IaaS
adoption scenario, one might consider a hybrid infrastructure in which sensitive
or frequently used large data and applications are part of a private cloud while
the rest is obtained as a third-party service. Every approach has pros and cons,
and the decision on costing must exploit the market options and the requirements
while noting this trade-off. Pay as you go can be useful when the requirements
are not well defined and the budget is limited, whereas subscription pricing is
useful when the requirements are long term and well defined.
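The pay-as-you-go versus subscription decision can be framed as a simple
break-even calculation, as sketched below. All figures and function names here
are illustrative, not real provider prices.

```python
def cheaper_pricing_model(expected_hours, payg_rate, subscription_fee):
    """Compares a hypothetical monthly pay-as-you-go cost against a flat
    subscription fee and returns the cheaper (model, cost) pair."""
    payg_cost = expected_hours * payg_rate
    if payg_cost < subscription_fee:
        return ("pay-as-you-go", payg_cost)
    return ("subscription", subscription_fee)

def break_even_hours(payg_rate, subscription_fee):
    """Usage level at which the two models cost the same; below this,
    pay as you go wins, above it, the subscription does."""
    return subscription_fee / payg_rate
```

With an illustrative rate of 0.25 per hour and a 100-per-month subscription,
break-even falls at 400 hours: light, ill-defined workloads favour pay as you
go, while sustained, well-defined workloads favour the subscription, exactly the
qualitative advice above.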
Data usage charges in conventional models are fairly straightforward, relating
to bandwidth and online space consumption. In the cloud, this no longer holds,
as the resources used differ at different points in time owing to the scalable
nature of the application. Hence, given the pool of resources available, cost
analysis is considerably more complicated. The cost estimate is now expressed in
terms of the number of instantiated virtual machines rather than physical
servers; that is, the instantiated VM has become the unit of cost. This resource
pool and its usage vary from service model to service model. For SaaS cloud
providers, the cost of developing scalability or multi-tenancy within their
offering can be very substantial. It includes the alteration or redesign of
software initially developed for a conventional model, performance and security
enhancements for concurrent user access (akin to the synchronisation and
read-write problem) and dealing with the complexities induced by these changes.
On the other hand, SaaS providers need to consider the trade-off between the
cost of providing multi-tenancy and the savings it yields, such as reduced
overhead through amortisation and a reduced number of on-site software licences.
Therefore, the charging model must be tailored strategically in order to
increase the profitability and sustainability of SaaS cloud providers [7].
A provider with better billing models and frameworks, which determine the usage
of a cloud service appropriately and accurately, should be preferred over the
rest. For example, Rackspace has a billing model which is efficient and at the same time
Although cloud consumers do not have control over the underlying computing
resources, they do need assurance of the quality, availability, reliability and
performance of these resources once they have moved their core business
functions onto the entrusted cloud. In other words, it is vital for consumers to
obtain guarantees from providers on service delivery. Typically, these are
provided through service-level agreements (SLAs) negotiated between providers
and consumers. The very first issue is defining SLA specifications at an
appropriate level of granularity, namely, balancing the trade-off between
expressiveness and complexity, so that they cover most consumer expectations and
are relatively simple to weight, verify, evaluate and enforce by the resource
allocation mechanism on the cloud. In addition, different cloud offerings (IaaS,
PaaS and SaaS) will need to define different SLA meta-specifications, which
raises a number of implementation problems for cloud providers. Furthermore,
advanced SLA mechanisms need to constantly incorporate user feedback and
customisation features into the SLA evaluation framework [16].
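At its simplest, verifying one common SLA clause, monthly availability, reduces
to arithmetic over recorded outage windows. The sketch below (function names are
ours; real SLAs also define exclusions and measurement methodology) shows why
even a "three nines" promise leaves only about 43 minutes of allowable downtime
per month.

```python
def uptime_percentage(total_minutes, downtime_minutes):
    """Availability over a period, given a list of outage durations
    (in minutes) recorded during that period."""
    return 100.0 * (total_minutes - sum(downtime_minutes)) / total_minutes

def sla_met(total_minutes, downtime_minutes, promised_pct):
    """Checks recorded outages against a promised availability figure,
    e.g. 99.9 for a 'three nines' monthly SLA."""
    return uptime_percentage(total_minutes, downtime_minutes) >= promised_pct
```

Consumers who keep such records independently of the provider are in a far
stronger position when claiming the compensation clauses of an SLA.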
The issue of vendor lock-in is a rising concern given the rapid development of
cloud technology. Currently, each cloud offering has its own way for cloud
consumers, applications and users to interact with the cloud. This severely
hinders the development of cloud ecosystems by forcing vendor lock-in, which
deprives cloud consumers of the ability to choose from alternative vendors and
offerings simultaneously, or to move from one vendor to another (migration), in
order to optimise resources at different levels within an organisation. More
importantly, proprietary or vendor-specific cloud APIs make it very difficult to
integrate cloud services with an organisation's own existing legacy systems. The
primary goal of interoperability is to realise a seamless, fluid flow of data
across clouds, and between the cloud and local applications. Interoperability is
essential for various reasons. Many of the IT components of a company are
routine, static, number-crunching applications for which a cloud service can be
adopted; these applications range from storage based to computation based. An
organisation may prefer two different vendors to achieve cost efficiency and
performance enhancement via their respective
services. But eventually these separate applications need to interact with the
core IT assets of the company, and, hence, there must be some common way to
interact with the various cloud applications spread across different vendors.
Standardisation appears to be a good solution to the interoperability issue.
However, as cloud computing is still spreading like wildfire, the
interoperability problem has not yet appeared on the pressing agenda of major
industry cloud vendors [7].
Choosing a vendor wisely is currently the only way to overcome this issue: there
are as yet no standards governing cloud application platforms and services, and
this remains a significant challenge for the coming years. However, steps have
recently been taken to manage the problem. The Federal Risk and Authorization
Management Program (FedRAMP) [17] is a US government-wide program that provides
a standardised approach to security assessment, authorisation and continuous
monitoring for cloud products and services. Cloud service providers are now
required to follow this standard, and it may eventually be extended to many
migration and interoperability issues.
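Until such standards mature, organisations often insulate application code from
any single vendor behind a thin, provider-neutral interface, so that migration
reduces to writing one new adapter. The sketch below illustrates the pattern;
the two "vendor" classes are in-memory stand-ins for real vendor SDKs, and all
names are hypothetical.

```python
from abc import ABC, abstractmethod

class BlobStore(ABC):
    """Provider-neutral storage interface the application codes against."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class VendorAStore(BlobStore):
    # Stand-in for one vendor's SDK; a real adapter would call its API here.
    def __init__(self):
        self._items = {}
    def put(self, key, data):
        self._items[key] = data
    def get(self, key):
        return self._items[key]

class VendorBStore(BlobStore):
    # A second vendor with a different internal naming scheme,
    # hidden entirely behind the shared interface.
    def __init__(self):
        self._items = {}
    def put(self, key, data):
        self._items["b:" + key] = data
    def get(self, key):
        return self._items["b:" + key]

def migrate(src: BlobStore, dst: BlobStore, keys):
    """Migration between vendors reduces to one loop over the interface."""
    for k in keys:
        dst.put(k, src.get(k))
```

Application code that only ever sees BlobStore can switch vendors, or use two
simultaneously, without modification, which is precisely the flexibility that
lock-in removes.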
Amongst these generic issues, a few are of more serious concern than the rest,
and a few have not yet seen broad daylight owing to the infancy of cloud
computing. A survey conducted by the CSA involving over 240 organisations found
that security is the biggest issue, with 87.5 % of respondents voting for it,
followed by performance, cost, etc. Figure 3.3 presents the survey statistics
for this question (i.e. rate the challenges/issues of the cloud/on-demand model)
across the various issues.
In a cloud environment, an enterprise cannot necessarily use the same tools and
services it deployed internally for security, such as a Web application firewall
(WAF). For example, a company that deployed a WAF as an additional level of
security when exposing a legacy application to the Web no longer has that
option, as the ownership and control of infrastructure at various levels changes
in the cloud. The CSA's cloud application security guidance noted that IaaS
vendors have started to offer cloud application security tools and services,
including WAFs, Web application security scanning and source code analysis; the
tools are specific either to the provider or to a third party, the report noted.
It is wise to explore all available APIs that provide strong logging, which in
turn serves as leverage for security-related activity [18].
Having seen various issues in general, it is now time to look at security from
the service-model point of view, that is, at the issues which are inherent in,
and cut across, the various service models.
This is the first step in securing private data before sending it to the cloud.
Cyber laws and policies currently exist which disallow, or impose restrictions
on, the sending of private data to third-party systems. A cloud service provider
is just another example of a third-party system, and organisations must apply
the same rules for handling third-party systems in this case. It is already
clear that organisations are concerned at the prospect of private data going to
the cloud. The cloud service providers themselves recommend that if private data
is sent onto their systems, it should be encrypted, removed or redacted. The
question then arises: "How can the private data be automatically encrypted,
removed or redacted before sending it up to the cloud service provider?"; that
is, "How can the whole process be automated?" It is known that encryption, in
particular, is a CPU-intensive process which threatens to add significant
latency.
Any solution implemented should broker the connection to the cloud service and
automatically encrypt any information an organisation does not want to share via a
third party. For example, this could include private or sensitive employee or cus-
tomer data such as home addresses or social security numbers, or patient data in a
medical context. Security engineers should look to provide for on-the-fly data pro-
tection by detecting private or sensitive data within the message being sent up to the
cloud service provider and encrypting it such that only the originating organisation
can decrypt it later. Depending on the policy, the private data could also be removed
or redacted from the originating data but then reinserted when the data is requested
back from the cloud service provider.
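The detect-then-protect-then-reinsert flow described above can be sketched as
follows. This minimal Python example detects one kind of private value
(US social security numbers, by pattern) and replaces it with an opaque token
whose mapping never leaves the organisation; tokenisation here stands in for
encryption, and all class and pattern names are our own, purely illustrative.

```python
import re
import secrets

# Pattern for US social security numbers, one example of private data.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

class Redactor:
    """Replaces detected private values with opaque tokens before data
    leaves the organisation, and reinserts them on the way back.
    (Tokenisation stands in for encryption in this sketch; a real
    deployment would encrypt with keys only the organisation holds.)"""
    def __init__(self):
        self._vault = {}  # token -> original value, kept on-premise

    def redact(self, text: str) -> str:
        def replace(match):
            token = "TOK-" + secrets.token_hex(8)
            self._vault[token] = match.group(0)
            return token
        return SSN.sub(replace, text)

    def restore(self, text: str) -> str:
        # Reinsert originals when data comes back from the cloud provider.
        for token, value in self._vault.items():
            text = text.replace(token, value)
        return text
```

Only the tokenised form ever reaches the cloud service; the vault (or the
decryption key, in an encrypting variant) remains with the originating
organisation.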
Because cloud resources are a third-party service, access to them needs to be
controlled and accounted for. Governance in cloud computing concerns preventing
rogue (or unauthorised) employees from misusing a service. For example, the organisation
may want to ensure that a user working in the marketing part of the application
can only access specific leads and does not have access to other restricted areas.
Another example is that an organisation may wish to control how many virtual
machines can be spun up by employees, and, indeed, that those same machines are
spun down later when they are no longer needed. So-called rogue cloud usage must
also be detected, so that the employees setting up their own accounts for using a
cloud service are detected and brought under an appropriate governance umbrella.
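As a concrete illustration of such a governance rule, the sketch below caps how many virtual machines an employee may have running at once. The quota value and employee names are illustrative assumptions, not part of any provider's API:

```python
# Hypothetical governance check: cap the number of virtual machines any
# employee may have running, so rogue usage is blocked rather than billed.
MAX_VMS_PER_EMPLOYEE = 3

def can_spin_up(running_vms, employee):
    """Allow a new VM only while the employee is under the quota.
    'running_vms' lists the owner of each currently running machine."""
    return running_vms.count(employee) < MAX_VMS_PER_EMPLOYEE

running = ["alice", "alice", "bob", "alice"]
assert can_spin_up(running, "bob") is True      # bob has 1 of 3
assert can_spin_up(running, "alice") is False   # alice is at the cap
```

A real governance framework would also track idle machines for spin-down and flag accounts created outside the official umbrella.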
Whilst cloud service providers offer varying degrees of cloud service monitoring,
an organisation should consider implementing its own cloud service governance
framework. This independent control is of particular benefit when an organisation
uses multiple SaaS providers, for example, HR services, ERP and CRM systems.
In such a scenario, however, the security engineers also need to be aware that
different cloud providers have different methods of accessing information, and
different security models on top of that. This points to the solution provided by a
cloud service broker (CSB), which brokers the different connections and essentially
smooths over the differences between them. Organisations can then use various
services together but only have to interact with a single, properly configured CSB
application.
3 Limitations and Challenges in Cloud-Based Applications Development 71
Many cloud services are accessed using simple REST [20] Web services interfaces.
These are commonly called “APIs”, since they are similar in concept to the more
heavyweight C++ or Java APIs used by programmers, though they are much easier
to leverage from a Web page or from a mobile phone, hence their increasing ubiquity.
In order to access these services, an API key is used. These are similar in some ways
to passwords. They allow organisations to access the cloud provider. For example, if
an organisation is using a SaaS offering, it will often be provided with an API key.
This is one security measure employed by the provider to increase accountability:
if something goes wrong, it can be tracked easily, since every running application
instance has a unique API key (associated with a particular user credential), and
the application that caused the mistake can be identified by its key. Misuse of a
legitimate application is therefore possible only through misuse of its API keys,
which makes protecting them important.
Consider the example of Google Apps. If an organisation wishes to enable single
sign-on to their Google Apps (so that their users can access their email without
having to log in a second time), then this access is via API keys. If these keys were
to be stolen, then an attacker would have access to the email of every person in that
organisation.
The casual use and sharing of API keys is an accident waiting to happen. API keys
can be protected by encrypting them when they are stored on the file system, by
storing them within a hardware security module (HSM) or by employing more
sophisticated security systems such as Kerberos [21] for single sign-on.
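A minimal precaution, sketched below, is to keep keys out of source code entirely: load them from the environment and write only a hash fingerprint to logs. The environment variable name and helper functions are illustrative assumptions:

```python
import hashlib
import os

def load_api_key(env_var="ACME_CLOUD_API_KEY"):
    """Fetch the key from the environment so it never sits in source control."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(env_var + " is not set; refusing to start")
    return key

def key_fingerprint(key):
    """A short digest that is safe to write to logs; the key itself never is."""
    return hashlib.sha256(key.encode()).hexdigest()[:12]

# Demonstration only; in practice the key is set outside the program.
os.environ["ACME_CLOUD_API_KEY"] = "demo-key-not-real"
key = load_api_key()
assert len(key_fingerprint(key)) == 12
```

Encrypting the key at rest or moving it into an HSM, as the text suggests, builds on the same principle: the plaintext key never appears in code, logs or version control.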
An application developer comes into the picture in service models where the
organisation has control over applications and computing resources. Hence, this
perspective applies mainly to PaaS, where application development takes place on a
particular third-party cloud platform, and to IaaS, where the organisation chooses
the platform and the developer writes applications on top of it. The following are a
few challenges currently faced by programmers and application developers in
developing applications on cloud platforms:
Cloud is still in its very early stages of development. There has been a surge in
enterprises adopting cloud technologies, but the technology has not matured enough
to handle the issues this surge brings. Growth across the industry has been very
self-centred: cloud providers have developed their own APIs, virtualisation
techniques, management techniques, etc. From a developer's perspective, every
cloud provider supports different programming languages and syntax requirements,
though most of them expose hash-based data interfaces or, more commonly, JSON
or XML. This needs immediate attention, and steps must be taken to standardise
interfaces and programming methods. In the conventional setting, by contrast, an
application developed in Perl or PHP works fine when it is moved from one host to
another or when the operating system changes. Considerable development effort is
required to move from one cloud provider to another, which in turn means that the
cost of migration is significantly high. History has shown us that languages like
SQL and C were standardised to stop the proliferation of undesired variants.
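Until such standards exist, one mitigation is to isolate provider-specific calls behind a neutral interface, so that a migration touches only a thin adapter rather than the whole application. The sketch below uses invented class names; a real adapter would wrap each vendor's SDK:

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Provider-neutral interface; application code depends only on this."""
    @abstractmethod
    def put(self, key, data): ...
    @abstractmethod
    def get(self, key): ...

class FakeProviderA(ObjectStore):
    # Stand-in for one vendor's API; a real adapter would call its SDK here.
    def __init__(self):
        self._data = {}
    def put(self, key, data):
        self._data[key] = data
    def get(self, key):
        return self._data[key]

def save_report(store: ObjectStore):
    # Application logic sees only the neutral interface, never the vendor API.
    store.put("report.json", '{"revenue": 42}')
    return store.get("report.json")

assert save_report(FakeProviderA()) == '{"revenue": 42}'
```

Switching providers then means writing one new adapter class, not rewriting every call site; this reduces, though does not eliminate, the migration cost described above.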
One of the key characteristics of good Web applications is that they are highly
available. For this to be possible in a cloud application, it must be able to replicate
and mirror itself dynamically on machines across the cloud with ease. Once this is
done, load-balancing servers can serve these applications on demand, increasing
availability and decreasing latency. As most cloud platform providers employ a
multi-tenancy model, servicing hundreds of applications forces them to automate
the task of mirroring and replication. To achieve this seamlessly, the application
must use very little or no state information. State variables include transactional or
server variables, static variables and variables held in the framework of the whole
application. These variables are always available in a traditional environment, where
there is a fixed application server and memory in which they can be stored and
accessed, but they are very hard to maintain in a cloud environment. One way of
handling this situation is to use a datastore or cache store. Restrictions on installing
third-party libraries and limited or no write access to file systems hinder the ability
of an application to store state information, forcing an organisation to use the
provider's datastore service, which comes at a price.
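The stateless style described above can be sketched as follows, with all state pushed into an injected datastore. Here a plain dict stands in for the provider's datastore or cache service, and the function names are illustrative:

```python
# Sketch of a stateless request handler: no static or server variables are
# kept, so any replica on any machine can serve any request.
def handle_request(datastore, user_id, action):
    cart = datastore.get(user_id, [])     # fetch state from the shared store
    if action.startswith("add:"):
        cart = cart + [action[4:]]
    datastore[user_id] = cart             # write state back immediately
    return cart

store = {}
handle_request(store, "u1", "add:book")
assert handle_request(store, "u1", "add:pen") == ["book", "pen"]
# A second "replica" sharing the same datastore sees identical state:
assert handle_request(store, "u1", "noop") == ["book", "pen"]
```

Because each call begins and ends with the shared store, the platform is free to kill, clone or relocate replicas between requests, which is exactly what automated mirroring requires.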
Cloud follows a pay-as-you-use policy, and consumers hence pay for almost every
bit of CPU usage. This requires the provider to present appropriate metrics on
processor and memory usage. A profile of the application showing the skeleton of
classes or functions with their corresponding execution time, memory used and
processing power utilised will help the developer tune the code to optimise use of
the available processing power, for example by choosing a different data structure
or algorithm with lower time and space complexity.
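Even without provider-supplied metrics, a developer can compare candidate data structures locally. The sketch below uses Python's standard `timeit` module to show the kind of measurement such a profile would feed; the workload sizes are arbitrary:

```python
import timeit

# Membership tests can dominate billing-sensitive loops; a set replaces the
# O(n) list scan with an O(1) hash lookup, cutting billable CPU time.
items_list = list(range(10_000))
items_set = set(items_list)

list_time = timeit.timeit(lambda: 9_999 in items_list, number=1_000)
set_time = timeit.timeit(lambda: 9_999 in items_set, number=1_000)

print(f"list: {list_time:.4f}s  set: {set_time:.4f}s")
assert set_time < list_time  # the cheaper structure wins on this workload
```

In a pay-per-cycle model, the same measurement discipline applied function by function is what turns a profile into a lower bill.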
One solution to this concern can be provided by the cloud host: abstracting
frequently used common code patterns into optimal default libraries, since the
cloud provider can easily employ optimisation techniques that suit the underlying
hardware and the operating system used. This assures the developer that a piece of
code employs optimal techniques to produce the desired effect. As an example,
Apache Pig [22] provides a scripting-like interface to Apache Hadoop's [23] HDFS
for analysing large-scale datasets.
In the end, the summary of cloud service models and their providers, properties,
access to resources and key challenges can be tabulated as in Table 3.1.
3.5 Conclusion
Cloud applications have certainly taken the IT industry to a new high, but like every
other technology, they fall short in a few respects. In the pursuit of the benefits of
cloud applications, an inevitable trail of challenges has followed all along. The
challenges in employing cloud services are discussed in this chapter. The security
challenges that are specific to a particular type of service model are also described.
With emerging trends in cloud-based applications development, the time has come
to take a hard look at the pitfalls and address them. The chapter has given an insight
into how these challenges can be overcome.
The biggest concern of all turns out to be security, which needs serious attention.
The overall conclusion is that cloud computing is in general prepared to successfully
host most typical Web applications, with added benefits such as cost savings,
but applications with the following properties need more careful study before their
deployment:
• Have strict latency or other network performance requirements.
• Require working with large datasets.
• Have critical availability needs.
As a developer, one would like to see much advancement in the development tool
kits and in the standardisation of APIs across the various cloud development
platforms in the near future. This would also ease the transition from traditional to
cloud-based application development, since the intellectual investment required to
make the transition would be smaller, allowing more developers to move across.
References
1. Mell, P., Grance, T.: The NIST Definition of Cloud Computing. NIST Special Publication 800-145, September 2011
2. Buyya, R., Yeo, C.S., Venugopal, S.: Market-oriented cloud computing: vision, hype, and reality for delivering IT services as computing utilities. In: 10th IEEE International Conference on High Performance Computing and Communications (HPCC '08), Dalian, China, pp. 5–13 (2008)
3. Gong, C., et al.: The characteristics of cloud computing. In: 39th International Conference on Parallel Processing Workshops, San Diego (2010)
4. Zhang, Q., Cheng, L., Boutaba, R.: Cloud computing: state-of-the-art and research challenges. J. Internet Serv. Appl. 1(1), 7–18 (2010)
5. Cloud Computing: What is infrastructure as a service. http://technet.microsoft.com/en-us/magazine/hh509051.aspx
6. Wang, L., Tao, J., Kunze, M., Castellanos, A.C., Kramer, D., Karl, W.: Scientific cloud computing: early definition and experience. 10th IEEE Int. Conf. High Perform. Comput. Commun. 9(3), 825–830 (2008)
7. Ramgovind, S., Eloff, M.M., Smith, E.: The management of security in cloud computing. In: Proc. 2010 IEEE International Conference on Cloud Computing, Indianapolis, USA (2010)
8. API and usage documentation for developers using the Rackspace service. http://docs.rackspace.com
9. Wu, R., Ahn, G., Hu, H., Singhal, M.: Information flow control in cloud computing. In: 6th International Conference on Collaborative Computing: Networking, Applications and Worksharing (CollaborateCom), Brisbane, Australia, pp. 17. IEEE (2010)
10. Shimba, F.: Cloud computing: strategies for cloud computing adoption. Masters Dissertation, Dublin Institute of Technology (2010)
11. About STAR. https://cloudsecurityalliance.org/star/faq/
12. Description of Standard service by Tripwire. http://www.tripwire.com/services/standard/
13. Description of Custom service by Tripwire. http://www.tripwire.com/services/custom/
14. Ko, R.K.L., Jagadpramana, P., Mowbray, M., Pearson, S., Kirchberg, M., Liang, Lee, B.S. (HP Laboratories): TrustCloud: a framework for accountability and trust in cloud computing. http://www.hpl.hp.com/techreports/2011/HPL-2011-38.pdf
15. Minnear, R.: Latency: the Achilles heel of cloud computing. Cloud Computing Journal, 9 March 2011. http://cloudcomputing.sys-con.com/node/1745523
16. Kuyoro, S.O., Ibikunle, F., Awodele, O.: Cloud computing security issues and challenges. Int. J. Comput. Netw. 3(5) (2011)
17. FedRAMP: U.S. General Services Administration Initiative. http://www.gsa.gov/portal/category/102371
18. Security Guidance for Critical Areas of Focus in Cloud Computing V2.1. Cloud Security Alliance (2009). https://cloudsecurityalliance.org/csaguide.pdf
19. Weixiang, S., et al.: Cloud service broker, March 2012. http://tools.ietf.org/pdf/draft-shao-opsawg-cloud-service-broker-03.pdf
20. Tyagi, S.: RESTful web services, August 2006. http://www.oracle.com/technetwork/articles/javase/index-137171.html
21. Kerberos in the Cloud: Use Case Scenarios. https://www.oasis-open.org/committees/download.php/38245/Kerberos-Cloud-use-cases-11june2010.pdf
22. Apache Pig. http://pig.apache.org/
23. Apache Hadoop. http://hadoop.apache.org/
24. SAAS, PAAS and IAAS – Making Cloud Computing Less Cloudy. http://cioresearchcenter.com/2010/12/107/
Part II
Software Development Life Cycle
for Cloud Platform
Chapter 4
Impact of Cloud Services on Software
Development Life Cycle
Keywords Software development life cycle • Usage patterns • Design for failure
• Design for parallelism • Information architecture • Private cloud • Public cloud
Z. Mahmood and S. Saeed (eds.), Software Engineering Frameworks for the Cloud
Computing Paradigm, Computer Communications and Networks,
DOI 10.1007/978-1-4471-5031-2_4, © Springer-Verlag London 2013
80 R. Krishna and R. Jayakrishnan
4.1 Introduction
A cloud readiness assessment helps to evaluate an enterprise's readiness for the
cloud and the cloud's applicability to the enterprise. The assessment also helps to
determine the business case and the return on investment. Typical assessment
questions are listed below for reference; note that this list is not exhaustive:
• Does cloud architecture fit the requirements for the application?
• How interconnected is this application with other applications in the enterprise? For
public cloud, can these interfaces be exposed for access from external networks?
• Is the enterprise comfortable with public cloud, or should it focus only on the
private cloud option?
• Identifying a suitable cloud service provider (IaaS/PaaS [2], and the specific
vendor within that category)
• Defining the strategy for adopting cloud in future projects
• Assessing the cost of using cloud (private or public): compare the capital
expense of the hosted option vs. the running cost of the cloud option
• How would applications be monitored after they are hosted on public cloud?
It is important to note that cloud assessment guidelines are defined at enterprise
level. The enterprise can optionally create tools to aid the projects and new
initiatives to perform cloud assessment.
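One possible shape for such a tool is a simple scoring aid over the questionnaire. The question names and the threshold below are illustrative assumptions, not a standard method:

```python
# Hypothetical scoring aid: each answer is True when it favours the cloud,
# and a threshold turns the questionnaire into a go/no-go signal for
# deeper analysis.
def cloud_readiness_score(answers):
    """answers: dict of question -> bool; returns the fraction answered
    favourably, between 0.0 and 1.0."""
    return sum(answers.values()) / len(answers)

answers = {
    "architecture_fits": True,
    "interfaces_exposable": True,
    "public_cloud_acceptable": False,
    "monitoring_plan_exists": True,
}
score = cloud_readiness_score(answers)
assert score == 0.75
print("proceed to detailed assessment" if score >= 0.5 else "revisit strategy")
```

A real tool would weight questions by business impact rather than counting them equally; the point is that the enterprise-level questionnaire can be made executable.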
Below we present a list of common usage patterns [3] and a corresponding
requirement-capturing questionnaire that helps to establish the workload of an
application and decide on its readiness for cloud-based architecture.
This pattern is applicable to both internal and external (e.g., Web sites) applications
that are constantly used by enterprise users/external users, and there is little variance
in load and usage of these applications. Requirement analysis should detail out the
following information:
• Availability of applications at regular intervals.
• Defining the strategy for application downtime and uptime.
• Further requirement analysis to design the scripts that make the application
available at the required point in time.
• Defining limits of data loss in case of application crash.
This pattern applies to recurrent business functionalities like batch jobs that execute
at end of day and data processing applications.
• Detail out the I/O volume required to satisfy the business process (costing of the
cloud solution is very I/O sensitive).
This pattern includes applications that are developed to serve a particular demand,
like publishing examination results/election campaign and sites related to
entertainment.
• Detail out level of concurrency required across time periods and hence amount
of parallelism that can be applied to improve the performance of the system.
This pattern applies to executing one-time jobs for processing at a given point in time.
• Detail out requirements on identifying number of concurrent users accessing the
system.
• Identify volume of data that is required to process the business functionality.
• Detail out the network bandwidth and expected delay in response while
processing heavy-load business functionality.
• Analyze variety of data that is used in day-to-day business.
• Define set of business functionalities and business components that can execute
side by side.
• Identify reusability of components.
• Identify different failure scenarios and respective handling mechanisms.
This pattern applies to applications that must handle a sudden load coming from an
external source, for example customers, vendors or public users.
• Define the limit of independence to access the application.
• Identify and analyze country-level regulations to handle the load.
• Identify industry-specific regulations while handling the load.
• Identify institutional specific fragility and capacity challenges.
This pattern usually applies to a mature application or Web site, wherein as additional
users are added, growth and resources are tracked.
• Cost of maintaining application on cloud
4.5 Architecture
As the world grows and becomes more connected every day, data plays a vital role
in software applications. The key in building the information architecture is to
closely align information with the business process by availing of the features of the
cloud environment. This process enables all stakeholders, such as business leaders,
vendors, consumers, service providers and users, to evaluate, reconcile and prioritize
the information vision and the related road map. The information architecture should
ensure that great care is taken in defining the strategy and development approach, so
that the right decisions are made in the development and execution of an application.
Understanding the key considerations of data architecture in a distributed
environment, and the trade-offs of the technology and architecture choices made in
cloud environments, is essential for good information architecture. For example,
decisions about data sharing are crucial when defining data services. This topic
mainly describes the different varieties of data (relational, geospatial, unstructured,
etc.) and the different classifications and compliance requirements of data (internal
and external).
The information architecture provides the concepts, frameworks and services
needed to access information in a unique, consistent and integrated way, adopting
new cutting-edge technology, and it guarantees responsive and trustworthy
information. The following are the core decision points of information architecture:
• Access Information: Information services should provide unconstrained access
to the right users at the right time.
• Reusable Services: Facilitate discovery, selection, and reuse of services and
encourage uniformity of services.
• Information Governance: Provide proper utilities to support an efficient
information governance strategy.
• Standards: Define a set of standards for information where technology will
support process simplification.
[Figure: information security handling steps — A. Classify, B. Categorize, C. Responsibility, D. Move — arranged around "Information Security"]
Table 4.1 Information categories

Public: Data available for the general public. Examples: annual reports, share price.
Private: Data private to the organization. Examples: intranet information, org chart and would-be changes, list of employees, source code for in-house/utility modules.
Confidential: Data to be disclosed on a need-to-know basis after approval from the owner. Examples: customer information, organization policy, source code for business-critical modules, credit card number/authorization code/expiry date combination, account number with last 4 digits of SSN and birth date combination.
Secret: Data never disclosed and seen only by the owner of the data. Examples: password, PIN number, SSN, source code for decryption.
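Handling rules derived from such a classification can be enforced in code. The sketch below is illustrative (the field names and mapping are assumptions drawn loosely from the table) and defaults unknown data to the most restrictive category:

```python
# Hypothetical classifier applying Table 4.1's categories to data fields,
# so handling rules (e.g. encrypt-before-cloud) can be enforced in code.
CATEGORY_BY_FIELD = {
    "annual_report": "public",
    "org_chart": "private",
    "customer_info": "confidential",
    "password": "secret",
    "ssn": "secret",
}

def may_send_to_cloud(field):
    """Per the table, only public data leaves the organisation unprotected;
    anything unrecognised is treated as secret, the safest default."""
    return CATEGORY_BY_FIELD.get(field, "secret") == "public"

assert may_send_to_cloud("annual_report") is True
assert may_send_to_cloud("password") is False
assert may_send_to_cloud("unknown_field") is False  # unknown data stays in-house
```

Defaulting to the most restrictive category means a newly added field is never leaked by omission; someone must explicitly classify it before it may leave the organisation.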