Cloud Computing Unit-2 and 3

UNIT-2

Cloud Service Models

There are three types of cloud service models:

1. Infrastructure as a Service (IaaS)


2. Platform as a Service (PaaS)
3. Software as a Service (SaaS)

Infrastructure as a Service (IaaS)


IaaS is also known as Hardware as a Service (HaaS). It is a computing
infrastructure managed over the internet. The main advantage of using
IaaS is that it helps users avoid the cost and complexity of purchasing
and managing physical servers.

Characteristics of IaaS
IaaS has the following characteristics:

o Resources are available as a service


o Services are highly scalable
o Dynamic and flexible
o GUI and API-based access
o Automated administrative tasks
Example: DigitalOcean, Linode, Amazon Web Services (AWS), Microsoft
Azure, Google Compute Engine (GCE), Rackspace, and Cisco Metacloud.
Platform as a Service (PaaS)
PaaS cloud computing platform is created for the programmer to develop,
test, run, and manage the applications.

Characteristics of PaaS
PaaS has the following characteristics:

o Accessible to various users via the same development application.


o Integrates with web services and databases.
o Builds on virtualization technology, so resources can easily be
scaled up or down as per the organization's need.
o Supports multiple languages and frameworks.
o Provides the ability to "auto-scale".
Example: AWS Elastic Beanstalk, Windows Azure, Heroku, Force.com,
Google App Engine, Apache Stratos, Magento Commerce Cloud, and
OpenShift.

Software as a Service (SaaS)


SaaS is also known as "on-demand software". It is software in which the
applications are hosted by a cloud service provider. Users can access
these applications with an internet connection and a web browser.

Characteristics of SaaS
SaaS has the following characteristics:

o Managed from a central location


o Hosted on a remote server
o Accessible over the internet
o Users are not responsible for hardware and software updates.
Updates are applied automatically.
o The services are purchased on a pay-per-use basis
Example: BigCommerce, Google Apps, Salesforce, Dropbox, ZenDesk,
Cisco WebEx, Slack, and GoToMeeting.
Difference between IaaS, PaaS, and SaaS

o Scope of offering: IaaS provides a virtual data center to store
information and to create platforms for app development, testing, and
deployment. PaaS provides virtual platforms and tools to create, test,
and deploy apps. SaaS provides web software and apps to complete
business tasks.
o Resources provided: IaaS provides access to resources such as virtual
machines, virtual storage, etc. PaaS provides runtime environments and
deployment tools for applications. SaaS provides software as a service
to the end users.
o Typical users: IaaS is used by network architects. PaaS is used by
developers. SaaS is used by end users.
o Stack covered: IaaS provides only infrastructure. PaaS provides
infrastructure + platform. SaaS provides infrastructure + platform +
software.
SOA and Cloud:

o Cloud computing is information on demand, whereas SOA is service on
demand.
o Cloud computing is internet-based computing, whereas SOA is
technology-based computing.
o In cloud computing, the service is delivered over the internet. In SOA,
the service is delivered by interacting with various other services or
software.
o In cloud computing, the client needs only a system and an internet
connection to use the service. In SOA, the client needs a technology
provider along with various services.

Multi-Core Technology

Multi-core technology is the term that describes today's processors
that have two or more processing units (more commonly referred to as
cores) on a single chip, working simultaneously as one system. Dual-core
chips, with two cores that work as one system, were the first
applications of multi-core technology.

How It Works
Multi-core processor technology was conceived around the idea of making
parallel computing possible. Parallel computing can dramatically increase
the speed, efficiency and performance of computers by placing two or more
central processing units (cores) on a single chip.
This minimizes the power and heat consumption of the system while still
greatly boosting system performance, giving more performance for the
same or less energy.
Multi-core technology also enables users to do more tasks at the same
time. Since more computing workloads can be done simultaneously,
manufacturers such as Intel and AMD can focus on increasing computing
and processing performance without increasing clock speeds, and thus
avoid the need for consuming more energy.

Applications
Multi-core technology is useful especially in very demanding applications
and tasks such as video editing, encoding and 3D gaming. The full effect
and the advantage of having a multi-core computer, however, is felt only
when it is used together with a multithreading operating system such as
Windows XP or Linux and with applications that are capable of
multithreading.
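
As a small illustration (not from the original text; the class name and
workload are hypothetical), the Java sketch below splits a CPU-bound task
across the available cores using a fixed thread pool, which is the kind of
multithreaded application that actually benefits from a multi-core
processor:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class MultiCoreDemo {
    public static void main(String[] args) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        // Split the range 0..n into one chunk per core.
        long n = 100_000_000L;
        long chunk = n / cores;
        List<Future<Long>> parts = new ArrayList<>();
        for (int i = 0; i < cores; i++) {
            long start = i * chunk;
            long end = (i == cores - 1) ? n : start + chunk;
            parts.add(pool.submit(() -> {
                long sum = 0;
                for (long x = start; x < end; x++) sum += x * x;
                return sum;
            }));
        }

        long total = 0;
        for (Future<Long> f : parts) total += f.get(); // wait for each core's result
        pool.shutdown();
        System.out.println("cores=" + cores + " total=" + total);
    }
}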

Issues
As of now, many computer applications are not yet capable of splitting
their tasks into workloads that can be processed separately on each
core. It will still take time before the software industry catches up with
processor manufacturers and creates applications with multithreading
capabilities.
Moreover, even the programs and operating systems that can split tasks
into several threads are still far from maximizing the potential of
multi-core technology.
The structure of much existing software, and the design of much computer
hardware, was not created with multiple cores in mind, so it does not
make optimal use of multi-core technology.
Many applications are also coded without resource consumption in mind,
as with many popular office packages and 3D games, which prevents the
multi-core system from assisting in workload management.
Cloud Computing Data Storage

Cloud storage is a service that allows data to be saved on an offsite
storage system managed by a third party and made accessible through a
web services API.

Storage Devices
Storage devices can be broadly classified into two categories:
 Block Storage Devices
 File Storage Devices
Block Storage Devices
The block storage devices offer raw storage to the clients. This raw
storage is partitioned to create volumes.
File Storage Devices
The file storage devices offer storage to clients in the form of files,
maintaining their own file system. This storage is typically offered in
the form of Network Attached Storage (NAS).
Cloud Storage Classes
Cloud storage can be broadly classified into two categories:
 Unmanaged Cloud Storage
 Managed Cloud Storage

Unmanaged Cloud Storage


Unmanaged cloud storage means the storage is preconfigured for the
customer. The customer can neither format the storage, nor install their
own file system, nor change drive properties.
Managed Cloud Storage
Managed cloud storage offers online storage space on-demand. The
managed cloud storage system appears to the user to be a raw disk that
the user can partition and format.
Creating Cloud Storage System
The cloud storage system stores multiple copies of data on multiple
servers at multiple locations. If one system fails, only the pointer to
the location where the object is stored needs to be changed.
To aggregate storage assets into cloud storage systems, the cloud
provider can use storage virtualization software such as StorageGRID. It
creates a virtualization layer that pools storage from different storage
devices into a single management system. It can also manage data from
CIFS and NFS file systems over the Internet.

Cloud Networking
Cloud networking is a service in which a company's networking
procedures and resources are hosted on a public or private cloud. Cloud
computing is resource management in which multiple computing
resources share an identical platform and customers are enabled to
access these resources to a specific extent. Cloud networking similarly
shares networking resources, but it offers more advanced network
features in the cloud, with interconnected servers hosted on the
internet.
Why is cloud networking required and in demand?
It is in demand by many companies for its speedy and secure delivery,
fast processing, dependable transmission of information without any
loss, and cost-friendly setup.

Organizations that benefit from cloud networking include internet
service providers, e-commerce companies, cloud service providers, and
network operators.
It permits users to grow their networks in accordance with their
requirements for cloud-based services.
A true cloud network provides high-end monitoring of globally
positioned servers, controls traffic flow between interconnected
servers, protects systems with advanced network security, and offers
visibility to the user through centralized management.
Web access can be expanded and given more reliable bandwidth to
promote multiple network functions into the cloud.
It ensures performance and safety in multi-cloud environments so that
IT gains greater visibility by supplying end users with the capabilities
and experience they need.
Workloads are shared between cloud environments using
software-as-a-service applications.
Users are given secure access to the site and infrastructure by moving
functions to the cloud under a standard security model.
The gateway offers contextual access control and a multi-layer
firewall. Applications and services are delivered to distributed data
centers in the cloud environment.

Advantages of Cloud Networking:

1. On-Demand Self Service –
Cloud computing provides required applications, services, and
utilities to the client. With a login key, clients can begin to use them
without any human interaction with the cloud service provider. This
includes storage and virtual machines.
2. High Scalability –
Resources can be granted on a large scale without any human
intervention from the service provider.
3. Agility –
It shares resources efficiently among customers and works quickly.
4. Multi-sharing –
Through distributed computing, different clients from multiple
locations share the same resources via the underlying infrastructure.
5. Low Cost –
It is very economical, and customers pay according to their usage.
6. Services in pay-per-use Model –
An Application Programming Interface (API) is given to clients to use
resources and services and pay on a per-service basis.
7. High Availability and Reliability –
Servers are accessible at the right time without delay or failure.
8. Maintenance –
It is user-friendly, as services are convenient to access from any
location and do not require any installation or setup.
Web 2.0
Although the term Web 2.0 was coined by Darcy DiNucci in 1999, it became
famous in 2004 after the first Web 2.0 Conference (later known as the
Web 2.0 Summit) held by Tim O'Reilly and Dale Dougherty. Web 2.0 refers
to websites that highlight user-generated content, usability, and
interoperability for end users. Web 2.0 is also called the participative
social web. It does not refer to a modification of any technical
specification, but to a change in the way web pages are designed and
used. The transition was beneficial, although the changes were not always
obvious as they occurred. Web 2.0 allows users to interact and
collaborate with each other in a social media dialogue as creators of
user-generated content in a virtual community. Web 2.0 is an enhanced
version of Web 1.0.

Major Features of Web 2.0:


 Free sorting of information, permits users to retrieve and
classify the information collectively.
 Dynamic content that is responsive to user input.
 Information flows between the site owner and site users using
evaluation & online commenting.
 Developed APIs to allow self-usage, such as by a software
application.
 Web access broadens from the traditional Internet user base to a
wider variety of users.
Usage of Web 2.0: The social Web contains several online tools and
platforms where people share their perspectives, opinions, thoughts,
and experiences. Web 2.0 applications tend to interact much more with
the end user. As such, the end-user is not only a user of the application
but also a participant in these 8 tools mentioned below:
 Podcasting
 Blogging
 Tagging
 Curating with RSS
 Social bookmarking
 Social networking
 Social media
 Web content voting
Web 3.0
Web 3.0 refers to the evolution of web utilization and interaction, which
includes altering the Web into a database with the integration of DLT
(Distributed Ledger Technology; blockchain is an example), so that data
can help to make smart contracts based on the needs of the individual. It
enables the upgrade of the backend of the web, after a long period of
focus on the frontend (Web 2.0 has mainly been about AJAX, tagging, and
other front-end user-experience innovations). Web 3.0 is a term used to
describe many evolutions of web usage and interaction along several
paths. In Web 3.0, data isn't owned but shared, while services show
different views of the same web / the same data.
The Semantic Web (3.0) promises to organize "the world's information"
in a more reasonable way than Google can ever attain with its existing
engine schema. This is particularly true from the perspective of
machine comprehension as opposed to human understanding. The Semantic
Web necessitates the use of a declarative ontological language like OWL
to produce domain-specific ontologies that machines can use to reason
about information and draw new conclusions, not simply match
keywords.
Main features That can Help us Define Web 3.0:
 Semantic Web: The succeeding evolution of the Web involves
the Semantic Web. The semantic web improves web
technologies in demand to create, share and connect content
through search and analysis based on the capability to
comprehend the meaning of words, rather than on keywords or
numbers.
 Artificial Intelligence: Combining this capability with natural
language processing, in Web 3.0, computers can distinguish
information like humans to provide faster and more relevant
results. They become more intelligent to fulfill the
requirements of users.
 3D Graphics: The three-dimensional design is being used
widely in websites and services in Web 3.0. Museum guides,
computer games, e-commerce, geospatial contexts, etc. are all
examples that use 3D graphics.
 Connectivity: With Web 3.0, information is more connected
thanks to semantic metadata. As a result, the user experience
evolves to another level of connectivity that leverages all the
available information.
 Ubiquity: Content is accessible by multiple applications, every
device is connected to the web, and the services can be used
everywhere.
 DLT and Smart Contracts: With the help of DLT, we can have a
virtually impossible-to-hack database in which people can attach
value to their content and own things virtually. This is the
technology that enables a trustless society through the integration
of smart contracts, which do not need a middleman to act as
guarantor for a contract to execute; execution is triggered by data
from the DLT. It is a powerful tool that can make the world a far
better place and generate more opportunities for everyone on the
internet.
Software Processes
The term software refers to the set of computer programs, procedures
and associated documents (flowcharts, manuals, etc.) that describe the
programs and how they are to be used.
A software process is the set of activities and associated outcomes that
produce a software product. Software engineers mostly carry out these
activities. There are four key process activities, which are common to all
software processes. These activities are:

1. Software specifications: The functionality of the software and


constraints on its operation must be defined.
2. Software development: The software to meet the requirement must
be produced.
3. Software validation: The software must be validated to ensure that
it does what the customer wants.
4. Software evolution: The software must evolve to meet changing
client needs.

The Software Process Model


A software process model is a specified definition of a software process,
which is presented from a particular perspective. Models, by their nature,
are a simplification, so a software process model is an abstraction of the
actual process, which is being described. Process models may contain
activities, which are part of the software process, software product, and
the roles of people involved in software engineering. Some examples of
the types of software process models that may be produced are:

1. A workflow model: This shows the series of activities in the process
along with their inputs, outputs and dependencies. The activities in
this model represent human actions.
2. A dataflow or activity model: This represents the process as a set
of activities, each of which carries out some data transformation.
It shows how the input to the process, such as a specification, is
converted to an output such as a design. The activities here may be
at a lower level than activities in a workflow model. They may
represent transformations carried out by people or by computers.
3. A role/action model: This represents the roles of the people involved
in the software process and the activities for which they are
responsible.
There are several various general models or paradigms of software
development:

1. The waterfall approach: This takes the above activities and


produces them as separate process phases such as requirements
specification, software design, implementation, testing, and so on.
After each stage is defined, it is "signed off" and development goes
on to the following stage.
2. Evolutionary development: This method interleaves the activities of
specification, development, and validation. An initial system is
rapidly developed from a very abstract specification.
3. Formal transformation: This method is based on producing a
formal mathematical system specification and transforming this
specification, using mathematical methods to a program. These
transformations are 'correctness preserving.' This means that you
can be sure that the developed program meets its specification.
4. System assembly from reusable components: This method assumes
that parts of the system already exist. The system development
process focuses on integrating these parts rather than developing
them from scratch.
Software Crisis
1. Size: Software is becoming more expensive and more complex with
the growing complexity of, and expectations from, software. For
example, the code in consumer products is doubling every couple
of years.
2. Quality: Many software products have poor quality, i.e., the
software product exhibits defects after being put into use, due to
ineffective testing techniques. For example, software testing
typically finds 25 errors per 1000 lines of code.
3. Cost: Software development is costly, i.e., in terms of the time
taken to develop and the money involved. For example, development of
the FAA's Advanced Automation System cost over $700 per line of
code.
4. Delayed Delivery: Serious schedule overruns are common. Very
often the software takes longer than the estimated time to develop,
which in turn leads to cost shooting up. For example, one in four
large-scale development projects is never completed.

Agile Software Development Life Cycle (SDLC)


Software development life cycle (SDLC) is a process used
to design, develop, and test high-quality software. The primary aim of
SDLC is to produce high-quality software that fulfills the customer
requirements within time and cost estimates.
Agile Software Development Life Cycle (SDLC) is a combination
of both iterative and incremental process models. It focuses on process
adaptability and customer satisfaction through rapid delivery of working
software. Agile SDLC breaks the product down into small
incremental builds, which are provided in iterations.
In the agile SDLC development process, the customer is able to see the
result and understand whether he/she is satisfied with it or not. This is
one of the advantages of the agile SDLC model. One of its disadvantages
is the absence of defined requirements, so it is difficult to estimate
the resources and development cost.

Each iteration of agile SDLC consists of cross-functional teams working


on various phases:
1. Requirement gathering and analysis
2. Design the requirements
3. Construction/ iteration
4. Deployment
5. Testing
6. Feedback

Requirements gathering and analysis


In this phase, you must define the requirements. You should explain
business opportunities and plan the time and effort needed to build the
project. Based on this information, you can evaluate technical and
economic feasibility.

Design the requirements


When you have identified the project, work with stakeholders to define
requirements. You can use the user flow diagram or the high-level UML
diagram to show the work of new features and show how it will apply to
your existing system.

Construction/ Iteration
When the team has defined the requirements, the work begins. The
designers and developers start working on the project. The aim of the
designers and developers is to deploy a working product within the
estimated time. The product will go through various stages of
improvement, so it initially includes simple, minimal functionality.

Deployment
In this phase, the team issues a product for the user's work environment.

Testing
In this phase, the Quality Assurance team examines the product's
performance and looks for bugs.

Feedback
After the product is released, the last step is to gather feedback. In
this step, the team receives feedback about the product and works
through the feedback.
Agile SDLC Process Flow
1. Concept: Projects are envisioned and prioritized.
2. Inception: Teams are formed, funding is put in place, and
basic environments and requirements are discussed.
3. Iteration/Construction: The software development team works to
deliver working software, based on requirements and feedback.
4. Release: Perform quality assurance (QA) testing, provide internal
and external training, develop documentation, and release the final
iteration into production.
5. Production: Ongoing support of the software.

Advantages of Agile SDLC


1. Project is divided into short and transparent iterations.
2. It has a flexible change process.
3. It minimizes the risk of software development.
4. Quick release of the first product version.
5. The correctness of functional requirements is verified throughout the
development process.
6. Customer can see the result and understand whether he/she is
satisfied with it or not.

Disadvantages of Agile SDLC


1. The development team should be highly professional and client-
oriented.
2. New requirements may conflict with the existing architecture.
3. With further corrections and changes, there is a chance that the
project will exceed the expected time.
4. It may be difficult to estimate the final cost of the project due
to constant iteration.
5. Defined requirements are absent.

Pervasive Computing
Pervasive computing, also called ubiquitous computing, is
the growing trend of embedding everyday objects with
microprocessors so that they can communicate information. It refers to
the presence of computers in common objects found all around us, so
that people are unaware of their presence. All these devices
communicate with each other over wireless networks without
user interaction.
Pervasive computing is a combination of three technologies, namely:
1. Microelectronic technology:
This technology provides small, powerful devices and displays with
low energy consumption.
2. Digital communication technology:
This technology provides higher bandwidth and higher data
transfer rates at lower cost, with worldwide roaming.
3. Internet standardization:
This standardization is carried out by various standardization
bodies and industry to provide the framework for combining all
components into an interoperable system with security, service
and billing systems.

Thus, wireless communication, consumer electronics and computer


technology were all merged into one to create a new environment called
pervasive computing environment. It helps to access information and
render modern administration in areas that do not have a traditional
wire-based computing environment.
Pervasive computing is the next dimension of personal computing in the
near future, and it will definitely change and improve our work
environment and communication methods.
Pervasive computing will provide us with small portable personal
assistant devices having high speed, wireless communication, lower
power consumption rate, data storage in persistent memory, coin sized
disk device, small color display video and speech processing technology.
All these features will give the users freedom to effectively communicate
and access information from any place in the world at any time.
Key Characteristics of Pervasive computing:
1. Many devices can be integrated into one system for multi-
purpose uses.
2. A huge number of various interfaces can be used to build an
optimized user interface.
3. Concurrent online and offline operation is supported.
4. A large number of specialized computers are integrated
through local buses and Internet.
5. Security elements are added to prevent misuse and
unauthorized access.
6. Personalization of functions adapts the systems to the user’s
preferences, so that no PC knowledge is required of the user to
use and manage the system.
These types of functions can be extended into network operations for use
in workplace, home and mobile environments.
Applications:
There are a rising number of pervasive devices available in the
market nowadays. The areas of application of these devices include:
 Retail
 Airlines booking and check-in
 Sales force automation
 Healthcare
 Tracking
 Car information System
 Email access via WAP (Wireless Application Protocol) and
voice.
For example, in the retail industry there is a requirement for faster and
cheaper methods of bringing goods from stores to the consumer via the
Internet. Mobile computers are provided with bar code readers for
tracking products during manufacture. Currently, consumers use
computers to select products. In the future, they will use PDAs (Personal
Digital Assistants) and pervasive devices in domestic markets too.
When they finish writing the list of items to be bought on these
devices, the list can be sent to the supermarket, and the purchase can
be delivered to the consumer. The advantages of this are faster
processing of data and execution of data mining.
Virtualization
Virtualization is the "creation of a virtual (rather than actual)
version of something, such as a server, a desktop, a storage device, an
operating system or network resources".
In other words, virtualization is a technique that allows a single
physical instance of a resource or an application to be shared among
multiple customers and organizations. It does this by assigning a
logical name to a physical resource and providing a pointer to that
physical resource when demanded.

What is the concept behind Virtualization?

Creation of a virtual machine over an existing operating system and
hardware is known as hardware virtualization. A virtual machine
provides an environment that is logically separated from the underlying
hardware.
The machine on which the virtual machine is created is known
as the host machine, and the virtual machine is referred to as the guest
machine.

Types of Virtualization:
1. Hardware Virtualization.
2. Operating system Virtualization.
3. Server Virtualization.
4. Storage Virtualization.

1) Hardware Virtualization:
When the virtual machine software or virtual machine manager (VMM)
is installed directly on the hardware system, it is known as hardware
virtualization.
The main job of the hypervisor is to control and monitor the processor,
memory and other hardware resources.
After virtualization of the hardware system, we can install different
operating systems on it and run different applications on those
operating systems.
Usage:
Hardware virtualization is mainly done for server platforms, because
controlling virtual machines is much easier than controlling a physical
server.

2) Operating System Virtualization:


When the virtual machine software or virtual machine manager (VMM)
is installed on the host operating system instead of directly on the
hardware system, it is known as operating system virtualization.
Usage:
Operating system virtualization is mainly used for testing
applications on different OS platforms.

3) Server Virtualization:
When the virtual machine software or virtual machine manager (VMM)
is installed directly on the server system, it is known as server
virtualization.
Usage:
Server virtualization is done because a single physical server can be
divided into multiple servers on demand and for load balancing.

4) Storage Virtualization:
Storage virtualization is the process of grouping the physical storage
from multiple network storage devices so that it looks like a single storage
device.
Storage virtualization is also implemented by using software applications.
Usage:
Storage virtualization is mainly done for back-up and recovery purposes.
UNIT-3
Parallel Computing
It is also known as parallel processing. It utilizes several
processors, each of which completes the tasks allocated to it. In other
words, parallel computing involves performing numerous tasks
simultaneously. A shared memory or distributed memory system can be
used to assist in parallel computing. In shared memory systems, all CPUs
share the same memory; in distributed memory systems, each processor
has its own local memory and processors exchange data by passing
messages.
Parallel computing provides numerous advantages. It helps to increase
CPU utilization and improve performance because several processors work
simultaneously. Moreover, the failure of one CPU has no impact on the
other CPUs' functionality. However, if one processor needs data or
instructions from another, this communication can introduce latency.
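
As a small illustration (not from the original text; the class name and
workload are hypothetical), the Java sketch below computes the same
reduction serially and in parallel; the parallel version distributes the
work across the available processors:

import java.util.stream.LongStream;

public class ParallelSum {
    public static void main(String[] args) {
        long serial = LongStream.rangeClosed(1, 50_000_000L)
                                .map(x -> x * x)
                                .sum();

        long parallel = LongStream.rangeClosed(1, 50_000_000L)
                                  .parallel()          // split work across cores
                                  .map(x -> x * x)
                                  .sum();

        // Both computations produce the same result; only the scheduling differs.
        System.out.println(serial == parallel);
    }
}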

Advantages and Disadvantages of Parallel Computing


There are various advantages and disadvantages of parallel computing.
Some of the advantages and disadvantages are as follows:
Advantages

1. It saves time and money because many resources working together
cut down on time and costs.
2. It can tackle larger problems that are difficult to solve with serial
computing.
3. You can do many things at once using many computing resources.
4. Parallel computing is much better than serial computing for
modeling, simulating, and comprehending complicated real-world
events.
Disadvantages

1. The multi-core architectures consume a lot of power.


2. Parallel solutions are harder to implement, debug, and prove
correct due to the complexity of communication and coordination,
and they can sometimes perform worse than their serial
equivalents.

Distributed Computing
Distributed computing comprises several software components that reside
on different systems but operate as a single system. A distributed
system's computers can be physically close together and linked by a
local network, or geographically distant and linked by a wide area
network (WAN). A distributed system can be made up of any number of
different configurations, such as mainframes, PCs, workstations, and
minicomputers. The main aim of distributed computing is to make a
network work as a single computer.
There are various benefits of using distributed computing. It enables
scalability and makes it simpler to share resources. It also aids in the
efficiency of computation processes.
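
As a toy illustration (not from the original text; the host, port and
message are hypothetical), the Java sketch below shows two software
components, a server and a client, running as separate processes,
possibly on different machines, yet cooperating as one system over the
network:

import java.io.*;
import java.net.*;

// Minimal sketch of two cooperating components. Run with "server" or
// "client" as the first argument; host/port values are illustrative.
public class DistributedEcho {
    static final int PORT = 5000;

    public static void main(String[] args) throws IOException {
        if (args.length > 0 && args[0].equals("server")) {
            try (ServerSocket server = new ServerSocket(PORT);
                 Socket conn = server.accept();
                 BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
                 PrintWriter out = new PrintWriter(conn.getOutputStream(), true)) {
                // Echo one request back to the remote component.
                out.println("echo: " + in.readLine());
            }
        } else {
            try (Socket conn = new Socket("localhost", PORT);   // replace with a remote host
                 PrintWriter out = new PrintWriter(conn.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                out.println("hello from client");
                System.out.println(in.readLine());
            }
        }
    }
}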
Advantages and Disadvantages of Distributed Computing
There are various advantages and disadvantages of distributed
computing. Some of the advantages and disadvantages are as follows:
Advantages

1. It is flexible, making it simple to install, use, and debug new


services.
2. In distributed computing, you may add multiple machines as
required.
3. If the system crashes on one server, that doesn't affect other servers.
4. A distributed computer system may combine the computational
capacity of several computers, making it faster than traditional
systems.
Disadvantages

1. Data security and sharing are the main issues in distributed
systems, due to the characteristics of open systems.
2. Because of the distribution across multiple servers, troubleshooting
and diagnostics are more challenging.
3. The main disadvantage of distributed computer systems is the lack
of software support.

MapReduce
MapReduce is a data processing tool used to process data in parallel in
a distributed form. It was developed in 2004, on the basis of the paper
titled "MapReduce: Simplified Data Processing on Large Clusters,"
published by Google.
MapReduce is a paradigm with two phases: the mapper phase and the
reducer phase. In the mapper, the input is given in the form of
key-value pairs. The output of the mapper is fed to the reducer as
input. The reducer runs only after the mapper is finished. The reducer
also takes input in key-value format, and the output of the reducer is
the final output.

Steps in Map Reduce


o The map takes data in the form of <key, value> pairs and returns a
list of <key, value> pairs. The keys will not necessarily be unique in
this case.
o Using the output of the map, the Hadoop architecture applies a sort
and shuffle step. This sort and shuffle acts on the list of <key, value>
pairs and emits unique keys along with a list of values associated with
each unique key: <key, list(values)>.
o The output of sort and shuffle is sent to the reducer phase. The
reducer performs a defined function on the list of values for each
unique key, and the final <key, value> output is stored or displayed.

Sort and Shuffle


The sort and shuffle occurs on the output of the mapper and before the
reducer. When the mapper task is complete, the results are sorted by
key, partitioned if there are multiple reducers, and then written to
disk. Using the input from each mapper <k2, v2>, we collect all the
values for each unique key k2. This output from the shuffle phase, in
the form of <k2, list(v2)>, is sent as input to the reducer phase.
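
To make the two phases concrete, here is the classic word-count example
written against the Hadoop MapReduce Java API (a minimal sketch; the
input and output paths are supplied on the command line and are
assumptions of the example):

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Mapper: emits <word, 1> for every word in the input split.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reducer: receives <word, list(1,1,...)> after sort/shuffle and sums the counts.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) sum += val.get();
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);   // optional local aggregation
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}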

Usage of MapReduce

o It can be used in various applications such as document clustering,


distributed sorting, and web link-graph reversal.
o It can be used for distributed pattern-based searching.
o We can also use MapReduce in machine learning.
o It was used by Google to regenerate Google's index of the World
Wide Web.
o It can be used in multiple computing environments such as multi-
cluster, multi-core, and mobile environment.

Twister and Iterative Map Reduce


Twister is designed to efficiently support iterative MapReduce
computations. To achieve this, it reads data from the local disks of the
worker nodes and handles the intermediate data in the distributed
memory of the worker nodes.

The messaging infrastructure in Twister is called the broker network,
and it is responsible for performing data transfer using
publish/subscribe messaging.
Twister has three main entities:

1. A client-side driver, responsible for driving the entire MapReduce
computation.

2. A Twister daemon running on every worker node.

3. The broker network.

Access Data

To access input data for a map task, Twister either:

1. reads data from the local disk of the worker nodes, or

2. receives data directly via the broker network.

All data read is kept as files, and having data as native files allows
Twister to pass data directly to any executable. Additionally, the tools
can perform typical file operations such as:

(i) create directories, (ii) delete directories, (iii) distribute input files
across worker nodes, (iv) copy a set of resources/input files to all worker
nodes, (v) collect output files from the worker nodes to a given location,
and (vi) create a partition file for a given set of data that is distributed
across the worker nodes.

Intermediate Data

The intermediate data are stored in the distributed memory of the worker
nodes. Keeping the map output in distributed memory enhances the speed
of the computation by sending the map output from this memory directly
to the reducers.

Messaging

The use of a publish/subscribe messaging infrastructure improves the
efficiency of the Twister runtime. It uses the scalable NaradaBrokering
messaging infrastructure to connect different broker networks and
reduce the load on any one of them.
Fault Tolerance

There are three assumptions for providing fault tolerance for iterative
MapReduce:

(i) failure of the master node is rare, and no support is provided for it.

(ii) the communication network can be made fault tolerant
independently of the Twister runtime.

(iii) the data is replicated among the nodes of the computation
infrastructure. Based on these assumptions, Twister tries to handle
failures of map/reduce tasks, daemons, and worker nodes.

Aneka in Cloud Computing


Aneka includes an extensible set of APIs associated with programming
models like MapReduce.
These APIs support different cloud models such as private, public and
hybrid clouds.
Manjrasoft focuses on creating innovative software technologies to
simplify the development and deployment of private or public cloud
applications. Its product, Aneka, plays the role of an application
platform as a service for cloud computing.

Key points about Aneka:
o Aneka is a software platform for developing cloud computing
applications.
o In Aneka, cloud applications are executed.
o Aneka is a pure PaaS solution for cloud computing.
o Aneka is a cloud middleware product.
o Aneka can be deployed over a network of computers, a multicore
server, a data center, a virtual cloud infrastructure, or a
combination thereof.
Aneka container services can be classified into three major categories:

o Fabric Services
o Foundation Services
o Application Services

1. Fabric Services:
Fabric services define the lowest level of the software stack that
represents the Aneka container. They provide access to the resource-
provisioning subsystems and the monitoring features implemented in
Aneka.
2. Foundation Services:
Foundation services are the core services of the Aneka cloud and define
the infrastructure management features of the system. Foundation
services are concerned with the logical management of the distributed
system built on top of the infrastructure and provide ancillary services
for delivering applications.
3. Application Services:
Application services manage the execution of applications and constitute
a layer that varies according to the specific programming model used to
develop distributed applications on top of Aneka.

Architecture of Aneka
Aneka is a platform and framework for developing distributed
applications on the Cloud. It uses desktop PCs on-demand and CPU
cycles in addition to a heterogeneous network of servers or datacenters.
Aneka provides a rich set of APIs for developers to transparently exploit
such resources and express the business logic of applications using
preferred programming abstractions.
System administrators can leverage a collection of tools to monitor and
control the deployed infrastructure. It can be a public cloud available to
anyone via the Internet or a private cloud formed by nodes with restricted
access.
An Aneka-based computing cloud is a collection of physical and
virtualized resources connected via a network, either the Internet or a
private intranet. Each resource hosts an instance of the Aneka
container, which represents the runtime environment where distributed
applications are executed. The container provides the basic management
features of a single node and leverages all the other functions through
its hosted services.
Services are divided into fabric, foundation, and execution services.
Foundation services identify the core system of the Aneka middleware,
which provides a set of infrastructure features enabling Aneka
containers to perform specialized tasks. Fabric services interact
directly with nodes through the Platform Abstraction Layer (PAL) and
perform hardware profiling and dynamic resource provisioning. Execution
services deal directly with scheduling and executing applications in the
cloud.
One of the key features of Aneka is its ability to provide a variety of
ways to express distributed applications by offering different
programming models; execution services are mostly concerned with
providing the middleware with an implementation of these models.
Additional services such as persistence and security are transversal to
the whole stack of services hosted by the container.

Apache Hadoop
Apache Hadoop was born to enhance the usage of big data and solve its
major issues. The web was generating enormous amounts of information on
a daily basis, and it was becoming very difficult to manage the data of
around one billion pages of content. In response, Google
invented a new methodology for processing data, popularly known as
MapReduce. A year later, Google published a white paper describing the
MapReduce framework; Doug Cutting and Mike Cafarella, inspired by
the white paper, created Hadoop to apply these concepts to an
open-source software framework that supported the Nutch search engine
project. Considering the original use case, Hadoop was designed with a
much simpler storage infrastructure.

Apache Hadoop is the most important framework for working with Big
Data. Hadoop's biggest strength is scalability: it upgrades from working
on a single node to thousands of nodes seamlessly, without any issue.
The different domains of Big Data mean we are able to manage data
from videos, text, transactional data, sensor information, statistical
data, social media conversations, search engine queries, e-commerce
data, financial information, weather data, news updates, forum
discussions, executive reports, and so on.

Doug Cutting and his team developed an open-source project known as
HADOOP, which allows you to handle very large amounts of data. Hadoop
runs applications on the basis of MapReduce, where the data is processed
in parallel, accomplishing complete statistical analysis on large
amounts of data.

It is a Java-based framework intended to scale from a single server to
thousands of machines, each offering local computation and storage. It
supports large collections of data sets in a distributed computing
environment.

The Apache Hadoop software library is a framework that allows the
distributed processing of huge data sets across clusters of computers
using simple programming models.

The Apache Hadoop Module


Hadoop Common: Includes the common utilities which support the
other Hadoop modules.

HDFS: The Hadoop Distributed File System provides high-throughput
access to application data.

Hadoop YARN: This technology is used for job scheduling
and efficient management of cluster resources.

MapReduce: This is a highly efficient methodology for parallel
processing of huge volumes of data.
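
As a brief illustration of how an application talks to HDFS (a minimal
sketch, not from the original text; the paths used are hypothetical and
the cluster address comes from the standard Hadoop configuration files
on the classpath):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Copies a local file into HDFS and lists the target directory.
public class HdfsPutAndList {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();        // reads core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);

        Path local = new Path("data/input.txt");         // hypothetical local file
        Path remote = new Path("/user/demo/input.txt");  // hypothetical HDFS path
        fs.copyFromLocalFile(local, remote);

        for (FileStatus status : fs.listStatus(new Path("/user/demo"))) {
            System.out.println(status.getPath() + "  " + status.getLen() + " bytes");
        }
        fs.close();
    }
}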

Application mapping
Application mapping refers to the process of identifying and mapping
interactions and relationships between applications and the underlying
infrastructure. An application, or network map, visualizes the devices on
a network and how they are related. It gives users a sense of how the
network performs in order to run analysis and avoid data bottlenecks.
For containerized applications, it depicts the dynamic connectivities and
interactions between the microservices.
What is Application Mapping?
As enterprises grow, the number and complexity of applications grow as
well. Application mapping helps IT teams track the interactions and
relationships between applications, software, and supporting hardware.
In the past, companies mapped out interdependencies between apps using
extensive spreadsheets and manual audits of application code. Today,
however, companies can rely on an application mapping tool that
automatically discovers and visualizes interactions for IT teams. Popular
application mapping tools include configuration management database –
CMDB application mapping or UCMDB application mapping.
Some application delivery controllers also integrate application mapping
software.
Application mapping includes the following techniques:
• SNMP-Based Maps — Simple Network Management Protocol (SNMP)
monitors the health of computer and network equipment such as routers.
An SNMP-based map uses data from router and switch management
information bases (MIBs).
• Active Probing — Creates a map with data from packets that report IP
router and switch forwarding paths to the destination address. The maps
are used to find “peering links” between Internet Service Providers
(ISPs). The peering links allow ISPs to exchange customer traffic.
• Route Analytics — Creates a map by passively listening to layer 3
protocol exchanges between routers. This data facilitates real-time
network monitoring and routing diagnostics.

What are the Benefits of Application Mapping?


Application mapping diagrams can be helpful for the following benefits:
• Visibility – locate where exactly applications are running and plan
accordingly for system failures
• Application health – understand the health of entire application instead
of analyzing individual infrastructure silos
• Quick troubleshooting – pinpoint faulty devices or software components
in seconds by conveniently tracing connections on the app map, rather
than sifting through the entire infrastructure

Google App Engine (GAE)


Google App Engine is a scalable runtime environment mostly
used to run web applications. These applications scale dynamically as
demand changes over time, thanks to Google's vast computing
infrastructure. Because it offers a secure execution environment in
addition to a number of services, App Engine makes it easier to develop
scalable and high-performance web apps. Applications scale up and
down in response to shifting demand. Cron tasks, communications,
scalable data stores, work queues, and in-memory caching are some of
these services.

The App Engine SDK facilitates the testing and development of
applications by emulating the production runtime environment and
allowing developers to design and test applications on their own PCs.
When an application is finished, developers can quickly
migrate it to App Engine, put quotas in place to control the cost
incurred, and make the application available to everyone. Python,
Java, and Go are among the languages that are currently supported.

Features of App Engine

Runtimes and Languages


To create an application for App Engine, you can use Go, Java, PHP,
or Python. You can develop and test an app locally using the SDK's
deployment toolkit. Each language's SDK and runtime are unique.
Your program runs in one of the following (a minimal servlet sketch
follows the list):
 Java Runtime Environment version 7
 Python Runtime Environment version 2.7
 PHP runtime's PHP 5.4 environment
 Go runtime 1.2 environment
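
A minimal Java sketch of an App Engine web application, assuming the
standard servlet API and a deployment descriptor (web.xml) that maps the
servlet to a URL; the class name and URL path are illustrative:

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// A plain servlet; on App Engine's Java runtime it is packaged as a WAR
// and mapped to a URL (e.g. /hello) in web.xml.
public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.setContentType("text/plain");
        resp.getWriter().println("Hello from App Engine!");
    }
}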
Advantages of Google App Engine
The Google App Engine has a lot of benefits that can help you advance
your app ideas. This comprises:
1. Infrastructure for Security: The Internet infrastructure that
Google uses is arguably the safest in the entire world. Since the
application data and code are hosted on extremely secure
servers, there has rarely been any kind of illegal access to date.
2. Faster Time to Market: For every organization, getting a
product or service to market quickly is crucial. When it comes
to quickly releasing the product, encouraging the development
and maintenance of an app is essential. A firm can grow swiftly
with Google Cloud App Engine’s assistance.
3. Quick to Start: You don’t need to spend a lot of time
prototyping or deploying the app to users because there is no
hardware or product to buy and maintain.
4. Easy to Use: The tools that you need to create, test, launch, and
update the applications are included in Google App Engine
(GAE).
5. Rich set of APIs & Services: A number of built-in APIs and
services in Google App Engine enable developers to create
strong, feature-rich apps.
6. Scalability: This is one of the deciding variables for the success
of any software. When using the Google app engine to construct
apps, you may access technologies like GFS, Bigtable, and
others that Google uses to build its own apps.
7. Performance and Reliability: Among international brands,
Google ranks among the top ones. Therefore, you must bear
that in mind while talking about performance and reliability.
8. Cost Savings: To administer your servers, you don’t need to
employ engineers or even do it yourself. The money you save
might be put toward developing other areas of your company.
9. Platform Independence: Since the app engine platform only
has a few dependencies, you can easily relocate all of your data
to another environment.
Amazon Web Services (AWS)
AWS (Amazon Web Services) is a comprehensive, evolving cloud
computing platform provided by Amazon that includes a mixture of
infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS) and
packaged-software-as-a-service (SaaS) offerings. AWS services can offer
an organization tools such as compute power, database storage and
content delivery services.

Amazon Web Services launched its first web services in 2002 from
the internal infrastructure that Amazon.com built to handle its online
retail operations. In 2006, it began offering its defining IaaS services.
AWS was one of the first companies to introduce a pay-as-you-go cloud
computing model that scales to provide users with compute, storage or
throughput as needed.

AWS offers many different tools and solutions for enterprises and
software developers that can be used in data centers in up to 190
countries. Groups such as government agencies, education institutions,
non-profits and private organizations can use AWS services.

More than 200 services comprise the AWS portfolio, including those for
compute, databases, infrastructure management, application
development and security. These services, by category, include the
following:

 compute
 storage
 databases
 data management
 migration
 hybrid cloud
 networking
 development tools
 management
 monitoring
 security
 governance
 big data management
 analytics
 artificial intelligence (AI)
 mobile development
 messages and notification
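
As a small illustration of consuming one of these services
programmatically (a hedged sketch, not from the original text; it assumes
the AWS SDK for Java v2 is on the classpath and that credentials are
configured in the environment), the following lists the S3 buckets in an
account:

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.Bucket;

// Lists the S3 buckets visible to the configured credentials.
public class ListBuckets {
    public static void main(String[] args) {
        try (S3Client s3 = S3Client.builder()
                                   .region(Region.US_EAST_1)   // illustrative region
                                   .build()) {
            for (Bucket bucket : s3.listBuckets().buckets()) {
                System.out.println(bucket.name());
            }
        }
    }
}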
Eucalyptus
Eucalyptus is a Linux-based open-source software
architecture for cloud computing and a storage platform that
implements Infrastructure as a Service (IaaS). It provides quick and
efficient computing services. Eucalyptus was designed to provide
services compatible with Amazon's EC2 cloud and Simple Storage
Service (S3).
Eucalyptus Architecture
Eucalyptus CLIs can manage both Amazon Web Services and their own
private instances. Clients have the flexibility to transfer instances
from Eucalyptus to Amazon Elastic Compute Cloud. The virtualization
layer manages the network, storage, and compute. Instances are isolated
by hardware virtualization.
Important features are:
1. Images: A good example is the Eucalyptus Machine Image,
which is a software module bundled and uploaded to the cloud.
2. Instances: When we run the image and utilize it, it becomes
an instance.
3. Networking: It can be further subdivided into three modes:
Static mode (allocates IP addresses to instances), System mode
(assigns a MAC address and attaches the instance's network
interface to the physical network via the Node Controller), and
Managed mode (creates a local network of instances).
4. Access Control: It is used to impose restrictions on users.
5. Elastic Block Storage: It provides block-level storage volumes to
attach to an instance.
6. Auto-scaling and Load Balancing: It is used to create or
destroy instances or services based on requirements.

Components of Architecture
 Node Controller manages the lifecycle of instances running on each
node. It interacts with the operating system, hypervisor, and
Cluster Controller, and controls the working of VM instances on
the host machine.
 Cluster Controller manages one or more Node Controllers and
communicates with the Cloud Controller. It gathers information and
schedules VM execution.
 Storage Controller (Walrus) allows the creation of snapshots
of volumes and provides persistent block storage for VM instances.
Walrus is a simple file storage system that stores
images and snapshots, and stores and serves files using S3 (Simple
Storage Service) APIs.
 Cloud Controller is the front-end for the entire architecture. It
acts as an AWS-compliant web service endpoint for client tools on one
side and interacts with the rest of the components on the other side.

Advantages Of The Eucalyptus Cloud


1. Eucalyptus can be utilized to run both a Eucalyptus
private cloud and a Eucalyptus public cloud.
2. Amazon or Eucalyptus machine images can be
run on either cloud.
3. Its API is compatible with the Amazon Web Services APIs (see the
sketch after this list).
4. Eucalyptus can be used with DevOps tools like Chef
and Puppet.
5. Although it isn't as popular yet, it has the potential to be an
alternative to OpenStack and CloudStack.
6. It can be used to build hybrid, public and private clouds.
7. It allows users to turn their own data centers into a private
cloud and, hence, extend the services to other organizations.
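
Because the API is EC2-compatible, a standard EC2 client can be pointed
at a Eucalyptus endpoint. The sketch below (not from the original text)
uses the AWS SDK for Java v2 with an overridden endpoint; the endpoint
URL and region are assumptions for illustration only:

import java.net.URI;

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.Instance;
import software.amazon.awssdk.services.ec2.model.Reservation;

// Talks to an EC2-compatible (e.g. Eucalyptus) endpoint and lists instances.
public class ListEucalyptusInstances {
    public static void main(String[] args) {
        try (Ec2Client ec2 = Ec2Client.builder()
                .endpointOverride(URI.create("https://compute.cloud.example.com:8773/")) // hypothetical endpoint
                .region(Region.US_EAST_1)   // placeholder; credentials come from the environment
                .build()) {
            for (Reservation r : ec2.describeInstances().reservations()) {
                for (Instance i : r.instances()) {
                    System.out.println(i.instanceId() + "  " + i.state().nameAsString());
                }
            }
        }
    }
}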
OpenNebula
OpenNebula is a cloud-based distributed data center management
platform. The OpenNebula platform manages the virtual data
center infrastructure to build private, public and hybrid
implementations of infrastructure as a service. OpenNebula is free and
open source software, subject to the requirements of version 2 of the
Apache License. OpenNebula orchestrates storage, networking,
virtualization, monitoring, and security to deploy multi-tier services
(e.g. cluster computing) as virtual machines on distributed
infrastructure, covering both data center and remote cloud resources,
in accordance with allocation policies. According to a 2010 European
Commission report, "... only a handful of cloud-based research
projects have been launched - the most prominent among them is
probably OpenNebula ...". The toolkit includes features for
integration, management, measurement, security and accounting.
It also offers configuration, collaboration and portability features,
giving cloud users and administrators a choice of several cloud
interfaces (Amazon EC2 Query, OGF Open Cloud Computing
Interface and vCloud) and hypervisors (Xen, KVM and VMware),
and it can accommodate multiple hardware and software combinations
in a data center. OpenNebula was a mentoring organization in Google
Summer of Code 2010. OpenNebula is sponsored by OpenNebula
Systems (formerly known as C12G). OpenNebula is used by hosting
providers, telecom operators, IT service providers, supercomputing
centers, research labs, and international research projects. Some
cloud solutions use OpenNebula as their cloud engine or kernel service.
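The cloud interfaces listed above sit on top of OpenNebula's native XML-RPC API, which scripts and front-ends can also call directly. The following is a minimal sketch, assuming a hypothetical front-end host on the usual port 2633 and a "user:password" session string; the one.vmpool.info signature shown follows the OpenNebula XML-RPC reference for recent releases and may differ between versions.

```python
# Minimal sketch: listing virtual machines through OpenNebula's XML-RPC endpoint.
# Host and credentials are placeholders for an assumed deployment.
import xmlrpc.client

ONE_ENDPOINT = "http://one.example.com:2633/RPC2"   # hypothetical front-end
SESSION = "oneadmin:opennebula_password"            # "user:password" session string

server = xmlrpc.client.ServerProxy(ONE_ENDPOINT)

# Arguments (assumed per the XML-RPC reference): filter=-2 (all resources),
# start/end=-1 (whole id range), state=-1 (any VM state).
response = server.one.vmpool.info(SESSION, -2, -1, -1, -1)

success, body = response[0], response[1]
if success:
    # On success the second element is an XML document describing the VM pool.
    print(body[:500])
else:
    print("OpenNebula error:", body)
```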
CloudSim
CloudSim is a framework for modeling and simulation of cloud
computing infrastructure and services. First developed in the
Cloud Computing and Distributed Systems (CLOUDS) Laboratory,
University of Melbourne, Australia, CloudSim has become one of the
most popular open source cloud simulators in research and studies.
CloudSim is written entirely in Java.
CloudSim extensions: Originally developed as a standalone cloud
simulator, CloudSim has also been extended by independent researchers.
• Although CloudSim itself does not have a user interface, extensions
like CloudReports provide a GUI for CloudSim simulations.
• CloudSimEx extends CloudSim by adding MapReduce simulation
capabilities and related templates.
• Cloud2Sim extends CloudSim to run on multiple distributed servers,
using the Hazelcast distributed in-memory framework.
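CloudSim programs are written in Java against classes such as datacenters, VMs and cloudlets. The sketch below is deliberately not CloudSim code; it is a tiny Python illustration of the underlying idea a simulator models: cloudlets (jobs) with a length in instructions are assigned to VMs with a MIPS rating, and completion times are computed rather than executed on real hardware. All names and numbers are made up for illustration.

```python
# Toy illustration (not CloudSim): estimate cloudlet finish times on VMs.
# A cloudlet's length is in million instructions (MI); a VM's speed is in MIPS.
from dataclasses import dataclass

@dataclass
class Vm:
    vm_id: int
    mips: float          # millions of instructions per second

@dataclass
class Cloudlet:
    cloudlet_id: int
    length_mi: float     # total work, in millions of instructions

def simulate(vms, cloudlets):
    """Round-robin cloudlets onto VMs and report estimated runtimes."""
    for i, c in enumerate(cloudlets):
        vm = vms[i % len(vms)]
        runtime = c.length_mi / vm.mips      # seconds, ignoring contention
        print(f"Cloudlet {c.cloudlet_id} on VM {vm.vm_id}: ~{runtime:.2f} s")

if __name__ == "__main__":
    simulate(
        vms=[Vm(0, mips=1000), Vm(1, mips=2000)],
        cloudlets=[Cloudlet(0, 40000), Cloudlet(1, 40000), Cloudlet(2, 10000)],
    )
```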

OpenStack
It is a free, open-standard cloud computing platform that first
came into existence on July 21, 2010. It was a joint project of
Rackspace Hosting and NASA to make cloud computing more
ubiquitous. It is deployed as Infrastructure-as-a-Service (IaaS) in
both public and private clouds, where virtual resources are made
available to users. The software platform consists of interrelated
components that control multi-vendor hardware pools of processing,
storage, and networking resources throughout a data center.
In OpenStack, the tools used to build the platform are referred to
as "projects". These projects handle a large number of services,
including compute, networking, and storage. Unlike virtualization,
in which resources such as RAM, CPU, etc. are abstracted from the
hardware using hypervisors, OpenStack uses a set of APIs to abstract
those resources so that users and administrators can interact with
the cloud services directly.
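Those APIs are what client tools and SDKs talk to. As a rough sketch, the openstacksdk Python library can connect to an OpenStack cloud and list compute and network resources; the endpoint, project and credential values below are placeholders, and real deployments usually load them from a clouds.yaml file or environment variables instead.

```python
# Minimal sketch using the openstacksdk library (pip install openstacksdk).
# All connection values are placeholders for an assumed demo deployment.
import openstack

conn = openstack.connect(
    auth_url="https://openstack.example.com:5000/v3",  # hypothetical Keystone endpoint
    project_name="demo",
    username="demo",
    password="secret",
    user_domain_name="Default",
    project_domain_name="Default",
)

# Each service (compute, network, image, ...) is reached through its own API.
print("Servers:")
for server in conn.compute.servers():
    print(" -", server.name, server.status)

print("Networks:")
for net in conn.network.networks():
    print(" -", net.name)
```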

Advantages of using OpenStack


 It enables rapid provisioning of resources, which makes
orchestration and scaling resources up and down easy.
 Deploying applications with OpenStack does not take a large
amount of time.
 Since resources are scalable, they are used more wisely and
efficiently.
 The regulatory compliance requirements associated with its usage
are manageable.

Disadvantages of using OpenStack


 OpenStack is not very robust where orchestration is concerned.
 Even today, the APIs provided and supported by OpenStack are
not compatible with many hybrid cloud providers, so integrating
solutions becomes difficult.
 Like all cloud services, OpenStack services also come with the
risk of security breaches.
SAP
SAP Cloud Platform is a cloud-based tool to develop and deploy
custom applications. It offers a full service catalog, including a
database, storage and backup of data, a reporting service and a
transaction layer for multi-platform software development. This
section gives you comprehensive coverage of SAP Cloud Platform
concepts and makes you comfortable using it in your software
development projects.
SAP Cloud Platform is a cloud service provided by SAP, based on platform
as a service (PaaS), to develop and deploy custom web applications. SAP
is responsible for managing the complete infrastructure of this platform,
including hardware servers, maintenance costs, component upgrades, and
the system lifecycle.
The SAP Cloud service catalog includes database, storage and data
backup, a reporting service and a transaction layer for multi-platform
software development. SAP Cloud Platform customers can use this cloud
environment to manage software development, or they can adopt a hybrid
model that combines cloud and on-premise environments.
SAP Cloud Platform can be integrated with the following to exchange data
and support development:
 SAP Applications
 3rd party applications
 Internal solutions

SAP Cloud Platform supports business-critical solutions for software
development, including:
 SAP Fieldglass
 SAP SuccessFactors
 SAP Hybris
 SAP Ariba
 BusinessObjects
 SAP ERP Business suite
 Concur

You can migrate apps and extensions seamlessly to SAP Cloud Platform,
including apps for SAP Business Suite and SAP S/4HANA. All the data
centers are managed by SAP itself, so you can expect the following key
benefits associated with data center management:
 Security
 Compliance
 Availability
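Applications on the platform commonly reach SAP and third-party backends through HTTP-based services such as OData. Purely as an illustration of that integration pattern, the sketch below reads an OData entity set with Python; the service URL, entity set, field names and credentials are hypothetical, not a real SAP endpoint.

```python
# Hypothetical example: reading an OData v2 entity set exposed by a backend
# system integrated with SAP Cloud Platform. URL and credentials are made up.
import requests

SERVICE_URL = "https://backend.example.com/odata/v2/Products"  # hypothetical
AUTH = ("demo_user", "demo_password")                          # placeholder credentials

resp = requests.get(
    SERVICE_URL,
    params={"$top": 5, "$format": "json"},   # standard OData query options
    auth=AUTH,
    timeout=30,
)
resp.raise_for_status()

# OData v2 JSON responses wrap results in {"d": {"results": [...]}}.
for item in resp.json().get("d", {}).get("results", []):
    print(item.get("Name"), item.get("Price"))
```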
EMC
EMC storage refers to the various storage products, systems and
services being offered by EMC Corporation, which include disk,
flash and hybrid storage systems and arrays. These systems are sold
to enterprises of all sizes in order to satisfy their storage needs, and,
combined with EMC’s Information Management Strategy Services,
enable enterprises to organize unstructured information as well as
to focus on reducing storage cost and increasing security.
Salesforce
Salesforce is a SaaS, or Software as a Service, which means there is
no need to install software or a server to work on it. Users can
simply sign up at Salesforce.com and start running their business
instantly.

o It was founded by Marc Benioff, Parker Harris, Dave Moellenhoff,
and Frank Dominguez in 1999.
o Salesforce started as CRM software, but today it provides
various products and software solutions to users and developers.
o Since Salesforce is cloud-based software, it does not require
any IT professional to set anything up.
o It provides one of the best ways to connect with customers, business
partners, and clients over a single integrated environment. It
allows businesses to identify customer requirements, address
problems easily, and provide a solution in the minimum timeframe.

Technologies used by Salesforce

o Apex: Salesforce has its own programming language, known as
Apex. Hence, to become a Salesforce developer or to create a
Salesforce app, a user must have good knowledge of Apex (see the
access sketch after this list).
o Visualforce: Visualforce is the framework introduced by
Salesforce that enables developers to create custom user
interfaces that can run on the Lightning Platform.
o Compiler: Salesforce contains its own compiler to compile Apex
programs and Visualforce pages.
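Beyond Apex and Visualforce, Salesforce also exposes REST and SOAP web service APIs so that external programs can query and update CRM data. As a rough sketch, the third-party simple_salesforce Python package wraps the REST API; the credentials and the SOQL query below are placeholders, not values from a real org.

```python
# Sketch using the third-party simple_salesforce package
# (pip install simple-salesforce). Credentials are placeholders.
from simple_salesforce import Salesforce

sf = Salesforce(
    username="user@example.com",
    password="password",
    security_token="security_token",
)

# SOQL: Salesforce's SQL-like query language, sent over the REST API.
result = sf.query("SELECT Id, Name FROM Account LIMIT 5")

for record in result["records"]:
    print(record["Id"], record["Name"])
```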

VMware
VMware, Inc. is a company that provides virtualization solutions;
it was founded in 1998 and is headquartered in Palo Alto, California.
Its virtualization platform products include Player
for virtualization of desktops; Fusion for Intel-based Apple
Macintosh computers; Workstation for software developers and
enterprise IT professionals; Server, which enables virtual
partitioning of a server; ESX Server, an enterprise-class
virtualization platform that runs directly on the hardware; Virtual
SMP that enables a virtual machine to use four physical processors
simultaneously; and VMFS, which allows multiple ESX Servers to
share block-based storage.

The company also provides VirtualCenter, which offers a
central point of control for managing a virtualized IT environment;
VMotion, which allows users to move virtual machines; DRS that
creates resource pools from physical servers; HA, which provides
automated recovery from hardware failure; Consolidated Backup
that enables LAN-free automated backup of virtual machines;
Storage VMotion, which allows live migration of virtual machine
disks; Update Manager that automates patch and update
management; Capacity Planner, which enables VMware service
providers to perform capacity assessments onsite; Converter to
convert local and remote physical machines into virtual machines;
Lab Manager to automate the setup, capture, storage, and sharing
of multi-machine software configurations; ACE that allows desktop
administrators to protect company resources against the risks
presented by unmanaged desktops; Virtual Desktop Infrastructure
to host individual desktops inside virtual machines running on
centralized servers; Virtual Desktop Manager, a desktop
management server that connects users to virtual desktops in the
data center; and VMware Lifecycle Manager that provides control
over the virtual environment. The following part of this article
makes you more acquainted with various VMware products and
their applications.

VMware Products
Desktop software
 VMware Workstation is a software suite that allows users to run
multiple instances of x86- or x86-64-compatible operating
systems on a single physical PC.
 VMware Fusion provides similar functionality for users of the
Intel Mac platform, along with full compatibility with virtual
machines created by other VMware products.
 VMware Player is for users without a license to use VMware
Workstation or VMware Fusion. VMware offers this software as
a freeware product for personal use.
Server software:
VMware markets two virtualization products for servers:
 VMware ESX is an enterprise-level product that can deliver greater
performance than the freeware VMware Server due to lower
system overhead. VMware ESX is a "bare-metal" product,
running directly on the server hardware and allowing virtual
servers to also use the hardware more or less directly. VMware ESX
integrates with VMware vCenter, which offers extra services to
enhance the reliability and manageability of a server
deployment.
 VMotion and Storage VMotion - VMotion is the capability to
move a running virtual machine from one ESX host to another
without interrupting it, and Storage VMotion is the capability to
move a running virtual machine's disks from one storage
device to another.
 DRS - Distributed Resource Scheduler - automatic load
balancing of an ESX cluster using VMotion
 HA - High Availability - In case of hardware failure in a cluster,
the virtual servers will automatically restart on another host in
the cluster
 VMware ESXi is quite similar to ESX, but differs in that
the Service Console is removed and replaced with a minimal
BusyBox installation. Disk space requirements are much lower
than for ESX, and the memory footprint is reduced. ESXi is
intended to be run from flash disks in servers but can also be run
from normal disks. Management of VMware ESXi hosts is
performed through a VirtualCenter Server.
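ESX/ESXi hosts and VirtualCenter/vCenter expose the vSphere API, which management tools and scripts can call. Below is a minimal sketch using the pyVmomi Python bindings to list the VMs in an inventory; the host name and credentials are placeholders, and certificate verification is disabled only for the sake of the demo.

```python
# Sketch using pyVmomi (pip install pyvmomi) to list VMs known to a
# vCenter/ESXi endpoint. Host and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()   # demo only: skip certificate checks

si = SmartConnect(
    host="vcenter.example.com",   # hypothetical vCenter or ESXi host
    user="administrator@vsphere.local",
    pwd="password",
    sslContext=context,
)

try:
    content = si.RetrieveContent()
    # Build a view over all VirtualMachine objects in the inventory.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True
    )
    for vm in view.view:
        print(vm.name, vm.runtime.powerState)
    view.Destroy()
finally:
    Disconnect(si)
```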
