BCA V Sem Cloud Computing - Mrs. Krupa Rani

UNIT 3
Building Aneka Clouds
Aneka is primarily a platform for developing distributed applications for clouds. As a software
platform it requires infrastructure on which to be deployed; this infrastructure needs to be
managed. Infrastructure management tools are specifically designed for this task, and building
clouds is one of the primary tasks of administrators. Aneka supports various deployment
models for public, private, and hybrid clouds.
1) Infrastructure Organization:

The scenario is a reference model for all the different deployments Aneka supports. A central role is played by the Administrative Console, which performs all the required management operations. A fundamental element for Aneka Cloud deployment is constituted by repositories. A repository provides storage for all the libraries required to lay out and install the basic Aneka platform. These libraries constitute the software image for the node manager and the container programs. Repositories can make libraries available through a variety of communication channels, such as HTTP, FTP, common file sharing, and so on. The Management Console can manage multiple repositories and select the one that best suits the specific deployment. The infrastructure is deployed by harnessing a collection of nodes and installing on them the Aneka node manager, also called the Aneka daemon. The daemon constitutes the remote management service used to deploy and control container instances. The collection of resulting containers identifies the Aneka Cloud. From an infrastructure point of view, the management of physical or virtual nodes is performed uniformly as long as it is possible to have an Internet connection
and remote administrative access to the node. A different scenario is constituted by the dynamic
provisioning of virtual instances; these are generally created by pre-packaged images already
containing an installation of Aneka, which only need to be configured to join a specific Aneka
Cloud.
2) Logical Organization:
The logical organization of Aneka Clouds can be very diverse, since it strongly depends on the
configuration selected for each of the container instances belonging to the Cloud. The most
common scenario is to use a master-worker configuration with separate nodes for storage.
The master node features all the services that are most likely to be present in one single copy
and that provide the intelligence of the Aneka Cloud. What specifically characterizes a node as
a master node is the presence of the Index Service (or Membership Catalogue) configured in
master mode; all the other services, except for those that are mandatory, might be present or
located in other nodes. A common configuration of the master node is as follows:
 Index Service (master copy)
 Heartbeat Service
 Logging Service
 Reservation Service
 Resource Provisioning Service
 Accounting Service
 Reporting and Monitoring Service
 Scheduling Services for the supported programming models
The master node also provides connection to an RDBMS facility where the state of several
services is maintained. For the same reason, all the scheduling services are maintained in the
master node. They share the application store that is normally persisted on the RDBMS in
order to provide a fault-tolerant infrastructure. The master configuration can then be replicated
in several nodes to provide a highly available infrastructure based on the failover mechanism.

The worker nodes constitute the workforce of the Aneka Cloud and are generally configured
for the execution of applications. They feature the mandatory services and the specific
execution services of each of the supported programming models in the Cloud. A very common configuration is the following:
 Index Service
 Heartbeat Service
 Logging Service
 Allocation Service
 Monitoring Service
 Execution Services for the supported programming models
Storage nodes are optimized to provide storage support to applications. They feature, among
the mandatory and usual services, the presence of the Storage Service. The number of storage
nodes strictly depends on the predicted workload and storage consumption of applications.
Storage nodes mostly reside on machines that have considerable disk space to accommodate a
large quantity of files. The common configuration of a storage node is the following:
 Index Service
 Heartbeat Service
 Logging Service
 Monitoring Service

 Storage Service
3) Private Cloud Deployment Mode:
A private deployment mode is mostly constituted by local physical resources and infrastructure
management software providing access to a local pool of nodes, which might be virtualized. In
this scenario Aneka Clouds are created by harnessing a heterogeneous pool of resources such
as desktop machines, clusters, or workstations. These resources can be partitioned into
different groups, and Aneka can be configured to leverage these resources according to
application needs.
This deployment is acceptable for a scenario in which the workload of the system is predictable and a local virtual machine manager can easily address excess capacity demand. Most of the Aneka nodes are constituted of physical nodes with a long lifetime and a static configuration and generally do not need to be reconfigured often. The different nature of the machines
harnessed in a private environment allows for specific policies on resource management and
usage that can be accomplished by means of the Reservation Service. For example, desktop
machines that are used during the day for office automation can be exploited outside the
standard working hours to execute distributed applications.

4) Public Cloud Deployment Mode:


Public Cloud deployment mode features the installation of Aneka master and worker nodes
over a completely virtualized infrastructure that is hosted on the infrastructure of one or more
resource providers. In this case it is possible to have a static deployment where the nodes are
provisioned beforehand and used as though they were real machines.

The deployment is generally contained within the infrastructure boundaries of a single IaaS
provider. The reasons for this are to minimize the data transfer between different providers,
which is generally priced at a higher cost, and to have better network performance. In this
scenario it is possible to deploy an Aneka Cloud composed of only one node and to completely
leverage dynamic provisioning to elastically scale the infrastructure on demand. A fundamental role is played by the Resource Provisioning Service, which can be configured with different
images and templates to instantiate. Other important services that have to be included in the
master node are the Accounting and Reporting Services.
5) Hybrid Cloud Deployment Mode:
The hybrid deployment model constitutes the most common deployment of Aneka. In many
cases, there is an existing computing infrastructure that can be leveraged to address the
computing needs of applications.
This scenario constitutes the most complete deployment for Aneka that is able to leverage all
the capabilities of the framework:
 Dynamic Resource Provisioning
 Resource Reservation
 Workload Partitioning
 Accounting, Monitoring, and Reporting

In a hybrid scenario, heterogeneous resources can be used for different purposes. As we discussed in the case of a private cloud deployment, desktop machines can be reserved for low priority workload outside the common working hours. The majority of the applications will be executed on workstations and clusters, which are the nodes that are constantly connected to the
Aneka Cloud. Any additional computing capability demand can be primarily addressed by the
local virtualization facilities, and if more computing power is required, it is possible to leverage
external IaaS providers.
Cloud Programming and Management:
Aneka’s primary purpose is to provide a scalable middleware product in which to execute
distributed applications. Application development and management constitute the two major
features that are exposed to developers and system administrators.
1) Aneka SDK:
Aneka provides APIs for developing applications on top of existing programming models,
implementing new programming models, and developing new services to integrate into the
Aneka Cloud. The development of applications mostly focuses on the use of existing features
and leveraging the services of the middleware, while the implementation of new programming
models or new services enriches the features of Aneka. The SDK provides support for both
programming models and services by means of the Application Model and the Service Model.
 Application Model: The Application Model represents the minimum set of APIs that is
common to all the programming models for representing and programming distributed
applications on top of Aneka. This model is further specialized according to the needs and
the particular features of each of the programming models.

Each distributed application running on top of Aneka is an instance of the ApplicationBase<M> class, where M identifies the specific type of application manager
used to control the application. Application classes constitute the developers’ view of a
distributed application on Aneka Clouds, whereas application managers are internal
components that interact with Aneka Clouds in order to monitor and control the execution of
the application. Application managers are also the first element of specialization of the
model and vary according to the specific programming model used.
Aneka further specializes applications into two main categories:
(1) applications whose tasks are generated by the user
(2) applications whose tasks are generated by the runtime infrastructure
The first category is the most common and it is used as a reference for several programming
models supported by Aneka: the Task Model, the Thread Model, and the Parameter Sweep
Model. Applications that fall into this category are composed of a collection of units of work
submitted by the user and represented by the WorkUnit class. Each unit of work can have
input and output files, the transfer of which is transparently managed by the runtime. The
specific type of WorkUnit class used to represent the unit of work depends on the
programming model used. All the applications that fall into this category inherit or are
instances of AnekaApplication<W,M>, where W is the specific type of WorkUnit class used,
and M is the type of application manager used to implement the ManualApplicationManager
interface.
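
To make these relationships concrete, the following is a minimal, self-contained C# sketch that mirrors the structure just described (ApplicationBase<M>, WorkUnit, and AnekaApplication<W,M> for user-generated work units). It is only an illustration under simplifying assumptions: the helper types ConsoleManager and SimpleTask and the AddWorkUnit/Submit methods are invented here and do not reproduce the actual Aneka SDK signatures.

```csharp
using System;
using System.Collections.Generic;

// Simplified stand-ins mirroring the class structure described in the text;
// these are NOT the real Aneka SDK types.
public abstract class WorkUnit
{
    // In Aneka, input/output file transfer is handled transparently by the
    // runtime; here we only keep an identifier for illustration.
    public Guid Id { get; } = Guid.NewGuid();
}

// The application manager is the internal component that interacts with the
// Aneka Cloud to monitor and control execution; this toy version only
// receives the submitted work units.
public interface IApplicationManager
{
    void Submit(WorkUnit unit);
}

// Common base for all programming models (mirrors ApplicationBase<M>).
public abstract class ApplicationBase<M> where M : IApplicationManager, new()
{
    protected M Manager { get; } = new M();
}

// Applications whose work units are created by the user, as in the Task,
// Thread, and Parameter Sweep models (mirrors AnekaApplication<W, M>).
public class AnekaApplication<W, M> : ApplicationBase<M>
    where W : WorkUnit
    where M : IApplicationManager, new()
{
    private readonly List<W> units = new List<W>();

    public void AddWorkUnit(W unit)
    {
        units.Add(unit);       // keep track of the user-submitted work
        Manager.Submit(unit);  // hand it to the application manager
    }
}

// Minimal usage example with a hypothetical manager and work unit.
public class ConsoleManager : IApplicationManager
{
    public void Submit(WorkUnit unit) => Console.WriteLine($"Submitted {unit.Id}");
}

public class SimpleTask : WorkUnit { }

public static class Demo
{
    public static void Main()
    {
        var app = new AnekaApplication<SimpleTask, ConsoleManager>();
        app.AddWorkUnit(new SimpleTask());
    }
}
```
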
The second category covers the case of MapReduce and all those other scenarios in which
the units of work are generated by the runtime infrastructure rather than the user. In this case
there is no common unit-of-work class used, and the specific classes used by application
developers strictly depend on the requirements of the programming model used. For
example, in the case of the MapReduce programming model, developers express their
distributed applications in terms of two functions, map and reduce; hence, the
MapReduceApplication class provides an interface for specifying the Mapper<K,V> and Reducer<K,V> types and the input files required by the application. Other programming
models might have different requirements and expose different interfaces. For this reason
there are no common base types for this category except for ApplicationBase<M>, where M
implements AutoApplicationManager. The features available in the Aneka Application Model are reflected differently in each of the supported programming models.
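
A rough, self-contained sketch of how this second category might be expressed is shown below. The generic type names (Mapper<K,V>, Reducer<K,V>, MapReduceApplication) follow the text, while the member names and signatures are assumptions made here for illustration and are not the actual Aneka SDK API.

```csharp
using System.Collections.Generic;

// Illustrative stand-ins only; the real Aneka types have their own signatures.
public abstract class Mapper<K, V>
{
    // Emits intermediate key/value pairs for one input record.
    public abstract IEnumerable<KeyValuePair<K, V>> Map(K key, V value);
}

public abstract class Reducer<K, V>
{
    // Aggregates all the intermediate values that share the same key.
    public abstract V Reduce(K key, IEnumerable<V> values);
}

// The developer only specifies the mapper and reducer types and the input
// files; the work units themselves are generated by the runtime
// infrastructure, not by the user.
public class MapReduceApplication<TMapper, TReducer>
{
    public List<string> InputFiles { get; } = new List<string>();
}
```
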

 Service Model
The Aneka Service Model defines the basic requirements to implement a service that can be
hosted in an Aneka Cloud. The container defines the runtime environment in which services
are hosted. Each service that is hosted in the container must be compliant with the IService
interface, which exposes the following methods and properties:
 Name and status
 Control operations such as Start, Stop, Pause, and Continue methods
 Message handling by means of the HandleMessage method
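
A rough sketch of such a contract is given below. The member names follow the list above and the state names follow the life-cycle description in the next paragraphs; the parameter and state types (for example the message type) are assumptions made here for illustration, not the actual Aneka SDK signatures.

```csharp
// Sketch of the service contract described above; not the real Aneka IService.
public enum ServiceState
{
    Unknown, Initialized,            // before the container starts the service
    Starting, Running,               // startup and normal operation
    Pausing, Paused, Resuming,       // optional pause/continue support
    Stopping, Stopped                // shutdown
}

public interface IMessage { }        // placeholder for the container's message type

public interface IService
{
    // Name and status.
    string Name { get; }
    ServiceState Status { get; }

    // Control operations invoked by the container.
    void Start();
    void Stop();
    void Pause();
    void Continue();

    // Message handling; a service processes messages only while Running.
    void HandleMessage(IMessage message);
}
```
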

This figure describes the reference life cycle of each service instance in the Aneka container.
The shaded balloons indicate transient states; the white balloons indicate steady states. A
service instance can initially be in the Unknown or Initialized state, a condition that refers to
the creation of the service instance by invoking its constructor during the configuration of
the container. Once the container is started, it will iteratively call the Start method on each
service instance. As a result the service instance is expected to be in a Starting state until the
startup process is completed, after which it will exhibit the Running state. This is the
condition in which the service remains as long as the container is active and running. This is
the only state in which the service is able to process messages. If an exception occurs while
starting the service, it is expected that the service will fall back to the Unknown state, thus
signalling an error.
When a service is running it is possible to pause its activity by calling the Pause method and
resume it by calling Continue. As described in the figure, the service moves first into the
Pausing state, thus reaching the Paused state. From this state, it moves into the Resuming
state while restoring its activity to return to the Running state. Not all the services need to
support the pause/continue operations, and the current implementation of the framework
does not feature any service with these capabilities.
When the container shuts down, the Stop method is iteratively called on each service
running, and services move first into the transient Stopping state to reach the final Stopped
state, where all resources that were initially allocated have been released.
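
The transitions described above can be summarized with a toy, synchronous state machine such as the one below. It only illustrates the order of the states; in the real framework the container drives these transitions, and the actual SDK implementation differs.

```csharp
using System;

// Toy illustration of the service life cycle described above; the state names
// follow the text, everything else is a simplified assumption.
public enum ServiceState
{
    Unknown, Initialized, Starting, Running,
    Pausing, Paused, Resuming, Stopping, Stopped
}

public class LifeCycleDemoService
{
    // The instance is Initialized once its constructor has run.
    public ServiceState State { get; private set; } = ServiceState.Initialized;

    public void Start()
    {
        State = ServiceState.Starting;        // transient
        try
        {
            // ... acquire resources, open connections, and so on ...
            State = ServiceState.Running;     // steady: messages can be processed
        }
        catch (Exception)
        {
            State = ServiceState.Unknown;     // startup failure signals an error
            throw;
        }
    }

    public void Pause()
    {
        if (State != ServiceState.Running) return;
        State = ServiceState.Pausing;         // transient
        State = ServiceState.Paused;          // steady: activity suspended
    }

    public void Continue()
    {
        if (State != ServiceState.Paused) return;
        State = ServiceState.Resuming;        // transient
        State = ServiceState.Running;
    }

    public void Stop()
    {
        State = ServiceState.Stopping;        // transient: release allocated resources
        State = ServiceState.Stopped;         // final state
    }
}
```
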
2) Management Tools:
Aneka is a pure PaaS implementation and requires virtual or physical hardware to be
deployed. Hence, infrastructure management, together with facilities for installing logical
clouds on such infrastructure, is a fundamental feature of Aneka’s management layer. This
layer also includes capabilities for managing services and applications running in the Aneka
Cloud.
 Infrastructure management: Aneka leverages virtual and physical hardware in order
to deploy Aneka Clouds. Virtual hardware is generally managed by means of the
Resource Provisioning Service, which acquires resources on demand according to the
need of applications, while physical hardware is directly managed by the Administrative
Console by leveraging the Aneka management API of the PAL. The management
features are mostly concerned with the provisioning of physical hardware and the
remote installation of Aneka on the hardware.
 Platform management: Infrastructure management provides the basic layer on top of
which Aneka Clouds are deployed. The creation of Clouds is orchestrated by deploying
a collection of services on the physical infrastructure that allows the installation and the
management of containers. A collection of connected containers defines the platform on
top of which applications are executed. The features available for platform management
are mostly concerned with the logical organization and structure of Aneka Clouds. It is
possible to partition the available hardware into several Clouds variably configured for
different purposes. Services implement the core features of Aneka Clouds and the
management layer exposes operations for some of them, such as Cloud monitoring,
resource provisioning and reservation, user management, and application profiling.
 Application management: Applications identify the user contribution to the Cloud.
The management APIs provide administrators with monitoring and profiling features
that help them track the usage of resources and relate them to users and applications.
This is an important feature in a cloud computing scenario in which users are billed for
their resource usage. Aneka exposes capabilities for giving summary and detailed
information about application execution and resource utilization.
