Unit 3 Notes
Krupa Rani
UNIT 3
Building Aneka Clouds
Aneka is primarily a platform for developing distributed applications for clouds. As a software
platform it requires infrastructure on which to be deployed; this infrastructure needs to be
managed. Infrastructure management tools are specifically designed for this task, and building
clouds is one of the primary tasks of administrators. Aneka supports various deployment
models for public, private, and hybrid clouds.
1) Infrastructure Organization:
Aneka Clouds are deployed on top of physical and virtualized resources. Each node runs the
Aneka daemon (node manager), which automates the deployment of the container and provides
remote administrative access to the node. A different scenario is constituted by the dynamic
provisioning of virtual instances; these are generally created from pre-packaged images that
already contain an installation of Aneka and only need to be configured to join a specific Aneka
Cloud.
2) Logical Organization:
The logical organization of Aneka Clouds can be very diverse, since it strongly depends on the
configuration selected for each of the container instances belonging to the Cloud. The most
common scenario is to use a master-worker configuration with separate nodes for storage.
The master node features all the services that are most likely to be present in one single copy
and that provide the intelligence of the Aneka Cloud. What specifically characterizes a node as
a master node is the presence of the Index Service (or Membership Catalogue) configured in
master mode; all the other services, except for those that are mandatory, might be present or
located in other nodes. A common configuration of the master node is as follows:
Index Service (master copy)
Heartbeat Service
Logging Service
Reservation Service
Resource Provisioning Service
Accounting Service
Reporting and Monitoring Service
Scheduling Services for the supported programming models
The master node also provides connection to an RDBMS facility where the state of several
services is maintained. For the same reason, all the scheduling services are maintained in the
master node. They share the application store that is normally persisted on the RDBMS in
order to provide a fault-tolerant infrastructure. The master configuration can then be replicated
in several nodes to provide a highly available infrastructure based on the failover mechanism.
The worker nodes constitute the workforce of the Aneka Cloud and are generally configured
for the execution of applications. They feature the mandatory services and the specific
execution services of each of the supported programming models in the Cloud. A very common
configuration is the following:
Index Service
Heartbeat Service
Logging Service
Allocation Service
Monitoring Service
Execution Services for the supported programming models
Storage nodes are optimized to provide storage support to applications. They feature, among
the mandatory and usual services, the presence of the Storage Service. The number of storage
nodes strictly depends on the predicted workload and storage consumption of applications.
Storage nodes mostly reside on machines that have considerable disk space to accommodate a
large quantity of files. The common configuration of a storage node is the following:
Index Service
Heartbeat Service
Logging Service
Monitoring Service
Storage Service
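The three node roles described above differ mainly in which services they host. The following Python sketch is purely illustrative — Aneka's actual container configuration uses its own file format, and the dictionary and helper below are assumptions made for clarity, not a real API. It captures the key rule from the notes: what makes a node a master is the Index Service running in master mode.

```python
# Per-role service lists as described in the notes. Illustrative only:
# this is not Aneka's real configuration format.
NODE_ROLES = {
    "master": [
        "Index Service (master copy)",
        "Heartbeat Service",
        "Logging Service",
        "Reservation Service",
        "Resource Provisioning Service",
        "Accounting Service",
        "Reporting and Monitoring Service",
        "Scheduling Services",
    ],
    "worker": [
        "Index Service",
        "Heartbeat Service",
        "Logging Service",
        "Allocation Service",
        "Monitoring Service",
        "Execution Services",
    ],
    "storage": [
        "Index Service",
        "Heartbeat Service",
        "Logging Service",
        "Monitoring Service",
        "Storage Service",
    ],
}

def is_master(services):
    """A node is a master iff it hosts the Index Service in master mode."""
    return any("Index Service (master copy)" in s for s in services)
```

Note that all three roles share the Index, Heartbeat, and Logging Services — these correspond to the mandatory services every container features.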
3) Private Cloud Deployment Mode:
A private deployment mode is mostly constituted by local physical resources and infrastructure
management software providing access to a local pool of nodes, which might be virtualized. In
this scenario Aneka Clouds are created by harnessing a heterogeneous pool of resources such
as desktop machines, clusters, or workstations. These resources can be partitioned into
different groups, and Aneka can be configured to leverage these resources according to
application needs.
This deployment is acceptable for a scenario in which the workload of the system is predictable
and a local virtual machine manager can easily address excess capacity demand. Most of the
Aneka nodes are constituted of physical nodes with a long lifetime and a static configuration
and generally do not need to be reconfigured often. The different nature of the machines
harnessed in a private environment allows for specific policies on resource management and
usage that can be accomplished by means of the Reservation Service. For example, desktop
machines that are used during the day for office automation can be exploited outside the
standard working hours to execute distributed applications.
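The desktop off-hours scenario can be sketched as a tiny availability policy. The working-hours window, function name, and logic below are illustrative assumptions, not the actual Reservation Service API:

```python
from datetime import time

# Assumed standard working hours; a real deployment would make this
# configurable per machine or per resource group.
WORK_START, WORK_END = time(9, 0), time(17, 0)

def desktop_available(now: time) -> bool:
    """Return True when a desktop node may execute distributed work units,
    i.e., outside standard office hours."""
    return not (WORK_START <= now < WORK_END)
```

A policy like this would sit behind the Reservation Service, which grants or denies execution slots on a node according to such constraints.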
4) Public Cloud Deployment Mode:
The deployment is generally contained within the infrastructure boundaries of a single IaaS
provider. The reasons for this are to minimize the data transfer between different providers,
which is generally priced at a higher cost, and to have better network performance. In this
scenario it is possible to deploy an Aneka Cloud composed of only one node and to completely
leverage dynamic provisioning to elastically scale the infrastructure on demand. A fundamental
role is played by the Resource Provisioning Service, which can be configured with different
images and templates to instantiate. Other important services that have to be included in the
master node are the Accounting and Reporting Services.
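The idea of configuring the Resource Provisioning Service with images and templates can be sketched as follows. The template names, fields, and helper function are hypothetical — Aneka's real provisioning configuration differs — but the structure shows the pattern: a template names a pre-packaged image plus instance sizing, and provisioning requests instantiate it on demand:

```python
# Hypothetical provisioning templates: each maps to a pre-packaged image
# that already contains an Aneka installation (see Infrastructure
# Organization above) plus instance sizing. Names are illustrative.
TEMPLATES = {
    "small-worker": {"image": "aneka-worker-v1", "vcpus": 2, "memory_gb": 4},
    "large-worker": {"image": "aneka-worker-v1", "vcpus": 8, "memory_gb": 16},
}

def provision(template_name, count):
    """Build provisioning requests for `count` instances of a template."""
    template = TEMPLATES[template_name]
    return [dict(template, instance=i) for i in range(count)]
```

Because the instances are created from images that already contain Aneka, each new virtual machine only needs to be configured to join the Cloud, which is what makes elastic scaling from a single-node deployment practical.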
5) Hybrid Cloud Deployment Mode:
The hybrid deployment model constitutes the most common deployment of Aneka. In many
cases, there is an existing computing infrastructure that can be leveraged to address the
computing needs of applications.
This scenario constitutes the most complete deployment for Aneka that is able to leverage all
the capabilities of the framework:
Dynamic Resource Provisioning
Resource Reservation
Workload Partitioning
Accounting, Monitoring, and Reporting
1) Service Model
The Aneka Service Model defines the basic requirements to implement a service that can be
hosted in an Aneka Cloud. The container defines the runtime environment in which services
are hosted. Each service that is hosted in the container must be compliant with the IService
interface, which exposes the following methods and properties:
Name and status
Control operations such as Start, Stop, Pause, and Continue methods
Message handling by means of the HandleMessage method
All services follow a common lifecycle. Once started, a service reaches the Running state, which is
the only state in which the service is able to process messages. If an exception occurs while
starting the service, it is expected that the service will fall back to the Unknown state, thus
signalling an error.
When a service is running it is possible to pause its activity by calling the Pause method and
resume it by calling Continue. As described in the figure, the service moves first into the
Pausing state, thus reaching the Paused state. From this state, it moves into the Resuming
state while restoring its activity to return to the Running state. Not all the services need to
support the pause/continue operations, and the current implementation of the framework
does not feature any service with these capabilities.
When the container shuts down, the Stop method is iteratively called on each service
running, and services move first into the transient Stopping state to reach the final Stopped
state, where all resources that were initially allocated have been released.
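The lifecycle described above can be modeled as a small state machine. The following is an illustrative Python sketch, not Aneka's actual .NET implementation; the state names and transitions follow the notes, and the method names mirror the IService operations (Start, Stop, Pause, Continue, HandleMessage):

```python
class Service:
    """Illustrative model of the IService lifecycle described above."""

    def __init__(self, name):
        self.name = name
        self.status = "Initialized"

    def start(self):
        self.status = "Starting"
        try:
            self._do_start()          # service-specific startup logic
            self.status = "Running"   # the only state that processes messages
        except Exception:
            self.status = "Unknown"   # fall back to Unknown to signal an error
            raise

    def _do_start(self):
        pass  # subclasses allocate resources here

    def handle_message(self, message):
        if self.status != "Running":
            raise RuntimeError(
                f"{self.name} cannot process messages in state {self.status}")
        return message  # placeholder; real services dispatch on message type

    def pause(self):
        self.status = "Pausing"   # transient state
        self.status = "Paused"

    def resume(self):
        self.status = "Resuming"  # transient state while restoring activity
        self.status = "Running"

    def stop(self):
        self.status = "Stopping"  # transient state
        # release all resources allocated at startup here
        self.status = "Stopped"
```

In the real framework, the container iterates over its hosted services at shutdown and calls Stop on each, driving exactly the Stopping-to-Stopped transition modeled here.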
2) Management Tools
Aneka is a pure PaaS implementation and requires virtual or physical hardware to be
deployed. Hence, infrastructure management, together with facilities for installing logical
clouds on such infrastructure, is a fundamental feature of Aneka’s management layer. This
layer also includes capabilities for managing services and applications running in the Aneka
Cloud.
Infrastructure management: Aneka leverages virtual and physical hardware in order
to deploy Aneka Clouds. Virtual hardware is generally managed by means of the
Resource Provisioning Service, which acquires resources on demand according to the
need of applications, while physical hardware is directly managed by the Administrative
Console by leveraging the Aneka management API of the PAL. The management
features are mostly concerned with the provisioning of physical hardware and the
remote installation of Aneka on the hardware.
Platform management: Infrastructure management provides the basic layer on top of
which Aneka Clouds are deployed. The creation of Clouds is orchestrated by deploying
a collection of services on the physical infrastructure that allows the installation and the
management of containers. A collection of connected containers defines the platform on
top of which applications are executed. The features available for platform management
are mostly concerned with the logical organization and structure of Aneka Clouds. It is
possible to partition the available hardware into several Clouds variably configured for
different purposes. Services implement the core features of Aneka Clouds and the
management layer exposes operations for some of them, such as Cloud monitoring,
resource provisioning and reservation, user management, and application profiling.
Application management: Applications identify the user contribution to the Cloud.
The management APIs provide administrators with monitoring and profiling features
that help them track the usage of resources and relate them to users and applications.
This is an important feature in a cloud computing scenario in which users are billed for
their resource usage. Aneka exposes capabilities for giving summary and detailed
information about application execution and resource utilization.
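The kind of per-user accounting described here can be sketched as a simple aggregation over usage records. The record fields, the flat rate, and the function name below are assumptions for illustration, not Aneka's actual Accounting Service API:

```python
from collections import defaultdict

def summarize_usage(records, rate_per_hour=0.10):
    """Aggregate resource usage per user and convert it to a bill.

    records: iterable of (user, application, cpu_hours) tuples, as a
    monitoring layer might report them. Returns {user: amount_due}.
    """
    totals = defaultdict(float)
    for user, _application, cpu_hours in records:
        totals[user] += cpu_hours
    return {user: round(hours * rate_per_hour, 2)
            for user, hours in totals.items()}
```

The essential point mirrored here is the relation the notes describe: usage is tracked per application, then rolled up per user so that billing can be tied back to the applications that generated it.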