NMIMS Global Access
School for Continuing Education (NGA-SCE)
Course: Cloud Computing
Internal Assignment Applicable for December 2022 Examination
Q: 1
Introduction:
Workload Distribution Architecture:
Cloud architects use this concept to create highly scalable applications and systems. It is critical because of expanding datasets, high traffic, growing demand for scalability, and the need for faster response times. For such requirements we plan load balancing to increase performance and to reduce downtime, data loss, and poor performance.
Workload distribution architecture uses IT resources that can be horizontally scaled through the use of one or more identical IT resources. This is accomplished through a load balancer that provides runtime logic to distribute the workload evenly among the available IT resources. The model can be applied to any IT resource and is commonly used with distributed virtual servers, cloud storage devices, and cloud services. In addition to a load balancer and the previously mentioned resources, the architecture can include supporting mechanisms such as cloud usage monitors and resource replication.
CONCEPT/ APPLICATION:
Cloud systems require methods to dynamically scale IT resources up or down as per demand. Workload distribution architecture provides a way of distributing workloads across multiple copies of an IT resource, and resource pooling provides a way of automatically synchronizing IT resources through the use of resource pools, as well as a way of dynamically allocating resources on demand.
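As a minimal sketch of the runtime logic described above, assuming a hypothetical Server class rather than any real load balancer API, the following Python snippet distributes incoming requests evenly across identical resources using a simple round-robin policy:

```python
from itertools import cycle

class Server:
    """Hypothetical stand-in for an identical, horizontally scaled IT resource."""
    def __init__(self, name):
        self.name = name

    def handle(self, request):
        return f"{self.name} served {request}"

class RoundRobinBalancer:
    """Sends each incoming request to the next server in turn,
    so the workload is spread evenly across all available resources."""
    def __init__(self, servers):
        self._servers = cycle(servers)

    def route(self, request):
        return next(self._servers).handle(request)

balancer = RoundRobinBalancer([Server("vm-a"), Server("vm-b"), Server("vm-c")])
for i in range(6):
    print(balancer.route(f"request-{i}"))  # vm-a, vm-b, vm-c, vm-a, ...
```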
The workload distribution model is based on two methodologies: vertical scaling and horizontal scaling.
Vertical scaling is the concept of adding more resources to an instance that already has resources
allocated. This could simply mean adding CPU or memory resources to a VM. We can add more CPUs for a busy day and then scale them down the following day. How dynamically this can happen depends on how easily we can add and remove those CPUs while the machine is running, or on the application team's ability to take an outage, because vertical scaling typically requires redeploying or powering down the instance to make the change, depending on the underlying operating system. Either way, the benefit of doing this in Azure is that we don't have to purchase the hardware up front, rack it, and configure it; rather, we can adjust it through the Azure portal or with code. The cloud service provider already has pre-provisioned resources we can allocate, and we begin paying for them only as we use them.
Horizontal scaling works a little differently and, generally speaking, provides a more reliable way to add resources to our application. Scaling out means adding additional instances that can handle the workload; these could be VMs, or perhaps additional container pods. The idea is that a user accessing the website comes in via a load balancer, which chooses the web server they connect to. When demand increases, we deploy more web servers (scaling out); when demand subsides, we reduce the number of web servers (scaling in). The benefit is that we don't need to change the virtual hardware on each machine, but rather add and remove capacity behind the load balancer itself.
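To make the contrast concrete, here is a minimal Python sketch assuming a purely hypothetical cloud SDK (CloudClient, resize_vm, and set_instance_count are invented names, not any real provider's API):

```python
class CloudClient:
    """Hypothetical cloud management client, used only for illustration."""

    def resize_vm(self, vm_name, new_size):
        # Vertical scaling: change the size (CPU/memory) of one instance.
        # Real providers typically require a reboot or redeploy for this.
        print(f"Resizing {vm_name} to {new_size} (instance restarts)")

    def set_instance_count(self, group_name, count):
        # Horizontal scaling: change how many identical instances sit
        # behind the load balancer; running instances are untouched.
        print(f"Scaling group {group_name} to {count} instances")

client = CloudClient()
client.resize_vm("report-server", "4-vcpu-16gb")  # scale up for month-end reports
client.set_instance_count("web-frontend", 6)      # scale out for peak traffic
client.set_instance_count("web-frontend", 2)      # scale in when demand subsides
```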
CONCLUSION:
According to our example, “a Delhi-based software company which has implemented cloud for its development activity and has set up virtual server A and virtual server B”, this indicates a type of horizontal scaling, which means adding additional nodes or machines to the infrastructure to cope with new demand. If you are hosting an application on a server and find that it no longer has the capacity or capabilities to handle the traffic, adding a server may be your solution.
We created a load-managed group of servers that hosted the same set of applications, and we set up our load balancing policies to look at CPU and memory utilization to find the least loaded server for the next user request. As users logged on, the server load would increase. All of this was an effort to give the user the best experience possible: we wanted to spread the load.
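As a hedged sketch of the policy just described, the snippet below picks the least loaded server by combining CPU and memory utilization; the server names and utilization figures are illustrative stand-ins for what a monitoring agent would report:

```python
# Each entry: (server name, CPU utilization %, memory utilization %),
# as would be reported by a monitoring agent on each host.
servers = [
    ("server-a", 72.0, 65.0),
    ("server-b", 31.0, 48.0),
    ("server-c", 55.0, 90.0),
]

def load_score(server):
    """Combine CPU and memory utilization into a single load score."""
    _, cpu, mem = server
    return (cpu + mem) / 2

# Route the next user request to the server with the lowest combined load.
name, cpu, mem = min(servers, key=load_score)
print(f"Next request goes to {name} (cpu={cpu}%, mem={mem}%)")  # server-b
```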
Q: 2
Introduction:
Scalability & Elasticity:
The demand for resources is often not static. Users may access websites more often at certain times of day. During high-traffic events, such as a live event or a year-end or festival online sale, the demand placed on content-serving services increases, and so does the consumption of the underlying CPU, memory, disk, and network.
However, even when you aren't using the underlying resources, you are often still paying for them. A good example is a virtual machine (VM), where you're paying monthly for a specific VM size to be running (e.g., 2 CPUs, 4 GB of memory), and you will continue to pay that monthly charge regardless of whether you are running those CPUs at 100% or not. Consider applications in the enterprise where you might
want to run reports at a certain time of the week or month. Naturally, at those times, you will require
more resources; but do you really want to pay for the larger machines or more machines to be running
all the time? This is a major area where cloud computing can help, but we need to take into account
the workload.
CONCEPT/ APPLICATION:
Scalability and elasticity are ways in which we can deal with the scenarios described above.
Scalability:
Scalability is our ability to scale a workload. This could mean Vertical Scaling (scaling up or down), as
well as Horizontal Scaling (scaling out or back in).
Vertical scaling is the concept of adding more resources to an instance that already has resources allocated. This could simply mean adding CPU or memory resources to a VM. We can
add more CPUs for a busy day and then scale them down the following day. How dynamically this can happen depends on how easily we can add and remove those CPUs while the machine is running, or on the application team's ability to take an outage, because vertical scaling typically requires redeploying or powering down the instance to make the change, depending on the underlying operating system. Either way, the benefit of doing this in Azure is that we don't have to purchase the hardware up front, rack it, and configure it; rather, we can adjust it through the Azure portal or with code. The cloud service provider already has pre-provisioned resources we can allocate, and we begin paying for them only as we use them.
Horizontal scaling works a little differently and, generally speaking, provides a more reliable way to add resources to our application. Scaling out means adding additional instances that can handle the workload; these could be VMs, or perhaps additional container pods. The idea is that a user accessing the website comes in via a load balancer, which chooses the web server they connect to. When demand increases, we deploy more web servers (scaling out); when demand subsides, we reduce the number of web servers (scaling in). The benefit is that we don't need to change the virtual hardware on each machine, but rather add and remove capacity behind the load balancer itself.
Elasticity:
Elasticity follows on from scalability and describes a characteristic of the workload: its ability to scale up and down. Often you will hear people ask, “Is this workload elastic?” Elastic workloads are a major pattern that benefits from cloud computing. If our workload has seasonality and variable demand, then we should build it so that it can take advantage of the cloud. As resource demands increase, we can go a step further and add rules that automatically add instances; as resource demands decrease, we can have rules that scale those instances back in when it is safe to do so without impacting the user's performance.
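A minimal sketch of such rules, assuming hypothetical hooks (get_average_cpu stands in for a real monitoring query, and the instance count for a real provisioning call), might look like this: the loop scales out above a high-utilization threshold and scales in below a low one, keeping some buffer room in between:

```python
import random
import time

MIN_INSTANCES, MAX_INSTANCES = 2, 10
SCALE_OUT_ABOVE = 75.0  # % average CPU that triggers adding an instance
SCALE_IN_BELOW = 25.0   # % average CPU that triggers removing an instance

def get_average_cpu():
    """Hypothetical metric source; a real system would query monitoring."""
    return random.uniform(10, 95)

def autoscale(current_instances):
    cpu = get_average_cpu()
    if cpu > SCALE_OUT_ABOVE and current_instances < MAX_INSTANCES:
        current_instances += 1  # scale out on high demand
    elif cpu < SCALE_IN_BELOW and current_instances > MIN_INSTANCES:
        current_instances -= 1  # scale in when it is safe to do so
    print(f"avg cpu={cpu:.0f}% -> {current_instances} instances")
    return current_instances

instances = MIN_INSTANCES
for _ in range(5):  # a real autoscaler would run continuously
    instances = autoscale(instances)
    time.sleep(1)
```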
Static Scaling vs Elastic Scaling
The big difference between static scaling and elastic scaling is that with static scaling we provision resources to account for the peak even though the underlying workload is constantly changing, whereas with elastic scaling we fine-tune our system so that resources are added on demand while we still keep some buffer room.
Once again, cloud computing, with its perceived infinite scale to the consumer, allows us to take
advantage of these patterns and keep costs down. If we can properly account for vertical and
horizontal scaling techniques, we can create a system that automatically responds to user demand,
allocating and deallocating resources as appropriate.
CONCLUSION:
Now, we’ve probably noticed that Cloud elasticity and cloud scalability works together.
Before a system can be elastic, it needs to be scalable. Elasticity then jumps in to ensure the scaling
happens appropriately and rapidly.
Now we see how Cloud elasticity and cloud scalability will help us to overcome the under provisioning
an over provisioning issue.
Under-provisioning: allocating fewer resources than you use. Because of this, functions cannot run at full capacity due to the scarcity of resources.
Over-provisioning: a scenario where you buy more capacity than you need. Because of this, you invest heavily in unused resources, and the business suffers extra expenses for IT operations.
Use of Cloud elasticity and cloud scalability:
Cloud elasticity helps users prevent over-provisioning or under-provisioning system resources. Over-
provisioning leads to cloud spend wastage, while under-provisioning can lead to server outages as
available servers are overworked. Server outages lead to revenue losses and customer dissatisfaction, both of which are bad for business. Scaling with elasticity provides a middle ground between the two.
Elasticity is ideal for short-term needs, such as handling website traffic spikes and running database backups. But elasticity also helps smooth out service delivery when combined with cloud scalability. For example, by spinning up additional VMs on a single server, you create more capacity in that server to handle dynamic workload surges.
Example:
E-commerce
If you run a limited-time offer on clothing and accessories during the festival season, or on electronics during a tech festival, you can expect more traffic and server requests during that time. The more effectively you run your awareness campaign, the higher you can expect potential buyers' interest to peak.
New shoppers would register new accounts, and existing customers would also revisit old wish lists or try to redeem accumulated points. This would put far more load on your servers during the campaign's duration than at most other times of the year.
With an elastic platform, you could provision more resources to absorb the higher festive-season demand. After that, you could return the extra capacity to your cloud provider and keep only what is needed for everyday operations.
Advantages and Disadvantages of Cloud elasticity and cloud scalability
Advantages:
Cost effectiveness - cloud providers that offer elastic cloud supply use system monitoring tools to track resource utilization. They then automatically analyse utilization against resource allocation. The aim is always to keep these two metrics matched so that the system performs at its peak, cost-effectively. Cloud providers also price on a pay-per-use model, allowing you to pay for what you use and no more. The pay-as-you-expand model also lets you add new infrastructure components to prepare for growth.
Flawless operation: cloud elasticity works together with cloud scalability to ensure that both customers and cloud platforms meet changing computing needs as and when required. While scalability helps handle long-term growth, elasticity ensures flawless service availability in the present. It also helps prevent system overloading and runaway cloud costs due to over-provisioning. For cloud platforms, elasticity helps keep customers happy.
Disadvantages: Cloud elasticity may not be for everyone. If you have relatively stable demand for your
products or services online, cloud scalability alone may be sufficient. For example, if you run a business
that doesn’t experience seasonal or occasional spikes in server requests, you may not mind using
scalability without elasticity. Keep in mind elasticity requires scalability, but not the reverse. Yet,
nobody can predict when you may need to take advantage of a sudden wave of interest in your
company. So, what do you do when you need to be ready for that opportunity but do not want to
waste your cloud budget speculating? Enter cloud cost optimization.
Q.3 – A
Multitenancy: Multi-tenancy in cloud computing means that many tenants or users can use the same resources. Users can independently use the resources provided by the cloud computing company without affecting other users. Multi-tenancy is a crucial attribute of cloud computing.
Multitenancy applies to all three layers of the cloud, namely Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS), but it is most commonly used with Software as a Service (SaaS).
Example: A bank has many account holders, and these account holders may have several bank accounts in the same bank. Each account holder has their own credentials, such as an account number and PIN, which differ from everyone else's. All the account holders keep their assets in the same bank, yet no account holder knows the details of any other account holder, and all of them use the same bank to make transactions.
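As a hedged illustration of this shared-resource idea, the sketch below shows one common multi-tenant pattern, a shared database table with a tenant_id column, so that each tenant sees only its own rows; the schema and data are invented for illustration:

```python
import sqlite3

# One shared database serves every tenant; isolation is enforced by
# always filtering on tenant_id, so tenants never see each other's data.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (tenant_id TEXT, holder TEXT, balance REAL)")
db.executemany(
    "INSERT INTO accounts VALUES (?, ?, ?)",
    [("bank-a", "Asha", 5000.0), ("bank-a", "Ravi", 1200.0),
     ("bank-b", "Meera", 800.0)],
)

def accounts_for(tenant_id):
    """Return only the rows belonging to one tenant."""
    rows = db.execute(
        "SELECT holder, balance FROM accounts WHERE tenant_id = ?", (tenant_id,)
    )
    return rows.fetchall()

print(accounts_for("bank-a"))  # [('Asha', 5000.0), ('Ravi', 1200.0)]
print(accounts_for("bank-b"))  # [('Meera', 800.0)] - bank-a's rows are invisible
```

Other designs give each tenant a separate schema or database; the trade-off is stronger isolation at higher cost.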
Advantages of Multi-tenant Cloud:
1. Serving multiple customers with a single-tenant approach is too much workload for a small team. While doable, it can take a lot of effort. If the team adopts a multi-tenant system instead, it can use the tools provided by the cloud services provider, such as Google Cloud Platform (GCP), and build its applications faster. Multitenancy is at the heart of effective SaaS operations, as it makes it easy to build and deploy applications faster and to scale those applications quickly.
2. Unlike the single-tenant model, where resources are underutilized, available resources in a
multi-tenant environment are maximized because multiple users share them.
3. Because there are multiple customers in multi-tenant cloud architecture, the cost is
apportioned, making the application more affordable to build.
Disadvantages of Multi-tenant Cloud:
1. Since it is an environment that many people share, there is no customization option. This
architecture may not be suitable if your application requires customization for each tenant.
2. While other tenants may not see your data, the likelihood of a data breach increases as there
is broader access.
3. Since there is one single server, any failure can affect all tenants.
4. Security: This is one of the most challenging and risky issues in multi-tenant cloud computing. There is always a risk of data loss, data theft, and hacking. The database administrator can accidentally grant access to an unauthorized person. Despite software and cloud computing
companies saying that client data is safer than ever on their servers, there are still security
risks.
5. Less powerful: Many cloud services run on Web 2.0, with new user interfaces and the latest templates, but they lack many essential features. Without the necessary and adequate features, multi-tenant cloud computing services can be a nuisance for clients.
6. Monitoring: Constant monitoring is vital for cloud service providers to check whether there is an issue in the multi-tenant cloud system. Multi-tenant cloud systems require continuous monitoring, as computing resources are shared by many users simultaneously. If any problem arises, it must be solved immediately so as not to disturb the system's efficiency.
Q.3 B
We have serverless computing, serverless databases, serverless storage, serverless messaging, and
much more.
First, let us understand what a server is and how it is structured.
A server is basically a computer designed to run 24/7 and provide services to other people on a network. If you've been using the internet in any way, you've interacted with hundreds of servers: web servers that send you web pages, mail servers for communication, file servers to store your files, game servers for entertainment, and so on.
Servers are designed to be fast, have a lot of memory, and have a very fast internet connection. They are usually housed in massive data centres, such as Google's.
The problem with servers, however, is that they're not flexible.
Hosting your own website on your own server is cool, but if you only have a handful of visitors each
day, your server will be idle most of the time. This is called over-provisioning. You're paying for a server
that has a certain capacity, but you're not using it, so most of it goes to waste.
The opposite of this is called under-provisioning: your server gets overwhelmed by many visitors trying to access it at once, runs out of capacity, and crashes. Predicting how many resources you will need to run an online service is almost impossible. Any service could turn into an overnight success, or it might grow slowly for many years. If you under-anticipate the traffic, your service will be slow or go down.
Over-estimate, and the server costs will weigh on your pocket. Aside from scaling, servers also require a lot of maintenance: you have to update software, replace failed hardware, make sure they have reliable internet connections, and so on.
All of this means you can focus less on developing your website or service. To solve these issues, cloud providers introduced "serverless" products. They allow you to run an online service without having to worry about servers or other underlying infrastructure.
This leaves you to focus entirely on your application. Want to create a file-sharing service? Use a serverless storage product, and you can store millions of files without worrying about having enough hard drives to hold them.
Common examples of serverless providers include AWS Lambda, Azure Functions, and Google Cloud Functions.
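As a hedged sketch of what this looks like to a developer, here is a minimal AWS Lambda-style function handler in Python; the "name" field in the event payload is a hypothetical example, and the provider runs and scales this function on demand without any server for us to manage:

```python
import json

def handler(event, context):
    """Entry point invoked by the serverless platform for each request.

    There is no server to provision or patch: the provider allocates
    capacity per invocation and bills only for the time the code runs.
    """
    # 'name' is a hypothetical field in the incoming request payload.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```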
Advantages:
1. Easy data tracking - use a serverless database, and you'll be able to store, create, and fetch millions of records every second if needed.
2. There are three main benefits to serverless products: you only pay for what you use, they
easily scale up & down, and you don't have to manage servers.
3. Cost benefit - serverless products charge you for your actual usage, for instance per gigabyte of stored data or per 100 ms that your code is running. That's advantageous when you're not using the same amount of resources all the time. A business-oriented website might see a lot of traffic during business hours but almost none at night; with regular servers, you would have a fixed cost throughout the day regardless of whether you have a lot of traffic or not (see the cost sketch after this list).
4. Scalability - hosting a small website with just a handful of visitors is fine, but what if your website suddenly starts to get millions of visitors? Behind the scenes, the cloud provider will allocate more servers to your website, and as traffic goes down, so will the number of servers.
5. Server management: the last benefit is that you don't have to manage servers. This includes not having to buy equipment, not worrying about having the latest security patches installed, no hassle when hard drives die, and, most importantly, no growing pains when your site or service sees a spike in traffic. You also don't have to worry about upgrading and replacing your equipment at the end of its lifespan, which has to be done for every server roughly every 3 to 5 years.
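To put a number on benefit 3 above, here is a small Python calculation comparing a fixed monthly VM cost with per-100 ms serverless billing; the prices and traffic figures are invented for illustration, not real provider rates:

```python
# Hypothetical prices, for illustration only.
VM_MONTHLY_COST = 50.00      # fixed cost, paid even when the VM sits idle
PRICE_PER_100MS = 0.0000002  # hypothetical serverless price per 100 ms slice

requests_per_month = 2_000_000
avg_runtime_ms = 300         # each request runs for roughly 300 ms

# Serverless bills only for the time actually spent running code.
billed_slices = requests_per_month * (avg_runtime_ms / 100)
serverless_cost = billed_slices * PRICE_PER_100MS

print(f"Fixed VM:   ${VM_MONTHLY_COST:.2f}/month, busy or idle")
print(f"Serverless: ${serverless_cost:.2f}/month for this traffic")
# With low or bursty traffic, serverless tends to win; at constant
# high load, the fixed server can become the cheaper option.
```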
Disadvantages:
1. Complicated payment/charges: because serverless products scale so easily and only charge you for what you use, it can be tricky to estimate how much you will have to pay. Also, every serverless product has a number of features that are all priced differently. In that sense, renting or buying traditional servers is much more predictable: for a given price, you get a certain amount of server capacity, no more and no less.
2. Switching between cloud providers is difficult: each cloud provider has its own specific serverless products, and they aren't necessarily compatible with one another, meaning you risk getting locked in by the cloud provider. If you don't pay attention to this, you will become very dependent on that provider, and switching to another one might become a huge and expensive effort. When a feature you depend on is critical to your business, it might be impossible to move to a competitor without re-architecting your site or service. In fact, serverless products are probably the best way for cloud providers to tie you to their platforms.
3. Slow in operation: serverless compute products limit how long your code can run, often to just a few minutes. And sometimes serverless products are simply slower because they dynamically change the capacity allocated to your website or application.
Serverless doesn't mean "no servers"; it just means you don't have to worry about them anymore, while cloud providers still run hundreds or thousands of them. A company that gets backend services from a serverless vendor is charged based on its computation and does not have to reserve and pay for a fixed amount of bandwidth or a fixed number of servers, as the service auto-scales. Note that despite the name "serverless", physical servers are still used, but developers do not need to be aware of them.