NPTEL MAIN MCQ WITH ANS FROM OLD ASSIGNMENTS
1. Which of the following statements is/are false?
a. Fog and Edge computing are substitutes for cloud computing.
b. Fog and Edge computing may aid cloud computing in overcoming some of the limitations like
latency issues.
Ans.
a. Fog and Edge computing are substitutes for cloud computing.
❌ False – Fog and Edge computing are not direct substitutes for cloud computing.
Instead, they complement it. While they process data closer to the source (to reduce
latency and bandwidth usage), cloud computing still plays a crucial role in heavy
computation, long-term storage, and centralized data processing.
b. Fog and Edge computing may aid cloud computing in overcoming some of the limitations
like latency issues.
✅ True – This is correct. Fog and Edge computing process data closer to the device or
source, which reduces latency, making them effective aids to cloud computing,
especially in time-sensitive applications like autonomous vehicles or real-time analytics.
✅ Final Answer:
Statement a is false.
Statement b is true.
2. Which of the following is not a layer of the Cloud-Fog environment model?
a. Client layer
b. Serverless layer
c. Fog layer
d. Cloud layer
Ans.
a. Client layer
✅ Part of the model – This represents end-user devices or sensors that generate data and
interact with the system.
b. Serverless layer
❌ Not a standard layer – "Serverless" is a computing paradigm or execution model, not a
physical or architectural layer in the Cloud-Fog model. Therefore, this is not typically a named
layer in the model.
c. Fog layer
✅ Part of the model – This is the intermediate layer between the client (edge) and cloud,
providing low-latency processing.
d. Cloud layer
✅ Part of the model – This represents centralized data centers for storage and processing.
✅ Final Answer:
b. Serverless layer is not a layer of the Cloud-Fog environment model.
3. In the Cloud-Fog environment model, servers contain a fog server manager and
virtual machines to manage requests by using which technique?
a. Image virtualization
b. Container virtualization
c. Server virtualization
d. None of these
Ans.
In the Cloud-Fog environment model, fog servers manage tasks using virtualization techniques
to efficiently allocate resources and process data near the edge.
The most relevant and widely used technique for this environment is:
b. Container virtualization ✅
Correct Answer – Container virtualization (e.g., using Docker or Kubernetes) is
commonly used in fog computing because it is lightweight, fast to deploy, and efficient
for managing applications and services at the edge.
It allows the Fog Server Manager to run multiple isolated applications or microservices
on the same host with minimal overhead, ideal for latency-sensitive and resource-
constrained fog environments.
✅ Final Answer:
b. Container virtualization
4. The architecture used for resource management in fog/edge computing is classified on
the basis of which of the following?
a. Tenancy
b. Data flow
c. Hardware
d. All of these
Ans.
In Fog/Edge computing, resource management architectures can be classified based on:
a. Tenancy – Whether the system supports single-tenant or multi-tenant setups, which
affects isolation and resource allocation.
b. Data flow – The direction and control of data movement (e.g., from edge to cloud,
cloud to edge, or bidirectional) help define how resources are managed and how quickly
responses are delivered.
c. Hardware – The type of devices and infrastructure (e.g., IoT devices, gateways,
servers) impacts how resources are provisioned and managed.
✅ Final Answer:
d. All of these
5. Which of the following underlying algorithm(s) is used to facilitate fog/edge
computing?
a. Discovery
b. Load balancing
c. Benchmarking
d. Cache Flow
To facilitate fog/edge computing, several underlying algorithms are used to optimize
performance, resource utilization, and responsiveness.
Let's evaluate the options:
a. Discovery ✅
Used to identify and connect with nearby edge/fog nodes or devices dynamically.
Crucial for managing distributed and mobile environments.
b. Load balancing ✅
Helps in distributing workloads efficiently across edge/fog nodes to prevent overloading
and ensure optimal resource usage and latency reduction.
c. Benchmarking ✅
Involves evaluating performance of fog/edge nodes to determine the best resources for
a given task. Essential for effective resource management and deployment strategies.
d. Cache Flow ❌
This is not a standard or widely recognized algorithm in fog/edge computing. While
caching strategies are used, "Cache Flow" is not a known algorithm in this context.
✅ Final Answer:
d. Cache Flow is not an underlying algorithm used to facilitate fog/edge computing.
6. ________ is a technique in which a server, an application, and the associated data are
moved onto the edge of the network.
a. Containerization
b. Virtualization
c. Offloading
d. None of these
Offloading refers to the technique where computational tasks, applications, and data are
moved from centralized servers (like the cloud) to edge devices or edge servers. This reduces
latency, saves bandwidth, and improves response time — key goals of fog/edge computing.
Let’s quickly look at the other options:
a. Containerization – This is a method of packaging applications, not specifically about
moving them to the edge.
b. Virtualization – Refers to creating virtual instances of hardware or software, not
necessarily about relocating them.
d. None of these – Incorrect because offloading is the correct term.
✅ Final Answer:
c. Offloading
7. Cloud federation is the collaboration between cloud service providers to achieve which
of the following? Choose the most appropriate option(s).
a. Capacity utilization
b. Interoperability
c. Offloading
d. None of these
Cloud federation refers to the collaboration and integration between multiple cloud service
providers to share resources, workloads, and services.
Here’s how the options apply:
a. Capacity utilization ✅
o Cloud federation helps providers share infrastructure and workloads, which
improves overall capacity management and avoids under/overutilization.
b. Interoperability ✅
o Federation requires different cloud systems to work together seamlessly, so
achieving interoperability is a core goal.
c. Offloading ❌
o While offloading may happen as a result of federation (e.g., moving workloads to
other clouds), it's not the main goal or a defining feature.
d. None of these ❌
o Incorrect, as both a and b are valid.
✅ Final Answer:
a. Capacity utilization and b. Interoperability
8. Which of the following is false about loosely coupled federations?
a. Limited inter operations between cloud instances.
b. Usually no support for advanced features.
c. Advanced control over remote resources.
d. None of these
a. Limited inter operations between cloud instances. ✅ True
Loosely coupled federations are characterized by minimal integration, leading to limited
interoperability.
b. Usually no support for advanced features. ✅ True
Due to weak integration, such federations often lack advanced features like unified
identity management, billing, or orchestration.
c. Advanced control over remote resources. ❌ False
This is false because loosely coupled federations do not provide advanced control over
remote resources — that would be more typical of a tightly coupled federation.
d. None of these ❌
Incorrect, because statement c is false.
✅ Final Answer:
c. Advanced control over remote resources is false about loosely coupled federations.
9. In which of the following do different CSPs establish an agreement stating the terms and
conditions under which one partner cloud can use resources from another?
a. Loosely coupled federation
b. Partially coupled federation
c. Tightly coupled federation
d. All of these
In a tightly coupled federation, different Cloud Service Providers (CSPs) enter into a formal
agreement outlining the terms and conditions under which one cloud can use the resources of
another. These agreements enable advanced features like:
Unified identity management
Seamless workload migration
Resource sharing with centralized control
High interoperability
Let’s briefly evaluate the other options:
a. Loosely coupled federation ❌
o Minimal collaboration; no formal agreements or deep integration.
b. Partially coupled federation ❌
o Some coordination exists, but not as comprehensive as in tightly coupled setups.
d. All of these ❌
o Incorrect, as only tightly coupled federations involve formal agreements and
advanced integration.
✅ Final Answer:
c. Tightly coupled federation
10. Hybrid architecture combines the existing on-premise infrastructure (usually a private
cloud) with remote resources from one or more public clouds to provide extra capacity
to satisfy peak demand periods.
a. True
b. False
A hybrid cloud architecture integrates on-premise infrastructure (often a private cloud) with
public cloud services to:
Provide scalability during peak demand
Maintain data security and control via private resources
Leverage cost-effective public cloud resources as needed (also known as cloud
bursting)
This approach combines the best of both private and public cloud environments.
✅ Final Answer:
a. True
Cloud Computing Nptel Week 10 Assignment Answers (Jan-Apr 2025)
1) VM migration is the process of moving running applications or VMs from one physical host
to another host.
a. True
b. False
VM migration is indeed the process of moving running virtual machines (VMs) or their
applications from one physical host to another. This can be done for load balancing,
maintenance, fault tolerance, or energy efficiency, often without downtime (called live
migration).
Final answer:
a. True
2) Given the VM memory size of 1024GB and the transmission rate of 16 MB/sec, what are
the total migration time and downtime for non-live VM migration? Choose the most
appropriate option.
a. 20 hours, 25 hours
b. 18 hours, 18 hours
c. 16 hours, 16 hours
d. 24 hours, 20 hours
Let's calculate the total migration time and downtime for non-live VM migration based on the
given data:
Given:
VM memory size = 1024 GB
Transmission rate = 16 MB/sec
Non-live migration downtime = total migration time (because the VM is paused during
migration)
Step 1: Convert units
1 GB = 1024 MB
So, VM size in MB = 1024 × 1024 = 1,048,576 MB
Step 2: Calculate total migration time
Migration time = 1,048,576 MB ÷ 16 MB/sec = 65,536 seconds
65,536 ÷ 3600 ≈ 18.2 hours ≈ 18 hours
Step 3: Downtime for non-live migration = total migration time ≈ 18 hours
Final answer:
b. 18 hours, 18 hours
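The arithmetic above can be checked with a short script (all figures come from the question itself):

```python
# Non-live VM migration: the VM is paused for the whole transfer,
# so downtime equals total migration time.
vm_size_gb = 1024          # VM memory size from the question
rate_mb_per_s = 16         # transmission rate from the question

vm_size_mb = vm_size_gb * 1024            # 1 GB = 1024 MB -> 1,048,576 MB
migration_s = vm_size_mb / rate_mb_per_s  # 65,536 seconds
migration_h = migration_s / 3600          # ~18.2 hours

print(round(migration_h))  # -> 18 (and downtime is the same for non-live migration)
```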
3) With Docker, the resource management effort is separated from the configuration effort.
a. True
b. False
With Docker, resource management (like allocating CPU, memory, network limits) is handled
separately from configuration (like setting up the environment, dependencies, and application
settings in Dockerfiles and containers). This separation allows more flexibility and modular
management.
Final answer:
a. True
4) In Docker utility, ____ is a collection of filesystem layers and some metadata which, if taken
together, can be spun up as Docker containers.
a. Operating System
b. Microservice
c. Virtual Machine
d. Image
A Docker Image is a collection of filesystem layers and metadata that can be used to create
(spin up) Docker containers. It contains everything needed to run an application, including code,
libraries, and dependencies.
Final answer:
d. Image
5) What is(are) the reason(s) to opt for VM migration in the cloud computing paradigm?
a. No particular reason; depends on the will of the end client/user
b. To remove a physical machine from service
c. To save power consumption
d. To relieve the load on the congested hosts
The correct reasons to opt for VM migration in cloud computing are:
✅ b. To remove a physical machine from service
For maintenance or hardware upgrades without downtime.
✅ c. To save power consumption
Consolidate VMs onto fewer physical hosts and shut down unused machines to save
energy.
✅ d. To relieve the load on the congested hosts
Balance load by migrating VMs from overloaded hosts to less busy ones.
❌ a. No particular reason; depends on the will of the end client/user
This is false; VM migration is usually driven by system management goals, not random
user preference.
Final answer:
b, c, and d are valid reasons to opt for VM migration.
6) What is(are) the key advantage(s) of Docker?
a. It facilitates microservice architecture
b. It can be used to package software
c. It can be used to model networks
d. None of these
The correct advantages of Docker are:
✅ a. It facilitates microservice architecture
Docker containers are lightweight and isolated, ideal for deploying microservices
independently.
✅ b. It can be used to package software
Docker images package applications and their dependencies into portable containers.
❌ c. It can be used to model networks
Docker provides some networking capabilities, but network modeling is not its
primary advantage or function.
❌ d. None of these
Incorrect, since a and b are true.
Final answer:
a. It facilitates microservice architecture and b. It can be used to package software
7) Post-copy and Pre-copy migration approaches are followed in:
a. Live Migration process
b. Non-live Migration process
c. Hybrid Migration process
d. None of these
Pre-copy and Post-copy are two main approaches used in live VM migration:
Pre-copy: Memory pages are copied to the destination while the VM is still running,
minimizing downtime.
Post-copy: VM state is transferred after the VM is suspended on the source, and
remaining pages are fetched on-demand from the source.
Final answer:
a. Live Migration process
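A toy model makes the pre-copy idea concrete. The transfer rate and memory size reuse the numbers from question 2, but the dirty-page rate and the iteration cap are made-up illustrative values, not from the source:

```python
# Toy model of pre-copy live migration: memory is copied in rounds while the
# VM keeps running; each round re-sends only the pages dirtied during the
# previous round, so the final stop-and-copy pause (the downtime) is small.
def precopy_downtime(total_mb, rate_mb_s, dirty_rate_mb_s, max_rounds=10):
    remaining = total_mb
    for _ in range(max_rounds):
        copy_time = remaining / rate_mb_s        # time to send current dirty set
        remaining = dirty_rate_mb_s * copy_time  # pages dirtied meanwhile
        if remaining < rate_mb_s:                # small enough: stop and copy
            break
    return remaining / rate_mb_s                 # final pause = downtime

nonlive_pause = 1024 / 16                  # non-live: pause for the whole transfer
live_pause = precopy_downtime(1024, 16, 4) # live: pause only for the last round
print(nonlive_pause, live_pause)           # live downtime is a small fraction
```

The same loop run "in reverse" (resume first, fetch pages on demand) is the intuition behind post-copy.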
8) Which of the following is (are) true in the case of Docker architecture?
Statement-1: Private Docker registry is a service that stores Docker images.
Statement-2: Docker on the host machine is split into two parts – a daemon with a RESTful API
and a client who talks with the daemon.
a. Only Statement-1 is true
b. Only Statement-2 is true
c. Both Statement-1 and 2 are true
d. Neither Statement-1 nor 2 is true
Let's analyze both statements:
Statement-1: Private Docker registry is a service that stores Docker images.
✅ True – A private Docker registry is indeed a service that stores and manages Docker images
securely within an organization.
Statement-2: Docker on the host machine is split into two parts – a daemon with a RESTful API
and a client who talks with the daemon.
✅ True – Docker consists of the Docker daemon (dockerd), which runs on the host and manages
containers, and the Docker client, which communicates with the daemon through a RESTful API.
Final answer:
c. Both Statement-1 and 2 are true
9) Which of the statement(s) is(are) true for containers?
Statement-1: Docker is an open platform for automating the deployment, scaling, and
management of containerized applications.
Statement-2: Containers make it easy to share CPU, memory, storage, and network resources at
the operating system level.
a. Only Statement-1 is true
b. Only Statement-2 is true
c. Both Statement-1 and 2 are true
d. Neither Statement-1 nor 2 is true
Let's evaluate the statements:
Statement-1: Docker is an open platform for automating the deployment, scaling, and
management of containerized applications.
✅ True – This is a core definition of Docker.
Statement-2: Containers make it easy to share CPU, memory, storage, and network resources at
the operating system level.
✅ True – Containers share the host OS kernel and resources, enabling efficient resource sharing
and isolation.
Final answer:
c. Both Statement-1 and 2 are true
10) Kubernetes operates at the hardware level.
a. True
b. False
Kubernetes operates at the container orchestration level, managing containerized applications
across clusters of machines. It does not operate at the hardware level directly; instead, it
abstracts the underlying hardware and manages workloads on top of it.
Final answer:
b. False
1. Which of the following statements is/are false?
a) Serverless computing allows the users with more control over the deployment environment
compared to PaaS.
b) Serverless computing is a form of cloud computing that allows users to run event-driven
granular applications.
Let's analyze both statements carefully:
a) Serverless computing allows the users with more control over the deployment
environment compared to PaaS.
❌ False — Serverless computing abstracts away the infrastructure and deployment
environment from the user, giving less control compared to PaaS (Platform as a Service),
where users have more control over runtime configurations and the environment.
b) Serverless computing is a form of cloud computing that allows users to run event-driven
granular applications.
✅ True — Serverless computing is designed for running event-driven, fine-grained
functions without managing servers, scaling automatically based on events.
Final answer:
Statement a is false.
Statement b is true.
2. Which of the following options is most appropriate for FaaS?
Statement 1: Each function in the FaaS platform gets unlimited execution time.
Statement 2: Functions are always active and ready for execution.
a) Statement 1 is correct but Statement 2 is incorrect.
b) Statement 2 is correct but Statement 1 is incorrect.
c) Both the statements are correct.
d) Both the statements are incorrect.
Statement 1: Each function in the FaaS platform gets unlimited execution time.
❌ Incorrect – FaaS platforms (like AWS Lambda, Azure Functions) typically limit execution time
for functions. For example, AWS Lambda has a maximum timeout of 15 minutes. So, functions
do not get unlimited execution time.
Statement 2: Functions are always active and ready for execution.
❌ Incorrect – In FaaS, functions are not always active. They are triggered by events and may
experience a cold start if inactive for a while. So, they are not always "ready" like traditional
long-running services.
✅ Final Answer:
d) Both the statements are incorrect.
3. AWS S3 is a fully managed proprietary NoSQL database service that supports key-value and
document data structures and is offered by Amazon.com as part of the Amazon Web Services
portfolio.
a) True
b) False
Explanation:
AWS S3 (Simple Storage Service) is not a NoSQL database. It is a fully managed object
storage service used to store and retrieve any amount of data from anywhere on the
web.
The description in the question actually refers to Amazon DynamoDB, which is a fully
managed NoSQL database service that supports key-value and document data models.
✅ Final Answer:
b) False
4. BigQuery is a fully-managed, serverless data warehouse by:
a) AWS
b) Google
c) Microsoft
d) IBM
BigQuery is a fully-managed, serverless data warehouse offered by Google Cloud Platform
(GCP). It is designed for fast SQL analytics on large datasets using the processing power of
Google’s infrastructure.
✅ Final Answer:
b) Google
5. AWS charges for the provisioned resources and executing Lambda.
a) True
b) False
AWS Lambda charges users based on:
1. Number of requests — how many times your function is invoked.
2. Duration — the time your code runs, rounded to the nearest millisecond.
3. Provisioned concurrency (if used) — reserved resources to reduce cold starts.
So, AWS does charge for executing Lambda functions and for any provisioned resources you
configure.
✅ Final Answer:
a) True
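The billing model can be sketched as requests plus compute duration. The rates below are example figures for illustration, not guaranteed current AWS list prices:

```python
# Sketch of the Lambda billing formula: cost = request charges + GB-second charges.
# Rates are placeholders; check the AWS pricing page for real, region-specific values.
PRICE_PER_MILLION_REQUESTS = 0.20   # example rate (USD)
PRICE_PER_GB_SECOND = 0.0000167     # example rate (USD)

def lambda_cost(invocations, avg_ms, memory_mb):
    request_cost = invocations / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    gb_seconds = invocations * (avg_ms / 1000) * (memory_mb / 1024)
    return request_cost + gb_seconds * PRICE_PER_GB_SECOND

# e.g. 2M invocations, 100 ms average duration, 512 MB memory
print(round(lambda_cost(2_000_000, 100, 512), 2))
```

Provisioned concurrency, when enabled, adds a separate charge for the reserved capacity on top of this.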
6. In serverless computing, the user has to manage the scalability needs of a function, unlike
FaaS.
a) True
b) False
In serverless computing—including FaaS (Function-as-a-Service)—scalability is automatically
managed by the cloud provider. The user does not have to manually scale the function to
handle varying loads. The platform automatically scales the number of function instances in
response to incoming requests or events.
So, the statement is incorrect.
✅ Final Answer:
b) False
7. Which of the following is/are the goal of sustainable cloud computing? Choose the most
appropriate option.
a) Minimizing the energy consumption
b) Increasing reliability of CDCs
c) Minimizing carbon footprint-related cost
d) Increasing network traffic
✅ a) Minimizing the energy consumption
A core goal of sustainable cloud computing. Less energy = less environmental impact.
✅ b) Increasing reliability of CDCs (Cloud Data Centers)
Reliable systems reduce waste and energy use from failures and retries, supporting
sustainability.
✅ c) Minimizing carbon footprint-related cost
Reducing emissions directly supports both sustainability and operational cost efficiency.
❌ d) Increasing network traffic
Higher network traffic typically means more energy use and is not aligned with
sustainability goals.
✅ Final Answer:
a) Minimizing the energy consumption
b) Increasing reliability of CDCs
c) Minimizing carbon footprint-related cost
8. Which of the following is not a category of research initiative on sustainable cloud
computing?
a) Renewable Energy
b) Capacity Planning
c) Environment Sandboxing
d) None of these
Explanation:
In the context of sustainable cloud computing, common research categories include:
Renewable Energy – Using solar, wind, etc., to power data centers.
Capacity Planning – Optimizing resource allocation to reduce energy use and improve
efficiency.
Virtualization, Thermal-aware scheduling, Load balancing, etc.
However:
Environment Sandboxing is not a recognized research category in sustainable cloud
computing. It refers more to security/isolation techniques in software development.
✅ Final Answer:
c) Environment Sandboxing
9. CDCs consist of a chassis and racks to place the servers to process the IT workloads.
a) True
b) False
Explanation:
Cloud Data Centers (CDCs) are physical facilities that house computing infrastructure. They
typically consist of:
Racks – Vertical structures used to hold servers.
Chassis – Enclosures within racks that house server blades or components.
These components work together to process IT workloads efficiently.
✅ Final Answer:
a) True
10. ________ are an important distribution mechanism for libraries and custom runtimes in
AWS serverless ecosystem.
a) Runtimes
b) Lambda Layers
c) Log Streams
d) None of these
Explanation:
In the AWS serverless ecosystem, Lambda Layers are used to:
Distribute libraries, custom runtimes, and dependencies across multiple Lambda
functions.
Promote code reuse and modularity by separating shared code from function code.
✅ Final Answer:
b) Lambda Layers