Cloud Computing

The document discusses computing paradigms, particularly focusing on cloud computing architecture and its components, including front-end user interfaces, back-end platforms, and cloud-based delivery networks. It outlines various computing paradigms such as sequential, parallel, and distributed computing, as well as the characteristics and deployment models of cloud computing, including public, private, hybrid, and community clouds. Additionally, it highlights the importance of principles like elasticity, independence, and security in cloud computing environments.


A computing paradigm refers to a fundamental approach or model for performing computation, organizing data, designing higher-level systems, and solving problems using computers.

Characteristics of Computing Paradigms
A computing paradigm encompasses the principles, techniques, methodologies, and architectures that guide the design, development, and deployment of computational systems.
Computing paradigms can vary widely based on factors such as the underlying hardware, programming models, and problem-solving strategies.
Each computing paradigm offers different advantages, trade-offs, and suitability for specific types of problems and applications.
The choice of paradigm depends on factors such as the nature of the problem, performance requirements, scalability, and ease of development.
Many modern computing systems and applications combine multiple paradigms to exploit their respective strengths and address complex challenges.

Types of High-Performance Computing Paradigms
High-performance computing (HPC) refers to the use of advanced computing techniques and technologies to solve complex problems and perform data-intensive tasks at speeds beyond what a conventional computer can achieve.
Different high-performance computing paradigms arise from the various methodologies, principles, and technologies used to solve complex computational problems.
Each computing paradigm has its strengths, weaknesses, and specific use cases. The choice of paradigm depends on the nature of the problem to be solved, the performance requirements, and the available technology.
Some key computing paradigms are as follows:
Sequential Computing: This traditional paradigm involves the execution of instructions in a linear sequence. It is the foundation of classical computing architectures, where a processor executes one instruction at a time.
Parallel Computing: Parallel computing is a paradigm in which multiple computations or processes are executed simultaneously to solve a single problem, typically to improve performance, efficiency, and scalability.
In parallel computing, multiple processors or cores work simultaneously on different parts of a problem.
Tasks are divided into smaller subtasks that can be executed concurrently on multiple processing units or cores, allowing faster execution and higher throughput than sequential processing.
Parallel computing is used in various domains, including scientific simulations, data analytics, image and signal processing, artificial intelligence, and computer graphics. Examples of parallel computing applications include weather forecasting, molecular dynamics simulations, genome sequencing, deep learning training, and rendering complex 3D graphics.
Distributed/Network Computing: Distributed computing involves the use of multiple computers connected over a network. Tasks are distributed across these computers, which work collaboratively to achieve a common goal.
Network computing, also known as distributed computing, refers to the use of interconnected computers and resources to perform tasks collaboratively over a network. This infrastructure can include local area networks (LANs), wide area networks (WANs), and the Internet.
The client-server model is a common architecture in network computing.
Network computing allows users to access resources and applications remotely. Resources such as processing power, storage, and applications are distributed across multiple computers within the network, enabling users to access and utilize resources located on different machines.
Network computing facilitates collaboration by enabling users to share files, work on documents simultaneously, and communicate in real time. Collaboration tools such as email, video conferencing, and collaborative document editing are common in networked environments.
Network computing systems can be easily scaled by adding more computers or resources to the network. This scalability allows organizations to adapt to changing demands and accommodate growing workloads.
Cloud computing is an example of distributed computing.
Client-Server Computing: This paradigm divides computing tasks between client devices (user interfaces) and server systems that store data and manage resources. Client-server computing is commonly used in networked applications, where clients request services from servers. A client is a software application or device that requests services or resources from a server, whereas a server is a software application or hardware device that provides services or resources to clients. Clients are typically end-user devices such as computers, smartphones, tablets, or IoT devices, whereas servers are responsible for processing client requests, performing computations, managing data, and returning results to clients. Clients initiate communication with servers by sending requests for data, processing, or other services, whereas servers are usually high-performance computers or specialized hardware optimized for handling multiple client connections and processing tasks efficiently. Common communication protocols used in client-server computing include HTTP (Hypertext Transfer Protocol) for web applications, SMTP (Simple Mail Transfer Protocol) for email, FTP (File Transfer Protocol) for file transfer, and TCP/IP (Transmission Control Protocol/Internet Protocol) for general network communication.
The client-server interaction follows a request-response model: clients send requests to servers, and servers respond with the requested data or perform the requested actions. Clients may send various types of requests, such as HTTP GET requests for retrieving web pages, HTTP POST requests for submitting form data, or SQL queries to retrieve data from a database.

Architecture of Cloud Computing
Cloud computing architecture refers to the components and sub-components required for cloud computing. These components typically include:
1. Front End (User Interaction Enhancement): The user interface of cloud computing consists of two classes of clients. Thin clients use web browsers, providing portable and lightweight access, while fat clients offer richer functionality for a stronger user experience.
2. Back-End Platforms (Cloud Computing Engine): The core of cloud computing lies in the back-end platforms, which comprise several servers for processing and storage. Application logic is managed by the servers, and effective data handling is provided by the storage. Together, these platforms provide the processing power and the capacity to manage and store data behind the cloud.
3. Cloud-Based Delivery and Network: On-demand access to computing resources is provided over the Internet, an intranet, or an intercloud. The Internet offers global accessibility, an intranet supports internal communication of services within an organization, and the intercloud enables interoperability across various cloud services. This dynamic network connectivity is an essential component of cloud computing architecture, guaranteeing easy access and data transfer.
Some of the fundamental building blocks of cloud computing are Compute, Storage, Database, Networking, and Security.
Compute: Instead of provisioning your server in a local data center, you can outsource the computing power needed by your server from a cluster of virtual machines in the cloud. Compute is the processing power required by applications and systems to process data and carry out computational tasks.
Storage: The main benefit of storing data in the cloud is the convenience of increasing your storage capacity without buying and maintaining more local hard drives. Locally, you cannot prevent data loss in the event of a hard disk failure; in the cloud, your data is stored persistently across logical pools in physical storage hosted by your cloud service provider. You can store different types of data, such as objects, files, and backups.
Database: A database is a system that stores and manages structured and unstructured information. Databases in the cloud are typically managed and offered as a service by a cloud service provider. This means that maintaining and updating the underlying components of your database instance, such as OS updates and software patches, is no longer your responsibility. Databases in the cloud are also scalable and highly available in nature.
Networking: The cloud is a large ecosystem of computers that communicate and integrate with each other to deliver a specific service to customers. Cloud service providers make sure that they always maintain a high-speed network connection within their infrastructures to support the needs of their end users. You can use the cloud to provide a global link to distribute your application all over the world.
Security: In the cloud, data is stored in secured remote data center facilities, which makes threats such as theft and data breaches less likely. As a cloud user, your responsibility shifts more toward data management. The cloud offers sets of tools to help you enforce high levels of security; for example, you control the encryption and decryption of your data, and you can authenticate and authorize selected users and services to access your applications.

Principles of Cloud Computing: The term "cloud" is usually used to represent the Internet, but it is not restricted to the Internet. It is virtual storage where data is stored in third-party data centers. Storing, managing, and accessing data present in the cloud is typically referred to as cloud computing.
Federation: A cloud computing environment must be capable of providing federated service providers, meaning that providers must be able to collaborate and share resources at any point, irrespective of their type. This is usually needed when an organization extends its computing paradigm from a private to a public cloud.
Independence: The user of cloud computing services must be independent of the provider's specific tools and type of service. According to this principle, a user must be allowed the required virtual resources irrespective of the type of provider. Moreover, it is the responsibility of service providers to handle the infrastructure while hiding confidential information.
Elasticity: The user of cloud computing must be provided with ease of acquiring and releasing resources as required; this is typically referred to as elasticity. The rules associated with elasticity must be included in the contract between consumers and service providers.

Characteristics of Cloud Computing
There are many characteristics of cloud computing; here are a few of them:
On-demand self-service: Cloud computing services do not require human administrators; users themselves are able to provision, monitor, and manage computing resources as needed.
Broad network access: Computing services are generally provided over standard networks to heterogeneous devices.
Security: Cloud providers invest heavily in security measures to protect their users' data and ensure the privacy of sensitive information.
Resource pooling: IT resources (e.g., networks, servers, storage, applications, and services) are shared across multiple applications and tenants in an uncommitted manner; multiple clients are served from the same physical resource.
Measured service: Resource utilization is tracked for each application and tenant, providing both the user and the resource provider with an account of what has been used. This is done for various reasons, such as billing and effective use of resources.
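The measured service characteristic is easy to make concrete in code. The sketch below is a toy illustration, not any provider's real metering API: the `UsageMeter` class and its rates are invented for the example. It shows the essential idea of recording each tenant's consumption from a pooled resource and deriving a usage-based bill from it.

```python
from collections import defaultdict

class UsageMeter:
    """Toy per-tenant metering, in the spirit of the 'measured service'
    characteristic: every unit a tenant consumes is recorded so that both
    the user and the provider can account for what has been used."""

    def __init__(self, rates):
        # rates: price per unit of each resource, e.g. {"cpu_hours": 0.05}
        self.rates = rates
        self.usage = defaultdict(lambda: defaultdict(float))

    def record(self, tenant, resource, amount):
        # Called by the platform each time a tenant consumes a resource.
        self.usage[tenant][resource] += amount

    def bill(self, tenant):
        # Usage-based billing: sum of rate * consumed amount per resource.
        return sum(self.rates[r] * amt for r, amt in self.usage[tenant].items())

meter = UsageMeter({"cpu_hours": 0.05, "gb_stored": 0.02})
meter.record("tenant-a", "cpu_hours", 10)   # two workloads of the same tenant
meter.record("tenant-a", "cpu_hours", 5)
meter.record("tenant-a", "gb_stored", 100)
meter.record("tenant-b", "cpu_hours", 2)    # pooled resource, separate accounting

print(round(meter.bill("tenant-a"), 2))  # 2.75  (0.05*15 + 0.02*100)
print(round(meter.bill("tenant-b"), 2))  # 0.1
```

The same accounting record serves both sides of the contract: the provider uses it for billing, and the tenant uses it to monitor effective use of the resources.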
Types of Cloud Computing Deployment Models
The cloud deployment model identifies the specific type of cloud environment based on ownership, scale, and access, as well as the cloud's nature and purpose. The location of the servers you are utilizing and who controls them are defined by the deployment model. It specifies how your cloud infrastructure will look, what you can change, and whether you will be given services or will have to create everything yourself. Relationships between the infrastructure and your users are also defined by cloud deployment types. The different deployment models are described below.
Public Cloud: The public cloud makes it possible for anybody to access systems and services; it may be less secure because it is open to everyone. In this model, cloud infrastructure services are provided over the Internet to the general public or major industry groups, and the infrastructure is owned by the entity that delivers the cloud services, not by the consumer. It is a type of cloud hosting in which service providers supply services to a variety of customers, who can easily access systems and services. In this arrangement, storage, backup, and retrieval services are given for free, as a subscription, or on a per-user basis. Google App Engine is one example.
Private Cloud: The private cloud deployment model is the exact opposite of the public cloud deployment model. It is a one-to-one environment for a single user (customer), with no need to share hardware with anyone else. The distinction between private and public clouds lies in how all of the hardware is handled. The private cloud is also called the "internal cloud"; it refers to the ability to access systems and services within a given boundary or organization. The cloud platform is implemented in a secure cloud-based environment that is protected by powerful firewalls and supervised by the organization's IT department. The private cloud gives greater flexibility and control over cloud resources.
Hybrid Cloud: By bridging the public and private worlds with a layer of proprietary software, hybrid cloud computing gives the best of both worlds. With a hybrid solution, you may host an application in a safe environment while taking advantage of the public cloud's cost savings. Organizations can move data and applications between different clouds using a combination of two or more deployment methods, depending on their needs.
Community Cloud: A community cloud allows systems and services to be accessible by a group of organizations. It is a distributed system created by integrating the services of different clouds to address the specific needs of a community, industry, or business. The infrastructure of the community cloud may be shared between organizations that have shared concerns or tasks. It is generally managed by a third party or by a combination of one or more organizations in the community.

Models of Cloud Computing: Cloud computing renders several services according to roles, companies, etc.
1. Infrastructure as a Service (IaaS): IaaS delivers computing infrastructure on an external basis to support operations. Generally, IaaS provides networking equipment, devices, databases, and web servers as services. IaaS helps large organizations and enterprises manage and build their IT platforms, and this infrastructure is flexible according to the needs of the client.
Advantages of IaaS
IaaS is cost-effective, as it eliminates capital expenses.
An IaaS cloud provider can often provide better security than an organization could achieve on its own.
IaaS provides remote access.
Disadvantages of IaaS
In IaaS, users have to secure their own data and applications.
Cloud computing is not accessible in some regions of the world.
2. Platform as a Service (PaaS): PaaS is a type of cloud computing that helps developers build applications and services over the Internet by providing them with a platform. PaaS helps developers maintain control over their business applications.
Advantages of PaaS
PaaS is simple and convenient for the user, as it can be accessed via a web browser.
PaaS can efficiently manage the application lifecycle.
Disadvantages of PaaS
PaaS offers limited control over infrastructure: users have less control over the environment and cannot make some customizations.
PaaS entails a high dependence on the provider.
3. Software as a Service (SaaS): SaaS is a cloud computing model in which services and applications are delivered over the Internet. SaaS applications are called web-based software or hosted software. SaaS accounts for around 60 percent of cloud solutions, and because of this it is the model most preferred by companies.
Advantages of SaaS
SaaS applications and their data can be accessed from anywhere on the Internet.
SaaS provides easy access to features and services.
Disadvantages of SaaS
SaaS solutions have limited customization, which means they have some restrictions within the platform.
SaaS gives users little control over their data.
Because SaaS solutions are cloud-based, they require a stable Internet connection to work properly.

A cloud ecosystem is defined as a complex system of cloud services, platforms, and infrastructure used for the storage, processing, and distribution of data and applications through the Internet. It consists of multiple parts: cloud providers, software developers, users, and other services, integrated into a productive and adaptable architecture for computing assets. This ecosystem enhances the ability of businesses and individuals to lease computing resources at will, supporting flexibility, innovation, and cost sensitivity.
How a Cloud Ecosystem Works
Hub-and-Spoke Model: The cloud ecosystem is a hub-and-spoke architecture with a cloud provider at the centre linking to a variety of entities.
Central Cloud Provider: In a general cloud ecosystem architecture, there is a central place where the main components of the public cloud reside, with a provider such as AWS at the core.
Interconnected Relationships: Many partners interconnect with the central cloud provider, such as companies that supply software and equipment, consultants, and third-party service providers.
Complex Interactions: AWS, as a cloud services provider, supports multiple applications and has partnerships with other organizations, making the interaction dynamics within the ecosystem rather intricate.

Requirements for Cloud Services:
1. Internet Connectivity: Reliable, high-speed Internet is essential for accessing cloud services.
2. Scalability: Infrastructure must automatically scale based on demand.
3. Security: Strong data protection, encryption, access control, and compliance with standards.
4. Reliability & Availability: Redundant systems, backups, and high uptime (e.g., a 99.9% SLA) are crucial.
5. Monitoring & Management: Automated tools for performance tracking, alerts, and resource management.
6. Compliance: Adherence to legal and industry regulations (e.g., GDPR, HIPAA).
7. Multi-Tenancy: Secure separation of resources for different users in shared environments.
8. Resource Automation: Efficient provisioning, auto-scaling, and usage-based billing.
9. APIs & Interfaces: User-friendly dashboards and APIs for integration and automation.
10. Data Management: Reliable storage, backup, recovery, and data lifecycle handling.

Cloud Computing Architecture: The architecture of cloud computing is the combination of SOA (Service-Oriented Architecture) and EDA (Event-Driven Architecture). Client infrastructure, application, service, runtime cloud, storage, infrastructure, management, and security are all components of cloud computing architecture.
The cloud architecture is divided into two parts:
1. Frontend: The frontend of the cloud architecture refers to the client side of the cloud computing system. It contains all the user interfaces and applications that the client uses to access cloud computing services and resources, for example, a web browser used to access the cloud platform.
2. Backend: The backend refers to the cloud itself, which is used by the service provider. It contains the resources, manages those resources, and provides security mechanisms. Along with this, it includes huge storage, virtual applications, virtual machines, traffic control mechanisms, deployment models, etc.
The User Layer and Client Layer refer to the parts of the architecture that interact with cloud services from the end user's side. Here is a brief explanation of both:
1. User Layer: The User Layer represents the end users who access cloud services via applications or web interfaces.
🔹 Key Points:
•Includes individuals or organizations using cloud-hosted applications.
•Interacts with the cloud through interfaces like browsers or apps.
•Focuses on ease of use, user experience (UI/UX), and access control.
🔹 Examples: •A person accessing Google Docs.
2. Client Layer: The Client Layer consists of the devices and software that users use to interact with the cloud.
🔹 Key Points:
•Acts as the interface between users and the cloud infrastructure.
•Includes desktops, smartphones, tablets, and client-side software (like browsers or custom apps).
•Responsible for sending requests and displaying cloud data.
🔹 Examples: •A mobile app connecting to a cloud database.
🔁 Relationship Between User and Client Layer:
•User Layer: represents the end users; focuses on interaction; needs simplicity and a good UI.
•Client Layer: represents the user's devices or apps; focuses on connectivity and requests; needs compatibility and performance.

The network layer is a part of the communication process in computer networks. Its main job is to move data packets between different networks, routing them from the sender to the receiver across multiple paths and networks. Network-to-network connections enable the Internet to function; these connections happen at the network layer, which sends data packets between different networks. In the 7-layer OSI model, the network layer is layer 3. The Internet Protocol (IP) is a key protocol used at this layer, along with other protocols for routing, testing, and encryption.
Advantages of Network Layer Services
The packetization service in the network layer makes transporting data packets easy.
Packetization also eliminates single points of failure in data communication systems.
Routers in the network layer reduce network traffic by creating collision and broadcast domains. With the help of forwarding, data packets are transferred from one place to another in the network.

Cloud computing management is maintaining and controlling cloud services and resources, be they public, private, or hybrid. Some of its aspects include load balancing, performance, storage, backups, capacity, and deployment. To do this, cloud management personnel need full access to all the functionality of the resources in the cloud. Different software products and technologies are combined to provide a cohesive cloud management strategy and process.
As we know, private cloud infrastructure is operated for a single organization only, so it can be managed by the organization or by a third party. Public cloud services are delivered over a network that is open and available for public use; in this model, the IT infrastructure is owned by a private company, and members of the public can purchase or lease data storage or computing capacity as needed. Hybrid cloud environments are a combination of public and private cloud services from different providers. Most organizations store data on private cloud servers for privacy reasons, while leveraging public cloud applications at a lower price point for less sensitive information. The combination of public and private clouds is known as a hybrid cloud.

Public Cloud Access Networking and Private Cloud Access Networking:
🌐 Public Cloud Access Networking: Connects users to cloud services hosted on public platforms like AWS, Azure, or Google Cloud over the public Internet.
•Access: Open to multiple organizations (multi-tenant).
•Network Type: Internet-based, using standard protocols (HTTP, HTTPS, etc.).
•Security: Relies on the provider's built-in security (firewalls, encryption, IAM).
•Cost: Pay-as-you-go; no infrastructure investment required.
•Scalability: High, with elastic resource availability.
🔒 Private Cloud Access Networking: Connects users to cloud services hosted in a private environment, either on-premises or in a dedicated data center.
•Access: Restricted to a single organization (single-tenant).
•Network Type: Internal corporate network or secure connections (VPN/MPLS).
•Security: Controlled by the organization; supports strict compliance (e.g., HIPAA, financial regulations).
•Cost: Higher, due to infrastructure and maintenance responsibilities.
•Scalability: Moderate, based on internal capacity.
•Example: Employees accessing a company's internal cloud system via a secure VPN.

A cloud application (often shortened to "cloud app") is a software program that is deployed, hosted, and managed in a cloud environment, rather than being installed and run on a local server or individual device. This means that instead of relying on your computer's resources, the application leverages the computing power, storage, and other services provided by a third-party cloud service provider (such as AWS, Google Cloud, or Microsoft Azure).
How Cloud Applications Work:
Cloud applications operate on a front-end and back-end model:
* Front-End: This is the user interface that you interact with, typically through a web browser or a dedicated mobile app. It is the part of the application that runs on your device.
* Back-End: This is the core of the cloud application, residing on remote servers in data centers owned and managed by the cloud service provider. The back-end handles:
* Data Storage: All the application's data is stored securely in the cloud.
* Processing: The heavy lifting of the application's logic and computations happens on these remote servers.
* Middleware: Software that enables communication and data management between the front-end and back-end.
Benefits of Applications in the Cloud:
1. Accessibility – Access from anywhere via the Internet.
2. Scalability – Easily scale resources up or down.
3. Cost-Effectiveness – Pay only for what you use; no hardware needed.
4. Automatic Updates – No manual software updates required.
5. Collaboration – Real-time sharing and teamwork.
6. Disaster Recovery – Built-in backup and recovery options.
❌ Drawbacks of Applications in the Cloud:
1. Internet Dependency – No access without a stable connection.
2. Security Risks – Data stored on third-party servers.
3. Limited Control – Provider manages most infrastructure and policies.
4. Downtime – Service interruptions depend on the provider.
5. Vendor Lock-in – Difficult to migrate to other platforms.

Managing Cloud Applications: Managing a cloud application involves ensuring that the application is secure, scalable, available, and performing well across its lifecycle, from deployment to monitoring and updating.
✅ Key Aspects of Managing a Cloud Application
1. Deployment Management
•Use tools like CI/CD pipelines for smooth deployment.
•Automate using platforms like Jenkins, GitHub Actions, or GitLab CI.
•Use cloud-native solutions like AWS Backup, Google Cloud Snapshot, etc.
6. Cost Management
•Monitor usage and optimize resources to avoid over-provisioning.
•Use cloud provider tools like AWS Cost Explorer or Azure Cost Management.
7. Logging and Auditing
•Enable centralized logging for tracking issues and user activity.
•Tools: ELK Stack (Elasticsearch, Logstash, Kibana), Fluentd, cloud-native logs.

Why Cloud Migration is Important
Cloud migration is vital for businesses aiming to improve agility, reduce costs, and enhance their IT infrastructure. By migrating to the cloud, businesses can:
Enhance Flexibility: Cloud platforms provide on-demand resources, allowing businesses to scale quickly based on need.
Improve Cost Efficiency: Cloud services often reduce the need for large upfront investments in hardware and infrastructure, offering pay-as-you-go pricing models.
Boost Performance and Reliability: Cloud platforms provide high availability and disaster recovery solutions to ensure business continuity.
Accelerate Innovation: The cloud offers advanced services like machine learning, big data analytics, and AI, enabling companies to innovate faster.

Introduction to how Apache projects contribute to IaaS, PaaS, and SaaS:
IaaS (Infrastructure as a Service): IaaS provides virtualized computing resources over the Internet. Open-source tools for IaaS allow organizations to build and manage their own private clouds or to interact with public cloud providers in a standardized way.
* Apache CloudStack: This is a prominent open-source IaaS platform. CloudStack is designed to deploy and manage large networks of virtual machines as a highly available, highly scalable IaaS cloud. It provides a complete "stack" for IaaS, including compute orchestration, network-as-a-service, user and account management, an API, resource accounting, and a user interface. Many public and private clouds are powered by Apache CloudStack.
* Relationship to other IaaS tools: While OpenStack is another very popular open-source IaaS platform, Apache CloudStack focuses on being a turnkey solution for building clouds. Many other tools, such as OpenNebula and Eucalyptus, also exist in the open-source IaaS space.
PaaS (Platform as a Service): PaaS provides a platform that allows developers to build, run, and manage applications without the complexity of building and maintaining the infrastructure typically associated with developing and launching an app.

Unit 3
Technological drivers for cloud computing:
* Virtualization: Divides single physical servers into multiple isolated virtual machines (VMs), maximizing hardware utilization, providing isolation, and enabling rapid resource provisioning.
* Broadband Internet & Network Connectivity: High-speed, reliable Internet is essential for accessing remote cloud resources, ensuring performance, and enabling global accessibility.
* Distributed Systems & Parallel Processing: Allow multiple interconnected computers to work together, providing massive scalability, high fault tolerance, and efficient processing of large datasets.
* Service-Oriented Architecture (SOA) & Microservices: Deconstruct applications into independent, loosely coupled services, enhancing modularity, agility (DevOps), and individual service scalability.
* Automation & Orchestration: Automate and coordinate complex tasks like resource provisioning, deployment, scaling, and monitoring, leading to efficiency, consistency, and self-service capabilities.
* Advancements in Hardware: Continuous improvements in multi-core processors, SSDs, and networking hardware provide the raw computing power and storage density needed for large-scale cloud data centers.

Explanation of Service-Oriented Architecture (SOA) and its relationship with cloud computing, as well as an overview of SOC-as-a-Service (SOCaaS):
🧩 Service-Oriented Architecture (SOA) and Cloud Computing
SOA is a software design approach in which applications are structured as a collection of loosely coupled, reusable services that communicate through standard protocols and interfaces. Each service represents a discrete business function and operates independently, allowing teams to develop, deploy, and scale services separately while promoting reuse across different applications.
In the context of cloud computing, SOA principles are applied to design modular, scalable, and interoperable services. Cloud platforms like AWS, Azure, and Google Cloud offer services that adhere to SOA principles, enabling organizations to build flexible and maintainable systems that can adapt to changing business needs.
🛡 SOC-as-a-Service (SOCaaS): SOCaaS is a cloud-based security model in which a third-party vendor operates and maintains a fully managed Security Operations Center (SOC) on a subscription basis. It provides all the security functions performed by a traditional, in-house SOC, including:
•Network Monitoring •Log Management •Threat Detection and Intelligence
•Use container orchestration (e.g., Docker, Kubernetes) for scalable deployments. •Incident Investigation and Response •Reporting
2. Monitoring and Performance •Risk and Compliance Management
•Monitor uptime, response times, errors, and resource usage. SOCaaS offers several benefits:
•Use tools like Datadog, New Relic, AWS CloudWatch, or Azure Monitor. •24/7 Monitoring: Continuous surveillance of IT environments to detect and respond
3. Security Management to threats promptly.
•Implement access control (IAM), data encryption, and firewalls. •Expertise Access: Leverage specialized security professionals without the need for
•Regular vulnerability scanning and security patching. in-house hiring.
•Ensure compliance with regulations (e.g., GDPR, HIPAA). •Scalability: Easily adjust security services based on organizational needs.
4. Scaling and Load Management •Cost-Effectiveness: Reduce the financial burden of maintaining an in-house SOC.
•Use auto-scaling to handle traffic spikes. Advantages of SOA:
•Employ load balancers to distribute user traffic evenly across servers. Easy maintenance: As services are independent of each other they can be updated
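Round-robin is one simple policy a load balancer can use to distribute traffic evenly, as the bullet above describes. A toy sketch (server names are illustrative, not any real balancer's API):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Toy load balancer: hands each incoming request to the next
    server in a fixed rotation, spreading traffic evenly."""

    def __init__(self, servers):
        self.servers = list(servers)
        self._rotation = cycle(self.servers)

    def route(self, request_id):
        # Pick the next server in the rotation for this request.
        server = next(self._rotation)
        return (request_id, server)

balancer = RoundRobinBalancer(["server-a", "server-b", "server-c"])
assignments = [balancer.route(i) for i in range(6)]
# With 6 requests and 3 servers, each server receives exactly two.
```

Real balancers add health checks and weighting, but the even-rotation idea is the same.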
5. Backup and Disaster Recovery and modified easily without affecting other services.
•Automate regular data backups. Platform independent: SOA allows making a complex application by combining
•Define and test disaster recovery plans. services picked from different sources, independent of the platform.
Availability: SOA facilities are easily available to anyone on request.
Reliability: SOA applications are more reliable because it is easy to debug small
services rather than huge codes
Scalability: Services can run on different servers within an environment, this
increases scalability Web 1.0 refers to the first stage of the World Wide Web evolution. Earlier, there
were only a few content creators in Web 1.0, with a huge majority of users who were consumers of content. Personal web pages were common, consisting mainly of static pages hosted on ISP-run web servers or free web hosting services. In Web 1.0, advertisements on websites while surfing the internet were banned. Ofoto, an online digital photography website on which users could store, share, view, and print digital pictures, is an example of a Web 1.0 site. Web 1.0 acted as a content delivery network (CDN) that enabled the showcasing of pieces of information on websites. It could be used as a personal website. It charged the user per page viewed. It had directories that enabled users to retrieve a particular piece of information. The era of Web 1.0 was roughly from 1991 to 2004.
Four Design Essentials of a Web 1.0 Site Include:
>Static pages.
>Content is served from the server's file system.
>Pages built using Server Side Includes (SSI) or the Common Gateway Interface (CGI).
>Frames and tables are used to position and align the elements on a page.
Features of Web 1.0:
-Easy to connect static pages with the system via hyperlinks
-Supports elements like frames and tables with HTML 3.2
-Also has graphics and GIF buttons
-Less interaction between the user and the server
-HTML forms can be sent via email
-Provides only a one-way publishing medium
What is Web 2.0? The term was coined by Darcy DiNucci in 1999 and became famous in 2004 due to the first Web 2.0 Conference (later known as the Web 2.0 Summit) held by Tim O'Reilly and Dale Dougherty. Web 2.0 refers to worldwide websites which highlight user-generated content, usability, and interoperability for end users. Web 2.0 is also called the participative social web. It does not refer to a modification of any technical specification, but to a change in the way web pages are designed and used. Interaction and collaboration are enabled by Web 2.0 in a social media dialogue, with users as creators of user-generated content in a virtual community. Web 2.0 is an enhanced version of Web 1.0.
Features of Web 2.0:
-Free sorting of information; permits users to retrieve and classify information collectively.
-Dynamic content that is responsive to user input.
-Information flows between the site owner and site users via evaluation and online commenting.
-Developed APIs to allow self-usage, such as by a software application.
-Web access widens, from the traditional Internet user base to a wider variety of users.
What is Web 3.0? It refers to the evolution of web utilization and interaction, which includes altering the Web into a database with the integration of DLT (Distributed Ledger Technology; blockchain is an example), so that data can help make smart contracts based on the needs of the individual. It enables the upgrading of the back-end of the web, after a long period of focusing on the front-end (Web 2.0 was mainly about AJAX, tagging, and other front-end user-experience innovations). Web 3.0 is a term used to describe many evolutions of web usage and interaction along several paths. In Web 3.0, data isn't owned but shared, while services show different views of the same web / the same data.
Features of Web 3.0:
Artificial Intelligence: Combining this capability with natural language processing, computers in Web 3.0 can distinguish information like humans do, providing faster and more relevant results. They become more intelligent and better fulfill the requirements of users.
3D Graphics: Three-dimensional design is used widely in Web 3.0 websites and services. Museum guides, computer games, e-commerce, and geospatial contexts are all examples that use 3D graphics.
Connectivity: With Web 3.0, information is more connected thanks to semantic metadata. As a result, the user experience evolves to another level of connectivity that leverages all the available information.
What Is Web 4.0? Web 4.0 represents the next evolution of the Internet, where artificial intelligence, machine learning, and advanced technologies work together to create a smarter, more intuitive online experience. Unlike previous versions of the web, which focused primarily on connectivity and user-generated content, Web 4.0 is all about understanding and anticipating user needs. It aims to make the Internet not just a tool but a partner in our daily lives.
Key Features of Web 4.0
Artificial Intelligence at the Core: AI will be the backbone of Web 4.0. Machine learning algorithms will process data faster than ever before, enabling websites and apps to predict what users want before they even ask. For example, if you're shopping online, the platform might suggest products tailored specifically to your tastes or recommend deals based on your past purchases.
Natural Language Processing (NLP): One of the most exciting aspects of Web 4.0 is its ability to understand human language. Advanced NLP allows chatbots and virtual assistants to communicate with users in a way that feels natural and conversational. Instead of rigid commands, you'll be able to speak or type casually, and the system will respond intelligently.
Internet of Things (IoT) Integration: Web 4.0 will seamlessly connect the digital world with the physical world through IoT. Smart devices like thermostats, refrigerators, cars, and even clothing will share data and work together to enhance convenience and efficiency. For instance, your smartwatch could alert your car to start warming up when it detects cold weather, while your home adjusts the heating automatically.
A fundamental driver for cloud computing - multicore technology: a single processor chip contains multiple independent processing units (cores).
* Parallelism: Enables multiple tasks/VMs/containers to run simultaneously on a single CPU, boosting overall throughput.
* Efficiency: Maximizes hardware utilization, allowing cloud providers to run more workloads on fewer physical servers.
* Scalability & Density: Facilitates hosting a greater number of virtual instances per server, improving density and contributing to both vertical and horizontal scaling.
* Cost Savings: Reduces the need for physical hardware, leading to lower capital and operational expenses (power, cooling, maintenance).
* Performance: Provides the raw processing power needed for demanding cloud applications and services.
* Enabler for Virtualization/Containerization: Crucial for hypervisors and container orchestration to efficiently manage and distribute workloads across available processing units.
Cloud Storage Architecture
Cloud storage architecture is the framework that makes it possible for users and organizations to store, manage, and access their data online. It is designed to handle massive amounts of data while ensuring security, accessibility, and scalability. Let's break down the key parts of this system:
Main Components of Cloud Storage Architecture
1. Frontend Layer: This is what users interact with: essentially, the interface. It could be through APIs, a web dashboard, or software that connects to the storage system. It also manages who gets access to the data by handling authentication and permissions.
2. Backend Layer: This is where all the heavy lifting happens:
Storage Types:
Object Storage: Best for unstructured data like media files, backups, or logs (think Amazon S3 or Google Cloud Storage).
Block Storage: Faster, and used for applications like databases (e.g., Amazon EBS, Azure Disk Storage).
File Storage: Works like a shared drive and is great for file hierarchies (e.g., Amazon EFS, Azure Files).
Metadata: Keeps track of information about the data, like file names, sizes, and access rules.
3. Control Layer: This part manages how everything operates:
Orchestration: Ensures resources (like storage space or processing power) are distributed efficiently.
Monitoring: Keeps track of performance, usage, and security.
Lifecycle Management: Automatically handles tasks like archiving or deleting old data based on pre-set rules.
4. Network Layer: This handles data movement between the user and the storage system. Protocols like HTTPS or APIs enable this. Sometimes a Content Delivery Network (CDN) is used to speed up access by caching data in locations closer to users.
Cloud Storage Requirements: To implement and manage cloud storage effectively, certain technical and business requirements must be met. These ensure data availability, security, scalability, and compliance.
✅ 1. Scalability
•Storage must grow seamlessly with data volume.
•Support for elastic scaling (both up and down) as needed.
✅ 2. Data Availability & Reliability
•Ensure high uptime (e.g., 99.9%+).
•Use redundancy (e.g., data replication across regions/zones).
•Implement automatic backups and disaster recovery.
✅ 3. Security
•Data Encryption: Both at rest and in transit.
•Access Controls: Role-Based Access Control (RBAC), Identity & Access Management (IAM).
•Audit Trails: Logging all access and changes.
✅ 4. Performance
•Support fast read/write speeds, especially for high-performance workloads.
•Options for different storage types (e.g., SSD, HDD, object vs. block storage).
✅ 5. Interoperability
•Compatible with various platforms, tools, and APIs.
•Support for common protocols (e.g., S3, NFS, FTP, SMB).
✅ 6. Cost Management
•Transparent pricing models (pay-as-you-go or tiered).
•Lifecycle policies to move infrequently used data to cheaper storage.
✅ 7. Compliance & Legal Requirements
•Meet standards like GDPR, HIPAA, ISO 27001, etc.
•Data residency and sovereignty (control over data location).
✅ 8. Ease of Management
•Centralized dashboards or APIs for monitoring and control.
•Automation for backups, scaling, and policy enforcement.
✅ 9. Backup and Disaster Recovery
•Scheduled backups, point-in-time recovery, geo-replication.
•Versioning and soft-delete features to recover from accidental loss.
✅ 10. Data Integrity and Durability
•Checksum and validation mechanisms.
•Durability guarantees (e.g., AWS S3 promises 99.999999999% durability).
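A lifecycle-management rule of the kind described (archiving or deleting data based on pre-set age rules) can be sketched as a small decision function. The 30-day and 365-day thresholds here are illustrative assumptions, not any provider's defaults:

```python
from datetime import date, timedelta

def lifecycle_action(last_accessed, today,
                     archive_after_days=30, delete_after_days=365):
    """Decide what a lifecycle rule would do with a stored object,
    based on how long ago it was last accessed."""
    age = (today - last_accessed).days
    if age >= delete_after_days:
        return "delete"   # old enough to be removed entirely
    if age >= archive_after_days:
        return "archive"  # move to cheaper, colder storage
    return "keep"         # still hot, leave in place

today = date(2024, 1, 1)
print(lifecycle_action(today - timedelta(days=10), today))   # keep
print(lifecycle_action(today - timedelta(days=90), today))   # archive
print(lifecycle_action(today - timedelta(days=400), today))  # delete
```

The same tiering logic also serves the cost-management requirement of moving infrequently used data to cheaper storage.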
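The checksum-and-validation mechanisms listed under Data Integrity and Durability can be sketched with a standard hash function: the store records a digest when an object is written and recomputes it on read, so any corruption is detected.

```python
import hashlib

def checksum(data: bytes) -> str:
    # SHA-256 digest used as the stored fingerprint of the object
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, expected: str) -> bool:
    # On read, recompute the digest and compare it with the
    # recorded one; a mismatch signals corruption.
    return checksum(data) == expected

original = b"cloud object payload"
stored = checksum(original)
print(verify(original, stored))      # True
print(verify(b"corrupted", stored))  # False
```

Production systems combine such checks with replication, so a corrupted copy can be repaired from a healthy one.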
The Agile Model was primarily designed to help a project adapt quickly to change requests. The main aim of the Agile Model is to facilitate quick project completion. To accomplish this, agility is required. Agility is achieved by fitting the process to the project and removing activities that may not be essential for a specific project. Anything that is a waste of time and effort is avoided. The Agile Model refers to a group of development processes. These processes share some basic characteristics but have certain subtle differences among themselves.
Steps in the Agile Model
The Agile Model is a combination of iterative and incremental process models. The phases involved in the Agile (SDLC) Model are:
- Requirement Gathering
- Design the Requirements
- Construction / Iteration
- Testing / Quality Assurance
- Deployment
- Feedback
What is the Agile Software Development Life Cycle (Agile SDLC)?
The Agile Software Development Life Cycle (SDLC) is an iterative and incremental software development methodology that prioritizes flexibility, collaboration, and customer feedback. Unlike traditional SDLC models, such as the waterfall model, which complete each step sequentially, the Agile SDLC divides the development process into smaller iterations or increments.
The major factors of Agile, according to the Agile Manifesto, are the following four:
- Early customer involvement
- Iterative development
- Self-organizing teams
- Adaptation to change
Steps of the Agile SDLC Model - The Agile Model is a combination of iterative and incremental process models. The steps involved in Agile SDLC models are:
- Requirement gathering
- Design the Requirements
- Coding
- Testing
- Deployment
- Feedback
Agile Software Development Cycle
Step 1: In the first step, the concept and business opportunities in each possible project are identified, and the amount of time and work needed to complete the project is estimated. Based on their technical and financial viability, projects can then be prioritized to determine which ones are worthwhile pursuing.
Step 2: In the second phase, known as inception, the customer is consulted regarding the initial requirements, team members are selected, and funding is secured. Additionally, a schedule outlining each team's responsibilities and the precise time at which each sprint's work is expected to be finished should be developed.
Step 3: Teams begin building functional software in the third step, iteration/construction, based on requirements and ongoing feedback. Iterations, also known as single development cycles, are the foundation of the Agile software development cycle.
Cloud computing applications are developed by leveraging platforms and frameworks. Various types of services are provided, from bare-metal infrastructure to customizable applications serving specific purposes.
Amazon Web Services (AWS) - AWS provides wide-ranging cloud IaaS services, ranging from virtual compute, storage, and networking to complete computing stacks. AWS is well known for its on-demand storage and compute services, named Elastic Compute Cloud (EC2) and Simple Storage Service (S3). EC2 offers customizable virtual hardware to the end user, which can be utilized as the base infrastructure for deploying computing systems on the cloud. It is possible to choose from a large variety of virtual hardware configurations, including GPU and cluster instances. Either the AWS console, which is a wide-ranging web portal for accessing AWS services, or the web services API, available for several programming languages, is used to deploy EC2 instances.
How does Microsoft Azure Work? It is a private and public cloud platform that helps developers and IT professionals build, deploy, and manage applications. It uses the technology known as virtualization. Virtualization separates the tight coupling between the hardware and the operating system using an abstraction layer called a hypervisor. The hypervisor emulates all the functions of a computer in a virtual machine; it can run multiple virtual machines at the same time, and each virtual machine can run any operating system, such as Windows or Linux.
Azure takes this virtualization technique and repeats it on a massive scale in data centers owned by Microsoft. Each data center has many racks filled with servers, and each server includes a hypervisor to run multiple virtual machines. The network switch provides connectivity to all those servers.
What is Google App Engine (GAE)? A scalable runtime environment, Google App Engine is mostly used to run web applications. These scale dynamically as demand changes over time because of Google's vast computing infrastructure. Because it offers a secure execution environment in addition to a number of services, App Engine makes it easier to develop scalable, high-performance web apps.
The App Engine SDK facilitates the testing and professionalization of applications by emulating the production runtime environment and allowing developers to design and test applications on their own PCs. When an application is finished, developers can quickly migrate it to App Engine, put in place quotas to control the costs generated, and make the program available to everyone. Python, Java, and Go are among the languages that are currently supported.
Features of App Engine
Runtimes and Languages: To create an application for App Engine, you can use Go, Java, PHP, or Python. You can develop and test an app locally using the SDK's deployment toolkit. Each language's SDK and runtime are unique. Your program is run in a:
- Java Runtime Environment version 7
- Python Runtime Environment version 2.7
- PHP runtime's PHP 5.4 environment
- Go runtime 1.2 environment
What are IBM Cloud Computing APIs?
* Application Programming Interfaces (APIs): At their core, IBM Cloud APIs are sets of defined rules and protocols that enable different software applications to communicate and exchange data. This means you can control and manage your IBM Cloud resources (like virtual servers, databases, AI services, etc.) without needing to interact directly with the IBM Cloud console.
* Comprehensive Coverage: IBM Cloud offers APIs for almost all its services. This includes infrastructure services (compute, storage, networking), platform services (databases, messaging, analytics), AI and machine learning services, security services, and many more.
* RESTful Design: Most IBM Cloud APIs follow a RESTful architecture, which means they use standard HTTP methods (GET, POST, PUT, DELETE) to perform operations on resources, making them relatively easy to understand and use.
* API Connect: IBM provides "API Connect" as an enterprise-grade platform for creating, securing, managing, sharing, monetizing, and analyzing custom APIs, both on-premises and on the cloud. This is particularly useful for organizations that want to expose their own internal services as APIs or manage a large portfolio of APIs.
Unit 4
What is Virtualization? Virtualization is the process of creating a virtual representation of hardware such as a server, storage, network, or other physical machine. It supports multiple copies of virtual machines (VMs) executing on one physical machine, each with its own operating system and programs. This optimizes hardware efficiency and flexibility and enables resources to be shared between multiple customers or organizations.
Virtualization is key to providing Infrastructure as a Service (IaaS) solutions for cloud computing, whereby the user has access to remote computing resources.
Full Virtualization: Full virtualization is a virtualization technique used to provide a virtual machine environment (VME) that completely simulates the underlying hardware. In this type of environment, any software capable of execution on the physical hardware can be run in the VM, and any OS supported by the underlying hardware can be run in each individual VM. Users can run multiple different guest OSes simultaneously. In full virtualization, the VM simulates enough hardware to allow an unmodified guest OS to be run in isolation. This is particularly helpful in a number of situations. For example, in OS development, experimental new code can be run at the same time as older versions, each in a separate VM. The hypervisor provides each VM with all the services of the physical system, including a virtual BIOS, virtual devices, and virtualized memory management. The guest OS is fully disengaged from the underlying hardware by the virtualization layer. Full virtualization is achieved by using a combination of binary translation and direct execution. With full virtualization hypervisors, the physical CPU executes nonsensitive instructions at native speed; OS instructions are translated on the fly and cached for future use, and user-level instructions run unmodified at native speed. Full virtualization offers the best isolation and security for VMs and simplifies migration and portability, as the same guest OS instance can run on virtualized or native hardware. Figure 1.5 shows the concept behind full virtualization.
Paravirtualization is ideal for specific use cases where performance and efficiency are critical and where there is flexibility to modify or adapt the guest operating system. Here are some scenarios where paravirtualization is well suited:
Performance Optimization: Paravirtualization often results in better performance than full virtualization. Allowing the guest OS to communicate directly with the hypervisor through hypercalls reduces the overhead associated with emulation, leading to improved performance.
I/O-Intensive Workloads: Paravirtualization is particularly beneficial for I/O-intensive workloads, such as database servers and storage-intensive applications. The direct communication between the guest and the hypervisor enhances I/O performance.
Customized Operating Systems: Paravirtualization requires modification of the guest OS to be aware of the virtualization layer. This makes it suitable for scenarios where customization and adaptation of the operating system are feasible, such as in development or specialized environments.
Resource Efficiency: Paravirtualization can lead to more efficient resource utilization compared to full virtualization. It allows for greater control over resource allocation and can be beneficial in environments where optimizing resource usage is a priority.
Embedded Systems and Real-Time Applications: In situations where real-time performance is critical, paravirtualization can be a suitable choice. It provides low-latency communication between the guest and the hypervisor, making it suitable for real-time applications and embedded systems.
A hypervisor has a simple user interface that needs some storage space. It exists as a thin layer of software, and to establish a virtualization management layer it performs hardware management functions. For the provisioning of virtual machines, device drivers and support software are optimized, while many standard operating system functions are not implemented. Essentially, this type of virtualization system is used to reduce the performance overhead inherent in coordinating multiple VMs that interact with the same hardware platform.
Hardware compatibility is another challenge for hardware-based virtualization. The virtualization layer interacts directly with the host hardware, which means that all the associated drivers and support software must be compatible with the hypervisor. Hardware device drivers available to other operating systems may not be available to hypervisor platforms. Moreover, host management and administration features may not contain the range of advanced functions that are common to operating systems.
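The RESTful pattern described for cloud APIs (standard HTTP verbs against resources, with token authentication) can be sketched with Python's standard library. The endpoint and token below are placeholders, not real IBM Cloud values, and the request is only constructed, not sent:

```python
import json
import urllib.request

API_BASE = "https://api.example.cloud/v1"  # hypothetical endpoint

def build_request(method, resource, token, payload=None):
    """Assemble an authenticated REST request the way cloud APIs
    expect: standard HTTP verb, bearer token, JSON body."""
    data = json.dumps(payload).encode() if payload is not None else None
    return urllib.request.Request(
        f"{API_BASE}/{resource}",
        data=data,
        method=method,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

req = build_request("POST", "instances", "demo-token", {"profile": "small"})
print(req.get_method(), req.full_url)
```

Sending it with `urllib.request.urlopen(req)` would perform the actual call; real SDKs wrap exactly this kind of request construction.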
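PaaS runtimes like the ones listed for App Engine host web applications behind a standard gateway interface. As an illustration only (this is plain WSGI, not App Engine's own framework), a minimal Python web app of the kind such a runtime hosts and scales:

```python
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    # A minimal WSGI application: the shape of a Python web app
    # that a PaaS runtime can host and scale on demand.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from the cloud runtime"]

def call_app(path="/"):
    """Invoke the app in-process, as a hosting runtime would."""
    environ = {}
    setup_testing_defaults(environ)
    environ["PATH_INFO"] = path
    captured = {}

    def start_response(status, headers):
        captured["status"] = status
        captured["headers"] = headers

    body = b"".join(app(environ, start_response))
    return captured["status"], body

print(call_app())  # ('200 OK', b'Hello from the cloud runtime')
```

Because the app holds no local state, the platform can run many copies of it in parallel, which is what makes the "dynamic scaling" described above possible.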
It allows communication between nodes in a virtual network without routing of
frames.
It restricts management traffic.
It enforces routing for communication between virtual networks.

features of hardware-based virtualization are:


Isolation: Hardware-based virtualization provides strong isolation between virtual
machines, which means that any problems in one virtual machine will not affect Types of Hypervisor -
other virtual machines running on the same physical host. TYPE-1 Hypervisor: The hypervisor runs directly on the underlying host system. It is
Security: Hardware-based virtualization provides a high level of security as each also known as a "Native Hypervisor" or "Bare metal hypervisor". It does not require
virtual machine is isolated from the host operating system and other virtual any base server operating system. It has direct access to hardware resources.
machines, making it difficult for malicious code to spread from one virtual machine Examples of Type 1 hypervisors include VMware ESXi, Citrix XenServer, and
to another. Microsoft Hyper-V hypervisor.
Performance: Hardware-based virtualization provides good performance as the Pros & Cons of Type-1 Hypervisor:
hypervisor has direct access to the physical hardware, which means that virtual Pros: Such kinds of hypervisors are very efficient because they have direct access to
machines can achieve close to native performance. the physical hardware resources(like Cpu, Memory, Network, and Physical storage).
Resource allocation: Hardware-based virtualization allows for flexible allocation of This causes the empowerment of the security because there is nothing any kind of
hardware resources such as CPU, memory, and I/O bandwidth to virtual machines. the third party resource so that attacker couldn't compromise with anything.
Server Virtualization is most important part of Cloud Computing. So, Talking about Cons: One problem with Type-1 hypervisors is that they usually need a dedicated
Cloud Computing, it is composed of two words, cloud and computing. Cloud means separate machine to perform their operation and to instruct different VMs and
Internet and computing means to solve problems with help of computers. control the host hardware resources.
Computing is related to CPU & RAM in digital world. Now Consider situation, You are TYPE-2 Hypervisor: A Host operating system runs on the underlying host system. It is
using Mac OS on your machine but particular application for your project can be also known as 'Hosted Hypervisor". Such kind of hypervisors doesn’t run directly
operated only on Windows. You can either buy new machine running windows or over the underlying hardware rather they run as an application in a Host
create virtual environment in which windows can be installed and used. Second system(physical machine). Basically, the software is installed on an operating system.
option is better because of less cost and easy implementation. This scenario is called Hypervisor asks the operating system to make hardware calls. An example of a Type
Virtualization. In it, virtual CPU, RAM, NIC and other resources are provided to OS 2 hypervisor includes VMware Player or Parallels Desktop. Hosted hypervisors are
which it needed to run. This resources is virtually provided and controlled by an often found on endpoints like PCs. The type-2 hypervisor is very useful for
application called Hypervisor. The new OS running on virtual hardware resources is engineers, and security analysts (for checking malware, or malicious source code and
collectively called Virtual Machine (VM). newly developed applications).
Advantages of Server Virtualization: Pros & Cons of Type-2 Hypervisor:
Each server in server virtualization can be restarted separately without affecting the Pros: Such kind of hypervisors allows quick and easy access to a guest Operating
operation of other virtual servers. System alongside the host machine running. These hypervisors usually come with
Server virtualization lowers the cost of hardware by dividing a single server into additional useful features for guest machines. Such tools enhance the coordination
several virtual private servers. between the host machine and the guest machine.
One of the major benefits of server virtualization is disaster recovery. In server Cons: Here there is no direct access to the physical hardware resources so the
virtualization, data may be stored and retrieved from any location and moved rapidly efficiency of these hypervisors lags in performance as compared to the type-1
and simply from one server to another. hypervisors, and potential security risks are also there an attacker can compromise
It enables users to keep their private information in the data centers. the security weakness if there is access to the host operating system so he can also
OS-Based Virtualization works as follows:The host OS kernel is shared among all access the guest operating system.
containers, unlike full virtualization (e.g., VMs) where each VM has its own kernel. Security Issues and Recommendations in Cloud Computing
The kernel enforces isolation between containers using namespaces (for process, Cloud computing offers flexibility and scalability, but it also introduces several
network, filesystem isolation) and cgroups (control groups) for resource allocation security challenges. Below is a brief overview of common issues and corresponding
(CPU, memory, disk I/O, network). recommendations.
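The least-privilege IAM recommendation amounts to denying every action a role's permission set does not explicitly contain. A minimal role-based access check (role and permission names are illustrative, not any provider's scheme):

```python
# Role -> set of permissions that role is allowed to use.
ROLE_PERMISSIONS = {
    "viewer": {"storage:read"},
    "editor": {"storage:read", "storage:write"},
    "admin":  {"storage:read", "storage:write", "iam:manage"},
}

def is_allowed(role, permission):
    """Least-privilege check: an action is denied unless the role's
    permission set explicitly contains it (deny by default)."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("viewer", "storage:read"))   # True
print(is_allowed("viewer", "storage:write"))  # False
print(is_allowed("intern", "storage:read"))   # False: unknown role
```

The deny-by-default stance is the important design choice: forgetting to register a role removes access rather than granting it.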
cgroups limit and prioritize resource usage (CPU, memory, disk, network) per 🚨 Common Security Issues
container. 1.Data Breaches •Unauthorized access to sensitive data stored in the cloud.
The kernel ensures that a container cannot exceed its allocated resources (unless 2.Data Loss •Accidental deletion, corruption, or ransomware attacks.
explicitly allowed). 3.Insecure APIs •Poorly secured APIs can be exploited to gain unauthorized access.
Namespaces prevent processes in one container from seeing or interfering with 4.Account Hijacking •Stolen credentials can give attackers access to cloud
processes in another. resources.
Programs inside a container cannot access resources outside unless explicitly 5.Insider Threats •Malicious or careless insiders can leak or misuse data.
granted (e.g., mounted volumes, network ports). 6.Lack of Compliance •Failure to meet legal and regulatory requirements (e.g.,
The overhead comes from kernel-level isolation mechanisms (namespaces, cgroups), GDPR, HIPAA).
but it’s minimal compared to full virtualization. 7.Denial of Service Attacks •Overwhelming cloud services to make them
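The namespace-and-cgroup model described above can be sketched in a few lines of Python. This is a toy simulation only: `Kernel` and `Container` are invented names, and real cgroups are configured through kernel interfaces (e.g., files under /sys/fs/cgroup), not a Python API.

```python
# Toy model of cgroup-style resource accounting (NOT real kernel code).
# A "kernel" tracks per-container memory usage and rejects allocations
# that would exceed the container's memory limit, mirroring how cgroups
# cap resource usage per container.

class Container:
    def __init__(self, name, mem_limit_mb):
        self.name = name
        self.mem_limit_mb = mem_limit_mb   # cgroup-style memory cap
        self.mem_used_mb = 0

class Kernel:
    def __init__(self):
        self.containers = {}

    def create_container(self, name, mem_limit_mb):
        self.containers[name] = Container(name, mem_limit_mb)

    def allocate(self, name, mb):
        """Grant memory only if it stays within the container's limit."""
        c = self.containers[name]
        if c.mem_used_mb + mb > c.mem_limit_mb:
            return False            # enforcement: allocation denied
        c.mem_used_mb += mb
        return True

kernel = Kernel()
kernel.create_container("web", mem_limit_mb=512)
kernel.create_container("db", mem_limit_mb=1024)

print(kernel.allocate("web", 400))   # True: within the 512 MB cap
print(kernel.allocate("web", 200))   # False: 400 + 200 exceeds the cap
print(kernel.allocate("db", 600))    # True: "db" has its own separate limit
```

Note how each container's accounting is independent: "db" exceeding its own budget would not affect "web", which is the isolation property the text describes.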
What Is Memory Virtualization? unavailable.
Memory virtualization is like having a super smart organizer for your computer brain 8.Poor Configuration •Misconfigured storage or access settings can expose data.
(Running Memory -RAM). Imagine your computer brain is like a big bookshelf, and all ✅ Recommendations to Improve Cloud Security
the apps and programs you installed or are running are like books. 1.Data Encryption •Encrypt data at rest and in transit using strong algorithms.
Memory virtualization is the librarian who arranges these books so your computer 2.Multi-Factor Authentication (MFA)
can easily find and use them quickly. It also ensures that each application gets a fair •Add an extra layer of protection beyond just passwords.
share of the memory to run smoothly and prevents mess, which ultimately makes 3.Regular Backups •Keep automated and manual backups for data recovery.
your computer brain (RAM) more organized (tidy) and efficient. 4.Secure API Management
In technical language, memory virtualization is a technique that abstracts, manages, •Use API gateways, rate limiting, and access tokens to protect APIs.
and optimizes physical memory (RAM) used in computer systems. It creates a layer 5.Identity and Access Management (IAM)
of abstraction between the RAM and the software running on your computer. This •Follow the principle of least privilege for user roles and permissions.
layer enables efficient memory allocation to different processes, programs, and 6.Monitoring and Logging
virtual machines. •Use cloud-native tools to monitor activity and set alerts for anomalies.
Memory virtualization helps optimize resource utilization and secures the smooth 7.Security Updates and Patch Management
operations of multiple applications on shared physical memory (RAM) by ensuring •Regularly update systems, containers, and services to fix vulnerabilities.
each application gets the required memory to work flawlessly. 8.Compliance and Auditing
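The abstraction layer described above can be illustrated with a minimal page-table sketch: each process sees its own contiguous virtual pages, while a manager maps them to whatever physical frames are free. The class and method names are invented for illustration; real systems do this in hardware (the MMU) together with the OS.

```python
# Minimal sketch of memory virtualization: virtual pages are mapped to
# physical frames, so two processes can use the same virtual address
# while actually touching different physical RAM.

PAGE_SIZE = 4096

class MemoryManager:
    def __init__(self, num_frames):
        self.free_frames = list(range(num_frames))
        self.page_tables = {}            # process id -> {virtual page: frame}

    def alloc_page(self, pid, vpage):
        frame = self.free_frames.pop(0)  # pick any free physical frame
        self.page_tables.setdefault(pid, {})[vpage] = frame

    def translate(self, pid, vaddr):
        """Translate a virtual address to a physical address."""
        vpage, offset = divmod(vaddr, PAGE_SIZE)
        frame = self.page_tables[pid][vpage]
        return frame * PAGE_SIZE + offset

mm = MemoryManager(num_frames=8)
mm.alloc_page(pid=1, vpage=0)   # process 1, virtual page 0 -> frame 0
mm.alloc_page(pid=2, vpage=0)   # process 2, virtual page 0 -> frame 1

# Both processes use virtual address 100, but reach different physical RAM:
print(mm.translate(1, 100))     # 100   (frame 0 * 4096 + 100)
print(mm.translate(2, 100))     # 4196  (frame 1 * 4096 + 100)
```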
Network Virtualization is a process of logically grouping physical networks and •Ensure cloud deployments meet industry standards and undergo regular audits.
making them operate as single or multiple independent networks called Virtual 9.Security Awareness Training
Networks. •Educate employees about phishing, password hygiene, and safe practices.
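One of the API-security recommendations above (protecting APIs with access tokens) can be sketched with Python's standard-library `hmac` module. This is a simplified illustration, not a production token scheme; real systems use standards such as OAuth 2.0 or JWT with expiry, scopes, and key rotation.

```python
# Sketch of token-based API protection: the server issues a token signed
# with a server-side secret, and later verifies the signature before
# serving a request. A forged token fails verification.

import hmac
import hashlib

SECRET_KEY = b"server-side-secret"          # never shipped to clients

def issue_token(user_id: str) -> str:
    sig = hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}.{sig}"

def verify_token(token: str) -> bool:
    user_id, _, sig = token.partition(".")
    expected = hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels when comparing signatures
    return hmac.compare_digest(sig, expected)

token = issue_token("alice")
print(verify_token(token))                  # True: untampered token
print(verify_token("mallory." + "0" * 64))  # False: forged signature
```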
Tools for Network Virtualization : VMware pioneered virtualization, enabling multiple virtual machines (VMs) with
Physical switch OS - It is where the OS must have the functionality of network their own operating systems to run on a single physical server using a hypervisor.
virtualization. This led to massive benefits like server consolidation, cost savings, and improved
Hypervisor - It is which uses third-party software or built-in networking and the resource utilization.
functionalities of network virtualization. As cloud computing emerged, leveraging virtualization to deliver on-demand IT
---The basic functionality of the OS is to give the application or the executing process resources over the internet, VMware extended its focus. They built out capabilities
with a simple set of instructions. System calls that are generated by the OS and for private clouds (e.g., vSphere, vCloud Suite), allowing enterprises to create cloud-
executed through the libc library are comparable to the service primitives given at like environments in their own data centers.
the interface between the application and the network through the SAP (Service Later, VMware embraced hybrid and multi-cloud strategies, most notably with
Access Point). VMware Cloud on AWS, to allow seamless workload portability and consistent
Functions of Network Virtualization : operations across on-premises environments and public clouds. More recently, with
It enables the functional grouping of nodes in a virtual network. products like Tanzu, they've integrated support for modern containerized
It enables the virtual network to share network resources.
applications alongside traditional VMs, solidifying their position as a key enabler for discontinued in January 2021 and is no longer functional. Users now need to use
consistent infrastructure and operations in a multi-cloud world. native OS printing, printer manufacturers' apps, or third-party cloud print solutions.
What is Microsoft Hyper-V?
When Microsoft Hyper-V debuted in 2008, virtualization was just beginning to
become mainstream. Not many people knew what it was, and even fewer understood
what they could do with it. It all seemed conceptually complicated, risky, and
challenging to implement and maintain. A lot has changed in only a little time. Now,
virtualization is everywhere. Data centers are built around it. Developers rely on it.
Cloud providers depend on it. Microsoft's product has been advancing along with the
growing interest: Microsoft Hyper-V has been widely adopted and is rapidly gaining
on VMware ESXi, arguably the market leader in enterprise virtualization.
Unit 5
Google Cloud Print was a service that allowed users to print from any web-connected
device to any printer over the internet, eliminating the need for traditional printer
drivers.
Key features:
* Cloud-based: print jobs were sent to Google's servers.
* Remote printing: enabled printing from anywhere with internet access.
* Device agnostic: worked across various operating systems and applications.
* Printer sharing: allowed easy sharing of printers with others.
* "Cloud-Ready" printers: some printers connected directly; older ones used a software connector.
* Discontinued: Google Cloud Print was officially shut down on January 1, 2021, and is no longer functional. Users must now use native OS printing, printer manufacturers' apps, or third-party cloud print solutions.
The acronym "EMC" can refer to a few different things, but in the context of "EMC
IT" it most commonly refers to:
1. EMC Corporation (now Dell EMC): EMC Corporation was a major multinational
technology company that specialized in data storage, information security,
virtualization, analytics, cloud computing, and other products and services that
enabled organizations to store, manage, protect, and analyze their information.
EMC's (and now Dell EMC's) products and services include:
* Data Storage: a wide range of storage solutions such as SAN (Storage Area
Network), NAS (Network Attached Storage), unified storage systems, all-flash arrays
(e.g., Symmetrix, VMAX, Unity, PowerStore), and object storage (e.g., ECS, ObjectScale).
* Data Protection: backup and recovery solutions, data deduplication (e.g., Data
Domain), and archiving.
* Software-Defined Storage: solutions like PowerFlex and PowerScale.
* Converged and Hyperconverged Infrastructure (HCI): integrated compute,
networking, storage, and data protection in single platforms (e.g., VxRail).
* Content Management and Information Governance: solutions for managing
unstructured data.
* Security: via its former subsidiary RSA Security.
* Virtualization: through its former subsidiary VMware (now a separate entity,
VMware by Broadcom, after being spun off by Dell Technologies).
2. Electromagnetic Compatibility (EMC):
* Electromagnetic Compatibility (EMC) is a field of electrical engineering that deals
with the unintentional generation, propagation, and reception of electromagnetic
energy, which may cause unwanted effects such as electromagnetic interference (EMI).
* The goal of EMC is to ensure that electrical and electronic equipment can function
acceptably in its electromagnetic environment without causing, or being susceptible
to, interference from other equipment.
* This involves controlling emissions: limiting the electromagnetic energy a device
generates and releases into the environment.
The Captiva Cloud Toolkit, now officially known as the OpenText Captiva Cloud
Capture Toolkit, is a sophisticated Software Development Kit (SDK) engineered to
enable web applications to directly interact with and control document scanners and
multifunction peripherals (MFPs). It is a crucial tool for organizations looking to
integrate physical document capture seamlessly into their modern, web-based
business processes.
What it is and its purpose: at its core, the Captiva Cloud Toolkit empowers
developers to embed document scanning and imaging functionality directly into their
web applications. Before this type of solution, web-based scanning was often
cumbersome, requiring browser plugins (like ActiveX or Java applets) that posed
security risks, compatibility issues, and deployment challenges. The toolkit
eliminates these hurdles by providing a more modern and robust architecture. The
ultimate goal is to facilitate distributed capture, allowing users in various locations
(e.g., remote offices, customer service centers, or even home offices) to digitize
paper documents at the point of origin and instantly feed them into centralized
enterprise content management (ECM), business process management (BPM), or
other line-of-business systems.
Google Cloud Storage is a secure, scalable, and high-performance storage solution
that lets businesses store, manage, and retrieve data effortlessly. It is designed for
big data analytics, media storage, backups, and disaster recovery, making it a go-to
option for enterprises looking for cost-effective cloud storage. With automatic
scalability, robust security, and deep integration with Google Cloud services, it
simplifies data management at any scale.
Types of Google Cloud Storage: we can store our data on a remote server with
Google Cloud storage and access that data whenever we need it. Google Cloud
Platform provides a number of cloud storage options, each with special features and
applications:
* Google Cloud Persistent Disk (block storage)
* Google Cloud Filestore (network file storage)
* Google Cloud Storage (object storage)
* Google Cloud Storage for Firebase
* Google Cloud Storage Transfer Service
Google Cloud Connect was a free cloud computing plug-in for Microsoft Office 2003,
2007, and 2010 on Windows that could automatically store and synchronize any
Microsoft Word document, PowerPoint presentation, or Excel spreadsheet to Google
Docs in Google Docs or Microsoft Office formats. The Google Docs copy was
automatically updated each time the Microsoft Office document was saved. Microsoft
Office documents could be edited offline and synchronized later when online. Google
Cloud Sync maintained previous Microsoft Office document versions and allowed
multiple users to collaborate, working on the same document at the same time.
Google Cloud Connect was discontinued on April 30, 2013, since, according to
Google, all of Cloud Connect's features are available through Google Drive.
Google App Engine (GAE) is a fully managed, serverless Platform as a Service (PaaS)
for building and hosting highly scalable web applications and mobile backends.
Key Features:
* Serverless & Fully Managed: developers focus on code; Google handles
infrastructure, scaling, and maintenance.
* Automatic Scaling: dynamically adjusts resources based on application traffic.
* Multi-language Support: supports Python, Java, Node.js, Go, PHP, Ruby, .NET, and
custom runtimes.
* Integrated Services: includes built-in services for databases (Cloud Datastore,
Cloud SQL), storage, task queues, caching, logging, and monitoring.
* Traffic Splitting: enables A/B testing and controlled rollouts of new versions.
* Pay-as-you-go: costs are based only on resources consumed, with a free tier
available.
AWS stands for Amazon Web Services. It is the world's leading and most
comprehensive cloud computing platform, provided by Amazon.
In essence, AWS offers on-demand access to a vast range of IT infrastructure
services over the internet, with a pay-as-you-go pricing model. This means that
instead of buying and maintaining their own physical data centers and servers,
organizations can simply rent computing power, storage, databases, networking,
analytics, machine learning, and many other services from AWS as needed.
Key characteristics of AWS:
* Cloud Computing: delivery of IT resources (compute, storage, databases, etc.) over
the internet, rather than owning and managing them on-premises.
* On-Demand: users can instantly provision and de-provision resources, scaling up
or down based on fluctuating needs.
* Pay-as-you-go: you only pay for the specific services you use and for the duration
you use them, eliminating large upfront investments.
* Global Infrastructure: AWS has a vast global network of data centers, providing
high availability, fault tolerance, and low latency to users worldwide.
* Extensive Service Portfolio: AWS offers over 200 fully featured services across
various categories like compute (EC2, Lambda), storage (S3), databases (RDS,
DynamoDB), machine learning (SageMaker), security (IAM), and many more.
AWS is widely adopted by companies of all sizes, from startups to large enterprises,
for a multitude of use cases, including hosting websites and applications, data
analytics, machine learning, disaster recovery, and more.
Amazon Elastic Compute Cloud (Amazon EC2) is a fundamental service within
Amazon Web Services (AWS) that provides resizable compute capacity in the cloud.
In simpler terms, it allows you to rent virtual servers (called "instances") to run your
applications, websites, and other workloads without having to buy, own, or maintain
any physical hardware.
The "Elastic" in EC2 refers to its ability to easily scale your computing capacity up or
down. You can launch thousands of server instances simultaneously when you need
more power and then shut them down when you don't, paying only for the time you
use them.
Key Features:
* Virtual Servers (Instances): you can choose from a wide variety of instance types,
each optimized for different workloads (e.g., general purpose, compute-optimized,
memory-optimized, storage-optimized, GPU-optimized). These instances offer
different combinations of CPU, memory, storage, and networking capacity.
* Operating System Choice: EC2 supports various operating systems, including
Amazon Linux, Red Hat Enterprise Linux, Ubuntu, Microsoft Windows Server, and
even macOS.
* Amazon Machine Images (AMIs): these are pre-configured templates that include
an operating system and often pre-installed software, allowing you to quickly launch
instances with your desired setup. You can also create your own custom AMIs.
* Storage Options:
* Amazon EBS (Elastic Block Store): provides persistent block storage volumes that
can be attached to EC2 instances, acting like a virtual hard drive.
* Instance Store: temporary storage physically attached to the host computer of the
EC2 instance, ideal for temporary data or caching.
* Networking:
* Amazon VPC (Virtual Private Cloud): allows you to create isolated virtual networks
within AWS, giving you control over your network environment.
What is Amazon S3? Amazon S3 is a Simple Storage Service in AWS that stores files
of different types, like photos, audio, and videos, as objects, providing scalability and
security. It allows users to store and retrieve any amount of data at any point in time
from anywhere on the web. It provides features such as extremely high availability,
security, and simple connection to other AWS services.
What is Amazon S3 used for?
Amazon S3 is used for various purposes in the cloud because of its robust features
for scaling and securing data. It helps people with all kinds of use cases from fields
such as mobile/web applications, big data, machine learning, and many more. The
following are a few wide uses of the Amazon S3 service:
Data Storage: Amazon S3 is a strong option for scaling both small and large storage
applications. It helps store and retrieve data for data-intensive applications as
needed, in good time.
Backup and Recovery: many organizations use Amazon S3 to back up their critical
data and maintain data durability and availability for recovery needs.
Hosting Static Websites: Amazon S3 can store HTML, CSS, and other web content
from users/developers, allowing them to host static websites with low-latency access
and cost-effectiveness.
Data Archiving: integration with Amazon S3 Glacier provides a cost-effective
solution for long-term storage of data that is accessed infrequently.
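The traffic-splitting idea mentioned above can be illustrated with a tiny routing sketch: a fixed percentage of users is sent to a new version, and hashing the user ID keeps routing "sticky" (the same user always gets the same version). This mimics the idea behind App Engine's traffic splitting; it is not the actual App Engine implementation or API.

```python
# Toy illustration of traffic splitting for A/B testing: route roughly
# `canary_percent` of users to version "v2" and the rest to "v1".
# Hashing the user ID makes the assignment deterministic per user.

import hashlib

def pick_version(user_id: str, canary_percent: int) -> str:
    digest = hashlib.sha256(user_id.encode()).digest()
    bucket = (digest[0] * 256 + digest[1]) % 100   # stable bucket in [0, 100)
    return "v2" if bucket < canary_percent else "v1"

# The same user is always routed to the same version:
print(pick_version("user-42", 10) == pick_version("user-42", 10))

# Roughly 10% of a large user population lands on the canary:
users = [f"user-{i}" for i in range(10_000)]
share = sum(pick_version(u, 10) == "v2" for u in users) / len(users)
print(round(share, 2))
```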
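The "elastic" scaling idea described above can be sketched as a simple scaling policy: grow or shrink the instance count based on load. The thresholds and function name here are invented for illustration; in real AWS this kind of decision is made by an Auto Scaling group based on metrics such as average CPU utilization.

```python
# Sketch of an elastic scaling policy: add an instance when average CPU
# is high, remove one when it is low, and always stay within bounds.

def desired_instances(current: int, avg_cpu: float,
                      low: float = 30.0, high: float = 70.0,
                      min_n: int = 1, max_n: int = 10) -> int:
    if avg_cpu > high:               # overloaded -> scale out
        target = current + 1
    elif avg_cpu < low:              # underused -> scale in (save cost)
        target = current - 1
    else:                            # within band -> keep capacity
        target = current
    return max(min_n, min(max_n, target))

fleet = 2
for cpu in [85, 90, 75, 40, 20, 10]:      # a day's load pattern
    fleet = desired_instances(fleet, cpu)
    print(cpu, "->", fleet)               # fleet grows to 5, then shrinks to 3
```

The pay-as-you-go benefit comes from the scale-in branch: idle instances are released instead of sitting on a purchase you already made.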
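The object model S3 exposes can be sketched with a minimal in-memory store: buckets hold objects addressed by key, and keys can be listed by prefix (S3 has no real directories; "folders/" are just key prefixes). The class and method names below are illustrative, not the real S3 API; applications talk to the actual service over HTTP or through an SDK such as boto3.

```python
# Minimal in-memory sketch of the S3 object model: put/get objects by
# key inside a bucket, and list keys by prefix.

class ObjectStore:
    def __init__(self):
        self.buckets = {}

    def create_bucket(self, bucket):
        self.buckets[bucket] = {}

    def put_object(self, bucket, key, data: bytes):
        self.buckets[bucket][key] = data

    def get_object(self, bucket, key) -> bytes:
        return self.buckets[bucket][key]

    def list_objects(self, bucket, prefix=""):
        return sorted(k for k in self.buckets[bucket] if k.startswith(prefix))

store = ObjectStore()
store.create_bucket("my-app-backups")
store.put_object("my-app-backups", "2024/01/db.dump", b"...")
store.put_object("my-app-backups", "2024/02/db.dump", b"...")
store.put_object("my-app-backups", "logs/app.log", b"...")

print(store.list_objects("my-app-backups", prefix="2024/"))
# ['2024/01/db.dump', '2024/02/db.dump']
```

Listing by prefix is how the backup and static-website use cases above organize their data without any directory hierarchy.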
Common Cloud Computing Challenges: cloud computing is the provisioning of
resources like data and storage on demand, that is, in real time. It has proven to be
revolutionary in the IT industry, with its market valuation growing at a rapid rate.
Cloud development has proved beneficial not only for huge public and private
enterprises but for small-scale businesses as well, as it helps to cut costs. It is
estimated that more than 94% of businesses will increase their spending on the cloud
by more than 45%. This has also resulted in more, and higher-paying, jobs for cloud
developers.
1. Data Security and Privacy: data security is a major concern when switching to
cloud computing. User or organizational data stored in the cloud is critical and
private. Even if the cloud service provider assures data integrity, it is your
responsibility to carry out user authentication and authorization, identity
management, data encryption, and access control. Security issues on the cloud
include identity theft, data breaches, malware infections, and more, which eventually
decrease trust among the users of your applications. This can in turn lead to loss of
revenue alongside reputation and stature.
2. Cost Management: even though almost all cloud service providers have a "Pay As
You Go" model, which reduces the overall cost of the resources being used, there are
times when an enterprise using cloud computing incurs huge costs. Under-optimized
resources, say servers that are not being used to their full potential, add up to
hidden costs.
3. Multi-Cloud Environments: due to an increase in the options available to
companies, enterprises no longer use a single cloud but depend on multiple cloud
service providers. Most of these companies use hybrid cloud tactics, and close to 84%
are dependent on multiple clouds.
4. Performance Challenges: performance is an important factor when considering
cloud-based solutions. If the performance of the cloud is not satisfactory, it can drive
away users and decrease profits. Even a little latency while loading an app or a web
page can result in a huge drop in the percentage of users.
6. High Dependence on Network: since cloud computing deals with provisioning
resources in real time, it involves enormous amounts of data transfer to and from its
servers. This is only made possible by the availability of high-speed networks.
How does Microsoft Azure work?
It is a private and public cloud platform that helps developers and IT professionals
build, deploy, and manage applications. It uses the technology known as
virtualization. Virtualization separates the tight coupling between the hardware and
the operating system using an abstraction layer called a hypervisor. A hypervisor
emulates all the functions of a computer in a virtual machine; it can run multiple
virtual machines at the same time, and each virtual machine can run any operating
system, such as Windows or Linux.
Azure takes this virtualization technique and repeats it on a massive scale in data
centers owned by Microsoft. Each data center has many racks filled with servers, and
each server includes a hypervisor to run multiple virtual machines. A network switch
provides connectivity to all those servers.
What is SQS (Amazon Simple Queue Service) in AWS?
Amazon Simple Queue Service (SQS) lets you send, store, and receive messages
between various software components at any volume, without losing messages and
without requiring other services to be available. Basically, Amazon SQS is a
distributed queue system.
Queues store messages so they can be received and used by a consumer later on.
The consumer is whatever retrieves a message from a queue; it can be anything able
to make an API call to SQS (an application, a microservice, a human, etc.). Using this
paradigm we implement decoupling.
Decoupling allows incoming requests to be processed later: when the consumer is
overloaded, it waits before getting another message. This way our applications
become more fault-tolerant. Amazon SQS is a fully managed queue service in the
AWS cloud.
AWS SQS Architecture
In a distributed messaging system, three primary elements are involved: the system
components, the queue (which is distributed across Amazon SQS servers), and the
messages stored within the queue. In this scenario, your system includes multiple
producers (which send messages to the queue) and consumers (which retrieve
messages from the queue). The queue stores messages (such as A, B, C, D, and E)
across multiple Amazon SQS servers, ensuring redundancy and high availability.
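The queue-based decoupling described above can be sketched with Python's standard-library `queue` as a stand-in for the managed service. Producers and consumers never call each other directly; the queue absorbs bursts, so a slow consumer only delays processing instead of losing requests. This is only an analogy for the pattern, not the SQS API itself.

```python
# Producer/consumer decoupling through a queue (stand-in for SQS).

import queue
import threading

q = queue.Queue()                 # the "distributed queue"
results = []

def producer():
    for i in range(5):
        q.put(f"message-{i}")     # analogous to sending a message to SQS

def consumer():
    while True:
        msg = q.get()             # analogous to receiving a message
        if msg is None:           # sentinel: no more work
            break
        results.append(msg.upper())   # "process" the message
        q.task_done()

t = threading.Thread(target=consumer)
t.start()
producer()
q.put(None)                       # tell the consumer to stop
t.join()
print(results)                    # all five messages processed, none lost
```

The producer finishes immediately even if the consumer is busy, which is exactly the fault-tolerance property the text attributes to decoupling.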
What is Microsoft Assessment and Planning (MAP) Toolkit?
Microsoft Assessment and Planning (MAP) Toolkit is a free utility IT can use to
determine whether its infrastructure is prepared for a migration to a new operating
system, server version or cloud-based deployment.
An IT professional can run MAP Toolkit on their device and take an inventory of the
devices, software, users and infrastructure associated with any networks they are
connected to. Microsoft now recommends that customers use Azure Migrate rather
than the MAP toolkit.
Microsoft Assessment and Planning Toolkit is made up of four main components,
as follows:
MAPSetup.exe contains MAP as well as the files IT administrators need to set up a
local SQL Server Database Engine.
readme_en.htm details what administrators need to run MAP Toolkit and known
issues.
MAP_Sample_Documents.zip provides examples of the types of reports and
proposals MAP Toolkit creates.
MAP_Training_Kit.zip explains how to use MAP Toolkit and provides a sample
database of the information MAP Toolkit can provide.
"IBM SmartCloud" was IBM's branding for its comprehensive suite of cloud
computing offerings that launched in 2011. It represented IBM's initial major push
into the cloud market, encompassing various services across different cloud models.
Essentially, IBM SmartCloud aimed to provide:
* Infrastructure as a Service (IaaS): Like virtual servers (VMs) and storage, known as
IBM SmartCloud Enterprise and IBM SmartCloud Enterprise+. This allowed
businesses to rent computing power and storage on demand.
* Platform as a Service (PaaS): For developers to build, run, and manage applications
without managing the underlying infrastructure.
* Software as a Service (SaaS): Pre-built, ready-to-use applications delivered over
the internet, such as IBM SmartCloud Notes (email) and IBM SmartCloud Docs
(collaboration).
* Hybrid Cloud Capabilities: Tools and solutions to help enterprises integrate their
on-premises IT infrastructure with IBM's public cloud.
* Management and Orchestration: Services like IBM SmartCloud Application
Performance Management and IBM SmartCloud Orchestrator focused on managing
and automating cloud environments and applications.
Current Status:
While "IBM SmartCloud" was a significant brand in the early 2010s, IBM has since
evolved its cloud strategy. The offerings under the "SmartCloud" banner have largely
been integrated into, or superseded by, the broader IBM Cloud platform.
Today, if you're looking for IBM's cloud services, you'd explore IBM Cloud, which is a
more unified and expanded platform offering a wider range of services including
compute (virtual machines, bare metal servers, containers), AI, IoT, blockchain, and
more, building upon the foundations laid by SmartCloud and the later acquisition of
SoftLayer.