
UNIT V APP IMPLEMENTATION IN CLOUD

Cloud providers Overview – Virtual Private Cloud – Scaling (Horizontal and Vertical)
– Virtual Machines, Ethernet and Switches – Docker Container – Kubernetes

Cloud Computing Overview

Cloud computing provides a means of accessing applications as utilities over the Internet. It allows
us to create, configure, and customize applications online.
What is Cloud?
The term cloud refers to a network or the Internet. In other words, the cloud is something that is
present at a remote location. The cloud can provide services over public and private networks, i.e.,
WAN, LAN or VPN.
Applications such as e-mail, web conferencing, and customer relationship management (CRM) run in the
cloud.
What is Cloud Computing?
Cloud computing refers to manipulating, configuring, and accessing hardware and software
resources remotely. It offers online data storage, infrastructure, and applications.

Cloud computing offers platform independence, as the software is not required to be installed locally
on the PC. Hence, cloud computing makes our business applications mobile and collaborative.
Basic Concepts
There are certain services and models working behind the scenes that make cloud computing feasible
and accessible to end users. Following are the working models for cloud computing:
 Deployment Models
 Service Models

Deployment Models

Deployment models define the type of access to the cloud, i.e., where and how the cloud is located. A cloud can
have any of four types of access: Public, Private, Hybrid, and Community.

Public Cloud
The public cloud allows systems and services to be easily accessible to the general public. Public cloud
may be less secure because of its openness.
Private Cloud
The private cloud allows systems and services to be accessible within an organization. It is more
secured because of its private nature.
Community Cloud
The community cloud allows systems and services to be accessible by a group of organizations.
Hybrid Cloud
The hybrid cloud is a mixture of public and private cloud, in which the critical activities are performed
using private cloud while the non-critical activities are performed using public cloud.

Service Models
Cloud computing is based on service models. These are categorized into three basic service models
which are -
 Infrastructure-as-a-Service (IaaS)
 Platform-as-a-Service (PaaS)
 Software-as-a-Service (SaaS)
Anything-as-a-Service (XaaS) is yet another service model, which includes Network-as-a-Service,
Business-as-a-Service, Identity-as-a-Service, Database-as-a-Service or Strategy-as-a-Service.
Infrastructure-as-a-Service (IaaS) is the most basic level of service. Each of the service models
inherits the security and management mechanisms from the underlying model, as shown in the following
diagram:

Infrastructure-as-a-Service (IaaS)
IaaS provides access to fundamental resources such as physical machines, virtual machines, virtual
storage, etc.
Platform-as-a-Service (PaaS)
PaaS provides the runtime environment for applications, development and deployment tools, etc.
Software-as-a-Service (SaaS)
The SaaS model allows end users to use software applications as a service.
History of Cloud Computing
The concept of cloud computing came into existence in the 1950s with the implementation of
mainframe computers, accessible via thin/static clients. Since then, cloud computing has evolved
from static clients to dynamic ones and from software to services. The following diagram explains the
evolution of cloud computing:

Benefits
Cloud Computing has numerous advantages. Some of them are listed below -
 One can access applications as utilities, over the Internet.
 One can manipulate and configure the applications online at any time.
 It does not require installing any software to access or manipulate cloud applications.
 Cloud Computing offers online development and deployment tools, programming runtime
environment through PaaS model.
 Cloud resources are available over the network in a manner that provides platform-independent
access to any type of client.
 Cloud Computing offers on-demand self-service. The resources can be used without interaction
with cloud service provider.
 Cloud Computing is highly cost effective because it operates at high efficiency with optimum
utilization. It just requires an Internet connection.
 Cloud Computing offers load balancing that makes it more reliable.
Risks related to Cloud Computing
Although cloud computing is a promising innovation with various benefits in the world of computing,
it comes with risks. Some of them are discussed below:

Security and Privacy : This is the biggest concern about cloud computing. Since data management and
infrastructure management in the cloud are provided by a third party, it is always a risk to hand over sensitive
information to cloud service providers. Although cloud computing vendors ensure highly secure,
password-protected accounts, any sign of a security breach may result in loss of customers and business.

Lock In : It is very difficult for the customers to switch from one Cloud Service Provider (CSP) to
another. It results in dependency on a particular CSP for service.

Isolation Failure : This risk involves the failure of isolation mechanism that separates storage, memory, and
routing between the different tenants.

Management Interface Compromise : In the case of a public cloud provider, the customer management
interfaces are accessible through the Internet.

Insecure or Incomplete Data Deletion : It is possible that data requested for deletion may not get deleted. This happens
because of either of the following reasons:
 Extra copies of data are stored but are not available at the time of deletion.
 The disk that stores data of multiple tenants is destroyed.
Characteristics of Cloud Computing
There are five key characteristics of cloud computing. They are shown in the following diagram:
On Demand Self Service : Cloud computing allows users to use web services and resources on
demand. One can log on to a website at any time and use them.

Broad Network Access : Since cloud computing is completely web based, it can be accessed from anywhere
and at any time.

Resource Pooling: Cloud computing allows multiple tenants to share a pool of resources. One can
share single physical instance of hardware, database and basic infrastructure.

Rapid Elasticity : It is very easy to scale the resources vertically or horizontally at any time. Scaling of
resources means the ability of resources to deal with increasing or decreasing demand. The resources being
used by customers at any given point of time are automatically monitored.

Measured Service : In this model, the cloud provider controls and monitors all aspects of the cloud service.
Resource optimization, billing, capacity planning, etc. depend on it.
Virtual Private Cloud:
o VPC stands for Virtual Private Cloud.
o Amazon Virtual Private Cloud (Amazon VPC) provides a logically isolated area of the AWS cloud
where you can launch AWS resources in a virtual network that you define.
o You have complete control over your virtual networking environment, including a selection of your IP
address range, the creation of subnets, and configuration of route tables and network gateways.
o You can easily customize the network configuration for your Amazon Virtual Private Cloud. For
example, you can create a public-facing subnet for web servers that have access to the internet, and you can
also place your backend systems, such as databases or application servers, in a private-facing subnet.
o You can provide multiple layers of security, including security groups and network access control
lists, to help control access to Amazon EC2 instances in each subnet.
Architecture of VPC

The outer line represents the region, and the region is us-east-1. Inside the region, we have the VPC, and
outside the VPC, we have the internet gateway and the virtual private gateway. The Internet Gateway and Virtual
Private Gateway are the ways of connecting to the VPC. Both these connections go to the router in the
VPC, and the router then directs the traffic to the route table. The route table then directs the traffic to the
Network ACL. The Network ACL is a firewall, much like security groups. Network ACLs are stateless,
with rules that can both allow and deny traffic. You can also block IP addresses with your Network ACL. Next
comes the security group, which acts as another layer of defence around the EC2 instances. The VPC has two subnets,
i.e., a public and a private subnet. In the public subnet, the internet is accessible by an EC2 instance, but in the
private subnet, an EC2 instance cannot access the internet on its own. We can still connect to such instances:
to reach an instance in the private subnet, we first connect to an instance in the public subnet and then SSH
from it into the private subnet. Such an intermediate instance is known as a jump box. In this way, we can connect
from an instance in the public subnet to an instance in the private subnet.
Some address ranges are reserved for private networks (a quick way to check them programmatically is sketched below):
o 10.0.0.0 - 10.255.255.255 (10/8 prefix)
o 172.16.0.0 - 172.31.255.255 (172.16/12 prefix)
o 192.168.0.0 - 192.168.255.255 (192.168/16 prefix)
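
As a quick illustration (not part of the AWS material above), Python's standard ipaddress module can check whether an address falls inside one of these reserved ranges; the sample addresses are arbitrary:

```python
import ipaddress

# RFC 1918 private address ranges, as listed above.
PRIVATE_RANGES = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_private(address: str) -> bool:
    """Return True if the address falls in one of the reserved private ranges."""
    ip = ipaddress.ip_address(address)
    return any(ip in network for network in PRIVATE_RANGES)

print(is_private("10.0.1.25"))     # True  - usable inside a VPC subnet
print(is_private("172.31.255.1"))  # True
print(is_private("8.8.8.8"))       # False - public address
```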
What can we do with a VPC?
o Launch instances in a subnet of your choosing. We can choose our own subnet addressing.
o We can assign custom IP address ranges in each subnet.
o We can configure route tables between subnets.
o We can create an internet gateway and attach it to our VPC (see the sketch after this list).
o It provides much better security control over your AWS resources.
o We can assign security groups to individual instances.
o We also have subnet network access control lists (ACLs).
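
The sketch below shows how several of these steps might look with boto3, the AWS SDK for Python. It is a minimal illustration, not a complete setup: the region, CIDR blocks, and resource values are assumptions, and it presumes AWS credentials are already configured.

```python
import boto3

# Illustrative region; any region supported by your account would work.
ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Create the VPC with a custom IP address range.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# 2. Create a subnet inside the VPC (e.g. a public-facing subnet).
subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")
subnet_id = subnet["Subnet"]["SubnetId"]

# 3. Create an internet gateway and attach it to the VPC.
igw = ec2.create_internet_gateway()
igw_id = igw["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# 4. Create a route table, add a default route to the internet gateway,
#    and associate it with the subnet, which makes the subnet "public".
route_table = ec2.create_route_table(VpcId=vpc_id)
rt_id = route_table["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id,
                 DestinationCidrBlock="0.0.0.0/0",
                 GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_id)

print("Created VPC", vpc_id, "with public subnet", subnet_id)
```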

VPC Peering
o VPC Peering is a networking connection that allows you to connect one VPC with another VPC
through a direct network route using private IP addresses.
o Instances behave as if they were on the same private network.
o You can peer VPCs with other AWS accounts as well as with other VPCs in the same account.
o Peering is in a star configuration, i.e., one central VPC peers with four other VPCs.
o There is no transitive peering.

Note: Non-transitive peering means that only networks that are directly linked can communicate; traffic cannot pass through an intermediate VPC.
o You can peer between regions. Suppose you have one VPC in one region and other VPC in another
region, then you can peer the VPCs between different regions.

Let's understand non-transitive peering through an example.

The above figure shows that VPC B has peered with VPC A, so an instance in VPC B can talk to VPC
A. However, VPC B cannot talk to VPC C through VPC A. This is known as non-transitive peering,
i.e., VPC C and VPC B are not directly linked, so they cannot talk to each other.
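
As a rough sketch of how such a peering connection is requested and accepted with boto3 (the VPC IDs below are placeholders, and cross-account or cross-region peering would need extra parameters such as PeerOwnerId or PeerRegion):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder IDs for two VPCs in the same account.
requester_vpc = "vpc-0aaaaaaaaaaaaaaaa"   # e.g. "VPC B"
accepter_vpc = "vpc-0bbbbbbbbbbbbbbbb"    # e.g. "VPC A"

# Request the peering connection from VPC B to VPC A.
peering = ec2.create_vpc_peering_connection(VpcId=requester_vpc,
                                            PeerVpcId=accepter_vpc)
peering_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# The owner of the accepter VPC must accept the request.
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=peering_id)

# Routes pointing at the peering connection still have to be added to the
# route tables on both sides before instances can talk over private IPs.
print("Peering connection", peering_id, "established")
```

Because peering is non-transitive, this request/accept step has to be repeated for every pair of VPCs that needs to communicate.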
Cloud Scalability:

Cloud scalability in cloud computing refers to the ability to increase or decrease IT resources as
needed to meet changing demand. Scalability is one of the hallmarks of the cloud and the primary
driver of its exploding popularity with businesses.

Data storage capacity, processing power and networking can all be scaled using existing cloud
computing infrastructure. Better yet, scaling can be done quickly and easily, typically with little to no
disruption or down time. Third-party cloud providers have all the infrastructure already in place; in the
past, when scaling with on-premises physical infrastructure, the process could take weeks or months
and require tremendous expense.

Cloud scalability versus cloud elasticity

Cloud providers can offer both elastic and scalable solutions. While these two terms sound similar,
cloud scalability and elasticity are not the same.

Elasticity refers to a system’s ability to grow or shrink dynamically in response to changing workload
demands, like a sudden spike in web traffic. An elastic system automatically adapts to match resources
with demand as closely as possible, in real time. A business that experiences variable and unpredictable
workloads might seek an elastic solution in the public cloud.

A system’s scalability, as described above, refers to its ability to increase workload with existing
hardware resources. A scalable solution enables stable, longer-term growth in a pre-planned manner,
while an elastic solution addresses more immediate, variable shifts in demand. Elasticity and scalability
in cloud computing are both important features for a system, but the priority of one over the other
depends in part on whether your business has predictable or highly variable workloads.

Why is cloud scalable?

A scalable cloud architecture is made possible through virtualization. Unlike physical machines,
whose resources and performance are relatively fixed, virtual machines (VMs) are
highly flexible and can be easily scaled up or down. They can be moved to a different server or hosted
on multiple servers at once; workloads and applications can be shifted to larger VMs as needed.

Third-party cloud providers also have all the vast hardware and software resources already in place to
allow for rapid scaling that an individual business could not achieve cost-effectively on its own.

Benefits of cloud scalability

The major cloud scalability benefits are driving cloud adoption for businesses large and small:

Convenience: Often with just a few clicks, IT administrators can easily add more VMs that are
available without delay—and customized to the exact needs of an organization. That saves precious
time for IT staff. Instead of spending hours and days setting up physical hardware, teams can focus on
other tasks.
Flexibility and speed: As business needs change and grow—including unexpected spikes in demand
—cloud scalability allows IT to respond quickly. Today, even smaller businesses have access to high-
powered resources that used to be cost prohibitive. No longer are companies tied down by obsolete equipment—
they can update systems and increase power and storage with ease.
Cost savings: Thanks to cloud scalability, businesses can avoid the upfront costs of purchasing
expensive equipment that could become outdated in a few years. Through cloud providers, they pay for
only what they use and minimize waste.
Disaster recovery: With scalable cloud computing, you can reduce disaster recovery costs by
eliminating the need for building and maintaining secondary data centers.

When to use cloud scalability

Successful businesses employ scalable business models that allow them to grow quickly and meet
changing demands. It’s no different with their IT. Cloud scalability advantages help businesses stay
nimble and competitive.

Scalability is one of the driving reasons to migrate to the cloud. Whether traffic or workload demands
increase suddenly or grow gradually over time, a scalable cloud solution enables organizations to
respond appropriately and cost-effectively to increase storage and performance.

How to achieve cloud scalability?

Businesses have many options for how to set up a customized, scalable cloud solution via
public cloud, private cloud or hybrid cloud.

There are two basic types of scalability in cloud computing: vertical and horizontal scaling.

With vertical scaling, also known as "scaling up" or "scaling down," you add or subtract power to an
existing cloud server by upgrading memory (RAM), storage or processing power (CPU). Usually this
means that the scaling has an upper limit based on the capacity of the server or machine being scaled;
scaling beyond that often requires downtime.

To scale horizontally (scaling in or out), you add more resources like servers to your system to spread
out the workload across machines, which in turn increases performance and storage capacity. Horizontal
scaling is especially important for businesses with high availability services requiring minimal
downtime.

How do you determine optimal cloud scalability?

Changing business requirements or surging demand often require changes to your scalable cloud
solution. But how much storage, memory and processing power do you really need? Will you scale up
or out?

To determine a right-sized solution, ongoing performance testing is essential. IT administrators must
continually measure factors such as response time, number of requests, CPU load and memory usage.
Scalability testing also measures an application’s performance and ability to scale up or down
depending on user requests.
Automation can also help optimize cloud scalability. You can determine thresholds for usage that
trigger automatic scaling so that there’s no effect on performance. You may also consider a third-
party configuration management service or tool to help manage your scaling needs, goals
and implementation.
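
To make threshold-triggered scaling concrete, here is a toy sketch of a horizontal autoscaling loop. The get_cpu_utilization, add_server, and remove_server functions are hypothetical placeholders for whatever monitoring and provisioning APIs a real environment would expose, and the thresholds are arbitrary.

```python
import random
import time

# Hypothetical thresholds: scale out above 80% CPU, scale in below 20%.
SCALE_OUT_THRESHOLD = 80.0
SCALE_IN_THRESHOLD = 20.0
MIN_SERVERS, MAX_SERVERS = 2, 10

servers = 2  # current number of servers in the pool


def get_cpu_utilization() -> float:
    """Placeholder for a real monitoring call (e.g. a metrics service)."""
    return random.uniform(0, 100)


def add_server() -> None:
    """Placeholder for provisioning a new VM or container instance."""
    print("Scaling out: launching one more server")


def remove_server() -> None:
    """Placeholder for terminating an idle server."""
    print("Scaling in: terminating one server")


for _ in range(5):  # a real controller would loop continuously
    cpu = get_cpu_utilization()
    if cpu > SCALE_OUT_THRESHOLD and servers < MAX_SERVERS:
        add_server()
        servers += 1
    elif cpu < SCALE_IN_THRESHOLD and servers > MIN_SERVERS:
        remove_server()
        servers -= 1
    print(f"CPU {cpu:.0f}% -> pool size {servers}")
    time.sleep(1)
```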

Horizontal and Vertical Scaling In Databases:


Scaling alters the size of a system. In the scaling process, we either compress or expand the system to
meet the expected needs. Scaling can be achieved by adding resources to the current system to meet a smaller
increase in demand, by adding a new system to the existing one, or both.

Types of Scaling:

Scaling can be categorized into 2 types:

Vertical Scaling: When new resources are added to the existing system to meet the expected demand, it is
known as vertical scaling. Consider a rack of servers and resources that comprises the existing system
(as shown in the figure). Now, when the existing system fails to meet the expected needs, and the
expected needs can be met by just adding resources, this is considered vertical scaling. Vertical scaling is based on
the idea of adding more power (CPU, RAM) to existing systems, basically adding more resources.
Vertical scaling is not only easy but also cheaper than horizontal scaling. It also requires less time to put in
place.

Horizontal Scaling: When new server racks are added to the existing system to meet higher
demand, it is known as horizontal scaling.
Consider a rack of servers and resources that comprises the existing system (as shown in the figure).
Now, when the existing system fails to meet the expected needs, and the expected needs cannot be met
by just adding resources, we need to add completely new servers. This is considered horizontal scaling.
Horizontal scaling is based on the idea of adding more machines to our pool of resources. Horizontal
scaling is more difficult and also costlier than vertical scaling. It also requires more time to put in place.
Differences between Horizontal and Vertical Scaling are as follows:

o Definition: Horizontal scaling adds new server racks to the existing system to meet higher demand; vertical scaling adds new resources to the existing system to meet the expected demand.
o Direction: Horizontal scaling expands the size of the existing system horizontally; vertical scaling expands it vertically.
o Upgrades: Horizontal scaling is easier to upgrade; vertical scaling is harder to upgrade and may involve downtime.
o Implementation: Horizontal scaling is difficult to implement; vertical scaling is easy to implement.
o Cost: Horizontal scaling is costlier, as new server racks comprise a lot of resources; vertical scaling is cheaper, as we need to just add new resources.
o Time: Horizontal scaling takes more time to be done; vertical scaling takes less time.
o Resilience: Horizontal scaling offers high resilience and fault tolerance; vertical scaling has a single point of failure.
o Example databases that can be easily scaled: horizontally - Cassandra, MongoDB, Google Cloud Spanner; vertically - MySQL, Amazon RDS.
Virtual Machine (VM) :
A Virtual Machine (VM) is a compute resource that uses software instead of a physical computer
to run programs and deploy apps. One or more virtual “guest” machines run
on a physical “host” machine. Each virtual machine runs its own operating system and functions
separately from the other VMs, even when they are all running on the same host. This means that, for
example, a virtual macOS machine can run on a physical PC.

Virtual machine technology is used for many use cases across on-premises and cloud environments.
More recently, public cloud services are using virtual machines to provide virtual application
resources to multiple users at once, for even more cost efficient and flexible compute.

What are virtual machines used for?

Virtual machines (VMs) allow a business to run an operating system that behaves like a
completely separate computer in an app window on a desktop. VMs may be deployed to
accommodate different levels of processing power needs, to run software that requires a different
operating system, or to test applications in a safe, sandboxed environment.

Virtual machines have historically been used for server virtualization, which enables IT
teams to consolidate their computing resources and improve efficiency. Additionally,
virtual machines can perform specific tasks considered too risky to carry out in a host environment,
such as accessing virus-infected data or testing operating systems. Since the virtual machine
is separated from the rest of the system, the software inside the virtual machine cannot tamper with the
host computer.

How do virtual machines work?


The virtual machine runs as a process in an application window, similar to any other
application, on the operating system of the physical machine. Key files that make up a virtual machine
include a log file, NVRAM setting file, virtual disk file and configuration file.

Advantages of virtual machines

Virtual machines are easy to manage and maintain, and they offer several advantages over physical
machines:

VMs can run multiple operating system environments on a single physical computer, saving physical
space, time and management costs.
Virtual machines support legacy applications, reducing the cost of migrating to a new operating
system. For example, a Linux virtual machine running a distribution of Linux as the guest
operating system can exist on a host server that is running a non-Linux operating system, such as
Windows.
VMs can also provide integrated disaster recovery and application provisioning options.

Disadvantages of virtual machines


While virtual machines have several advantages over physical machines, there are
also some potential disadvantages:
Running multiple virtual machines on one physical machine can result in unstable
performance if infrastructure requirements are not met.
Virtual machines are less efficient and run slower than a full physical computer. Most
enterprises use a combination of physical and virtual infrastructure to balance the
corresponding advantages and disadvantages.

The two types of virtual machines

Users can choose from two different types of virtual machines—process VMs and system VMs:

A process virtual machine allows a single process to run as an application on a host machine,
providing a platform-independent programming environment by masking the information of the
underlying hardware or operating system. An example of a process VM is the Java Virtual Machine,
which enables any operating system to run Java applications as if they were native to that system.

A system virtual machine is fully virtualized to substitute for a physical machine. A system
platform supports the sharing of a host computer’s physical resources between multiple virtual
machines,
each running its own copy of the operating system. This virtualization process relies on a hypervisor,
which can run on bare hardware, such as VMware ESXi, or on top of an operating system.

What are 5 types of virtualization?

All the components of a traditional data center or IT infrastructure can be virtualized today, with
various specific types of virtualization:

Hardware virtualization: When virtualizing hardware, virtual versions of computers and
operating systems (VMs) are created and consolidated into a single, primary, physical
server. A hypervisor communicates directly with a physical server’s disk space and CPU to manage
the VMs. Hardware virtualization, which is also known as server virtualization, allows hardware
resources to be utilized more efficiently and for one machine to simultaneously run different operating
systems.

Software virtualization: Software virtualization creates a computer system complete with
hardware that allows one or more guest operating systems to run on a physical host
machine. For example, Android OS can run on a host machine that is natively using a Microsoft
Windows OS, utilizing the same hardware as the host machine does. Additionally, applications can be
virtualized and delivered from a server to an end user’s device, such as a laptop or smartphone.
This allows employees to access centrally hosted applications when working remotely.

Storage virtualization: Storage can be virtualized by consolidating multiple physical storage devices
to appear as a single storage device. Benefits include increased performance and speed, load balancing
and reduced costs. Storage virtualization also helps with disaster recovery planning, as virtual
storage data can be duplicated and quickly transferred to another location, reducing downtime.

Network virtualization: Multiple sub-networks can be created on the same physical network by
combining equipment into a single, software-based virtual network resource. Network virtualization
also divides available bandwidth into multiple, independent channels, each of which can be assigned to
servers and devices in real time. Advantages include increased reliability, network speed, security and
better monitoring of data usage. Network virtualization can be a good choice for companies with a high
volume of users who need access at all times.
Desktop virtualization: This common type of virtualization separates the desktop environment from
the physical device and stores a desktop on a remote server, allowing users to access their desktops
from anywhere on any device. In addition to easy accessibility, benefits of virtual
desktops include better data security, cost savings on software licenses and updates, and ease of
management.
Ethernet:
Ethernet is a type of communication protocol that was created at Xerox PARC in 1973 by Robert Metcalfe
and others, and it connects computers on a network over a wired connection. It is a widely used LAN
protocol, which was originally known as the Alto Aloha Network. It connects computers within a local area
network and a wide area network. Numerous devices like printers and laptops can be connected by LAN
and WAN within buildings, homes, and even small neighborhoods.
It offers a simple user interface that helps to connect various devices easily, such as switches,
routers, and computers. A local area network (LAN) can be created with the help of a single router
and a few Ethernet cables, which enable communication between all linked devices. This is possible because
an Ethernet port is included in your laptop: one end of a cable is plugged into it, and the
other end is connected to a router. Ethernet ports are slightly wider than the telephone jacks they otherwise resemble.
Most Ethernet devices are backward compatible with lower-speed Ethernet cables and devices.
However, the speed of the connection will only be as fast as the lowest common denominator. For
instance, the computer will only be able to send and receive data at 10 Mbps if you
attach a computer with a 10BASE-T NIC to a 100BASE-T network. Likewise, the maximum data transfer
rate will be 100 Mbps if you have a Gigabit Ethernet router and use it to connect a 100BASE-T device.
Wireless networks have replaced Ethernet in many areas; however, Ethernet is still more common for
wired networking. Wi-Fi reduces the need for cabling, as it allows users to connect smartphones or
laptops to a network without a cable. Compared with Gigabit Ethernet, the 802.11ac Wi-Fi standard offers
higher maximum data transfer rates. Still, compared to a
wireless network, wired connections are more secure and less prone to interference. This is the main
reason many businesses and organizations still use Ethernet.
Advantages of Ethernet
o It is not very costly to form an Ethernet network. As compared to other systems of connecting
computers, it is relatively inexpensive.
o An Ethernet network provides good security for data, as firewalls can be used for data security.
o The Gigabit Ethernet family allows users to transmit data at speeds from 1 Gbps up to 100 Gbps.
o In this network, the quality of the data transfer is maintained.
o In this network, administration and maintenance are easier.
o The latest versions of gigabit Ethernet and wireless Ethernet have the potential to transmit data at
speeds of 1-100 Gbps.

Disadvantages of Ethernet
o It does not offer deterministic service; therefore, it is not considered the best option for real-time applications.
o A wired Ethernet network restricts you in terms of distance, and it is best used over short distances.
o If you create a wired Ethernet network that needs cables, hubs, switches, and routers, they increase the cost
of installation.
o It is not ideal for interactive applications, which need very quick transfers of small amounts of data.
o In an Ethernet network, no acknowledgement is sent by the receiver after accepting a packet.
o If you are planning to set up a wireless Ethernet network, it can be difficult if you have no experience
in the networking field.
o Compared with a wired Ethernet network, a wireless network is less secure.
o The full-duplex data communication mode is not supported by the 100Base-T4 version.
o Additionally, finding a problem is very difficult in an Ethernet network, as it is not easy to
determine which node or cable is causing the problem.

SWITCHES:

Switches are networking devices operating at layer 2 or a data link layer of the OSI model. They
connect devices in a network and use packet switching to send, receive or forward data packets or data
frames over the network.
A switch has many ports, to which computers are plugged in. When a data frame arrives at any port of a
network switch, it examines the destination address, performs necessary checks and sends the frame to
the corresponding device(s). It supports unicast, multicast as well as broadcast communications.
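
The forwarding behaviour described above can be illustrated with a small simulation of a learning switch. This is a toy model for teaching purposes, with invented MAC addresses and port numbers; real switches implement the same logic in hardware.

```python
class LearningSwitch:
    """Toy model of a layer-2 switch that learns MAC addresses per port."""

    def __init__(self, num_ports: int) -> None:
        self.num_ports = num_ports
        self.mac_table: dict[str, int] = {}  # MAC address -> port number

    def receive(self, in_port: int, src_mac: str, dst_mac: str) -> list[int]:
        # Learn: remember which port the source MAC address lives on.
        self.mac_table[src_mac] = in_port

        # Forward: if the destination is known, send only to that port;
        # otherwise flood the frame out of every port except the ingress port.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return [p for p in range(self.num_ports) if p != in_port]


switch = LearningSwitch(num_ports=4)
print(switch.receive(0, "aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb"))  # unknown: flood [1, 2, 3]
print(switch.receive(1, "bb:bb:bb:bb:bb:bb", "aa:aa:aa:aa:aa:aa"))  # known: [0]
print(switch.receive(0, "aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb"))  # known: [1]
```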

Features of Switches
 A switch operates in the layer 2, i.e. data link layer of the OSI model.
 It is an intelligent network device that can be conceived as a multiport network bridge.
 It uses MAC addresses (addresses of medium access control sublayer) to send data packets to
selected destination ports.
 It uses packet switching technique to receive and forward data packets from the source to the
destination device.
 It supports unicast (one-to-one), multicast (one-to-many) and broadcast (one-to-all)
communications.
 Transmission mode is full duplex, i.e. communication in the channel occurs in both the
directions at the same time. Due to this, collisions do not occur.
 Switches are active devices, equipped with network software and network management
capabilities.
 Switches can perform some error checking before forwarding data to the destined port.
 The number of ports is higher, typically 24 or 48.
Types of Switches
There is a variety of switches, which can be broadly categorised into 4 types −

Unmanaged Switch − These are inexpensive switches commonly used in home networks and small
businesses. They can be set up by simply plugging them into the network, after which they instantly start
operating. When more devices need to be added, more switches are simply added by this plug-and-play
method. They are referred to as unmanaged since they do not need to be configured or monitored.

Managed Switch − These are costly switches that are used in organisations with large and complex
networks, since they can be customized to augment the functionalities of a standard switch. The
augmented features may include QoS (Quality of Service), higher security levels, better precision control
and complete network management. Despite their cost, they are preferred in growing organizations due
to their scalability and flexibility. Simple Network Management Protocol (SNMP) is used for
configuring managed switches.
LAN Switch − Local Area Network (LAN) switches connect devices in the internal LAN of an
organization. They are also referred to as Ethernet switches or data switches. These switches are
particularly helpful in reducing network congestion or bottlenecks. They allocate bandwidth in a manner such that
there is no overlapping of data packets in the network.
PoE Switch − Power over Ethernet (PoE) switches are used in PoE Gigabit Ethernet networks. PoE technology
combines data and power transmission over the same cable, so that devices connected to it can
receive both electricity and data over the same line. PoE switches offer greater flexibility and
simplify the cabling connections.

5.6.DOCKER CONTAINER:

A container is a standard unit of software that packages up code and all its dependencies so the
application runs quickly and reliably from one computing environment to another. A Docker container
image is a lightweight, standalone, executable package of software that includes everything needed to
run an application: code, runtime, system tools, system libraries and settings. Container images become
containers at runtime and in the case of Docker containers – images become containers when they run
on Docker Engine. Available for both Linux and Windows-based applications, containerized software
will always run the same, regardless of the infrastructure. Containers isolate software from its
environment and ensure that it works uniformly despite differences for instance between development
and staging. Docker containers that run on Docker Engine:
Standard: Docker created the industry standard for containers, so they could be portable
anywhere
Lightweight: Containers share the machine’s OS system kernel and therefore do not require an OS per
application, driving higher server efficiencies and reducing server and licensing costs
Secure: Applications are safer in containers and Docker provides the strongest default
isolation capabilities in the industry

Docker Containers Are Everywhere: Linux, Windows, Data center, Cloud, Serverless, etc.
Docker container technology was launched in 2013 as an open source Docker Engine. It leveraged
existing computing concepts around containers and specifically in the Linux world, primitives known as
cgroups and namespaces. Docker’s technology is unique because it focuses on the requirements of
developers and systems operators to separate application dependencies from infrastructure. Success in
the Linux world drove a partnership with Microsoft that brought Docker containers and its functionality
to Windows Server. Technology available from Docker and its open source project, Moby has been
leveraged by all major data center vendors and cloud providers. Many of these providers are leveraging
Docker for their container-native IaaS offerings. Additionally, the leading open source serverless
frameworks utilize Docker container technology.

Docker is a container management service. The keywords of Docker are develop,
ship and run anywhere. The whole idea of Docker is for developers to easily develop applications, ship
them into containers which can then be deployed anywhere.
The initial release of Docker was in March 2013 and since then, it has become the buzzword for modern
software development, especially for Agile-based projects.
Features of Docker
Docker has the ability to reduce the size of development by providing a smaller footprint of the
operating system via containers. With containers, it becomes easier for teams across different units,
such as development, QA and Operations to work seamlessly across applications. You can deploy
Docker containers anywhere, on any physical and virtual machines and even on the cloud. Since Docker
containers are pretty lightweight, they are very easily scalable.
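
As a small sketch of this workflow, the Docker SDK for Python (the docker package) can pull an image and run it as a container. It assumes the SDK is installed and a local Docker Engine is running; the image and command used here are arbitrary examples.

```python
import docker

# Connect to the local Docker Engine (uses the default socket/environment settings).
client = docker.from_env()

# Pull a small public image and run it as a container; the image becomes a
# container at runtime, exactly as described above.
output = client.containers.run("alpine:latest", "echo Hello from a container",
                               remove=True)
print(output.decode().strip())

# List containers that are currently running on this engine.
for container in client.containers.list():
    print(container.short_id, container.image.tags, container.status)
```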

Components of Docker
Docker has the following components
Docker for Mac − It allows one to run Docker containers on the Mac OS.
Docker for Linux − It allows one to run Docker containers on the Linux OS.
Docker for Windows − It allows one to run Docker containers on the Windows OS.
Docker Engine − It is used for building Docker images and creating Docker containers.
Docker Hub − This is the registry which is used to host various Docker images.
Docker Compose − This is used to define applications using multiple Docker containers.
We will discuss all these components in detail in the subsequent chapters.
The official site for Docker is https://www.docker.com/. The site has all the information and documentation
about the Docker software. It also has the download links for various operating systems.
Kubernetes:

Kubernetes — also known as “k8s” or “kube” — is a container orchestration platform for scheduling
and automating the deployment, management, and scaling of containerized applications.

Kubernetes was first developed by engineers at Google before being open sourced in 2014. It is a
descendant of Borg, a container orchestration platform used internally at Google. Kubernetes is Greek
for helmsman or pilot, hence the helm in the Kubernetes logo.

Today, Kubernetes and the broader container ecosystem are maturing into a general-purpose
computing platform and ecosystem that rivals — if not surpasses — virtual machines (VMs) as the
basic building blocks of modern cloud infrastructure and applications. This ecosystem enables
organizations to deliver a high-productivity Platform-as-a-Service (PaaS) that addresses multiple
infrastructure-related and operations-related tasks and issues surrounding cloud-native development so
that development teams can focus solely on coding and innovation.

What are containers?

Containers are lightweight, executable application components that combine application source code
with all the operating system (OS) libraries and dependencies required to run the code in any
environment.
Containers take advantage of a form of operating system (OS) virtualization that lets multiple
applications share a single instance of an OS by isolating processes and controlling the amount of
CPU, memory, and disk those processes can access. Because they are smaller, more resource-efficient
and more portable than virtual machines (VMs), containers have become the de facto compute units of
modern cloud-native applications.

In a recent IBM study, users reported several specific technical and business benefits
resulting from their adoption of containers and related technologies.

Containers vs. virtual machines vs. traditional infrastructure

It may be easier or more helpful to understand containers as the latest point on the continuum of IT
infrastructure automation and abstraction.

In traditional infrastructure, applications run on a physical server and grab all the resources they can
get. This leaves you the choice of running multiple applications on a single server and hoping one
doesn’t hog resources at the expense of the others or dedicating one server per application, which
wastes resources and doesn’t scale.

Virtual machines (VMs) are servers abstracted from the actual computer hardware, enabling you to run
multiple VMs on one physical server or a single VM that spans more than one physical server. Each
VM runs its own OS instance, and you can isolate each application in its own VM, reducing the
chance that applications running on the same underlying physical hardware will impact each other.
VMs make better use of resources and are much easier and more cost-effective to scale than traditional
infrastructure. And, they’re disposable — when you no longer need to run the application, you take
down the VM.

What does Kubernetes do?

Kubernetes schedules and automates container-related tasks throughout the application lifecycle, including the following (a short example of querying these objects appears after the list):

 Deployment: Deploy a specified number of containers to a specified host and keep them running in
a desired state.

 Rollouts: A rollout is a change to a deployment. Kubernetes lets you initiate, pause, resume, or
roll back rollouts.

 Service discovery: Kubernetes can automatically expose a container to the internet or to
other containers using a DNS name or IP address.

 Storage provisioning: Set Kubernetes to mount persistent local or cloud storage for your containers
as needed.
 Load balancing: Based on CPU utilization or custom metrics, Kubernetes load balancing
can distribute the workload across the network to maintain performance and stability.

 Autoscaling: When traffic spikes, Kubernetes autoscaling can spin up new clusters as needed
to handle the additional workload.

 Self-healing for high availability: When a container fails, Kubernetes can restart or replace it
automatically to prevent downtime. It can also take down containers that don’t meet your health-
check requirements.
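
As an illustrative sketch, the official Kubernetes Python client can query the cluster's API server for some of the objects mentioned above. It assumes a reachable cluster and a local kubeconfig file (for example, one created by minikube), and the default namespace is just an example.

```python
from kubernetes import client, config

# Load credentials and the API server address from the local kubeconfig.
config.load_kube_config()

core = client.CoreV1Api()
apps = client.AppsV1Api()

# List every pod the scheduler has placed, across all namespaces.
for pod in core.list_pod_for_all_namespaces().items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name} on {pod.spec.node_name}")

# List deployments in the default namespace with desired vs. ready replicas,
# which is what Kubernetes reconciles during rollouts and self-healing.
for dep in apps.list_namespaced_deployment(namespace="default").items:
    print(dep.metadata.name, dep.spec.replicas, dep.status.ready_replicas)
```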
Kubernetes vs. Docker

If you’ve read this far, you already understand that while Kubernetes is an alternative to Docker
Swarm, it is not (contrary to persistent popular misconception) an alternative or competitor to Docker
itself.

In fact, if you’ve enthusiastically adopted Docker and are creating large-scale Docker-based container
deployments, Kubernetes orchestration is a logical next step for managing these workloa
